Sundar Pichai, CEO of Google, recently noted that artificial intelligence (AI) could strengthen online security, a sentiment shared by many industry experts.
AI is transforming the way security teams manage cyber threats, making their work faster and more efficient. By analyzing large amounts of data and identifying complex patterns, AI automates the early stages of incident investigation. New methods allow security professionals to begin their work with a clear understanding of the situation, thereby speeding up response times.
The defensive advantage of AI
“Tools such as machine learning-based anomaly detection systems can flag unusual behavior, while AI-powered security platforms provide comprehensive threat intelligence and predictive analytics,” said Timothy E. Bates, chief technology officer at Lenovo, in an interview with PYMNTS. “Then there is deep learning, which can analyze malware to understand its structure and potentially reverse engineer attacks. These AI agents work in the shadows, continually learning from each attack to not only defend themselves, but also to disarm future threats.”
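As a rough illustration of the machine learning-based anomaly detection Bates describes, the hypothetical Python sketch below trains a model on baseline login activity and flags events that deviate from it. The feature set, threshold and library choice (scikit-learn's IsolationForest) are illustrative assumptions, not any vendor's actual implementation.

# Minimal sketch of ML-based anomaly detection over login telemetry.
# Features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, megabytes_transferred, failed_attempts]
baseline_logins = np.array([
    [9, 12.4, 0], [10, 8.1, 0], [14, 15.0, 1], [11, 9.7, 0], [16, 11.2, 0],
])
new_events = np.array([
    [10, 10.3, 0],   # ordinary working-hours activity
    [3, 950.0, 7],   # off-hours bulk transfer with repeated failures
])

# Train on baseline behavior; contamination is the assumed share of outliers.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_logins)

for event, label in zip(new_events, model.predict(new_events)):
    print(event.tolist(), "-> ANOMALY" if label == -1 else "-> normal")

In production, the baseline would be built from far more telemetry, and flagged events would feed the investigation workflow described above.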
Cybercrime is a growing problem as more countries move toward the connected economy. Losses from cyberattacks totaled at least $10.3 billion in the United States in 2022, according to an FBI report.
Growing threats
The tools used by attackers and defenders are constantly evolving and increasingly complex, said Marcus Fowler, CEO of cybersecurity firm Darktrace Federal, in an interview with PYMNTS.
“AI represents the biggest advancement in terms of actually augmenting today’s cyber workforce, improving situational awareness and accelerating mean time to action to enable them to be more efficient, reduce fatigue and prioritize cyber investigation workloads,” he said.
As cyberattacks continue to increase, improving defense tools becomes increasingly important. Britain’s intelligence agency GCHQ recently warned that new AI tools could lead to more cyberattacks, making it easier for novice hackers to cause damage. The agency also said the latest technologies could increase ransomware attacks, in which criminals lock files and demand money, according to a report from GCHQ’s National Cyber Security Centre.
Google’s Pichai pointed out that AI helps accelerate how quickly security teams can detect and stop attacks. This innovation helps defenders who must intercept every attack to keep systems secure, while attackers only need to succeed once to cause problems.
While AI can empower cyberattackers, it also empowers defenders to combat security breaches.
Vast capabilities
Artificial intelligence has the potential to bring benefits to the cybersecurity field far beyond automating routine tasks, noted Piyush Pandey, CEO of cybersecurity company Pathlock, in an interview with PYMNTS. As security rules and requirements continue to grow, he said, the volume of governance, risk management and compliance (GRC) data is increasing so quickly that it could soon outstrip what teams can manage manually.
“Continuous, automated monitoring of the compliance situation using AI can and will significantly reduce manual efforts and errors,” he said. “More granular and sophisticated risk assessments will be available via ML (machine learning) algorithms, capable of processing large amounts of data to identify subtle risk patterns, providing a more predictive approach to reducing risk and financial loss.”
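To make the idea concrete, here is a hypothetical sketch of the automated-monitoring side of what Pandey describes: a script that checks access records against a separation-of-duties rule and flags violations. The record fields and the policy are assumptions for illustration; the ML-driven risk scoring he mentions would layer on top of this kind of automation.

# Hypothetical sketch of continuous, automated compliance monitoring.
# Record fields and the policy rule are illustrative, not a real GRC schema.
from dataclasses import dataclass

@dataclass
class AccessRecord:
    user: str
    role: str
    permissions: set

# Example separation-of-duties rule: no one may both create and approve payments.
FORBIDDEN_COMBINATION = {"create_payment", "approve_payment"}

def check_compliance(records):
    violations = []
    for record in records:
        if FORBIDDEN_COMBINATION.issubset(record.permissions):
            violations.append(
                f"{record.user} ({record.role}) holds conflicting permissions: "
                f"{sorted(FORBIDDEN_COMBINATION)}"
            )
    return violations

records = [
    AccessRecord("alice", "ap_clerk", {"create_payment"}),
    AccessRecord("bob", "controller", {"create_payment", "approve_payment"}),
]
for violation in check_compliance(records):
    print("COMPLIANCE ALERT:", violation)

Run on a schedule rather than once per audit cycle, a check like this turns a periodic manual review into a continuous one.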
Pattern detection
Using AI to detect specific patterns is one way to catch hackers whose techniques keep improving. Today’s attackers are adept at evading conventional security checks, which is why many organizations are turning to AI to catch them, Mike Britton, chief information security officer at Abnormal Security, told PYMNTS in an interview. He said one way AI can be used in cyber defense is through behavioral analysis. Instead of looking only for known indicators of compromise, such as dangerous links or suspicious senders, AI-powered solutions can detect unusual activity that doesn’t fit normal patterns.
“By defining normal behavior in the email environment, including user-specific communication patterns, styles, and relationships, AI could detect abnormal behavior that could indicate an attack, regardless of whether the content was created by a human or by generative AI tools,” he added.
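A drastically simplified version of that behavioral baselining might look like the Python sketch below, which scores a new message by how far its send time deviates from a sender’s history. The single feature and the threshold are assumptions for illustration; this is not Abnormal Security’s method, which models many more signals.

# Simplified sketch of behavioral email analysis: score new messages against
# a per-sender baseline. The feature and threshold are illustrative assumptions.
from statistics import mean, pstdev

# Hypothetical baseline: hours at which this sender normally emails this recipient.
baseline_send_hours = [9, 10, 10, 11, 14, 15, 9, 10]

def anomaly_score(hour, history):
    mu, sigma = mean(history), pstdev(history) or 1.0
    return abs(hour - mu) / sigma  # distance from the norm, in standard deviations

for hour in (10, 3):  # a typical send time vs. a 3 a.m. message
    score = anomaly_score(hour, baseline_send_hours)
    print(f"send hour {hour}: score {score:.1f} ->",
          "flag for review" if score > 2.5 else "looks normal")

A real system would combine many such signals (recipients, phrasing, attachment types) before deciding whether to flag a message.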
AI systems can also distinguish false attacks from genuine ones by recognizing ransomware behavior, quickly flagging suspicious activity such as unauthorized key generation, said Zack Moore, chief product security officer at InterVision, in an interview with PYMNTS.
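As a rough illustration of the behavioral signals Moore mentions, the hypothetical sketch below watches a stream of endpoint events for ransomware-like patterns, such as a burst of files being renamed to unfamiliar extensions alongside unexpected key generation. The event names and thresholds are assumptions; real endpoint detection is far more sophisticated.

# Hypothetical sketch of ransomware-behavior detection from endpoint events.
# Event names and thresholds are illustrative assumptions.
from collections import Counter

SUSPICIOUS_EXTENSIONS = {".locked", ".encrypted", ".crypt"}

def looks_like_ransomware(events, rename_threshold=50):
    counts = Counter(e["type"] for e in events)
    mass_renames = sum(
        1 for e in events
        if e["type"] == "rename" and e.get("new_ext") in SUSPICIOUS_EXTENSIONS
    )
    # Key generation by an unsigned process is treated as unauthorized here.
    unauthorized_keygen = counts["keygen"] > 0 and not any(
        e.get("signed_process") for e in events if e["type"] == "keygen"
    )
    return mass_renames >= rename_threshold or unauthorized_keygen

events = [{"type": "keygen", "signed_process": False}] + [
    {"type": "rename", "new_ext": ".locked"} for _ in range(120)
]
print("ALERT: possible ransomware" if looks_like_ransomware(events) else "no alert")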
Generative AI, particularly large language models (LLMs), allows organizations to simulate potential attacks and identify their own weaknesses. Moore said the most effective use of AI to discover and dissect attacks is through continuous penetration testing.
“Instead of simulating an attack once a year, organizations can rely on AI-based penetration testing to continually check the robustness of their system,” he said. “In addition, technicians can view the tool’s logs to reverse engineer a solution after identifying a vulnerability.”
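In practice, the continuous testing Moore describes might be wired up as a scheduled job that repeatedly runs a scanner and keeps its findings for later review. The sketch below uses a placeholder scan_target() function, since the actual tooling and targets are organization-specific assumptions.

# Hypothetical sketch of a continuous testing loop: run a scan on a schedule
# and append findings to a log that technicians can review afterward.
import json
import time
from datetime import datetime, timezone

def scan_target(host):
    # Placeholder: a real implementation would invoke a vulnerability scanner here.
    return [{"host": host, "finding": "example-weak-tls-config", "severity": "medium"}]

def run_continuous_scans(hosts, interval_seconds=86400, log_path="scan_log.jsonl"):
    while True:
        timestamp = datetime.now(timezone.utc).isoformat()
        with open(log_path, "a") as log:
            for host in hosts:
                for finding in scan_target(host):
                    log.write(json.dumps({"time": timestamp, **finding}) + "\n")
        time.sleep(interval_seconds)  # e.g., daily rather than a yearly pen test

# run_continuous_scans(["app.internal.example"])  # left commented: loops forever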
The cat-and-mouse game between attackers and defenders using AI is likely to continue indefinitely. Meanwhile, consumers are wondering how to protect their data. A recent PYMNTS Intelligence study found that shoppers who value online shopping features care most about the security of their data, with 40% of U.S. shoppers saying it is their most important or a very important concern.