Digital security
New ESET white paper reveals risks and opportunities of artificial intelligence for cyber defenders
May 28, 2024
5 min. read
Artificial intelligence (AI) is the topic du jour, with the latest and greatest AI technologies receiving breathless media coverage. And few sectors stand to gain as much, or to be hit as hard, as cybersecurity. Contrary to popular belief, some players in the field have been using the technology in one form or another for more than two decades. Now, the power of cloud computing and advanced algorithms is combining to further strengthen digital defenses and to help create a new generation of AI-driven applications that could transform how organizations prevent, detect and respond to attacks.
On the other hand, as these capabilities become less expensive and more accessible, threat actors will also use the technology for social engineering, disinformation, scams and more. A new ESET white paper aims to uncover risks and opportunities for cyber defenders.
A brief history of AI in cybersecurity
Large language models (LLMs) may be the reason boards around the world are talking about AI, but the technology has been put to good use for years. ESET, for example, first deployed AI over a quarter of a century ago, using neural networks to improve macro virus detection. Since then, the company has used AI in various forms to:
- Differentiate between malicious and clean code samples (a minimal sketch of this kind of classification follows the list)
- Rapidly triage, sort and label malware samples en masse
- Run a cloud reputation system that leverages a continuous learning model fed by training data
- Deliver endpoint protection with high detection and low false-positive rates, using a combination of neural networks, decision trees and other algorithms
- Power a cloud sandbox tool built on multilayered machine learning detection, unpacking and scanning, experimental detection, and deep behavior analysis
- Drive new cloud- and endpoint-based protection with transformer AI models
- Provide XDR that helps prioritize threats by correlating, sorting and grouping large volumes of events
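To make the first item above concrete, here is a minimal sketch of how a classifier might separate malicious from clean samples using static file features. The features, values and scikit-learn model choice are illustrative assumptions made for this example, not a description of ESET's actual pipeline.

```python
# A minimal, hypothetical sketch of ML-based malware triage: train a
# classifier on a few static file features to separate malicious from
# clean samples. Features, values and model choice are illustrative
# assumptions, not ESET's actual pipeline.
from sklearn.ensemble import RandomForestClassifier

# Each row: [file entropy, suspicious API import count, file size in KB]
X_train = [
    [7.8, 42, 310],   # packed binary with many risky imports -> malicious
    [7.5, 37, 128],   # malicious
    [4.2,  3, 890],   # plain compiled utility -> clean
    [5.1,  5, 450],   # clean
    [7.9, 51, 200],   # malicious
    [3.8,  1, 1200],  # clean
]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = malicious, 0 = clean

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen sample: high entropy plus many suspicious imports
# pushes the predicted probability toward "malicious".
sample = [[7.7, 45, 256]]
print("P(malicious) =", clf.predict_proba(sample)[0][1])
```

In a production system, a model like this would be trained on millions of labeled samples and combined with other detection layers, which is how vendors keep detection rates high while holding false positives down.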
Why is AI used by security teams?
Today, security teams need effective AI-powered tools more than ever, thanks to three main factors:
1. Skills shortages continue to hit hard
At the last count, there was a shortage of around four million cybersecurity professionals worldwide, including 348,000 in Europe and 522,000 in North America. Organizations need tools that improve the productivity of the staff they do have and that provide guidance on threat analysis and remediation in the absence of senior colleagues. Unlike human teams, AI can operate 24/7/365 and spot patterns that security professionals might miss.
2. Threat actors are agile, determined, and have sufficient resources
While cybersecurity teams struggle to recruit, their adversaries are going from strength to strength. By one estimate, cybercrime could cost the world as much as $10.5 trillion annually by 2025. Budding cybercriminals can find everything they need to launch attacks bundled into ready-made “as-a-service” offerings and toolkits. Third-party brokers offer access to pre-breached organizations. And even state actors are getting involved in financially motivated attacks – notably North Korea, but also the United States, China and other nations. In states like Russia, the government is also suspected of actively cultivating anti-Western hacktivism.
3. The stakes have never been higher
As investment in digital technology has increased over the years, so has the reliance on IT systems to fuel sustainable growth and competitive advantage. Network defenders know that if they fail to prevent, or at least quickly detect and contain, cyberthreats, their organization could suffer significant financial and reputational damage. The average cost of a data breach today stands at $4.45 million, but a serious ransomware breach involving service disruption and data theft could cost several times that amount. One estimate claims that financial institutions alone have lost $32 billion to downtime caused by service disruption since 2018.
How is AI used by security teams?
It’s no surprise, then, that organizations are looking to harness the power of AI to help them prevent, detect and respond to cyberthreats more effectively. But how exactly are they doing so? By correlating indicators across large volumes of data to identify attacks. By identifying malicious code through suspicious activity that deviates from the norm. And by helping threat analysts interpret complex information and prioritize alerts.
Here are some examples of current and future positive uses of AI:
- Threat Intelligence: LLM-based GenAI assistants can simplify complex tasks by parsing dense technical reports to summarize key points and takeaways in plain English for analysts.
- AI assistants: Integrating AI “co-pilots” into IT systems can help eliminate dangerous misconfigurations that would otherwise expose organizations to attacks. This could work for both general IT systems such as cloud platforms and security tools such as firewalls, which may require complex settings to be updated.
- Increase SOC productivity: Security operations center (SOC) analysts are under enormous pressure to rapidly detect, respond to, and contain incoming threats. But the sheer scale of the attack surface and the number of tools generating alerts can often be overwhelming. It means legitimate threats go unnoticed while analysts waste their time on false positives. AI can ease this burden by contextualizing and prioritizing such alerts – and perhaps even resolving minor ones automatically (see the sketch after this list).
- New detections: Threat actors are constantly evolving their tactics, techniques, and procedures (TTPs). But by combining indicators of compromise (IoCs) with publicly available information and threat feeds, AI tools could scan for the latest threats.
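To illustrate the triage idea in the last two items, here is a hypothetical sketch of alert prioritization that blends a detection model's anomaly score with asset context and a threat-feed match. All field names, weights and thresholds are assumptions made purely for illustration.

```python
# A hypothetical sketch of AI-assisted alert triage: combine a model's
# anomaly score with simple business context to rank SOC alerts.
# All field names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    anomaly_score: float    # 0..1 from an upstream detection model (assumed)
    asset_criticality: int  # 1 (lab VM) .. 5 (domain controller)
    seen_in_threat_feed: bool

def priority(alert: Alert) -> float:
    """Blend model output with context; weights are arbitrary examples."""
    score = 0.6 * alert.anomaly_score + 0.08 * alert.asset_criticality
    if alert.seen_in_threat_feed:
        score += 0.2  # corroborated by external threat intelligence
    return min(score, 1.0)

alerts = [
    Alert("A-1047", anomaly_score=0.92, asset_criticality=5, seen_in_threat_feed=True),
    Alert("A-1048", anomaly_score=0.35, asset_criticality=1, seen_in_threat_feed=False),
    Alert("A-1049", anomaly_score=0.71, asset_criticality=3, seen_in_threat_feed=False),
]

# Analysts work the queue top-down; low scorers might be auto-resolved.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{a.alert_id}: priority={priority(a):.2f}")
```

The design point is that the model score alone is not the ranking: business context (what the affected asset is) and corroboration from threat intelligence move an alert up or down the queue, which is exactly the contextualization an overloaded SOC needs.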
How is AI used in cyberattacks?
Unfortunately, the bad guys have their sights set on AI too. According to the UK's National Cyber Security Centre (NCSC), the technology will “increase the global threat of ransomware” and “almost certainly increase the volume and impact of cyberattacks over the next two years.” How are threat actors currently using AI? Consider the following:
- Social engineering: One of the most obvious uses of GenAI is to help malicious actors craft highly convincing, grammatically near-perfect phishing campaigns at scale.
- BEC and other scams: Again, GenAI technology can be deployed to mimic the writing style of a specific individual or corporate persona, in order to trick a victim into wiring money or handing over sensitive data or login credentials. Deepfake audio and video could be deployed to the same end. The FBI has issued multiple warnings about this in the past.
- Disinformation: GenAI can also simplify content creation for influence operations. A recent report warned that Russia is already using such tactics – which could be replicated widely if they prove successful.
The limits of AI
For better or worse, AI has its limitations for now. It can produce high false positive rates and, without high-quality training sets, its impact will be limited. Human oversight is also often required to check that model output is correct and to train the models themselves. All of this points to the fact that AI is a silver bullet for neither attackers nor defenders.
In time, the two sides' AI tools could clash: one seeking to punch holes in defenses and deceive employees, the other looking for signs of malicious AI activity. Welcome to the start of a new cybersecurity arms race.
To learn more about the use of AI in cybersecurity, check out the new report from ESET.