Over the past decade, along with the explosive growth of information technology, the grim reality of cybersecurity threats has also evolved dramatically. Cyberattacks, once carried out primarily by malicious hackers seeking notoriety or financial gain, have become much more sophisticated and targeted. From state-sponsored espionage to identity theft and corporate data theft, the motivations behind cybercrime are increasingly sinister and dangerous. Although monetary gain remains an important driver of cybercrime, it has been overshadowed by more nefarious goals of stealing critical data and assets. Cyber attackers are widely leveraging cutting-edge technologies, including artificial intelligence, to infiltrate systems and carry out malicious activities. In the United States, the Federal Bureau of Investigation (FBI) reported that more than 800,000 cybercrime complaints were filed in 2022, with total losses exceeding $10 billion, dwarfing 2021’s total of $6.9 billion, according to the bureau’s Internet Crime Complaint Center.
With the threat landscape rapidly evolving, it is time for organizations to adopt a multi-faceted approach to cybersecurity: one that determines how attackers gain entry, prevents initial compromise, detects incursions quickly, and enables rapid response and corrective action. Protecting digital assets requires harnessing the power of AI and automation while ensuring that skilled human analysts remain an integral part of the security posture.
Protecting an organization requires a multi-layered strategy that takes into account the various entry points and attack vectors employed by adversaries. Broadly speaking, these attacks fall into four main categories: 1) web and network attacks; 2) user behavior and identity-based attacks; 3) entity attacks targeting cloud and hybrid environments; and 4) malware, including ransomware, advanced persistent threats, and other malicious code.
Leveraging AI and Automation
Deploying AI and machine learning (ML) models tailored to each of these attack classes is essential for proactive threat detection and prevention. For web and network attacks, models should identify threats such as phishing, browser exploitation, and distributed denial of service (DDoS) attacks in real time. AI-driven analysis of user and entity behavior can detect anomalous activities that indicate account compromise or misuse of system resources and data. Finally, AI-based malware analysis can quickly triage new strains, identify malicious behavior, and mitigate the impact of file-based threats. By implementing AI and ML models across this spectrum of attack surfaces, organizations can significantly improve their ability to autonomously identify attacks in their early stages, before they escalate into full-blown incidents.
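As a rough illustration of how one such model might flag anomalous user behavior, the minimal sketch below trains an unsupervised detector on synthetic session features; the feature names, values, and contamination setting are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: unsupervised anomaly detection over user-behavior features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" user-behavior telemetry: one row per session, columns are
# [logins_per_hour, MB_uploaded, distinct_hosts_contacted] (hypothetical features).
normal = rng.normal(loc=[2.0, 50.0, 3.0], scale=[1.0, 20.0, 1.0], size=(500, 3))

# A few suspicious sessions: login bursts and exfiltration-like upload volumes.
suspicious = np.array([[40.0, 900.0, 25.0], [30.0, 1200.0, 18.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies; such sessions would be surfaced for triage.
print(model.predict(suspicious))
print(model.decision_function(suspicious))  # lower scores = more anomalous
```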
Once AI/ML models identify potential threat activities across various attack vectors, organizations face another major challenge: making sense of frequent alerts and separating critical incidents from the noise. With so many data points and detections being generated, it becomes crucial to apply another layer of AI/ML to correlate and prioritize the most serious alerts that warrant further investigation and response. Alert fatigue is an increasingly critical problem that needs to be addressed.
AI can play a central role in this alert triage process by ingesting and analyzing large volumes of security telemetry, merging information from multiple detection sources, including threat intelligence, and surfacing only the highest-fidelity incidents for response. This reduces the burden on human analysts, who would otherwise be inundated with false positives and low-fidelity alerts lacking the context needed to determine severity and next steps.
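The sketch below illustrates the kind of correlation logic this implies: detections from multiple sources are grouped by entity, enriched with a stand-in threat-intelligence list, and only corroborated, high-scoring incidents are surfaced. The field names, weights, and threshold are illustrative assumptions, not a vendor design.

```python
# Minimal sketch of correlation-and-triage logic over raw detections.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # e.g. "edr", "network_ids", "identity"
    entity: str        # host, user, or IP the detection is tied to
    confidence: float  # per-detector confidence, 0..1

KNOWN_BAD_ENTITIES = {"203.0.113.7"}  # stand-in for a threat-intelligence feed

def triage(detections, threshold=1.5):
    grouped = defaultdict(list)
    for d in detections:
        grouped[d.entity].append(d)

    incidents = []
    for entity, hits in grouped.items():
        # Corroboration across independent sources raises the score;
        # a threat-intel match raises it further.
        score = sum(h.confidence for h in hits)
        score += 0.5 * (len({h.source for h in hits}) - 1)
        if entity in KNOWN_BAD_ENTITIES:
            score += 1.0
        if score >= threshold:
            incidents.append((entity, round(score, 2), [h.source for h in hits]))
    return sorted(incidents, key=lambda item: -item[1])

alerts = [
    Detection("network_ids", "203.0.113.7", 0.6),
    Detection("edr", "203.0.113.7", 0.7),
    Detection("identity", "workstation-42", 0.3),
]
print(triage(alerts))  # only the corroborated, intel-matched entity surfaces
```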
Although bad actors have actively deployed AI to power attacks like DDoS, spear phishing, and ransomware, the defensive side has lagged behind in AI adoption. However, the situation is rapidly changing as security vendors strive to develop advanced AI/ML models that can detect and block these AI-powered threats.
The future of defensive AI lies in deploying small, specialized language models tailored to specific attack types and use cases rather than relying solely on large generative AI models. Large language models, by contrast, show more promise for cybersecurity operations such as automating support functions, retrieving standard operating procedures, and assisting human analysts. The heavy lifting of accurate threat detection and prevention will be better handled by small, highly specialized AI/ML models.
The Role of Human Expertise
It is crucial to use AI/ML alongside process automation to enable rapid remediation and containment of verified threats. At this stage, equipped with high-confidence incidents, AI systems can launch automated responses tailored to each specific attack type: blocking malicious IP (Internet Protocol) addresses, isolating compromised hosts, applying adaptive policies, and so on. However, human expertise remains essential for validating AI results, applying critical thinking, and overseeing autonomous response actions to ensure the business stays protected without disruption to operations.
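A minimal sketch of this kind of gated automation follows, assuming hypothetical placeholder actions rather than any real SOAR or firewall API: low-risk actions run autonomously, while disruptive ones are queued for analyst approval.

```python
# Hypothetical placeholder action; a real deployment would call firewall/EDR APIs.
def block_ip(ip):
    print(f"[auto] firewall rule added to block {ip}")

AUTO_APPROVED = {"block_ip": block_ip}  # low-risk actions run autonomously
REQUIRES_ANALYST = {"isolate_host"}     # disruptive actions need human sign-off

def respond(incident):
    for action, target in incident["actions"]:
        if action in AUTO_APPROVED:
            AUTO_APPROVED[action](target)
        elif action in REQUIRES_ANALYST:
            # A human analyst validates the verdict before anything disruptive runs.
            print(f"[queued] {action}({target}) awaiting analyst approval")

respond({"actions": [("block_ip", "203.0.113.7"),
                     ("isolate_host", "workstation-42")]})
```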
Nuanced understanding is what humans bring to the table. Additionally, analyzing new and complex malware threats requires creativity and problem-solving skills that may be beyond the reach of machines.
Human expertise is essential in several key areas:
- Validation and contextualization: AI systems, despite their sophistication, can sometimes generate false positives or misinterpret data. Human analysts are needed to validate AI results and provide necessary context that AI might overlook. This ensures that responses are appropriate and proportionate to the actual threat.
- Investigating complex threats: Some threats are too complex for AI to handle alone. Human experts can dig deeper into these incidents, using their experience and intuition to uncover hidden aspects of the threat that automated analysis might miss. This human insight is essential to understanding the full scope of sophisticated attacks and designing effective countermeasures.
- Strategic decision making: Although AI can handle routine tasks and data processing, strategic decisions regarding overall security posture and long-term defense strategies require human judgment. Experts can interpret AI-generated insights to make informed decisions regarding resource allocation, policy changes, and strategic initiatives.
- Continuous improvement: Human analysts contribute to the continuous improvement of AI systems by providing feedback and training data. Their knowledge helps refine AI algorithms, making them more accurate and efficient over time. This symbiotic relationship between human expertise and AI ensures that the two evolve together to address emerging threats.
Optimized Human-Machine Teaming
Underlying this transition is the need for AI systems that can learn from historical data (supervised learning) and continuously adapt to detect new attacks using unsupervised and reinforcement learning approaches. Combining these methods will be key to staying ahead of attackers' evolving AI capabilities.
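One way to picture this combination: a supervised classifier covers known attack patterns while an unsupervised detector flags events unlike anything in the benign baseline. The sketch below uses synthetic data and a simple two-model split as an illustrative assumption, not a specific product architecture.

```python
# Minimal sketch: supervised model for known attacks, unsupervised model for novel ones.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X_known = rng.normal(size=(400, 4))
y_known = (X_known[:, 0] > 0.8).astype(int)  # stand-in labels for known attacks

supervised = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_known, y_known)
unsupervised = IsolationForest(random_state=0).fit(X_known[y_known == 0])  # benign baseline

def classify(event):
    event = event.reshape(1, -1)
    if supervised.predict(event)[0] == 1:
        return "known attack pattern"
    if unsupervised.predict(event)[0] == -1:
        return "novel anomaly - escalate for investigation"
    return "benign"

print(classify(np.array([2.0, 0.1, 0.0, 0.2])))   # resembles a labeled attack
print(classify(np.array([0.0, 9.0, -9.0, 9.0])))  # unlike anything in the baseline
```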
Overall, AI will be crucial to enabling defenders to expand their detection and response capabilities. Human expertise must remain tightly integrated to investigate complex threats, audit AI system output, and guide long-term defensive strategy. An optimized human-machine team model is ideal for the future.
As huge volumes of security data accumulate over time, organizations can apply AI analytics to this trove of telemetry to derive insights for proactive threat hunting and for hardening defenses. Continuous learning from previous incidents enables predictive modeling of new attack patterns. As AI capabilities advance, the role of small, specialized language models tailored to specific security use cases will grow. These models can help further reduce “alert fatigue” by precisely triaging the alerts most critical for human analysis. Autonomous response, powered by AI, can also expand to handle more Tier 1 security tasks.
However, human judgment and critical thinking will remain essential, particularly for high-severity incidents. Undoubtedly, the future is one of an optimized human-machine team, in which AI handles large-scale data processing and routine tasks, allowing human experts to focus on investigating complex threats and on high-level security strategy.