In recent years, the cybersecurity landscape has undergone significant transformation, notably with the adoption of artificial intelligence (AI) and automation by both defenders and attackers. The ReliaQuest Annual Cyber Threat Report: 2024 highlights an alarming trend: threat actors are increasingly leveraging AI and automation to improve the efficiency and effectiveness of their attacks. This includes the development of malicious counterparts of AI models such as ChatGPT, which are used to generate malware, carry out denial-of-service (DoS) attacks, and even write HTML code for phishing pages.
The implications of these developments are profound. Using AI, malicious actors can automate different stages of their attack chains, such as the exploitation of vulnerabilities like Citrix Bleed (CVE-2023-4966), thereby increasing the speed and scale of attacks. AI-based models named WormGPT and FraudGPT, for example, can automate tasks that previously required significant human effort and technical knowledge.
Growing extortion and sophisticated attacks
The report also highlights the sharp increase in extortion activities, with a record number of entities named on extortion data leak websites. The use of “double extortion” tactics, in which attackers not only encrypt an organization’s data but also threaten to make it public if the ransom is not paid, continues to grow. Notably, the LockBit ransomware group set a new record by naming over a thousand entities in one year.
Additionally, the proliferation of AI tools among cybercriminals enhances their ability to carry out sophisticated social engineering attacks. Phishing remains a dominant method of gaining initial access to networks, with advances in AI enabling the creation of more convincing phishing lures and scenarios.
The ReliaQuest report explains: “GenAI also has the potential to automate spearphishing tactics used in BEC. Machine learning algorithms can analyze large amounts of personal information available online to create personalized victim profiles. By ‘learning’ a target’s preferences, relationships, and activities, AI systems can create highly deceptive emails.”
Proactive defense strategies
In response to these evolving threats, organizations are encouraged to integrate AI and machine learning into their cybersecurity strategies for a more proactive defense. By leveraging AI, organizations can improve their detection capabilities, automate responses to security incidents, and conduct more comprehensive behavioral analytics to identify suspicious activity early.
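As a small-scale illustration of what AI-assisted behavioral analytics can look like, the sketch below uses scikit-learn's IsolationForest to flag anomalous login events. The feature set, sample values, and contamination setting are assumptions chosen for the example, not techniques taken from the report.

```python
# Minimal anomaly-detection sketch: flag unusual login behavior from
# simple per-event features. Illustrative only; the features and values
# below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, failed_attempts, mb_downloaded]
baseline_events = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 20],
    [15, 1, 10], [16, 0, 18], [9, 0, 11], [13, 0, 9],
])

# Fit on activity considered normal, then score new events.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_events)

new_events = np.array([
    [10, 0, 14],   # typical working-hours login
    [3, 6, 950],   # off-hours login, many failures, large download
])

for event, label in zip(new_events, model.predict(new_events)):
    status = "suspicious" if label == -1 else "normal"
    print(event, "->", status)
```

In practice, a model like this would be trained on far richer telemetry and paired with automated response playbooks, but the principle is the same: learn a baseline of normal behavior and surface deviations early.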
To effectively counter the threats discussed in the report, ReliaQuest advises security defenders to adopt a layered defense strategy. This includes strengthening email security, implementing robust removable media policies, and securing public assets through rigorous testing and patching. Additionally, it is crucial to adopt advanced detection and response technologies that can identify and mitigate AI-based attacks.
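On the email security front, one building block of such a layered defense is an ML-assisted triage step that scores inbound messages for phishing likelihood before they reach analysts. The sketch below, built on scikit-learn with toy training data, is an illustrative assumption rather than a method described in the report.

```python
# Sketch of an ML-assisted email triage step: a simple text classifier
# that scores messages for phishing likelihood. Training samples and
# labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please review before Friday",
    "Team lunch moved to noon tomorrow",
    "Urgent: verify your account now or it will be suspended",
    "Password reset required immediately, click this link",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = phishing (toy labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

incoming = ["Immediate action required: confirm your credentials"]
probability = classifier.predict_proba(incoming)[0][1]
print(f"Phishing probability: {probability:.2f}")  # route high scores to analysts
```

A real deployment would rely on much larger labeled datasets and additional signals (sender reputation, URL analysis, attachment sandboxing), with the classifier acting as one layer among several rather than a standalone gate.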
The dual use of AI in cybersecurity
The dual use of AI in cybersecurity presents both opportunities and challenges.
While AI can significantly improve an organization’s ability to defend against attacks, it also allows attackers to execute more sophisticated and automated attacks. The ongoing arms race between cyber defenders and attackers highlights the need to continually innovate cybersecurity strategies and adopt advanced technologies to keep pace with the evolving threat landscape.
By staying informed about AI-based cyber threats and proactively integrating AI into cybersecurity practices, organizations can better prepare to face these emerging challenges. The key to success lies in using AI not only for defense, but also to better understand the tactics, techniques, and procedures of threat actors, enabling a more informed and effective response to cyber threats.