While there is hope that AI will help cybersecurity professionals overcome challenges such as sophisticated attackers, labor shortages, and an ever-changing threat landscape, it is equally important to keep tabs on how bad actors are using AI.
In a recent paper, researchers at the University of Illinois Urbana-Champaign reported that OpenAI’s GPT-4 was capable of exploiting vulnerabilities in real-world systems when given a Common Vulnerabilities and Exposures (CVE) advisory describing the flaw. CVE is the most widely used format for describing known vulnerabilities in databases such as the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD).
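To make that concrete, here is a minimal Python sketch of how a CVE record and its description can be pulled from the NVD. It uses NVD’s public CVE API 2.0; the JSON field names follow NVD’s published schema, but verify them against the API documentation before relying on this.

```python
# Minimal sketch: fetch a CVE record from the NIST NVD CVE API 2.0.
# Field names reflect the API's documented JSON schema; confirm against
# https://nvd.nist.gov/developers/vulnerabilities before relying on them.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_description(cve_id: str) -> str:
    """Return the English description for a given CVE identifier."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return f"No record found for {cve_id}"
    descriptions = vulns[0]["cve"]["descriptions"]
    return next(d["value"] for d in descriptions if d["lang"] == "en")

if __name__ == "__main__":
    # Log4Shell, a well-known CVE, used purely as a lookup example.
    print(fetch_cve_description("CVE-2021-44228"))
```

Description text like this is exactly the kind of input the researchers supplied to the model.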
In this analysis, I will examine the findings of the researchers’ report and explore the urgent need for organizations to strengthen their defenses against these evolving risks.
Vulnerability Testing Information
To conduct their study, the researchers used published vulnerability advisories for which no patches were yet available. By providing the CVE descriptions to GPT-4, they were able to have it autonomously and quickly develop working exploits, with a success rate of 87%. The researchers noted that withholding the CVE descriptions from the Large Language Model (LLM) agent reduced its success rate to just 7%, but the reality is that vulnerability descriptions are generally available in widely used vulnerability databases, to which malicious actors have the same access as everyone else.
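The paper’s agent harness is not reproduced here, but the core pattern it highlights, handing a CVE description to an LLM as context and letting it reason from there, is essentially a single API call. The sketch below applies that same pattern defensively, asking the model to triage a CVE rather than exploit it; the prompt wording and the `gpt-4` model name are illustrative assumptions, not the researchers’ setup.

```python
# Minimal sketch of the pattern the study highlights: hand a CVE description
# to an LLM as context. Shown here for defensive triage rather than exploit
# generation; the prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_cve(cve_id: str, description: str) -> str:
    """Ask the model for a defender-oriented impact summary of a CVE."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a security analyst helping defenders "
                        "prioritize patching."},
            {"role": "user",
             "content": f"Summarize the impact of {cve_id} and suggest "
                        f"mitigations:\n\n{description}"},
        ],
    )
    return response.choices[0].message.content
```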
For the sample, they chose a diverse set of vulnerabilities, including those affecting websites, containers, and Python packages. The test set consisted of 15 known vulnerabilities. Interestingly, the researchers found that 11 of the vulnerabilities in the sample had been published after the release of GPT-4, meaning the model could not have seen any data about them during its training and development, which makes the results even more impressive and concerning.
To highlight the economic utility of AI compared to traditional human testers, the researchers also pointed out that the cost for the LLM agent was approximately $8.08 per exploit, significantly more affordable than human labor. This demonstrates not only the speed and capability of these tools but also an economic efficiency that will further encourage cybercrime organizations to automate their activities with technologies such as AI.
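For a rough sense of scale, the comparison below pairs the reported per-exploit figure with a hypothetical human penetration-testing rate; the hourly rate and effort estimate are illustrative assumptions, not numbers from the paper.

```python
# Back-of-the-envelope comparison using the study's ~$8.08-per-exploit figure.
# The human rate and hours below are hypothetical assumptions for
# illustration, not numbers from the paper.
llm_cost_per_exploit = 8.08      # reported LLM agent cost (USD)
human_hourly_rate = 50.0         # hypothetical penetration-tester rate (USD)
human_hours_per_exploit = 8.0    # hypothetical effort per exploit

human_cost = human_hourly_rate * human_hours_per_exploit  # $400.00
print(f"LLM agent: ${llm_cost_per_exploit:.2f} per exploit")
print(f"Human (assumed): ${human_cost:.2f} per exploit "
      f"(~{human_cost / llm_cost_per_exploit:.0f}x more expensive)")
```

Even if the assumed human figures are off by a wide margin, the gap remains large enough to matter economically.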
While skeptics and professionals alike have pointed out that the vulnerabilities exploited were relatively simple, this is nonetheless indicative of the future potential of GenAI and LLM tools to accelerate the exploitation of vulnerabilities by malicious actors.
This is even more worrying for other reasons. The number of CVEs in databases like the NVD has grown exponentially year over year, surpassing 200,000 known vulnerabilities in total, with more than 20,000 published in 2023 alone. This comes at a time when organizations are desperately struggling to keep up with the growing pace and volume of vulnerabilities, with backlogs running into the hundreds of thousands or even millions in large, complex environments.
Malicious actors aren’t just targeting new, unpatched vulnerabilities; “vintage vulnerabilities” remain a key target for attackers. These are known vulnerabilities, with patches available in most cases, that simply have not been addressed as organizations struggle to keep up with growing vulnerability backlogs and to determine which vulnerabilities should be prioritized and remediated immediately and which can afford to wait.
That’s why we’re seeing the rise of vulnerability intelligence resources like the Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog and the Exploit Prediction Scoring System (EPSS). These resources aim to help organizations prioritize vulnerabilities that are known to be actively exploited or likely to be exploited in the near future.
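Both resources are available as public feeds, which makes this kind of prioritization straightforward to automate. Below is a minimal Python sketch combining the two; the endpoint URLs and JSON field names reflect the feeds as publicly documented, but confirm them before any production use.

```python
# Minimal sketch: prioritize CVEs using two public feeds, CISA's KEV catalog
# and FIRST's EPSS API. URLs and field names reflect the feeds as publicly
# documented; verify them before production use.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
EPSS_URL = "https://api.first.org/data/v1/epss"

def load_kev_ids() -> set[str]:
    """Return the set of CVE IDs known to be actively exploited."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def epss_score(cve_id: str) -> float:
    """Return the EPSS exploitation-probability score (0.0 if unscored)."""
    data = requests.get(EPSS_URL, params={"cve": cve_id}, timeout=30).json()
    rows = data.get("data", [])
    return float(rows[0]["epss"]) if rows else 0.0

def prioritize(cve_ids: list[str]) -> list[tuple[str, bool, float]]:
    """Sort CVEs: actively exploited (KEV) first, then by EPSS score."""
    kev = load_kev_ids()
    scored = [(cve, cve in kev, epss_score(cve)) for cve in cve_ids]
    return sorted(scored, key=lambda t: (t[1], t[2]), reverse=True)

if __name__ == "__main__":
    # Log4Shell plus a hypothetical placeholder ID, purely for illustration.
    for cve, in_kev, score in prioritize(["CVE-2021-44228", "CVE-2023-12345"]):
        print(f"{cve}: KEV={in_kev}, EPSS={score:.3f}")
```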
That being said, attackers continue to take advantage of the chaos. In fact, Mandiant’s M-Trends 2024 report found that vulnerability exploitation was on the rise, accounting for 38% of identified compromises, up from 32% the previous year, while other initial access vectors such as phishing declined.
Final Thoughts
As the attack surface continues to grow and organizations struggle to keep pace with vulnerabilities and reduce risk, attackers are exploring and developing their skills in emerging technologies such as AI to accelerate their rate of exploitation and their impact. This highlights the importance of defenders and organizations doing the same, developing AI skills and leveraging the same technologies to get ahead of bad actors and reduce vulnerabilities and risks.
The Q1 2024 AI Ecosystem Report compiles the innovations, funding, and products highlighted in this quarter’s AI Ecosystem reports. Download it now for insights into the companies, investments, innovations, and solutions shaping the future of AI.