The UK’s National Cyber Security Centre (NCSC) warns that artificial intelligence tools are poised to fuel a new wave of cybercrime. According to its predictions, AI tools will enable hackers of all skill levels to “do” more, driving an increase in attacks in the short term.
Experienced hackers are getting smarter with AI
Building on their existing knowledge of AI and cybersecurity, experienced hackers are expected to use artificial intelligence across most of their criminal enterprises. Perhaps more worrying is the prediction that there will be increased activity in virtually all cybersecurity threat areas, especially social engineering, the development of new malware and data theft.
The NCSC also warns that well-resourced criminal gangs will be able to create their own AI models to generate malware capable of evading detection by current security filters. However, this requires access to high-quality operational data and existing malware samples to “train” the system, so these activities will likely be limited to major actors, such as nation-states engaged in cyberwarfare.
Beginner hackers learn about AI
One of the most useful aspects of generative AI and large language models (LLMs) like ChatGPT and DALL-E is that anyone can use them to produce good-quality content. However, the same applies to malicious use: virtually anyone can use these tools to create effective cybersecurity exploits.
The NCSC warning suggests that low-skilled hackers, opportunists and hacktivists could begin using AI tools to engage in cybercrime. Of particular concern is the use of AI for social engineering attacks designed to steal passwords and other sensitive personal data. Experts warn that tools like ChatGPT can generate convincing text for phishing emails, for example, allowing virtually anyone to run a moderately effective campaign for minimal cost.
It is at this low end of the scale that we will likely see the greatest increase in criminal activity between now and the end of 2025.
What about AI-related safeguards?
Most generative AI systems include safeguards to prevent users from generating malicious code or other harmful content. You cannot use ChatGPT to write a ransomware exploit, for example.
However, free and open-source artificial intelligence engines do exist, and groups of highly skilled, well-funded hackers have already built their own AI models without such safeguards. With access to the “right” training data, these models are more than capable of creating malware and more.
It is important to understand that AI alone will not cause a cybercrime apocalypse. The tools used by hackers are incapable of developing entirely new exploits. They can only use their training to refine and improve existing techniques. Most “AI-powered” attacks in the coming months will simply be updates to exploits we already encounter on a daily basis. Humans are still integral to identifying and creating new threats.
Be ready
There will likely be an increase in attacks over the next year, so it pays to be prepared. Download a Panda Dome free trial and ensure your devices are protected today against current and future threats.