AI-generated malware that evades detection may already be available to nation states, says the United Kingdom's cyber security agency.
To produce such powerful software, threat actors must train an AI model on "quality operational data," the National Cyber Security Centre (NCSC) said today. The resulting system would generate new code capable of evading current security measures.
“There is a realistic possibility that highly competent states will have malware repositories large enough to effectively train an AI model for this purpose,” the NCSC warned.
As for what a “realistic possibility” actually means, the agency’s “probability criterion” provides some clarity.
The warning was part of a wave of alarm bells sounded by the NCSC. The agency expects AI to increase the global ransomware threat, improve victim targeting, and lower entry barriers for cybercriminals.
Generative AI also amplifies these threats. It is particularly useful for social engineering techniques, such as holding convincing interactions with victims and creating lure documents.
GenAI will make it more difficult to identify phishing, identity theft, and malicious password or email reset requests. But nation states will have the most powerful weapons.
“Highly capable state actors are certainly best positioned among cyber threat actors to harness the potential of AI in advanced cyber operations,” the agency said.
In the short term, however, artificial intelligence is expected to augment existing threats rather than transform the risk landscape. Experts are particularly concerned that it could worsen the global ransomware threat.
“Ransomware continues to pose a threat to national security,” James Babbage, director general of threats at the National Crime Agency, said in a statement.
"As this report shows, the threat is likely to increase in the coming years due to advances in AI and the exploitation of this technology by cybercriminals."