AI is far from mature, but there are already offensive and defensive uses of AI technology that cybersecurity professionals should monitor, according to a presentation today at the Gartner Security & Risk Management Summit in National Harbor, Maryland.
Jeremy D’Hoinne, Gartner Research vice president for AI and cybersecurity, told conference attendees that the large language models (LLMs) that have attracted so much attention are “not intelligent.” He cited an example in which ChatGPT was recently asked what the most serious CVEs (common vulnerabilities and exposures) of 2023 were – and the chatbot’s response was essentially nonsense.
Deepfakes: the main threat linked to AI
Despite the limited sophistication of LLM tools so far, D’Hoinne highlighted one area where AI threats should be taken seriously: deepfakes.
“Security leaders should treat deepfakes as an immediate area of focus because the attacks are real and there is no reliable detection technology yet,” D’Hoinne said.
Deepfakes are not as easy to defend against as more traditional phishing attacks, which can be combated through user training. Tighter business controls are essential, he said, such as approvals for spending and financial transactions.
He recommended stronger business workflows, a security behavior and culture program, biometric controls, and updated IT processes.
AI accelerates security patching
One potential use case for AI security noted by D’Hoinne is patch management. He cited data indicating that AI assistance could cut patching time in half by, among other tasks, prioritizing patches based on threat severity and likelihood of exploitation, and by verifying and updating code.
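The prioritization step can be illustrated with a minimal sketch. The Python example below is hypothetical: it assumes CVSS severity scores and EPSS-style exploitation probabilities as inputs, and the CVE entries and the scoring formula are illustrative placeholders, not a Gartner or vendor specification. It shows one plausible way an assistant might rank pending patches by combined risk rather than by severity alone.

```python
# Hypothetical sketch: rank pending patches by severity x exploitation likelihood.
# The entries and scoring formula below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Patch:
    cve_id: str
    cvss: float          # severity, 0.0-10.0 (CVSS base score)
    exploit_prob: float  # likelihood of exploitation, 0.0-1.0 (EPSS-style)

def risk_score(p: Patch) -> float:
    """Combine severity and exploitation likelihood into one sortable score."""
    return p.cvss * p.exploit_prob

# Placeholder backlog of pending patches.
pending = [
    Patch("CVE-2023-0001", cvss=9.8, exploit_prob=0.02),
    Patch("CVE-2023-0002", cvss=7.5, exploit_prob=0.89),
    Patch("CVE-2023-0003", cvss=5.3, exploit_prob=0.41),
]

# Highest combined risk first: the medium-severity, actively exploited CVE
# outranks the critical-but-rarely-exploited one.
for p in sorted(pending, key=risk_score, reverse=True):
    print(f"{p.cve_id}: score={risk_score(p):.2f}")
```

The point of the combined score is the reordering it produces: a critical CVE that is almost never exploited can rank below a moderate one under active attack, which is the kind of triage judgment D’Hoinne suggests AI assistance could speed up.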
Other areas where GenAI security tools could help include alert enrichment and summarization; interactive threat intelligence; attack surface and risk preview; security engineering automation; and mitigation support and documentation.
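As one illustration of the alert-enrichment idea, the hypothetical sketch below wraps a raw alert in a prompt asking a model for a summary and next steps. The `llm_complete` function is a stand-in for whatever model API a team actually uses, and the alert fields are invented for the example; neither reflects a specific product’s interface.

```python
# Hypothetical sketch of GenAI alert enrichment: wrap a raw SIEM alert in a
# prompt asking for a summary and suggested investigation steps.
import json

def llm_complete(prompt: str) -> str:
    # Placeholder for a real model API call; wire this to your LLM provider.
    raise NotImplementedError("Replace with a call to your LLM provider.")

def enrich_alert(alert: dict) -> str:
    prompt = (
        "You are a SOC analyst assistant. Summarize the following alert in "
        "two sentences, then list likely next investigation steps.\n\n"
        + json.dumps(alert, indent=2)
    )
    return llm_complete(prompt)

# Invented example alert for illustration only.
raw_alert = {
    "rule": "Multiple failed logins followed by success",
    "user": "svc-backup",
    "source_ip": "203.0.113.45",
    "count": 57,
}
# print(enrich_alert(raw_alert))  # uncomment once llm_complete is wired up
```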
AI Security Recommendations
“Generative AI will neither save nor ruin cybersecurity,” concluded D’Hoinne. “How cybersecurity programs adapt to it will determine its impact.”
Among his recommendations to attendees were “focusing on deepfakes and social engineering as pressing issues to solve” and “experimenting with AI assistants to augment, not replace, staff.” And results should be measured based on predefined metrics for the use case, “not ad hoc AI or productivity metrics.”
Stay tuned to The Cyber Express for more coverage this week from the Gartner Security Summit.