AI is not new to cybersecurity (most automated security tools rely on AI and ML to some extent), but generative AI has everyone talking, and worrying.
If cybersecurity professionals have not yet addressed the security implications around generative AI, they are already behind the times.
“The train has already left the station,” said SlashNext CEO Patrick Harr in a conversation at the RSA 2024 conference in San Francisco.
AI-generated threats have already impacted three-quarters of organizations, yet 60% admitted they were unprepared to handle AI-based attacks, according to a study led by Darktrace.
AI-powered cyberattacks expose gaps in cybersecurity talent availability. Organizations are already concerned about skills shortages, especially in areas such as cloud computing, Zero Trust implementation, and AI/ML capabilities.
With the growing threat of AI, cybersecurity teams no longer have the luxury of waiting a few years to fill these talent gaps, Clar Rosso, CEO of ISC2, told an RSAC audience.
Currently, 41% of cybersecurity professionals have little or no experience securing AI, and 21% said they don’t know enough about AI to alleviate their concerns, according to ISC2 research.
It is no wonder, then, that these same professionals expect AI to be the biggest challenge facing the industry by 2025.
Why the security industry is not yet ready
Businesses have been using AI to detect cyber threats for years. But what’s changing the game is generative AI.
For the first time, thinking about AI goes beyond the corporate network and the threat actor; it now includes the customer.
As organizations rely on AI to engage with consumers through tools like chatbots, security teams must rethink threat detection and incident response around interactions between AI systems and third-party end users.
The problem lies in governance around generative AI. Cybersecurity teams, and organizations in general, don't have a clear picture of what data AI models are being trained on, who has access to that training data, and how AI fits into their compliance obligations.
In the past, if a third party asked for company information that might be deemed sensitive, no one would hand it over; doing so would have been treated as a security risk. Now that same information is fed into the AI's response model, yet who is responsible for governing it remains undefined.
While cybersecurity teams focus on how to thwart malicious actors, they overlook the risks associated with the data they voluntarily share.
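To make this governance gap concrete, here is a minimal, hypothetical sketch of the kind of guardrail it implies: screening text for obviously sensitive patterns before it is allowed into a customer-facing generative AI pipeline. The patterns and function names are illustrative assumptions, not anyone's actual implementation; a real deployment would rely on a proper data loss prevention or classification service.

```python
import re

# Hypothetical illustration: screen text for obviously sensitive patterns
# before it is fed into a generative AI response pipeline.
# Real deployments would use a dedicated DLP/classification service.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),          # long opaque tokens
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN format
    "internal_host": re.compile(r"\b[\w-]+\.corp\.internal\b"),  # internal hostnames
}

def screen_for_ai_ingestion(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block text matching any sensitive pattern."""
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(text)]
    return (len(reasons) == 0, reasons)

allowed, reasons = screen_for_ai_ingestion(
    "Our staging server is build01.corp.internal"
)
print(allowed, reasons)  # False ['internal_host']
```

Even a toy filter like this raises the governance questions Carignan points to below: someone has to own the pattern list, decide who can change it, and audit what slipped through.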
“From a security perspective, to adopt the technology safely, we need to understand what the ML model is, how it is connected to the data, whether it is pre-trained, whether it is continually learning, and how it determines importance,” said Nicole Carignan, vice president of strategic cyber AI at Darktrace, in a conversation at RSAC.
Develop security team expertise
It’s important to remember that generative AI is just one type of AI and, yes, its use cases are limited. Knowing where AI tools are effective will help security teams begin to develop the skills and tools needed to address the AI threat landscape.
However, organizations must be realistic. The skills gap won’t shrink like magic in two or five years simply because the need is there.
While security teams build the skills they need, managed service providers (MSPs) can step in. The benefit of using an MSP to manage AI security is visibility beyond a single organization's network: MSPs can observe how AI threats are handled across many environments.
But organizations will still want to train their internal AI systems. In this situation, it’s best for the security team to start in a sandbox using synthetic data, said Narayana Pappu, CEO of Zendata. This will allow security practitioners to test their AI systems with secure data.
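Pappu did not describe a specific implementation, but a minimal sketch of the sandbox-with-synthetic-data idea might look like the following. The field names and the toy risk-scoring function are assumptions for illustration; the point is simply that the AI system under test only ever sees fabricated records, never production or customer data.

```python
import random
import string

# Hypothetical sketch of "start in a sandbox using synthetic data":
# generate fake records so an AI detection system can be exercised
# without exposing real logs or customer information.
def synthetic_login_event() -> dict:
    """Produce one fabricated login event (all values are made up)."""
    return {
        "user": "user_" + "".join(random.choices(string.ascii_lowercase, k=6)),
        "src_ip": ".".join(str(random.randint(1, 254)) for _ in range(4)),
        "failed_attempts": random.randint(0, 10),
        "off_hours": random.random() < 0.3,
    }

def toy_risk_score(event: dict) -> float:
    """Placeholder for the AI system under test; not a real model."""
    return min(1.0, 0.1 * event["failed_attempts"] + (0.4 if event["off_hours"] else 0.0))

# Exercise the placeholder model against a synthetic batch in the sandbox.
events = [synthetic_login_event() for _ in range(100)]
flagged = [e for e in events if toy_risk_score(e) > 0.7]
print(f"{len(flagged)} of {len(events)} synthetic events flagged for review")
```

Once practitioners are comfortable with how the system behaves on synthetic input, the same harness can be pointed at carefully scoped real data under whatever governance controls the organization has put in place.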
Regardless of in-house skills, AI threat management will ultimately depend on how AI is used in security toolkits. Security professionals will need to leverage AI to implement basic security hygiene practices and add layers of governance to ensure compliance regulations are met.
“We still have a lot to learn about AI. It’s our job to educate ourselves,” Rosso said.