By 2025, the cybersecurity landscape will become more complex, and new challenges will emerge as quickly as the technologies that underpin them. Brad Jones, CISO at Snowflake, shares his views on how AI will shape security in the coming year.
Generative AI takes center stage as part of the corporate security team.
Although there is a lot of talk about the potential security risks introduced by generative AI, and for good reason, it already has real, beneficial applications that often go unmentioned. As AI tools become more versatile and precise, security assistants will become an important part of the SOC, alleviating the perpetual labor shortage. The benefit of AI will be to summarize incidents at a higher level: rather than an alert that forces analysts to comb through all the logs to connect the dots, they will get a high-level summary that is both human-readable and actionable.
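As a rough illustration of that shift, here is a minimal sketch of how such an assistant might condense correlated alerts into one actionable summary. The Alert record, the log contents, and the stubbed summarize_incident call are all hypothetical, not a description of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical alert record; a real SOC would pull these from its SIEM.
@dataclass
class Alert:
    timestamp: datetime
    source: str
    rule: str
    detail: str

def build_summary_prompt(alerts: list[Alert]) -> str:
    """Condense correlated alerts into a prompt asking for an actionable incident summary."""
    lines = [f"{a.timestamp:%H:%M} [{a.source}] {a.rule}: {a.detail}" for a in alerts]
    return (
        "Summarize the following correlated security alerts as a single incident.\n"
        "Describe the likely attack narrative and recommend a next action.\n\n"
        + "\n".join(lines)
    )

def summarize_incident(alerts: list[Alert]) -> str:
    """Stubbed model call; swap in whichever LLM the organization has approved."""
    prompt = build_summary_prompt(alerts)
    # return approved_llm.complete(prompt)  # hypothetical client, shown for shape only
    return f"(summary of {len(alerts)} correlated alerts would be returned here)"

alerts = [
    Alert(datetime(2025, 1, 6, 9, 12), "edr", "credential_dump", "lsass memory access on WS-114"),
    Alert(datetime(2025, 1, 6, 9, 14), "proxy", "rare_domain", "WS-114 beaconing to unregistered domain"),
]
print(summarize_incident(alerts))
```

The point is the shape of the workflow: the analyst sees one narrative with a recommended action instead of two disconnected alerts.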
Of course, we must keep in mind that these opportunities are still limited in context and scope. We must ensure that these AI tools are trained on an organization's policies, standards and certifications. Done properly, they can be very effective at helping security teams with routine tasks. If organizations haven't already taken note, their security teams soon will, as they look to ease the workload on understaffed departments.
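One common way to achieve that grounding, rather than retraining a model outright, is to retrieve the relevant internal documents before the assistant answers. The sketch below uses naive keyword overlap purely for illustration; a production system would use an approved embedding and retrieval stack.

```python
def retrieve_relevant(question: str, policies: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank policy documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(
        policies.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    # Return the names of the best-matching documents to feed to the assistant.
    return [name for name, _ in scored[:top_n]]

# Illustrative policy snippets; real ones would come from the organization's repository.
policies = {
    "data_handling.txt": "customer data must not leave approved systems",
    "vendor_review.txt": "third party tools require a security review before use",
}
print(retrieve_relevant("can I paste customer data into a chatbot", policies))
```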
AI models themselves are the next target for AI-centric attacks.
Last year, there was a lot of talk about cybersecurity attacks at the container level – developers' least secure playgrounds. Now, attackers are moving one level closer to the machine learning infrastructure. I predict that we will start to see patterns such as attackers injecting themselves into different parts of the pipeline so that AI models provide incorrect answers or, worse, reveal the information and data on which they were trained. There are real cybersecurity concerns about bad actors poisoning large language models with vulnerabilities that can later be exploited.
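Defending against this kind of pipeline tampering starts with basic supply-chain hygiene for training artifacts. The sketch below shows one such control, pinning and verifying checksums before anything enters the pipeline; the manifest format is an assumption for illustration, and mature MLOps stacks offer comparable controls natively.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: str) -> None:
    """Refuse to train if any artifact's hash no longer matches its pinned value."""
    # Assumed manifest shape: {"artifacts": [{"path": "...", "sha256": "..."}]}
    manifest = json.loads(Path(manifest_path).read_text())
    for entry in manifest["artifacts"]:
        actual = sha256_of(Path(entry["path"]))
        if actual != entry["sha256"]:
            raise RuntimeError(f"tampering suspected: {entry['path']} hash mismatch")
```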
Although AI will bring new attack vectors and defensive techniques, the field of cybersecurity will rise to the occasion, as always. Organizations must establish a rigorous and formal approach to how advanced AI is operationalized. The technology may be new, but the fundamental concerns – data loss, reputational risk and legal liability – are well understood and the risks will be addressed.
Concerns about data exposure through AI are overblown.
People who put proprietary data into large language models to answer questions or help write an email pose no more risk than someone who pastes it into a Google search or a support form. From a data loss perspective, the use of AI does not necessarily pose a new and differentiated threat. Ultimately, this is a risk created by human users when they take data not intended for public consumption and put it into public tools.
This is not to say that organizations should not be concerned. This is increasingly a shadow IT problem, and organizations will need to increase monitoring of unapproved use of generative AI technology to protect against leaks.
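A first step in that monitoring is often straightforward: flag traffic to known generative AI services that are not on the approved list. The sketch below assumes simplified proxy logs of "user domain" pairs; the watchlist contents and log format are illustrative, not a recommended configuration.

```python
# Illustrative watchlist of generative AI services; a security team would maintain its own.
GENAI_WATCHLIST = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_unapproved_ai_use(log_lines: list[str], approved: set[str]) -> list[tuple[str, str]]:
    """Return (user, domain) pairs hitting generative AI services not on the approved list."""
    hits = []
    for line in log_lines:
        user, domain = line.split()
        if domain in GENAI_WATCHLIST and domain not in approved:
            hits.append((user, domain))
    return hits

# Simplified proxy log entries: one "user domain" pair per line.
logs = ["alice chat.openai.com", "bob internal.corp.example"]
print(flag_unapproved_ai_use(logs, approved={"claude.ai"}))
```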