Artificial Intelligence and Machine Learning, Next Generation Technologies and Secure Development, Security Operations
Open Questions: What’s the next killer use case? Can the results be better validated?
What is artificial intelligence used for today and what could be its next big use cases in cybersecurity?
See also: AI and ML: Ushering in a New Era of Networking and Security
The theme of AI reality versus hype dominated the closing panel discussion Thursday at the annual Black Hat Europe conference in London. The panel included conference founder Jeff Moss and members of the conference's review board, who helped evaluate the hundreds of submissions received this year and approved approximately 50 of them.
Review board members said AI-themed submissions of varying quality dominated this year. Some made the cut. Others included “talks about AI written by AI, so we had to reject them,” said Vandana Verma, who also serves on the OWASP board. The poor quality of AI-generated submissions made them easy to spot.
Accepted submissions underpin briefings on everything from vulnerability discovery and network security to privacy and incident response. The conference also included an “AI, ML and Data Science” track comprising eight sessions. These covered a range of topics, including transforming a generative AI agent based on the application it was intended to serve; the difficulty of obtaining information extracted from a training set; privacy issues; tools that let developers use LLM code helpers without exposing sensitive information; and a large financial institution detailing its use of data analytics to streamline certain business processes (see: Black Hat Europe 2024 London Preview: 20 Hot Sessions).
Cybersecurity: Powered by Buzzwords
Given the interest in the topic, might it be time to launch a “Black Hat: AI” conference? Moss said he heard this question recently and responded by noting that previous topics of considerable interest and focus, such as mobile and later cloud, seemed to dominate for a little while before becoming “somehow integrated”.
He predicts the same thing will happen with AI. “Everyone at the show has AI in their product, but they’ll look weird in four years saying ‘now with AI,’ right?” he said.
Verma said hot stand-alone topics of recent years, including zero trust and supply chain vulnerabilities, now feature in relatively few of the briefings on offer. For every new buzzword, vendors claim to be “powered by” it, even though what that means isn’t necessarily clear.
From a research perspective, “the vast majority of the talks we selected were not about applying an LLM to something, because they will become part of the tools we have, but they won’t replace all the tools we have,” said Stefano Zanero, professor of cybersecurity at Politecnico di Milano.
Thematically, “a lot of the discussions around AI are more general discussions, in the sense that if you attack an LLM to do a prompt injection, you’re just exploiting a product,” said James Forshaw, a security researcher in Google’s Project Zero.
Killer Use Cases: What Next?
Since ChatGPT launched for public use in November 2022, large language models appear to have captured the public imagination. The dominant use case so far treats them as prediction engines that work as a “super autocomplete,” akin to a Microsoft Clippy version 2.0, Moss said.
From a business perspective, advances in AI will “make these predictions faster and faster, less and less expensive,” he said. As a result, “if I were in security, I would try to make all my problems prediction problems,” so that they could be solved using prediction engines.
The exact nature of these prediction problems remains an open question, although Zanero said other good use cases include code analysis and extracting information from unstructured text – for example, log analysis for cyber threat intelligence purposes.
“So it speeds up your investigation, but you still have to verify it,” Moss said.
“The verification part eludes most students,” Zanero said. “I say this from experience.”
One of the challenges of verification is that AI often functions as a very complex black box API, and users must adapt their prompt to get the appropriate result, he said. The problem: This approach only works well when you know what the correct answer should be and can thus validate what the machine learning model is doing.
“The real problem in all machine learning, not just LLMs, is what happens if you don’t know the answer and you try to get the model to give you knowledge you didn’t already have,” Zanero said. “This is an area of extensive research.”
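To make the verification point concrete, here is a minimal sketch, not drawn from any conference session, of treating a model as a black-box extractor and then checking its output against the raw data before acting on it. The llm_extract_ips function is a hypothetical stand-in for whatever model or API would actually be called; everything else uses only the Python standard library.

```python
# Hypothetical sketch: let a model propose indicators from unstructured logs,
# then verify every candidate against the original data before trusting it.
import ipaddress
import re

LOG = """
Dec 12 10:14:02 fw1 DROP src=203.0.113.45 dst=10.0.0.8 dpt=445
Dec 12 10:14:05 fw1 ACCEPT src=198.51.100.7 dst=10.0.0.8 dpt=443
"""

def llm_extract_ips(log_text: str) -> list[str]:
    # Stand-in for an LLM call that "extracts" source IPs from the log.
    # A real model could hallucinate entries, as the last value simulates here.
    return ["203.0.113.45", "198.51.100.7", "203.0.113.99"]

def verify(candidates: list[str], log_text: str) -> list[str]:
    # Keep only values that are valid IP addresses and literally appear in the log.
    confirmed = []
    for ip in candidates:
        try:
            ipaddress.ip_address(ip)
        except ValueError:
            continue
        if re.search(rf"\b{re.escape(ip)}\b", log_text):
            confirmed.append(ip)
    return confirmed

print(verify(llm_extract_ips(LOG), LOG))  # ['203.0.113.45', '198.51.100.7']
```

The check is deliberately trivial; the point is the shape of the workflow the panel described: the model speeds up triage, but a deterministic step confirms every claim before it reaches an analyst or a blocklist.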
Another AI challenge can be the age of the training data. Moss offered an example of writing Python code, citing several cases of people using AI generation to quickly produce working Python code, only to discover that what was generated could be six years or more out of date with current practices, because the model was trained on older data. So while the generated code may work well as a proof of concept, “it’s not modern,” and putting it somewhere exposed to the internet could have security implications, he said.
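As a purely illustrative example, not taken from any Black Hat talk, the snippet below contrasts the kind of dated pattern a model trained on older material might suggest with a current standard-library equivalent:

```python
import hashlib
import secrets

# Dated pattern, common in older tutorials a model may have been trained on:
# unsalted MD5 is fast, collision-prone and unsuitable for password storage.
def hash_password_dated(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Current practice: a salted, memory-hard key derivation function
# (hashlib.scrypt is available in Python 3.6+ with OpenSSL 1.1+).
def hash_password_modern(password: str) -> str:
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + ":" + digest.hex()

print(hash_password_dated("hunter2"))   # runs fine as a proof of concept, but "not modern"
print(hash_password_modern("hunter2"))
```

Both functions run, which is exactly the trap Moss described: code that works as a demo can still embed practices the rest of the industry abandoned years ago.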
For Now: An Augmentation Tool
An audience member asked the panel this question: Will AI replace cybersecurity jobs, like front-line analysts in security operations centers?
Zanero referenced an apocryphal meme attributed to Louis C.K., which says that “if you think that an immigrant who has no knowledge of the language, no connections, no degree, is going to steal your job, then maybe it’s you who sucks.”
Expect AI to be used not to replace jobs but to augment them. “It’s the same thing we’ve seen with driving: autonomous driving doesn’t work and won’t work in the near future, but assistance that makes human driving safer and easier is already here,” Zanero said. “The same thing is going to happen, at least for the foreseeable future, with AI.”