When a major vulnerability shakes up the cybersecurity world – like the recent XZ backdoor or the Log4j flaws of 2021 – the first question most businesses ask is: “Are we affected?” In the absence of well-written playbooks, answering that simple question can take a lot of effort.
Microsoft and Google are investing heavily in generative artificial intelligence (GenAI) systems that can turn big security questions into concrete actions, assist security operations and, increasingly, take automated actions. Microsoft offers overloaded security operations centers Security Copilot, a GenAI-based service that can identify breaches, connect threat signals, and analyze data. Google’s Gemini in Security, meanwhile, is a set of security features powered by the company’s Gemini GenAI.
Startup Simbian joins the race with its new GenAI-based platform to help businesses manage their security operations. Simbian’s system combines large language models (LLMs) that summarize data and understand natural language, other machine learning models that connect disparate data points, and a software expert system built on security information gathered from the Internet.
While setting up a security information and event management (SIEM) system or a security orchestration, automation, and response (SOAR) system can take weeks or months, using AI reduces that time to, in some cases, seconds, says Ambuj Kumar, co-founder and CEO of Simbian.
“With Simbian, literally, these things are done in seconds,” he says. “You ask a question, you express your goal in natural language, we break down its execution into steps, and it all happens automatically. It’s autonomous.”
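The pattern Kumar describes – a natural-language goal broken into steps that then run automatically – can be sketched in miniature. The names below (`plan_steps`, the step handlers, and their canned results) are invented for illustration; Simbian's actual system is LLM-driven, not a hardcoded lookup like this.

```python
# Hypothetical sketch of the "goal -> steps -> automatic execution" flow.
# A real system would ask an LLM to produce the plan; here one case is
# hardcoded so the skeleton of the loop is visible.

def plan_steps(goal: str) -> list[str]:
    """Break a natural-language goal into ordered, executable steps."""
    if "xz" in goal.lower():
        return ["inventory_hosts", "check_xz_version", "report_exposure"]
    return []  # no plan for goals this toy planner doesn't recognize

def run(goal: str) -> list[str]:
    """Execute each planned step in order and collect its result."""
    handlers = {
        "inventory_hosts": lambda: "found 3 hosts running sshd",
        "check_xz_version": lambda: "1 host has a backdoored liblzma build",
        "report_exposure": lambda: "exposure report generated",
    }
    return [handlers[step]() for step in plan_steps(goal)]

results = run("Are we affected by the XZ backdoor?")
```

The point of the sketch is the division of labor: language understanding produces a plan, and ordinary automation executes it without further user input.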
Helping busy security analysts and responders streamline their work is a perfect application for GenAI’s more powerful capabilities, says Eric Doerr, vice president of engineering at Google Cloud.
“The opportunity in security is particularly significant given the scale of threats, the well-known talent gap among cybersecurity professionals, and the grind that is the status quo in most security teams,” Doerr explains. “Accelerating productivity and reducing the time it takes to detect, respond to, and contain or mitigate threats through the use of GenAI will allow security teams to catch up and defend their organizations more successfully.”
Different starting points, different advantages
Google’s advantages in the market are obvious. The IT and internet giant has the budget to stay the course, the technical expertise in machine learning and AI from its DeepMind projects to innovate, and access to extensive training data – an essential element for creating LLMs.
“We have a huge amount of proprietary data that we used to train a custom security LLM – SecLM – which is part of Gemini for Security,” says Doerr. “It’s the superset of 20 years of intelligence from Mandiant, VirusTotal and more, and we’re the only platform with an open API – part of Gemini for Security – that enables partners and enterprise customers to extend our security solutions and have a unique AI that can work across the entire enterprise context.”
Like Simbian’s guides, Gemini in Security Operations – a capability under the Gemini in Security umbrella – will participate in investigations starting at the end of April, guiding the security analyst and recommending actions within Chronicle Enterprise.
Simbian uses natural language queries to generate results; asking “Are we affected by the XZ vulnerability?” will produce a table of the IP addresses of vulnerable applications. The system also uses security knowledge collected from the Internet to create guides for security analysts, showing them a script of prompts to give the system to accomplish a specific task.
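The query-to-table flow described above can be illustrated with a toy example: the XZ question reduces to filtering an asset inventory for the backdoored liblzma versions (5.6.0 and 5.6.1, per the public advisories). The inventory data and function name below are invented for this sketch and are not Simbian's API.

```python
# Hypothetical illustration: answer "Are we affected by the XZ
# vulnerability?" by filtering a (made-up) asset inventory for the
# backdoored liblzma versions and printing a table of affected IPs.

BACKDOORED = {"5.6.0", "5.6.1"}  # liblzma versions carrying the XZ backdoor

inventory = [
    {"ip": "10.0.0.4", "app": "sshd",  "liblzma": "5.4.6"},
    {"ip": "10.0.0.7", "app": "sshd",  "liblzma": "5.6.1"},
    {"ip": "10.0.1.2", "app": "rsync", "liblzma": "5.6.0"},
]

def affected_by_xz(assets):
    """Return the inventory rows whose liblzma version is backdoored."""
    return [a for a in assets if a["liblzma"] in BACKDOORED]

for row in affected_by_xz(inventory):
    print(f'{row["ip"]:<10} {row["app"]:<6} liblzma {row["liblzma"]}')
```

In a real deployment, the hard part a GenAI layer takes on is translating the question into the right filter over the right data sources; the filter itself, as here, is ordinary code.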
“The guide is a way to personalize or create reliable content,” says Kumar of Simbian. “Right now we’re creating the guides, but once…people start using them, then they can create their own.”
Strong ROI claims for LLMs
Returns on investment will increase as businesses move from manual to assisted to autonomous operations. Most GenAI-based systems have progressed only to the assistant, or copilot, stage, where they suggest actions or perform a limited series of actions after obtaining the user’s permission.
The real return on investment will come later, says Kumar.
“What excites us about building is autonomous – autonomous means making decisions on your behalf that stay within the guidance you’ve given it,” he says.
Google’s Gemini also seems to straddle the line between an AI assistant and an automated engine. Financial services company Fiserv uses Gemini in its security operations to create detections and playbooks faster and with less effort, and to help security analysts quickly find answers using natural language search, thereby increasing the productivity of security teams, says Doerr.
Yet trust remains an issue and a barrier to increased automation, he says. To build trust in the system and solutions, Google remains focused on creating AI systems that are explainable and transparent in how they make decisions.
“When you use natural language input to create a new detection, we show you the syntax of the detection language and you choose to run it,” he explains. “This is part of the process of building trust and context with Gemini for Security.”