As people try to find uses for generative AI that go beyond creating fake photos and are actually useful, Google plans to direct AI toward cybersecurity and make threat reports easier to read.
In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will combine the threat work of its Mandiant cybersecurity unit and VirusTotal with the Gemini AI model.
The new product uses the Gemini 1.5 Pro large language model, which Google says reduces the time it takes to reverse engineer malware attacks. The company claims that Gemini 1.5 Pro, released in February, took just 34 seconds to analyze the code of the WannaCry virus – the 2017 ransomware attack that crippled hospitals, businesses, and other organizations around the world – and identify a kill switch. This is impressive but not surprising, given LLMs’ knack for reading and writing code.
But another possible use of Gemini in the threat space is to summarize threat reports in natural language within Threat Intelligence so that businesses can assess the impact of potential attacks on them – or, in other words, so that businesses do not overreact or underreact to threats.
Google says Threat Intelligence also has a vast network of information to monitor potential threats before an attack occurs. It allows users to get an overview of the cybersecurity landscape and prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups and consultants who work with businesses to block attacks. The VirusTotal community also regularly publishes threat indicators.
The company also plans to tap Mandiant experts to assess security vulnerabilities around AI projects. Through Google’s Secure AI Framework, Mandiant will test AI model defenses and contribute to red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can sometimes fall prey to malicious actors. These threats include “data poisoning,” in which attackers plant bad code in the data that AI models ingest so the models cannot respond to specific prompts.
Of course, Google isn’t the only company combining AI with cybersecurity. Microsoft launched Copilot for Security, which is powered by GPT-4 and Microsoft’s cybersecurity-specific AI model and allows cybersecurity professionals to ask questions about threats. Whether either actually constitutes a good use case for generative AI remains to be seen, but it’s nice to see it used for something other than photos of a swag pope.