AI was a major theme at the Gartner Security and Risk Management Summit this week in National Harbor, Maryland, and the consensus was that while large language models (LLMs) have so far over-promised and under-delivered, there are still AI threats and defensive use cases that cybersecurity professionals need to be aware of.
Jeremy D’Hoinne, Gartner Research vice president for AI and cybersecurity, told conference attendees that hackers’ use of AI so far includes improved phishing and social engineering, with deepfakes a particular concern.
But D’Hoinne and director analyst Kevin Schmidt agreed during a joint panel that AI has produced no new attack techniques yet, just improvements to existing ones such as Business Email Compromise (BEC) or voice scams.
AI security tools also remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application thus far, potentially capable of assisting with patching, mitigation, alerting, and interactive threat intelligence. D’Hoinne cautioned that these tools should supplement security personnel rather than replace them, so that staff do not lose their ability to think critically.
AI Prompt Engineering for Cybersecurity: Accuracy Matters
The use of AI assistants and LLMs for cybersecurity use cases was the subject of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70%-80% of the way to your goal. The results must be validated, and junior staff should be supervised by senior managers, who will be able to judge the significance of a result more quickly. Schmidt also cautioned that chatbots like ChatGPT should only be used with non-critical data.
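That workflow, draft with the model and then verify with a person, can be made explicit in tooling. Here is a minimal sketch in Python, where ask_llm() is a hypothetical stand-in for whatever LLM API a team actually uses:

```python
# Minimal sketch of a human-review gate for LLM-assisted security work.
# ask_llm() is a hypothetical stand-in, not a real client library.

from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    answer: str
    reviewed: bool = False  # a senior analyst must flip this before use

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM API call here.
    return "DRAFT ANSWER (validate before acting): " + prompt[:40]

def draft_answer(prompt: str) -> Draft:
    # Treat the model's answer as a 70%-80% starting point, never final.
    return Draft(prompt=prompt, answer=ask_llm(prompt))

def approve(draft: Draft, reviewer: str) -> Draft:
    # Record that a human validated the output before it is used.
    print(f"approved by {reviewer}")
    draft.reviewed = True
    return draft

draft = draft_answer("Summarize today's critical alerts for the SOC")
assert not draft.reviewed  # never act on an unreviewed draft
draft = approve(draft, reviewer="senior analyst")
```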
Schmidt gave examples of good and bad AI security prompts to help security operations teams.
One ineffective prompt: “Create a query in my [SIEM tool] …”
He gave an example of a better way to create a SIEM query, one that spells out exactly what to detect and how the output will be used: “Create a detection rule in [SIEM tool] …”
A prompt at that level of detail should produce a far more targeted result.
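Prompts this specific are worth templating so that the time window, data source, and intended audience are never left implicit. A minimal sketch in Python, where every name is an illustrative placeholder rather than a real product API:

```python
# Sketch: assemble a specific SIEM detection prompt from explicit parameters,
# following Schmidt's advice that vague prompts produce vague rules.
# All names here are illustrative placeholders, not a real product API.

def build_detection_prompt(siem: str, behavior: str, log_source: str,
                           window_hours: int, audience: str) -> str:
    return (
        f"Create a detection rule in {siem} that identifies {behavior} "
        f"in {log_source} over the last {window_hours} hours. "
        f"Explain the rule logic and list any fields it depends on. "
        f"The output will be reviewed by {audience} before deployment."
    )

prompt = build_detection_prompt(
    siem="our SIEM",  # placeholder for the actual vendor name
    behavior="repeated failed logins followed by a success",
    log_source="authentication logs",
    window_hours=24,
    audience="the SOC team",
)
print(prompt)
```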
Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: “Analyze firewall logs for any unusual patterns or anomalies.”
A better prompt would be: “Analyze the firewall logs from the last 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing.”
This produced a concise findings report formatted for a security team briefing.
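As a rough illustration of the kind of analysis such a prompt requests (not Schmidt’s actual output), a minimal Python sketch might flag unusual deny activity like this; the log format and cutoff are assumptions:

```python
# Sketch: flag sources with an unusual volume of denied connections in the
# last 24 hours of firewall logs. The log tuple format (timestamp, action,
# source IP) and the cutoff value are assumptions for illustration.

from collections import Counter

DENY_CUTOFF = 5  # assumed threshold for "unusual"; tune per environment

sample_logs = [  # stand-in for 24 hours of parsed firewall events
    ("2025-06-12T03:14:00", "DENY", "203.0.113.7"),
    ("2025-06-12T03:14:05", "DENY", "203.0.113.7"),
    ("2025-06-12T03:14:09", "DENY", "203.0.113.7"),
    ("2025-06-12T03:14:12", "DENY", "203.0.113.7"),
    ("2025-06-12T03:14:15", "DENY", "203.0.113.7"),
    ("2025-06-12T09:30:00", "ALLOW", "198.51.100.2"),
    ("2025-06-12T11:02:00", "DENY", "198.51.100.9"),
]

deny_counts = Counter(src for _, action, src in sample_logs if action == "DENY")

print("Firewall log review, last 24 hours")
for src, n in deny_counts.most_common():
    status = "UNUSUAL" if n >= DENY_CUTOFF else "normal"
    print(f"  {src}: {n} denied connection(s) [{status}]")
```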
Another example was XDR tools. Instead of a weak prompt like “Summarize the two most critical security alerts in a vendor’s XDR,” Schmidt recommended something like: “Summarize the two most critical security alerts in a vendor’s XDR, including the alert ID, description, severity, and affected entities. This will be used for the monthly security review report. Provide the answer in table form.”
This prompt produced a two-row table listing each alert’s ID, description, severity, and affected entities.
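As an illustration only, with invented placeholder alerts rather than real XDR output, producing that kind of table might look like this in Python:

```python
# Sketch: pick the two most severe XDR alerts and print a review table.
# The alerts below are invented placeholders, not real product output.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

alerts = [
    {"id": "XDR-1042", "description": "Possible credential dumping on host FIN-07",
     "severity": "critical", "entities": "FIN-07, svc-backup"},
    {"id": "XDR-1038", "description": "Beaconing to known C2 domain",
     "severity": "high", "entities": "ENG-12"},
    {"id": "XDR-1031", "description": "Excessive failed MFA prompts",
     "severity": "medium", "entities": "user jdoe"},
]

top_two = sorted(alerts, key=lambda a: SEVERITY_RANK[a["severity"]])[:2]

print(f"{'Alert ID':<10} {'Severity':<9} {'Entities':<20} Description")
for a in top_two:
    print(f"{a['id']:<10} {a['severity']:<9} {a['entities']:<20} {a['description']}")
```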
Other examples of AI security prompts
Schmidt gave two other examples of good AI prompts, one on incident investigation and another on web application vulnerabilities.
For security incident investigations, an effective prompt might be: “Provide a detailed explanation of incident DB2024-001. Include the timeline of events, the methods used by the attacker, and the impact on the organization. This information is necessary for an internal investigation report. Produce the result in table form.”
This prompt should return a table covering the incident timeline, the attacker’s methods, and the organizational impact.
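Purely as a format illustration, and with every detail below a placeholder (only the incident ID comes from Schmidt’s example), such a summary could be assembled like this:

```python
# Sketch: render an incident investigation summary as a table.
# All details are placeholders; only the ID "DB2024-001" is from the talk.

incident = {
    "id": "DB2024-001",
    "timeline": [
        ("T+0h", "Phishing email delivers credential-stealing link"),
        ("T+2h", "Attacker logs in via stolen VPN credentials"),
        ("T+5h", "Database queried and records staged for exfiltration"),
    ],
    "methods": "Phishing, credential theft, data staging",
    "impact": "Customer records exposed; database offline for 6 hours",
}

print(f"Incident {incident['id']}")
print(f"{'Phase':<6} Event")
for phase, event in incident["timeline"]:
    print(f"{phase:<6} {event}")
print(f"Methods: {incident['methods']}")
print(f"Impact:  {incident['impact']}")
```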
For web application vulnerabilities, Schmidt recommended the following approach: “Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation measures. This will be used to prioritize our security patching efforts. Produce this in table form.”
This should produce a five-row table of vulnerabilities, each with a brief description and suggested mitigations.
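As a generic sketch, using well-known web vulnerability classes rather than findings from any real application, the requested table could be produced like so:

```python
# Sketch: print a prioritization table of common web vulnerability classes.
# Generic examples for illustration; a real answer would be app-specific.

vulns = [
    ("SQL injection", "Unsanitized input reaches database queries",
     "Use parameterized queries"),
    ("Cross-site scripting (XSS)", "Untrusted data rendered in pages",
     "Encode output; set a Content-Security-Policy"),
    ("Broken access control", "Missing authorization checks on endpoints",
     "Enforce server-side permission checks"),
    ("Insecure deserialization", "Untrusted serialized objects executed",
     "Avoid native deserialization of user input"),
    ("Security misconfiguration", "Default credentials or open debug endpoints",
     "Harden configs; automate configuration review"),
]

print(f"{'#':<3} {'Vulnerability':<28} {'Description':<45} Mitigation")
for i, (name, desc, fix) in enumerate(vulns, 1):
    print(f"{i:<3} {name:<28} {desc:<45} {fix}")
```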
Tools for AI security assistants
Schmidt listed some of the GenAI tools security teams could use, ranging from chatbots to SecOps AI assistants – such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI – and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt’s slide below).