Mean time to respond: a big winner in an AI world
Many state and local agencies already use endpoint detection tools that accelerate mean time to detection. These tools have been around for a while, and they keep getting better.
Increasingly, however, endpoint detection and response (EDR) tools are integrating large language models (LLMs), which speeds up mean time to respond. Security analysts can query these LLMs, much as they would ChatGPT, to make sense of what they are seeing.
For example, an analyst can request more information about a particular threat or MITRE ATT&CK technique ID; sometimes it’s as simple as right-clicking an alert and asking for context. Threat intelligence can be gathered conversationally and in real time, which can significantly improve both the speed and the quality of a response.
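To make that workflow concrete, here is a minimal sketch of what such a conversational enrichment query might look like behind the scenes. Everything here is an illustrative assumption: the endpoint URL, payload shape, and query_edr_assistant helper are hypothetical, not any vendor’s actual API (only ATT&CK technique T1059 is a real identifier).

```python
import json
import urllib.request

# Hypothetical sketch: enrich an alert by asking an EDR-integrated LLM
# about a MITRE ATT&CK technique. The endpoint URL, payload shape and
# response format are illustrative assumptions, not a real vendor API.
EDR_AI_ENDPOINT = "https://edr.example.gov/api/v1/assistant"  # placeholder

def query_edr_assistant(question: str, api_key: str) -> str:
    """Send a natural-language question to the (hypothetical) EDR assistant."""
    payload = json.dumps({"prompt": question}).encode("utf-8")
    req = urllib.request.Request(
        EDR_AI_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["answer"]

# Example: the kind of question an analyst might ask conversationally.
answer = query_edr_assistant(
    "Summarize ATT&CK technique T1059 and suggest containment steps "
    "for host WS-1042.",
    api_key="REDACTED",
)
print(answer)
```

In a real product, this round trip is hidden behind the right-click menu described above; the analyst never sees the API call.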
Several major EDR platforms now incorporate AI in this way, helping security teams make sense of what they see inside the environment by enriching and simplifying the information. This is especially beneficial to state and local agencies pressed for time and resources.
Assembling, or linking, events is another powerful use of AI in a security operations center. A breach or cyber incident can generate thousands or even tens of thousands of alerts that, in the past, were extremely difficult to tie together. AI has the pattern recognition capability to connect those alerts to a single underlying event, making it much easier to tackle the crux of the problem rather than chasing the echoes that arise from it.
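As a rough illustration of the correlation idea (not any particular product’s algorithm), the sketch below links alerts that share an indicator, such as a host, file hash, or source IP, into a single incident using simple connected-components grouping. The alert fields and values are made-up assumptions.

```python
from collections import defaultdict

# Illustrative sketch: group alerts into incidents when they share an
# indicator (host, file hash, source IP). Real EDR correlation engines
# are far more sophisticated; these field names and values are assumptions.
alerts = [
    {"id": 1, "host": "WS-1042", "hash": "abc123", "src_ip": "10.0.0.5"},
    {"id": 2, "host": "WS-1042", "hash": "def456", "src_ip": "10.0.0.9"},
    {"id": 3, "host": "SRV-07",  "hash": "def456", "src_ip": "10.0.0.7"},
    {"id": 4, "host": "WS-2001", "hash": "zzz999", "src_ip": "10.0.0.8"},
]

# Union-find over alert IDs: alerts sharing any indicator get merged.
parent = {a["id"]: a["id"] for a in alerts}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Index each (field, value) indicator to the alerts that mention it.
index = defaultdict(list)
for a in alerts:
    for field in ("host", "hash", "src_ip"):
        index[(field, a[field])].append(a["id"])

for ids in index.values():
    for other in ids[1:]:
        union(ids[0], other)

incidents = defaultdict(list)
for a in alerts:
    incidents[find(a["id"])].append(a["id"])

# Alerts 1-3 collapse into one incident; alert 4 stands alone.
print(list(incidents.values()))
```

The point is the collapse: four raw alerts become two actionable incidents, which is exactly the noise reduction described above, just at a toy scale.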
WATCH: Virginia’s CISO explains how AI is affecting the state’s cybersecurity efforts.
In some cases, the value of generative AI is much simpler. EDR tools use AI chatbots to answer basic questions, such as how to access certain features of the tools. Cybersecurity experts who move from one EDR tool to another – or who have been recently hired, for example – can get up to speed more quickly.
Tailor-made AI solutions for large agencies
Leveraging your EDR solution’s existing AI integration — or upgrading to an EDR that provides one — is the most direct way to harness the power of AI for detection and response.
But larger agencies, such as those at the state level or in a very large city, can build a retrieval-augmented generation (RAG) solution. RAG essentially allows them to query their own LLM against data specifically curated for cybersecurity. Imagine a data repository that an LLM can reference: anyone querying the LLM gets answers based only on the data that has been uploaded for that specific purpose, so responses are fast and reliable.
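Conceptually, the retrieval step works something like the sketch below: curated documents are embedded as vectors, the closest matches to a question are retrieved, and those matches are stitched into the prompt sent to the agency’s private LLM. The embed() function here is a deliberately crude stand-in for a real embedding model, and the documents and question are invented for illustration.

```python
import numpy as np

# Minimal RAG retrieval sketch. embed() stands in for a real embedding
# model (e.g., a locally hosted one); corpus contents are made up.
def embed(text: str) -> np.ndarray:
    # Placeholder: hash characters into a fixed-size unit vector so the
    # example runs with no external dependencies. A production system
    # would call an actual embedding model here.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

corpus = [
    "Incident response runbook: isolate the host, capture memory, rotate credentials.",
    "Acceptable-use policy for public LLMs: never paste proprietary data.",
    "Firewall change procedure: submit a ticket and obtain CISO approval.",
]
doc_vectors = np.stack([embed(d) for d in corpus])

def retrieve(question: str, k: int = 2) -> list[str]:
    # Dot product of unit vectors == cosine similarity; take the top k.
    scores = doc_vectors @ embed(question)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "What should we do first when a workstation is compromised?"
context = "\n".join(retrieve(question))
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
# The assembled prompt would then go to the agency's private LLM.
print(prompt)
```

The “ONLY this context” instruction is what constrains answers to the curated repository, which is the property that makes RAG attractive for security-sensitive environments.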
With this customized solution, security personnel can ask questions specific to their own security environments and get very direct answers. It is ideal for large state and local agencies that can fund such a project: a bespoke, highly secure LLM tuned to the particulars of their environment.
EXPLORE: Agencies need to consider security measures when adopting AI.
The risks of de-prioritizing AI for cybersecurity
Organizations should not allow cybersecurity teams, or any staff, to freely use publicly available LLMs such as ChatGPT. These tools are readily available and adept at analyzing and summarizing information, so unsanctioned use is tempting. A clear, well-defined AI policy can keep teams from sharing proprietary data with public LLMs.
Yet simply limiting generative AI is also risky. The tools exist and people want to use them because they are efficient, powerful and work at machine speed. Pretending they don’t exist can cause people to take shortcuts or become dissatisfied with the work environment. I will add that even agencies that rely on sanctioned third-party LLMs integrated with EDR solutions must have a solid understanding of how the data they provide is governed by their vendors.
There is no need to rush into “AI-enabled” products; take the time to pin down what a vendor actually means by AI. Is it an LLM, machine learning, deep learning, or just conventional algorithms? Data governance should also be weighed before sending potentially proprietary data to a public LLM, or even to a vendor-specific one.
Finally, I recommend taking anything marketed as “next gen” with a grain of salt; with a few exceptions, the phrase has largely become a marketing term.
But it would be a mistake to ignore AI completely. It’s seeing rapid adoption and iteration, and it’s not going away anytime soon.