As organizations increasingly rely on networks, online platforms, data and technology, the risks associated with data and privacy breaches are more serious than ever. Add to this the increasing frequency and sophistication of cyber threats, and it becomes clear that strengthening cybersecurity defenses has never been more important. Cybersecurity analysts are on the front lines of this battle, working around the clock in security operations centers (SOCs) – the units that protect organizations against cyber threats – sifting through massive volumes of data while monitoring for potential security incidents.
They face vast streams of information from disparate sources, ranging from network logs to threat intelligence feeds, in an effort to prevent the next attack. In short, they are overwhelmed. But too much data has never been a problem for artificial intelligence, which is why many experts are turning to AI to bolster cybersecurity strategies and ease the pressure on analysts.
Stephen Schwab, director of strategy for the USC Information Sciences Institute (ISI) Networks and Cybersecurity Division, envisions symbiotic teams of humans and AI collaborating to improve security, with AI assisting analysts and improving their overall performance in these high-stakes environments. Schwab and his team have developed testbeds and models to research AI-assisted cybersecurity strategies in smaller systems, such as protecting a social network. “We’re trying to ensure that machine learning processes can ease the workload of the human analyst and alleviate these concerns, without making them worse,” he said.
David Balenson, associate director of ISI’s Networks and Cybersecurity Division, highlights the critical role of automation in easing the burden on cybersecurity analysts. “SOCs are inundated with alerts that analysts must quickly analyze in real time to determine whether they are symptoms of a real incident. This is where AI and automation come in, spotting trends or patterns in alerts that could be potential incidents,” says Balenson.
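To make that idea concrete, the sketch below (a toy illustration, not ISI's system) shows one common way automation can help triage a flood of alerts: an unsupervised model scores each alert's features and surfaces the most anomalous ones for a human analyst first. The feature names and numbers are invented for the example.

```python
# Toy illustration only: score SOC alerts so analysts see the most
# anomalous ones first. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one alert: [events per minute, distinct source IPs, bytes out (MB)]
alerts = np.array([
    [12,  3,   0.4],
    [15,  4,   0.6],
    [11,  2,   0.3],
    [14,  3,   0.5],
    [95, 40, 120.0],   # bursty, high-volume outlier
])

model = IsolationForest(contamination=0.2, random_state=0).fit(alerts)
scores = model.decision_function(alerts)      # lower score = more anomalous

# Present alerts to the analyst in order of suspiciousness
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"{rank}. alert #{idx} score={scores[idx]:.3f}")
```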
In search of transparency and explainability
However, integrating AI into cybersecurity operations is not without challenges. One of the main concerns is the lack of transparency and explainability inherent in many AI-based systems. “Machine learning (ML) is useful for monitoring networks and end systems when human analysts tire,” says Schwab. “Yet these systems constitute a black box: they can trigger alerts that may seem inexplicable. This is where explainability comes in, because the human analyst needs to be sure that the ML system is performing within reason.”
One solution proposed by Schwab is to create explainers that present the actions of the ML system in plain, natural language that the analyst can understand. Marjorie Freedman, a senior scientist at ISI, is conducting research on this topic. “I’ve studied what it means to generate explanations and what you want from an explanation. We are also exploring how an explanation can help a person verify what a model has generated,” she said.
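One simple way to picture such an explainer (a hypothetical sketch, not Freedman's or Schwab's actual approach) is to have the system name, in plain English, the features that pushed an alert over the line. The feature names, baselines, and thresholds below are assumptions for illustration only.

```python
# Hypothetical sketch (not ISI's explainer): turn a flagged alert into a short
# plain-English rationale by naming the features that deviate most from baseline.
import numpy as np

FEATURES = ["login failures", "bytes uploaded", "new destination IPs"]
baseline_mean = np.array([2.0, 50.0, 1.0])     # assumed historical averages
baseline_std  = np.array([1.0, 20.0, 0.5])

def explain_alert(observation: np.ndarray, top_k: int = 2) -> str:
    """Return a one-sentence explanation of why an alert looks unusual."""
    z = (observation - baseline_mean) / baseline_std      # deviation in std devs
    top = np.argsort(-np.abs(z))[:top_k]                  # most deviant features
    reasons = [f"{FEATURES[i]} is {z[i]:+.1f} std devs from normal" for i in top]
    return "Flagged because " + " and ".join(reasons) + "."

print(explain_alert(np.array([9.0, 400.0, 1.0])))
# e.g. "Flagged because bytes uploaded is +17.5 std devs from normal and ..."
```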
The art of reporting
An example of explaining an AI decision in cybersecurity is the online authentication process. When authenticating to a system, users enter a password or PIN. However, different people enter data in different patterns, which AI can flag even if the code was entered correctly.
These “potentially suspicious” patterns may not actually constitute security vulnerabilities, but the AI still takes them into account. If, in addition to flagging them, the system gives the human analyst an explanation that lists the input pattern as one of the reasons for the flag, the analyst will better understand the reasoning behind the AI’s decision-making. And armed with this additional information, the analyst can make more informed decisions and take appropriate action (i.e., validate or overturn the AI’s determination). Freedman believes that cybersecurity operations must run their best ML models to predict, identify and address threats, alongside approaches that effectively explain those decisions to experts.
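A minimal sketch of that authentication example might look like the following; the timings, thresholds, and the check_login helper are invented for illustration and do not describe a real ISI system. The point is that the flag travels together with a plain-language reason the analyst can check.

```python
# Illustrative sketch only: flag a correct login whose typing rhythm deviates
# from the user's usual pattern, and attach the reason so an analyst can
# validate or overturn the flag. Baselines and thresholds are made up.
import numpy as np

usual_gaps = np.array([0.18, 0.22, 0.20, 0.19, 0.21])   # seconds between keystrokes (baseline)
THRESHOLD = 3.0                                          # std devs considered suspicious

def check_login(password_ok: bool, observed_gaps: np.ndarray) -> dict:
    """Return a decision plus a human-readable explanation for the analyst."""
    deviation = abs(observed_gaps.mean() - usual_gaps.mean()) / (usual_gaps.std() + 1e-9)
    flagged = password_ok and deviation > THRESHOLD
    reason = (f"typing rhythm deviates {deviation:.1f} std devs from this user's baseline"
              if flagged else "typing rhythm within normal range")
    return {"allow": password_ok, "flag_for_review": flagged, "explanation": reason}

# Correct password, but typed far more slowly than this user usually does
print(check_login(True, np.array([0.60, 0.75, 0.70, 0.65, 0.72])))
```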
“If someone shuts down a system that will cost the company a lot of money, that’s a high-stakes situation where we have to confirm that it’s the right decision,” Freedman said. “The explanation may not exactly be the AI’s derivation of how it got there, but it might be what the human analyst needs to know to determine whether it’s correct or not.”
Keep data secure and private
While trust between the human analyst and the machine is one of AI’s cybersecurity challenges, trust that the sensitive or proprietary information AIs are trained on will remain private is another. For example, to train a machine learning model to ensure data security or protect its systems, an organization may have to use operational details or security vulnerabilities.
The potential exposure of this type of sensitive information about an organization’s cyber posture is a concern when integrating AI into cybersecurity operations. “Once you put information into systems like large language models, even if you try to remove it, there’s no guarantee that you’ve succeeded in stopping them from discussing that information. We need to look for ways to make this sharing space safe for everyone,” Schwab said.
Schwab, Freedman and the ISI team hope their work will lead to new ways of harnessing the complementary strengths of humans and AI to strengthen cyber defenses, stay ahead of sophisticated adversaries and mitigate SOC overload.
Published on May 29, 2024