Over the past year, we’ve seen the explosive use of OpenAI’s ChatGPT, accompanied by laypeople’s fears about the artificial general intelligence (AGI) revolution and the disruption it’s predicted to cause in markets. There’s no doubt that AI will have a massive and transformative impact on much of what we do, but it’s time to take a more sober and thoughtful look at how AI will change the world and, in particular, cybersecurity. Before we do that, let’s take a moment to talk about failures.
In 2018, one of us had the opportunity to hear, and speak briefly with, Garry Kasparov, the former world chess champion (1985-2000). He talked about what it was like to play, and lose, against Deep Blue, IBM's chess supercomputer, for the first time. He said it was overwhelming, but he pulled himself together and beat it, and he went on to win more than he lost.
Over time, things changed: he lost more than he won, and eventually Deep Blue was winning regularly. However, he made a crucial observation: “For a period of about ten years, the chess world was dominated by humans assisted by computers.” Eventually, AI alone dominated, and it is worth noting that today the stratagems used by AI in many games baffle even the greatest masters.
The bottom line is that humans assisted by AI have an advantage. AI is really a toolbox, largely composed of machine learning and LLMs, many of which have been applied for over a decade to tractable problems like detecting new malware and fraud. But there's more to it than that. We live in an era where breakthroughs in LLMs dwarf anything that came before. Even if we are watching a stock market bubble burst, the AI genie is out of the bottle, and cybersecurity will never be the same again.
Before continuing, let's make one last clarification (borrowed from Daniel Miessler): AI so far demonstrates some understanding, but not reasoning, initiative, or sentience. This is essential both to allay fears and exaggerations about machine takeovers, and to understand that we are not yet in an era of silicon brains fighting each other without carbon brains in the loop.
Let’s look at three aspects at the interface of cybersecurity and AI: AI security, AI defense, and AI offense.
AI Security
In most cases, companies face a dilemma similar to the one posed by the advent of instant messaging, search engines, and cloud computing: they must adopt and adapt or face competitors with a disruptive technological advantage. This means they can't simply block AI if they want to stay relevant. As with those earlier technologies, the first step is to create private instances, of LLMs in particular, since public AI services are struggling, much as the early public cloud providers did, to adapt to and meet market needs.
Borrowing the language of the cloud revolution in the AI era, those considering private, hybrid, or public AI must think carefully about a number of issues, including privacy, intellectual property, and governance.
However, there are also social justice issues: datasets can carry biases in at ingestion, models can inherit those biases (or hold a mirror up to us, showing truths within ourselves that we should address), and results can have unintended consequences. With this in mind, the following are essential to consider:
- Ethical use review committee: AI use needs to be regulated and monitored for proper and ethical use, much as other industries regulate sensitive research and its applications, the way healthcare does with cancer research.
- Controls on data provenance: there are of course copyright issues, but also privacy considerations at ingestion. Even though AI can re-identify data, anonymization is important, as is screening for poisoning and sabotage attacks.
- Access controls: access should be restricted to specific research purposes and to uniquely named and monitored individuals and systems for ex post accountability. This covers data processing, tuning, and maintenance alike.
- Specific versus general output: output should serve a specific business-related purpose and application, and no general querying or open API access should be allowed unless the agents using that API are similarly controlled and managed (see the sketch after this list).
- A dedicated AI security role: think of a dedicated AI security and privacy lead. This person focuses on attacks involving model inversion (recovering the features and inputs used to train a model), inference (iterative querying to reach a desired result), sanity monitoring (i.e., hallucination, lying, imagination, etc.), model extraction, and long-term privacy and manipulation. They also review contracts, connect with legal departments, work with supply chain security experts, interface with teams that use AI toolkits, ensure factual statements in marketing (we can dream!), and more.
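To make the access and output controls above concrete, here is a minimal sketch, assuming a private LLM behind an internal gateway. Everything in it (PrivateLLMGateway, PURPOSE_REGISTRY, the example principals) is a hypothetical illustration, not any vendor's API: each query is tied to a uniquely named principal and a registered, specific purpose, and every allow or deny decision lands in an audit log for ex post accountability.

```python
# A minimal sketch (hypothetical, not any vendor's API) of gated access to a
# private LLM: every query is tied to a named principal and a registered,
# specific business purpose, and every decision is audit-logged.
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm.audit")

# Registered purposes map to the only principals allowed to use them.
# General, open-ended querying is absent from this registry by design.
PURPOSE_REGISTRY = {
    "fraud-triage": {"alice@example.com", "svc-fraud-01"},
    "malware-analysis": {"bob@example.com"},
}

@dataclass
class LLMRequest:
    principal: str  # a uniquely named, monitored individual or system
    purpose: str    # must match a registered, specific purpose
    prompt: str

class PrivateLLMGateway:
    def __init__(self, model_fn):
        self._model_fn = model_fn  # callable wrapping the private model

    def query(self, req: LLMRequest) -> str:
        allowed = PURPOSE_REGISTRY.get(req.purpose, set())
        if req.principal not in allowed:
            audit_log.warning("DENY %s purpose=%s", req.principal, req.purpose)
            raise PermissionError("principal not authorized for this purpose")
        audit_log.info("ALLOW %s purpose=%s at=%s", req.principal, req.purpose,
                       datetime.now(timezone.utc).isoformat())
        return self._model_fn(req.prompt)

# Usage with a stubbed model:
gateway = PrivateLLMGateway(lambda prompt: "stubbed model output")
print(gateway.query(LLMRequest("alice@example.com", "fraud-triage",
                               "Summarize today's fraud alerts")))
```

The point of the design is that "general querying" is impossible by construction: a purpose that is not registered simply has no authorized principals.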
AI in Defense
AI can also be used to defend. This is where the AI-assisted-human paradigm becomes an important consideration in how we think about future security services. The applications are numerous, of course, but wherever there is a routine task in cybersecurity, from querying and scripting to integration and repetitive analysis, there is an opportunity for the discrete application of AI. When a human with a carbon brain has to perform a detailed task at scale, human error creeps in and that carbon unit becomes less effective.
The human mind excels at creativity and inspiration, while a silicon brain remains poor at reasoning, sentience, and initiative. The greatest potential of silicon applied to cyber defense lies in process efficiency, extrapolation across data sets, elimination of routine tasks, and so on, provided we avoid the dangers of leaky abstraction, where the user no longer understands what the machine is doing on their behalf.
For example, the possibilities for guided incident response are growing: it can help predict an attacker's next steps, help security analysts learn faster, and increase the effectiveness of the human-machine interface with a co-pilot (not autopilot) approach. However, we need to ensure that those receiving incident response support understand what is being presented to them, can disagree with suggestions, make corrections, and apply their human creativity and inspiration.
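To illustrate the co-pilot (not autopilot) approach, here is a minimal sketch in which the model proposes a next response step together with its rationale, and nothing executes until the analyst approves, edits, or rejects it. The suggest_next_step function is a hypothetical stand-in for a call to whatever model you choose; the host name and scenario are invented for illustration.

```python
# A minimal sketch of the co-pilot pattern: the model proposes, the human
# disposes. suggest_next_step() is a hypothetical stand-in for an LLM call;
# the incident details are invented for illustration.
from typing import Optional

def suggest_next_step(incident_summary: str) -> dict:
    # In practice this would query a model; stubbed here for illustration.
    return {
        "action": "Isolate host WS-0421 from the network",
        "rationale": "Beaconing to a known C2 domain after a phishing click.",
    }

def copilot_incident_step(incident_summary: str) -> Optional[str]:
    suggestion = suggest_next_step(incident_summary)
    # Show the analyst what the machine proposes *and why*, so they can
    # disagree, correct it, or apply their own creativity and judgment.
    print(f"Proposed action: {suggestion['action']}")
    print(f"Rationale:       {suggestion['rationale']}")
    decision = input("Approve, edit, or reject? [a/e/r] ").strip().lower()
    if decision == "a":
        return suggestion["action"]
    if decision == "e":
        return input("Enter the corrected action: ").strip()
    return None  # rejected: nothing executes without the human in the loop

if __name__ == "__main__":
    action = copilot_incident_step("Phishing click followed by C2 beaconing")
    print(f"Executing: {action}" if action else "No action taken.")
```

Surfacing the rationale alongside the action is the crucial design choice: it keeps the analyst able to disagree, correct, and learn, rather than rubber-stamping the machine.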
If this is starting to sound a bit like our previous article on automation, it should! Many of the issues highlighted there, such as automation creating predictability that attackers can exploit, can now be addressed through applications of AI technology. In other words, using AI can make the automation mindset more feasible and effective. For that matter, AI can make using a Zero Trust platform to analyze the "never never" of the IT outback far more effective and useful. To be clear, these benefits are not free or simply granted by deploying LLMs and the rest of the AI toolkit, but they do become manageable projects.
AI on the Attack
Security itself must transform, because adversaries are using AI tools to accelerate their own transformation. Just as businesses cannot ignore AI without risking disruption by their competitors, the Moloch of AI is a driving force in our cybersecurity precisely because the adversary is using it too. This means that members of security architecture groups should join, and potentially lead, the enterprise AI review boards mentioned above, given how quickly AI is being adopted:
- Red teams must use the tools the adversary has
- Blue teams should use them during incident response
- GRC (governance, risk, and compliance) teams should use them to gain efficiency in translating natural language into policy (see the sketch after this list)
- Data protection teams must use them to understand real data flows
- Identity and access teams should use them to advance zero trust and to grant progressively more unique and specific rights, ever closer to real time
- Deception technologies need them to build negative trust into our infrastructure and outwit the adversary
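As a sketch of the GRC use case above, the following hypothetical example has an LLM turn a natural-language rule into a structured policy object, then validates the output against a schema before anything reaches enforcement. The schema, fields, and nl_to_policy function are illustrative assumptions, not any product's API.

```python
# A minimal, hypothetical sketch of using an LLM for GRC work: a natural-
# language rule becomes a structured policy object, and the model's output
# is validated before it can ever be enforced. The schema and nl_to_policy()
# are illustrative assumptions, not any product's API.
import json

REQUIRED_FIELDS = {"subject", "resource", "action", "effect"}

def nl_to_policy(rule_text: str) -> dict:
    # In practice this prompts a private LLM to emit JSON; stubbed here.
    return {
        "subject": "group:contractors",
        "resource": "app:finance-reports",
        "action": "read",
        "effect": "deny",
    }

def compile_policy(rule_text: str) -> dict:
    policy = nl_to_policy(rule_text)
    # Never trust model output blindly: validate it, and keep a human
    # reviewer between natural language and enforcement.
    missing = REQUIRED_FIELDS - set(policy)
    if missing:
        raise ValueError(f"LLM output is missing fields: {sorted(missing)}")
    return policy

print(json.dumps(
    compile_policy("Contractors must not be able to read finance reports"),
    indent=2))
```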
In conclusion, we are entering an era not of AI dominating humans, but of the potential triumph of humans assisted by AI. We cannot prevent the use of AI tools, because competitors and adversaries alike will use them. So the real issue is how to put the right guidelines in place and how to thrive. In the short term, adversaries in particular will get better at phishing and at generating malware. We know that. In the long term, however, the applications for defense, for the defenders of those who build amazing things in the digital world, and for triumphing in cyber conflict far exceed the capabilities of the barbarians and vandals at the gates of the enterprise.
To learn how Zscaler helps customers reduce business risk, improve user productivity, and reduce costs and complexity, visit https://www.zscaler.com/platform/zero-trust-exchange.