Sunny Tan, Head of Security, East Asia, BT
Artificial intelligence (AI) is a sleeping giant that is quietly creeping into our daily lives. We may not have realized it, but AI has been around for a long, long time.
While OpenAI’s ChatGPT recently highlighted the deep extent of AI’s penetration into our world, AI was first introduced to the mainstream when Apple launched Siri as a personal assistant in 2011. Three years later, Amazon’s announcement of Alexa further cemented AI’s role in our daily lives.
The field continues to evolve with recent additions such as Google’s Gemini, Claude AI, and Microsoft’s integration of Copilot into common applications like Microsoft Word. These developments illustrate how AI has evolved from a technological novelty to a crucial support system, simplifying tedious tasks and even becoming a national priority for some countries.
This proliferation of AI is spreading across all industries, with a significant impact on cybersecurity. AI excels at processing massive volumes of data, recognizing patterns, and improving threat detection and response times at a scale that human analysts cannot match on their own.
However, this same advantage empowers cybercriminals, who are leveraging AI technologies to enhance their malicious operations. This includes using generative AI to launch deepfake scams, such as one that cost a Hong Kong multinational $34 million, and abusing large language models (LLMs) to craft more sophisticated cyberattacks in the form of malware, ransomware, and even disinformation campaigns. AI tools also help attackers identify vulnerable targets, evade detection systems, and poison the data sources that LLMs are trained on, compromising their integrity.
While AI offers enormous promise in cybersecurity, we must recognize that it is not without its dangers.
Why AI is important for cybersecurity
Industry forecasts predict a significant increase in the global AI-powered cybersecurity market. Valued at $20.19 billion in 2023, it is expected to grow at a compound annual growth rate (CAGR) of 24.2%. This trend reflects not only the increasing integration of AI with traditional security tools, but also a growing reliance on AI to combat cybercrime, which caused an estimated $12 trillion in damages by 2023.
Why is AI so important in cybersecurity? AI, particularly machine learning and deep learning models, excels at recognizing patterns in vast volumes of data. This allows AI to identify precursors to attacks that security analysts might miss. In addition to reducing the risk of human error, such as misconfigurations or data leaks, AI technologies facilitate early threat detection and identification of anomalies. This enables proactive threat hunting, preventing breaches and allowing analysts to respond in less than 60 seconds.
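The pattern-recognition idea behind this is straightforward. As a minimal sketch (not any specific vendor's detector), the hypothetical snippet below flags anomalies in hourly failed-login counts by comparing each observation's z-score against a historical baseline — the same statistical principle, in miniature, that ML-based detectors apply across far richer telemetry:

```python
from statistics import mean, stdev

def zscore_anomalies(history, current, threshold=3.0):
    """Flag observations whose z-score against the baseline exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return [(v, round((v - mu) / sigma, 1)) for v in current
            if abs(v - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts from access logs
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]   # normal traffic
observed = [11, 13, 250, 12]                         # 250 = brute-force burst

flagged = zscore_anomalies(baseline, observed)
# Only the burst stands out; routine fluctuations pass unflagged.
```

Real systems replace the single feature and fixed threshold with learned models over many signals, but the core advantage is the same: a statistical baseline surfaces outliers faster and more consistently than manual log review.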
Ultimately, AI allows analysts to spend more time on strategic tasks, such as investigating high-priority threats or developing incident response plans and security policies. These are just a few examples: the potential applications of AI in cybersecurity are vast.
However, as with most things, with greater power comes greater responsibility.
Risks of using AI in cybersecurity
The rapid development of AI tools introduces new cybersecurity challenges.
While AI technology offers significant benefits, it can be vulnerable to data poisoning, which can lead to false positives, false negatives, and algorithmic bias. The result can be missed threats and compromised security, potentially opening the door to sophisticated attacks such as deepfakes, cloud jacking, and network exploits.
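To see how data poisoning produces false negatives, consider a toy detector (a simplified sketch, not a production design) that learns a "normal" traffic threshold from its training data. An attacker who can inject records into that training set widens the baseline until genuine attack traffic looks normal:

```python
from statistics import mean, stdev

def detection_threshold(training_data, k=3.0):
    """Baseline detector: flag anything beyond mean + k standard deviations."""
    return mean(training_data) + k * stdev(training_data)

clean = [10, 12, 11, 9, 13, 10, 11, 12]        # hypothetical request rates
# Poisoning: attacker slips inflated values into the training set,
# stretching the learned notion of "normal".
poisoned = clean + [180, 200, 190]

attack_volume = 150
clean_thr = detection_threshold(clean)          # attack exceeds this: flagged
poisoned_thr = detection_threshold(poisoned)    # attack falls below this: missed
```

Trained on clean data, the detector flags the attack; trained on the poisoned set, the same attack slips through as a false negative — which is why the integrity of training data is itself a security control.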
Industry reports indicate that security analysts are already facing these issues and may not have the resources to effectively address them. Worse yet, with AI tools, cybercriminals can attack not only our digital infrastructure but also our physical infrastructure, overloading systems to the point of failure or even physical destruction.
This is our great responsibility.
AI safety ultimately lies in our hands
AI-powered tools and solutions are only as effective as the data they’re trained on. Without proper governance, tools could inadvertently expose sensitive information. So it’s incumbent on security analysts—and organizations—to double down on their roles to ensure that AI frameworks and strategies are reliable, accurate, and effective at addressing security vulnerabilities. (The National Institute of Standards and Technology’s AI Risk Management Framework is a good reference point.) AI isn’t a silver bullet for cybersecurity, no matter how complex the security event.
To achieve this, it is essential to combine human and artificial intelligence to create a robust and effective defense system. While AI excels in terms of speed and scalability, for example, it lacks the human ability to understand context. Humans can consider factors such as attacker motivations, industry trends, and historical data to make informed decisions.
Additionally, cybersecurity decisions often have ethical implications. Humans are naturally equipped to consider these ethical nuances and make choices that align with the organization’s values, something AI may not be designed to do.
AI is a powerful tool in cybersecurity. But it is far from being a panacea.
It is therefore our duty to mitigate the risks associated with AI technology. That means keeping humans involved in final decision-making and establishing responsible technology principles, safeguards, and governance.
Only by combining the power of AI with human expertise can we truly secure our digital future.
Sunny Tan is Head of Security for East Asia at BT