Kasada CTO, stopping bot attacks that others can’t with new mitigation techniques that beat cybercriminals at their own game.
As artificial intelligence (AI) technologies evolve, they continue to reshape the cybersecurity landscape, both as a formidable defense mechanism and, paradoxically, as a powerful tool for malicious actors. AI is rapidly lowering the barriers to entry for cybercriminals, providing them with sophisticated means to launch attacks that were once out of reach. Despite efforts by generative AI (GenAI) developers to secure their technologies with rigorous guardrails, attackers continue to find creative ways to circumvent these protections.
Headlines reveal how attackers leverage AI to craft convincing phishing emails or spread disinformation. However, the misuse of AI runs deeper: malicious actors also deploy it to undermine security systems themselves. This double-edged technological sword is not only a concern for cybersecurity teams; it also carries significant implications for business leaders.
Indeed, AI deployments also carry significant economic risks. For example, through techniques such as prompt injection, attackers can manipulate AI applications to elicit unintended responses, extracting sensitive information or shifting computational costs onto the victim. Similarly, denial-of-wallet (DoW) attacks inflate operational costs by overwhelming AI systems with automated queries, causing financial damage and operational disruption. These two methods illustrate how AI systems can be exploited to rapidly amplify costs for businesses, turning technology investments into liabilities.
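To make the denial-of-wallet risk concrete, the back-of-the-envelope sketch below estimates how a bot fleet sending long, automated prompts can multiply a metered LLM bill. The request volumes and per-token price are hypothetical placeholders chosen for illustration, not figures from any particular provider.

```python
# A minimal sketch of the denial-of-wallet arithmetic. The per-token price,
# request sizes, and bot volume below are hypothetical placeholders.

def monthly_llm_cost(requests_per_day: int,
                     tokens_per_request: int,
                     cost_per_1k_tokens: float) -> float:
    """Estimate a month of inference spend for a metered LLM endpoint."""
    tokens_per_day = requests_per_day * tokens_per_request
    return tokens_per_day / 1000 * cost_per_1k_tokens * 30

# Legitimate traffic: e.g. 10,000 customer queries a day at ~500 tokens each.
baseline = monthly_llm_cost(10_000, 500, cost_per_1k_tokens=0.01)

# The same endpoint hammered by bots sending long, expensive prompts.
under_attack = monthly_llm_cost(500_000, 4_000, cost_per_1k_tokens=0.01)

print(f"baseline spend:     ${baseline:,.0f}/month")
print(f"spend under attack: ${under_attack:,.0f}/month "
      f"({under_attack / baseline:.0f}x amplification)")
```

Even under these modest assumptions, the amplification runs into the hundreds, which is why unmetered, bot-driven access to AI endpoints translates so directly into financial exposure.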
Additionally, the inherent complexity and continued evolution of large language models (LLMs) and GenAI systems contribute to their vulnerability. The opaque nature of these systems often leaves even their developers in the dark about potential weaknesses, making it more difficult to defend against these new attacks. And while these models are continually updated to improve their capabilities, new vulnerabilities can emerge quietly, further complicating the work of security teams who must keep pace.
How Threat Actors Are Exploiting AI
Threat actors are increasingly finding creative ways to use AI to evolve their tactics and discover new vulnerabilities. Although their approaches vary greatly, most fall into three main methods, each demonstrating the dual-use nature of these technologies.
1. Adversarial inputs: Today, one of the most direct ways threat actors use AI is to craft adversarial inputs that manipulate how AI systems classify data, by reverse engineering the system's classification mechanisms. For example, an attacker could create specific queries to submit to an AI system tasked with classifying data. By iteratively modifying the submitted data and observing the responses, attackers can learn to manipulate the system remotely (a simplified sketch of this idea appears after this list). A notable example of this technique is "many-shot jailbreaking" of large language models, as described in a recent blog post by Anthropic. This method involves overwhelming an LLM with a high volume of crafted prompts to trick it into providing a response it would not typically give, thereby bypassing its intended operational parameters.
2. Data poisoning: Data poisoning aims to corrupt the underlying model of a security system by blurring the line between legitimate and malicious classifications. By flooding the system with misleading data from various sources, attackers aim to recalibrate the AI's perception of normality. For example, an attacker can repeatedly send a malicious payload from numerous points over an extended period, with the aim of gradually shifting its classification from malicious to benign (see the second sketch after this list). By exploiting the learning capabilities of an AI system, this method seeks to turn a strength – adaptability and learning from new information – into a vulnerability.
3. Black-box probing: The third important method is black-box probing, which combines elements of adversarial inputs and data poisoning. This technique involves sending a myriad of queries to a system to understand, and potentially reproduce, the model's decision-making process (see the final sketch after this list). It is often used in attempts to break CAPTCHA systems, where attackers gradually adapt to the increasing complexity of CAPTCHAs driven by advanced AI models. By reverse engineering these models, attackers can create algorithms that bypass CAPTCHAs with high efficiency, demonstrating a deep understanding of AI behavior.
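To illustrate the first method, the sketch below shows how an attacker might craft an adversarial input against a toy detection model. The logistic-regression "detector", its synthetic features and the fixed step size are all invented for this example; the point is only that small, query-guided edits can flip a classification from malicious to benign.

```python
# A minimal sketch of crafting an adversarial input against a toy detector.
# The detector, features, and step size are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: benign requests cluster near 0.2, malicious near 0.8.
X = np.vstack([rng.normal(0.2, 0.1, (200, 5)), rng.normal(0.8, 0.1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)              # 0 = benign, 1 = malicious
detector = LogisticRegression().fit(X, y)

sample = rng.normal(0.8, 0.05, 5)                # a clearly malicious request
for probe in range(200):
    if detector.predict([sample])[0] == 0:       # attacker only sees the verdict
        print(f"classified as benign after {probe} probes")
        break
    # Trial and error against the black box: nudge whichever single feature
    # most reduces the malicious score, then query again.
    scores = []
    for i in range(5):
        candidate = sample.copy()
        candidate[i] -= 0.05
        scores.append(detector.predict_proba([candidate])[0][1])
    sample[int(np.argmin(scores))] -= 0.05
```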
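The second method can be sketched in a similar way. Here an online learner (a stand-in for any security model that keeps retraining on observed traffic) is drip-fed a fixed malicious payload that repeatedly arrives labelled as benign; the features, labels and volumes are synthetic and purely illustrative.

```python
# A minimal sketch of gradual data poisoning against an online learner.
# SGDClassifier stands in for any security model retrained on live traffic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def benign_batch(n):
    """Synthetic 'normal' traffic clustered near low feature values."""
    return rng.normal(0.2, 0.1, (n, 4))

payload = np.full((1, 4), 0.9)                   # the attacker's fixed payload

# Honest initial training: the payload's region is labelled malicious.
model = SGDClassifier(loss="log_loss", random_state=1)
X0 = np.vstack([benign_batch(200), rng.normal(0.9, 0.1, (200, 4))])
y0 = np.array([0] * 200 + [1] * 200)             # 0 = benign, 1 = malicious
model.partial_fit(X0, y0, classes=[0, 1])
print("malicious score before poisoning:", round(model.predict_proba(payload)[0][1], 3))

# Poisoning: the same payload trickles in from many sources and, because it
# never triggers an incident, keeps entering the retraining data as benign.
for week in range(50):
    X = np.vstack([benign_batch(50), np.repeat(payload, 5, axis=0)])
    y = np.zeros(55, dtype=int)                  # everything labelled benign
    model.partial_fit(X, y)

print("malicious score after poisoning: ", round(model.predict_proba(payload)[0][1], 3))
```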
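Finally, the third method amounts to model extraction: query the black box enough times and an attacker can fit a local surrogate that mimics its decisions and can be attacked offline at leisure. The hidden decision rule below is an invented placeholder standing in for a real CAPTCHA or abuse-detection service.

```python
# A minimal sketch of black-box probing. The attacker cannot see inside
# `target`, but by logging many query/verdict pairs they can train a
# surrogate that reproduces its decision boundary. Everything is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def target(x: np.ndarray) -> int:
    """Hidden decision logic the attacker can only query, never inspect."""
    return int(x[0] + 2 * x[1] > 1.0)            # 1 = blocked, 0 = allowed

# Step 1: probe the black box with a myriad of queries and log the verdicts.
probes = rng.uniform(0, 1, (5_000, 2))
verdicts = np.array([target(p) for p in probes])

# Step 2: train a surrogate that reproduces the observed behaviour offline.
surrogate = DecisionTreeClassifier(max_depth=6).fit(probes, verdicts)

# Step 3: the surrogate now predicts, without touching the real system,
# which inputs will be allowed through.
test = rng.uniform(0, 1, (1_000, 2))
agreement = (surrogate.predict(test) == np.array([target(t) for t in test])).mean()
print(f"surrogate matches the black box on {agreement:.1%} of unseen inputs")
```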
Three Questions To Ask About Your AI Security
For business leaders, understanding the nuances and potential vulnerabilities of AI-based tools should be a priority, especially in light of the risks of economic exploitation. As AI becomes increasingly integrated into security frameworks, the potential for these types of attacks only increases. To get a more complete picture of these risks, consider the following three questions.
1. How does AI improve our visibility into the threat landscape? Improved visibility into the threat landscape is a key benefit of using AI in cybersecurity. Executives should evaluate whether AI tools allow a clearer demarcation between benign and malicious activity, and whether they enable deeper analysis of behavior patterns around these boundaries. The ability to detect subtle anomalies or trends can significantly strengthen a company's defensive posture, making it an essential part of effectively integrating AI.
2. What strategic benefits does AI bring to our cybersecurity efforts? It is essential to evaluate the efficiency gains of deploying AI within your security operations. Executives should ask themselves how these technologies enhance the capabilities of their security teams and whether they translate into more effective risk mitigation. This involves examining both the direct benefits of in-house AI applications and those derived from partnerships with AI-enabled vendors. The goal is to ensure that the implementation of AI is not just a technology upgrade but a strategic improvement that strengthens the overall security framework of the organization.
3. How reliable are the AI capabilities of our security partners? Choosing the right partners is essential to leveraging external AI capabilities. Business leaders should carefully evaluate how their security partners use AI in service delivery. For example, if a partner uses AI to analyze network traffic patterns to anticipate potential breaches, you should work to understand the data-driven methodologies behind those capabilities. It is essential that leaders are informed and critical of the AI tools used by their partners, to ensure that these technologies match their own data security needs and standards.
By answering these questions, business leaders can gain some assurance that their investment in AI-based cybersecurity is not merely the adoption of new technology, but a strategic, informed decision that improves their resilience against both conventional threats and the new wave of AI-motivated threats.