The introduction of artificial intelligence into cybersecurity has created a vicious cycle. Security professionals now use AI to improve their tools and strengthen their detection and protection capabilities, but cybercriminals are also leveraging AI for their attacks. Security teams respond by leaning further on AI to counter AI-generated threats, threat actors augment their own AI to keep pace, and the cycle continues.
AI Security Solutions
Despite its great potential, AI is significantly limited when used in cybersecurity. There are trust issues with AI-based security solutions, and the data models used to develop AI-based security products appear to be under constant threat. Additionally, when implemented, AI often comes into conflict with human intelligence.
The double-edged nature of AI makes it a complex tool that organizations must understand more deeply and handle more carefully. Malicious actors, by contrast, exploit AI without any such constraints.
Lack of trust
One of the main issues in adopting AI-based cybersecurity solutions is building trust. Many organizations are skeptical of AI-based products from security companies. This skepticism is understandable: many products touted as AI-enhanced are overhyped and fail to meet expectations.
One of the most widely advertised benefits of these products is that they simplify security tasks so much that even non-security personnel can accomplish them. That claim often disappoints, especially for organizations facing a cybersecurity talent shortage. AI is supposed to be one of the answers to that shortage, but vendors that overpromise and underdeliver are not helping to solve the problem – in fact, they are undermining the credibility of AI in security.
Making tools and systems more user-friendly, even for non-technical users, is one of the main aspirations of cybersecurity. Unfortunately, this is difficult to achieve given the evolving nature of threats, as well as various factors (like insider attacks) that weaken security posture. Almost all AI systems still require human direction, and AI is not capable of overriding human decisions. For example, an AI-assisted SIEM can accurately flag anomalies for security personnel to evaluate; however, a malicious insider can prevent the issues the system detects from being properly handled, rendering the AI's contribution virtually useless.
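To make that flag-and-review pattern concrete, here is a minimal, purely illustrative sketch in Python. The feature names, the IsolationForest model, and the contamination setting are assumptions, not a description of any particular SIEM; the point is simply that the model flags events while a human still decides what happens next.

```python
# Minimal sketch of AI-assisted SIEM anomaly flagging (illustrative only).
# Feature names and thresholds are hypothetical; a real SIEM would ingest far
# richer telemetry and route alerts through its own case-management workflow.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: [logins_per_hour, bytes_out_mb, failed_auths]
events = np.array([
    [4, 12.0, 0],
    [5, 10.5, 1],
    [3, 11.2, 0],
    [60, 950.0, 25],   # unusual burst of activity
])

model = IsolationForest(contamination=0.25, random_state=42).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

# The AI only *flags* events; a human analyst still decides what to do next.
review_queue = [i for i, f in enumerate(flags) if f == -1]
print("Events awaiting analyst review:", review_queue)
```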
However, some cybersecurity software companies offer tools that make the most of the benefits of AI. Extended detection and response (XDR) systems that incorporate AI, for example, have a good track record of detecting and responding to complex attack sequences. By leveraging machine learning to scale security operations and make detection and response more effective over time, XDR delivers substantial benefits that can help ease skepticism about AI security products.
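As a rough illustration of the correlation idea behind XDR, the sketch below combines weak per-source anomaly scores into a single incident score. The source names, weights, and threshold are hypothetical; real XDR platforms rely on trained models and much richer cross-layer telemetry.

```python
# Toy sketch of XDR-style signal correlation across telemetry sources
# (illustrative only; weights and threshold are arbitrary assumptions).
ALERT_WEIGHTS = {"endpoint": 0.5, "network": 0.3, "identity": 0.2}
INCIDENT_THRESHOLD = 0.6

def incident_score(signals):
    """Combine per-source anomaly scores (0.0-1.0) into one weighted score."""
    return sum(ALERT_WEIGHTS.get(src, 0.0) * score for src, score in signals.items())

# Signals that might look benign in isolation...
host_signals = {"endpoint": 0.7, "network": 0.6, "identity": 0.8}

score = incident_score(host_signals)
if score >= INCIDENT_THRESHOLD:
    # ...become a correlated incident worth escalating when viewed together.
    print(f"Escalate correlated incident (score={score:.2f}) for response")
```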
Data model and data security limitations
Another concern that undermines the effectiveness of using AI to combat AI-assisted threats is the tendency of some organizations to train on limited or unrepresentative data. Ideally, AI systems should be fed real-world data that reflects what is actually happening on the ground and the specific situations an organization encounters. However, this is a gargantuan undertaking: collecting data from locations around the world to represent all possible threats and attack scenarios is very expensive, and even the largest companies avoid it as much as possible.
Security solution providers competing in a crowded market are also racing to ship their products as quickly as possible, packed with every feature they can offer but with little or no attention to data security. This exposes their training data to possible manipulation or corruption.
The good news is that there are free and cost-effective resources to address these concerns. Organizations can turn to free sources of threat intelligence and reputable cybersecurity frameworks like MITRE ATT&CK. Additionally, to reflect behaviors and activities specific to a particular organization, AI can be trained on the behavior of its users and entities. This allows the system to look beyond general threat intelligence data – such as indicators of compromise and known-good or known-bad file characteristics – and examine details specific to that organization.
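A toy sketch of that user- and entity-behavior idea follows, assuming a single hypothetical metric (daily outbound data volume) and an arbitrary z-score cutoff; real behavioral analytics baseline many signals per user and entity.

```python
# Minimal sketch of per-user behavior baselining (illustrative only).
# The metric and the z-score cutoff are assumptions, not a product feature.
import statistics

# Hypothetical history of one user's daily outbound data volume, in MB.
history = [20, 25, 18, 22, 30, 24, 21]
today = 400  # today's observed volume

mean = statistics.mean(history)
stdev = statistics.stdev(history)
z = (today - mean) / stdev

# A large deviation from this user's own baseline gets flagged for review,
# even if the absolute value would not trip a generic, global threshold.
if z > 3:
    print(f"Unusual behavior for this user (z-score {z:.1f}); flag for review")
```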
On the security front, there are many solutions to keep data breach attempts at bay, but these tools alone are not enough. It is also important to have appropriate regulations, standards, and internal policies in place to comprehensively thwart data attacks aimed at preventing AI from properly identifying and blocking threats. The ongoing government-initiated negotiations on AI regulation and MITRE's proposed AI security regulatory framework are steps in the right direction.
The supremacy of human intelligence
The era in which AI can override human decisions is still decades, if not centuries, away. This is generally a positive thing, but it has a dark side. It is good that humans can override AI judgments or decisions, but it also means that threats targeting humans, such as social engineering attacks, remain powerful. For example, an AI security system can automatically neutralize links in an email or web page after detecting risks, but human users can also override or disable this mechanism.
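A minimal sketch of that trade-off, assuming a naive regular-expression risk check (real products use trained classifiers and threat intelligence feeds, not a one-line pattern): the protection works automatically, yet a single human-controlled switch removes it.

```python
# Sketch of automated link neutralization with a human override (illustrative
# only; the "risk check" here is a deliberately crude stand-in).
import re

SUSPICIOUS = re.compile(r"https?://\S*(login|verify|reset)\S*", re.IGNORECASE)

def defang(body, protection_enabled=True):
    """Rewrite risky-looking links so they are not clickable."""
    if not protection_enabled:   # a human can switch the control off,
        return body              # which is exactly the residual risk
    return SUSPICIOUS.sub("[link removed by security policy]", body)

email = "Please confirm your account at http://example.com/verify-now"
print(defang(email))                             # link stripped automatically
print(defang(email, protection_enabled=False))   # human override leaves it intact
```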
In short, our ultimate reliance on human intelligence limits AI technology's ability to counter AI-assisted cyberattacks. As threat actors indiscriminately automate the generation of new malware and the spread of attacks, existing AI security solutions are designed to defer to human decisions and avoid fully automated actions, especially in light of AI's "black box" problem.
For now, the goal is not to achieve an AI cybersecurity system capable of operating entirely autonomously. Vulnerabilities created by the dominance of human intelligence can be addressed through cybersecurity education. Organizations can conduct regular cybersecurity training to ensure employees use security best practices and help them become better at detecting threats and assessing incidents.
It is okay – and necessary – to rely on human intelligence, at least for the moment. However, it is important to ensure that this does not become a vulnerability that cybercriminals could exploit.
Takeaways
It is harder to build and protect things than to destroy them. Using AI to combat cyber threats will always be a challenge due to various factors, including the need to establish trust, the caution needed when using data for machine learning training, and the importance of human decision making. Cybercriminals can easily ignore all of these considerations, sometimes giving the impression that they have the upper hand.
However, this problem is not without solutions. Trust can be built through standards and regulations, as well as the sincere efforts of security vendors to demonstrate that they have a proven track record. Data models can be secured with sophisticated data security solutions. In the meantime, our continued reliance on human decision-making can be addressed through extensive cybersecurity education and training.
The vicious cycle remains in motion, but we can find hope in the fact that it also runs in reverse: as AI threats continue to evolve, AI-based cyber defenses will evolve too.
Guest blog courtesy of Stellar Cyber. Read more Stellar Cyber guest blogs and news here. Regularly contributed guest blogs are part of MSSP Alert's Sponsorship Program.