Artificial intelligence has proven useful in helping humans spot patterns they might not otherwise connect. By sifting through reams of data, AI can surface the necessary information in seconds, far faster than a person could. The problem is that AI relies on data provided by humans and, unlike humans, it has no intuition for recognizing when certain details are wrong.
This phenomenon is frequently called “bias.” When the data pool or the algorithm is incomplete, it can produce false positives and false negatives that skew the results. As hackers become more sophisticated each year, there is a good chance that this bias will become a growing threat to cybersecurity.
Security threats may be overlooked in the future
Security threats can come from many directions. That said, China, Russia, and India sit at the top of the list of countries with the highest numbers of cybercriminals. This marks them as a “danger,” meaning an AI defense system will focus most of its scrutiny on traffic coming from those countries.
The problem is that many countries we tend to treat as low priority are slowly but surely developing a cybercrime problem of their own. For example, Japan was long considered a country where cyberattacks were rare and therefore low priority. Yet in 2012, the country recorded an 8.3% increase in cyberattacks, the highest level in over 15 years.
Humans are aware of this shift, but AI has not yet been trained to pay attention to these emerging sources. As a result, malware detection systems can ignore a threat simply because it comes from a place that was not considered a problem when the model was built. Without regular database and algorithm updates, this can significantly undermine cybersecurity efforts.
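To make the blind spot concrete, here is a minimal, hypothetical sketch of how a geolocation weight baked into an alert-scoring step can silently downgrade identical behaviour from an “unexpected” country. The country codes, weights, and scores are illustrative assumptions, not values from any real product.

```python
# Hypothetical geolocation-weighted alert scoring; weights and country codes
# are illustrative assumptions, not real threat intelligence.

HIGH_RISK_COUNTRIES = {"CN", "RU", "IN"}  # a static list fixed at training time


def alert_score(source_country: str, anomaly_score: float) -> float:
    """Combine a model's anomaly score with a static per-country weight."""
    geo_weight = 1.0 if source_country in HIGH_RISK_COUNTRIES else 0.3
    return anomaly_score * geo_weight


# Identical behaviour is scored very differently depending on its apparent origin:
print(f'{alert_score("RU", 0.8):.2f}')  # 0.80 -> likely escalated
print(f'{alert_score("JP", 0.8):.2f}')  # 0.24 -> likely ignored, same behaviour
```

In this toy version the stale list is one line to fix; in a trained model, the same assumption is baked into the learned parameters and far harder to locate.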
Hackers learn to take advantage
As more companies rely on AI systems to detect threats, hackers will likely learn to take advantage of this flaw. Many are already starting to use VPNs to hide where they are attacking from, choosing exit nodes in countries with low cybercrime rates. This plays directly into the defense system’s bias, which may fail to register a threat until it is too late.
The biggest problem here is that development teams may not even realize their system carries this type of bias. If they rely solely on the AI system to detect these threats, malware can slip through unnoticed. This is one of the main reasons it is recommended to pair AI with human intelligence, as that kind of collaboration can minimize bias.
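Exploiting that bias does not require anything sophisticated. The sketch below, with a made-up blocklist and made-up country codes, shows why a rule keyed to the observed source country only ever sees the VPN exit node, never the attacker’s real location.

```python
# Hypothetical source-country blocklist; country codes are illustrative only.

BLOCKED_ORIGINS = {"CN", "RU", "IN"}


def is_flagged(observed_country: str) -> bool:
    """Flag traffic only when the *observed* origin is on the blocklist."""
    return observed_country in BLOCKED_ORIGINS


# The defender sees the VPN exit node's country, not the attacker's real location.
real_location = "RU"   # where the attacker actually sits
vpn_exit = "NL"        # where the traffic appears to come from
print(is_flagged(real_location))  # True  -> would be flagged if seen directly
print(is_flagged(vpn_exit))       # False -> same attacker, now invisible to the rule
```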
The growing risk of false positives
We have talked about how AI bias can lead to a false negative, wrongly classifying a real threat as a non-issue. But the opposite can also happen: the AI’s biases can produce false positives in its reports, flagging a problem where none exists.
This factor is particularly easy to overlook, especially now that many companies use AI detection tools precisely to reduce false positives. Yet narrow or skewed training data can leave detection systems unable to tell benign content apart from malicious content, leading to overclassification. This becomes especially problematic now that social media has made slang and code words part of everyday communication.
For example, someone developing an AI threat detection model might train it to associate slang and abbreviations with phishing. This could result in important emails being classified as spam, leading to potential delays in production. When employees communicate informally via email or chat, a phishing warning may be triggered unnecessarily, sending a ticket to the cybersecurity team.
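A deliberately naive sketch of that failure mode might look like the following. The keyword list, scoring rule, threshold, and example message are invented for illustration and do not come from any real detection tool.

```python
# Naive keyword-based phishing scoring; everything here is a made-up example.

SUSPICIOUS_TOKENS = {"urgent", "asap", "click", "pls", "free"}
ALERT_THRESHOLD = 0.3  # arbitrary cutoff for this sketch


def phishing_score(message: str) -> float:
    """Score a message by the fraction of its tokens on the suspicious list."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token.strip(".,!?") in SUSPICIOUS_TOKENS)
    return hits / len(tokens)


# A casual internal message trips the same rule a phishing email would:
internal = "pls click the deploy button asap, it's urgent"
score = phishing_score(internal)
print(f"{score:.2f}", score > ALERT_THRESHOLD)  # 0.50 True -> false-positive ticket
```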
This may seem like a good thing because the system is “at least detecting something.” However, these false positives distract from genuine threats. Because the biased model cannot differentiate between phishing and ordinary communication between teams, it puts unnecessary pressure on the security department, and those are exactly the moments hackers are likely to exploit to launch an attack.
An ever-changing cybersecurity landscape
Perhaps the biggest threat AI poses to cybersecurity is its inability to keep up with the changing dynamics of the threat landscape. As technology evolves faster than ever before, so do cybersecurity threats. Hackers are also becoming more resourceful in their attacks, with more than 150,000 attacks occurring per hour. Some of these attacks follow a known pattern, but others look for new ways to bypass security.
Training an AI model can take months, sometimes even years, before it reliably recognizes a new threat. This creates blind spots in a company’s security posture, leading to breaches the malware detection system simply never sees. The issue is compounded by how heavily people rely on the AI system’s ability to sift through large amounts of data quickly. Human error can pose a significant cybersecurity threat, and so can depending on a system that is slow to evolve.
AI technology is constantly evolving, especially when it comes to deep learning models. These models can be opaque and complex, which makes them difficult to interpret. Tracing a bias back to its origin can therefore be very demanding, and mitigating it harder still. Completely removing every bias is not the ideal path either, since some of that weighting reflects genuine, well-documented threats that should not be ignored. This is why a hybrid model of human intelligence and AI should be used, as it can keep bias from getting out of control.
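In practice, the hybrid approach often comes down to a routing decision: act automatically only when the model is confident, and hand everything ambiguous to an analyst. The sketch below uses arbitrary thresholds purely to illustrate the idea; they are not recommended values.

```python
# Hypothetical hybrid triage: auto-handle only high- and low-confidence alerts,
# route the ambiguous middle band to a human analyst. Thresholds are arbitrary.

def route_alert(model_confidence: float) -> str:
    """Decide how an alert is handled based on the model's confidence."""
    if model_confidence >= 0.9:
        return "auto-block"    # near-certain threat -> act immediately
    if model_confidence <= 0.2:
        return "auto-dismiss"  # almost certainly benign -> drop quietly
    return "human-review"      # ambiguous -> queue for an analyst


for confidence in (0.95, 0.55, 0.10):
    print(confidence, "->", route_alert(confidence))
```

The point is not the specific cutoffs, which should be revisited as the landscape shifts, but that ambiguous cases reach a person instead of being silently dropped or silently escalated.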
The bottom line
Combating AI bias can be challenging, especially as the landscape shifts on several fronts. However, frequent testing can mitigate the bias and keep it from opening the door to a disproportionate number of attacks. Although bias cannot be eliminated entirely, it can be controlled with appropriate human involvement.