The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, issuing new guidelines to help companies protect their AI from hackers.
As AI becomes increasingly integrated into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's cybersecurity fact sheet provides insight into the unique security challenges of AI and suggests steps businesses can take to strengthen their defenses.
“AI offers unprecedented opportunities, but can also present opportunities for malicious activity. The NSA is uniquely positioned to provide cybersecurity advice, AI expertise and advanced threat analysis,” NSA Cybersecurity Director Dave Luber said in a press release Monday (April 15).
Hardening against attacks
The report suggests that organizations using AI systems should implement strict security measures to protect sensitive data and prevent misuse. Key measures include conducting continuous compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring, and limiting access to model weights.
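One of these measures, limiting and verifying access to model weights, can be illustrated with a minimal sketch: checking a weights file against a trusted hash before loading it. The file path, digest and function names below are hypothetical stand-ins, not part of the NSA guidance; real deployments would pair such a check with signed manifests and access controls.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_weights(path: str, expected_digest: str) -> bool:
    """Refuse to load model weights whose hash differs from a trusted record."""
    return sha256_of(path) == expected_digest
```

Trained-model files would then be loaded only after `verify_weights` passes, so a tampered weights file is caught before it ever reaches production.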
“AI is vulnerable to hackers due to its complexity and the large amounts of data it can process,” Jon Clay, vice president of threat intelligence at cybersecurity company Trend Micro, told PYMNTS. “AI is software, and as such there are likely vulnerabilities that can be exploited by adversaries.”
As reported by PYMNTS, AI is revolutionizing the way security teams approach cyber threats by accelerating and streamlining their processes. With its ability to analyze large data sets and identify complex patterns, AI automates the early stages of incident analysis, allowing security experts to start with a clear understanding of the situation and respond more quickly.
Cybercrime continues to grow with the increasing adoption of a connected global economy. According to an FBI report, the United States alone saw losses from cyberattacks exceed $10.3 billion in 2022.
Why AI is vulnerable to attacks
AI systems are particularly prone to attacks because of their reliance on data for training models, according to Clay.
“Since AI and machine learning depend on providing and training data to build their models, compromising this data is an obvious way for bad actors to poison AI/ML systems,” Clay said.
He highlighted the risks of these hacks, explaining that they can lead to theft of confidential data, insertion of harmful commands and biased results. These issues could upset users and even lead to legal problems.
Clay also highlighted the challenges of detecting vulnerabilities in AI systems.
“It can be difficult to identify how they process data and make decisions, making vulnerabilities harder to detect,” he said.
He noted that hackers are looking for ways to circumvent AI safeguards to alter its outputs, a technique increasingly discussed in underground online forums.
When asked about steps companies can implement to improve AI security, Clay emphasized the need for a proactive approach.
“It is not realistic to ban AI outright, but organizations must be able to manage and regulate it,” he said.
Clay recommended adopting zero-trust security models and using AI itself to improve security measures, for example by analyzing the sentiment and tone of communications and verifying web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.
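The multi-factor authentication Clay mentions can be sketched in a few lines: a time-based one-time code, in the style of RFC 6238 TOTP, checked alongside the usual credential before an AI system grants access. The gate function and secret below are hypothetical stand-ins for illustration, not any vendor's actual API.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    counter = int(time.time() if t is None else t) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def allow_model_access(password_ok: bool, code: str, secret: bytes, t=None) -> bool:
    """Second-factor gate: the password check AND a matching one-time code."""
    return password_ok and hmac.compare_digest(code, totp(secret, t))
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` check could leak timing information an attacker can exploit.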
As businesses adopt AI to improve efficiency and innovation, they also expose themselves to new vulnerabilities, Malcolm Harkins, chief security and trust officer at cybersecurity company HiddenLayer, told PYMNTS.
“AI is the most vulnerable technology deployed in production systems because it is vulnerable at multiple levels,” Harkins added.
Harkins advised businesses to take proactive steps, such as implementing purpose-built security solutions, regularly assessing the robustness of AI models, monitoring continuously, and developing comprehensive incident response plans.
“If real-time monitoring and protection are not in place, AI systems will surely be compromised, and the compromise will likely go unnoticed for long periods of time, creating a risk of greater damage,” Harkins said.