Interest in artificial intelligence, and specifically generative AI, continues to grow as platforms such as OpenAI’s ChatGPT and Google Bard seek to disrupt multiple industries over the coming years. A recent report from research firm IDC found that spending on generative AI exceeded $19 billion in 2023; that spending is expected to double this year and reach $151 billion by 2027.
For technology professionals looking to take advantage of the lucrative career opportunities this developing field offers, understanding how these AI models work is essential. While many of these conversations focus on how these platforms can automate manual processes and streamline operations, there is growing concern about how AI can be corrupted and manipulated…and why it is also essential to understand these aspects of the technology.
To shed additional light on these questions, the National Institute of Standards and Technology (NIST) has published a new document entitled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” which examines the security and privacy issues that organizations may face when deploying AI and machine learning (ML) technologies. The document details several troubling security issues, including scenarios such as corrupted or manipulated data used to train large language models (also known as “poisoning”), supply chain vulnerabilities, and breaches involving personal or business data.
“Despite significant progress made by AI and machine learning (ML) in many different application areas, these technologies are also vulnerable to attacks that can cause spectacular failures with disastrous consequences,” the NIST report warns in its introduction.
Although the NIST report is written primarily for AI developers, other technology and cybersecurity professionals may benefit from reading the document and incorporating its lessons into their skill sets, especially as AI becomes a more important part of their daily responsibilities.
“Understanding the evolving threat landscape and the techniques adversaries use to manipulate AI is critical so defenders can test these use cases against their own models to effectively secure their AI systems and to defend against AI-powered attacks,” said Nicole Carignan, vice president of strategic cyber AI at security firm Darktrace.
The NIST report offers guidelines for how tech professionals should approach AI, which can make them more valuable to their current organization or potential employers. Several security experts and industry insiders gave their views on the document’s three key points.
AI Security Matters Right Now
The NIST document highlights several important security issues that AI and generative AI technologies are vulnerable to, whether exploited by a malicious actor or introduced through bad data used to train the models.
The four major threats identified by NIST include:
- Evasion attacks: This technique manipulates inputs after the AI model is deployed in order to change how the system responds to them.
- Poisoning attacks: This technique uses corrupted or malicious data to damage (or “poison”) the model before it is deployed (a simple sketch of this appears below).
- Privacy attacks: These incidents involve the collection of private personal data or sensitive company information by exploiting weaknesses in the model.
- Abuse attacks: In this scenario, an attacker inserts incorrect or questionable information into a source (such as a web page or online document) that an AI then absorbs as part of its training.
“There are many opportunities for malicious actors to corrupt this data, both during the training period of an AI system and afterward, as the AI continues to refine its behaviors by interacting with the physical world,” the NIST authors note. “This may cause the AI to operate in undesirable ways. Chatbots, for example, could learn to respond with abusive or racist language when their guardrails are circumvented by carefully crafted malicious prompts.”
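To make the poisoning scenario concrete, here is a minimal sketch of a label-flipping attack on a toy classifier. The dataset, model choice, and 30 percent flip rate are illustrative assumptions, not drawn from the NIST report; the point is simply that silently corrupting a fraction of training labels degrades the resulting model.

```python
# Minimal sketch of a label-flipping "poisoning" attack on a toy classifier.
# The dataset, model, and flip fraction are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Build a small synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def flip_labels(labels: np.ndarray, fraction: float, seed: int = 0) -> np.ndarray:
    """Return a copy of `labels` with a random fraction flipped (0 <-> 1)."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Train one model on clean labels and one on partially flipped labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, fraction=0.3))

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Even this crude manipulation typically drags test accuracy down noticeably; real-world poisoning of large language models is subtler, but the underlying risk of tainted training data is the same.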
The rapid emergence of various AI tools over the past year demonstrates how quickly things can change for the cybersecurity workforce and why technology professionals need to stay up to date, especially when it comes to security, said Dean Webb, Cybersecurity Solutions Engineer at Merlin Cyber.
“While defensive AI tools will help counteract most AI-based attacks, the improved generation of phishing and other social engineering attacks directly preys on often untrained humans,” Webb told Dice. “We’ll need to find better ways to automate defenses across corporate and personal email, text, and chatbots to help us keep up with AI-enhanced social engineering.”
While large companies like OpenAI and Microsoft can deploy red teams to test their AI products for vulnerabilities, many other organizations lack the experience or resources to do so. However, with the growing popularity of generative AI, businesses will need security teams that understand these technologies and their vulnerabilities.
“As AI is used in more and more software systems, the task of securing AI against adversarial machine learning (AML) attacks may increasingly fall under the responsibility of organizational security departments,” said Theus Hossman, director of data science at Ontinue. “In anticipation of this shift, it is important that CISOs and security experts familiarize themselves with these emerging threats and integrate this knowledge into their broader security strategies.”
Create Secure AI Code and Applications
The NIST report details how generative AI LLMs can be corrupted during the training process.
These corruptions during the development process also demonstrate that tech professionals, developers, and even cybersecurity workers need to take the same approach to AI as they do when creating secure code for other types of applications.
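As a simple illustration of that secure-development mindset, a team might verify the integrity of its training data before every run. The file names and digest values below are hypothetical placeholders, and this is only a sketch of one possible check, not a prescription from the NIST report.

```python
# Minimal sketch of a pre-training integrity check, assuming the team
# records a known-good SHA-256 digest for each training file.
# File names and the EXPECTED_DIGESTS mapping are hypothetical placeholders.
import hashlib
from pathlib import Path

EXPECTED_DIGESTS = {
    "train.csv": "replace-with-known-good-sha256-digest",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path) -> None:
    """Refuse to proceed if any training file does not match its recorded digest."""
    for name, expected in EXPECTED_DIGESTS.items():
        actual = sha256_of(data_dir / name)
        if actual != expected:
            raise RuntimeError(f"{name}: digest mismatch; dataset may have been tampered with")

# verify_dataset(Path("data/"))  # run before every training job
```

A check like this does not close every poisoning path, but it can catch silent tampering with training artifacts in the supply chain before they reach the model.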
“AI security and AI innovation go hand in hand. Historically, security was an afterthought in developing AI models, leading to a skills gap between security practitioners and AI developers,” Darktrace’s Carignan told Dice. “As we continue to embark on the AI revolution, research innovation and information sharing within the industry are critical to enabling AI developers and security practitioners to expand their knowledge.”
As the technology becomes increasingly integrated into organizations’ infrastructure, developing AI models and anticipating how they can be corrupted will become essential skills for developers and security teams looking for vulnerabilities, noted Mikhail Kazdagli, head of AI at Symmetry Systems.
“When AI algorithms are trained on incorrect, biased, or unrepresentative data, they can develop erroneous patterns and biases. This can lead to inaccurate predictions or decisions, perpetuating existing biases or creating new ones,” Kazdagli told Dice. “In extreme cases, if data is maliciously tampered with, this can lead to unpredictable or harmful behavior in AI systems. This is particularly important when AI is used in decision-making processes. Data integrity and quality are therefore essential to ensure that AI systems perform as intended and produce fair and reliable results.”
Adversaries Understand AI…and Tech Pros Should Too
Since ChatGPT’s release in November 2022, researchers have warned about how adversaries, whether cybercriminals or sophisticated state actors, are likely to take advantage of these new platforms.
Already, phishing and other cyber threats have been linked to the malicious use of generative AI technologies, and these trends are likely to increase, the NIST paper notes. This means that technology and cybersecurity professionals need to know the vulnerabilities inherent in AI models and how adversaries exploit these flaws.
“Threat actors and adversaries are not only looking to use AI to optimize their operations, but geopolitical threat actors are also looking to acquire valuable intellectual property about AI. Adversaries look for vulnerabilities to obtain valuable intellectual property, such as patterns or weights used in models, or the ability to extract the sensitive data the model was trained on,” Carignan explained. “Attackers could have various AML goals, such as poisoning the competition, reducing accuracy to outperform competitors, or controlling the processing or output of a machine learning system so it can be used maliciously or to impact critical AI use cases.”
As AI and machine learning applications become more commonplace, not only will technology and cybersecurity professionals need to understand what they can and cannot do, but this knowledge will also need to be disseminated throughout the organization, noted Gal Ringel, CEO of Mine, a data privacy management company.
This will require knowing how attackers exploit the technology and what defenses can prevent threats from spiraling out of control.
“For those who don’t know the full extent of new attack techniques, it will be virtually impossible to build an infrastructure that is agile and flexible enough to deal with them,” Ringel told Dice. “Given the evolution of deepfakes and audio cloning, among other things, an AI knowledge base is going to become indispensable for everyone who uses the internet in a few years, and frameworks like the updated one from NIST can provide a foundational reference for the first wave of people learning about the subject.”