Businesses are vulnerable to a range of cybersecurity and privacy risks as 70% of business leaders prioritize innovation over security when it comes to generative AI projects , according to a new report from IBM.
The survey also found that less than a quarter (24%) of generative AI projects are secure.
This is despite 82% of respondents admitting that secure and reliable AI is essential to the success of their business.
Early checks essential to mitigate violations
Talk to Information security Commenting on the findings, Akiba Saeedi, vice president of data security at IBM, said it is critical to avoid mistakes made in the past when deploying cloud technologies, which were often implemented without that adequate security controls are incorporated.
For example, she noted that cloud misconfigurations are now one of the most common ways bad actors infiltrate cloud environments. AI misconfigurations are also likely to be a major contributor to breaches in the future if proper security controls are not established up front by organizations.
“We’re in this education phase to really help organizations become more mature,” Saeedi noted.
Executives interviewed in the report highlighted a range of concerns related to deploying generative AI tools in their organization. More than half (51%) cited unpredictable risks and new security vulnerabilities resulting from generative AI, while 47% highlighted new attacks targeting existing AI models, data and services.
The main forms of emerging threats to AI operations highlighted in the report were:
- Extracting the model: Steal the behavior of a model by observing relationships between inputs and outputs
- Rapid injection: Manipulate AI models to perform unintended actions by removing guardrails and limitations put in place by developers
- Inversion Feats: Insights into the data used to train a model revealed
- Data poisoning: Change the behavior of AI models by modifying the data used to train them
- Backdoor exploits: Subtly modify a pattern during training to cause involuntary behaviors under certain triggers
- Model Escape: Circumvent the intended behavior of an AI model by creating inputs that fool it
- Supply Chain Exploits: Generate harmful models that hide malicious behavior or target vulnerabilities in systems connected to AI models
- Data exfiltration: Access and steal sensitive data used in training and tuning models via vulnerabilities, phishing, or misused privilege credentials.
Saeedi warned: “The generative AI model itself presents a new threat landscape that did not exist before. »
Securing Generative AI in the Workplace
Most (81%) of respondents agree that generative AI requires a fundamentally new security governance model to mitigate the types of risks posed to these technologies.
IBM noted that governments around the world are introducing a series of regulations on AI. These include the European AI law and that of President Joe Biden Executive Decree “Promoting the use of trustworthy AI in the federal government” in the United States.
This further requires an overall governance strategy specifically for generative AI, the researchers said.
The following security considerations should be integrated into this governance framework, according to the report:
- Perform threat modeling to understand and manage emerging threat vectors
- Identify open source and widely used models that have been thoroughly scanned for vulnerabilities, tested, and approved.
- Manage training data workflows, such as using encryption in transit and at rest
- Protect training data from poisoning and exploits that could introduce inaccuracies or biases, compromising model behavior.
- Strengthen security for API and plugin integrations with third-party templates
- Monitor patterns over time to detect unexpected behavior, malicious output, and security vulnerabilities that may appear.
- Use identity and access management practices to manage access to training data and models
- Manage compliance with laws and regulations related to data privacy, security and responsible use of AI
Saeedi highlighted that most mature organizations will already have a governance, risk and compliance framework that can be built upon to realize such a model, developing the new metrics required for generative AI.
She said the most important aspect remains having an adequate basic security infrastructure in place.
“Data moves, and you need data security and the identity of those accessing those systems to know the lifecycle of where that data is going,” Saeedi advised.
Security components should fit into a broader AI governance program, encompassing aspects such as bias and trustworthiness.
“We need to integrate the security context in this area, without there being two distinct silos,” added Saeedi.
Shadow AI emerges as a threat
THE report also highlighted the growing threat of “shadow AI” to businesses.
This issue stems from employees sharing private organizational data with third-party applications integrated with generative AI, such as OpenAI’s ChatGPT and Google Gemini. This can lead to:
- Sensitive or privileged data exposed
- Proprietary data incorporated into third-party models
- Expose data artifacts that could be vulnerable in the event of a vendor data breach
Such risks are particularly difficult for security teams to assess and mitigate because they are unaware of their use, the report said.
IBM has outlined the following actions that organizations should consider taking to mitigate the risk of shadow AI:
- Establish and communicate policies that address the use of certain organizational data in public models and third-party applications
- Understand how third parties will use prompt data and whether they will claim ownership of that data
- Assess the risks of third-party services and applications and understand what risks they are responsible for managing
- Implement controls to secure the application interface and monitor user activity, such as rapid input/output content and context.