IBM and AWS study: Less than 25% of current generative AI projects are secure
The business world has long operated on the idea that trust is the currency of good business. But as AI transforms and redefines how businesses operate and how customers interact with them, trust in the technology must be built.
Advances in AI can free up human capital to focus on high-value deliverables. This evolution is bound to have a transformative impact on business growth, but the user and customer experience depends on organizations’ commitment to creating secure, responsible and trustworthy technology solutions.
Businesses need to determine whether the generative AI interacting with their users can be trusted, and security is a fundamental component of trust. Therein lies one of the biggest challenges businesses face today: securing their AI deployments.
Innovate now, secure later: a disconnect
Today, the IBM® Institute for Business Value released the Securing Generative AI: What Matters Now study, co-authored by IBM and AWS, presenting new data, practices and recommendations on securing generative AI deployments. According to the IBM study, 82% of executives surveyed said that secure and reliable AI is essential to the success of their business. While this sounds promising, 69% of executives surveyed also indicated that when it comes to generative AI, innovation trumps security.
Prioritizing innovation over security may seem like a choice, but in fact it’s a test. There is a clear tension here: organizations recognize that the stakes are higher than ever with generative AI, yet they are not applying the lessons learned from previous technological disruptions. Much like the transitions to hybrid cloud, agile software development and Zero Trust, generative AI security can become an afterthought. More than 50% of respondents are concerned about unpredictable risks impacting generative AI initiatives and fear they will create increased potential for business disruption. Yet they report that only 24% of current generative AI projects are secure. Why is there such a disconnect?
Security indecision may be both an indicator and a result of a broader generative AI knowledge gap. Nearly half of respondents (47%) said they are unsure where and how much to invest in generative AI. Even as teams test new features, leaders are still determining which generative AI use cases make the most sense and how to adapt them to their production environments.
Securing generative AI starts with governance
Not knowing where to start can also be a hindrance to security measures. That’s why IBM and AWS have joined forces to develop an action guide and practical recommendations for organizations looking to protect their AI.
To build trust and security into their generative AI, organizations must start with the basics, with governance as the baseline. In fact, 81% of respondents indicated that generative AI requires a fundamentally new security governance model. Starting with governance, risk and compliance (GRC), leaders can lay the foundation for a cybersecurity strategy that protects their AI architecture and aligns with business objectives and brand values.
To secure a process, you must first understand how it should operate and what expected behavior looks like, so that deviations can be identified. AI that deviates from what it was operationally designed to do can introduce new risks with unanticipated business impacts. Identifying and understanding these potential risks helps organizations gauge their own risk threshold, informed by their unique compliance and regulatory requirements.
Once governance guardrails are in place, organizations can more effectively establish a strategy for securing the AI pipeline: the data, the models and their use, and the underlying infrastructure on which they build and integrate their AI innovations. The shared security responsibility model may shift depending on how the organization consumes generative AI. Many tools, controls and processes are available to help mitigate the risk of business impact as organizations develop their own AI operations.
Organizations must also recognize that while hallucinations, ethics and bias often come to mind first when thinking about trustworthy AI, the AI pipeline faces a threat landscape that puts trust itself at risk. Conventional threats take on new meaning, new threats use AI’s offensive capabilities as a new attack vector, and new threats seek to compromise the AI assets and services we increasingly rely on.
The trust-security equation
Security can help build trust in generative AI use cases. Achieving this synergy takes a village: the conversation must extend beyond information security and IT stakeholders to strategy, product development, risk, supply chain and customer engagement.
Because these technologies are both transformative and disruptive, managing the organization’s AI and generative AI portfolio requires collaboration across security, technology and business domains.
A technology partner can play a key role here. Drawing on technology partners’ breadth and depth of expertise across the threat lifecycle and the security ecosystem can be an invaluable asset. In fact, IBM’s study found that more than 90% of surveyed organizations use third-party products or technology partners for their generative AI security solutions. When selecting a technology partner for their generative AI security needs, surveyed organizations reported the following:
- 76% are looking for a partner to help them build a compelling cost case with a strong ROI.
- 58% ask for advice on an overall strategy and roadmap.
- 76% are looking for partners who can facilitate training, sharing and knowledge transfer.
- 75% choose partners who can guide them through the changing landscape of legal and regulatory compliance.
The study clearly shows that organizations recognize the importance of security for their AI innovations, but are still trying to understand how best to approach the AI revolution. Building relationships that can help guide, advise, and technically support these efforts is a critical next step toward protected and trustworthy generative AI. In addition to sharing key insights into executive perceptions and priorities, IBM and AWS have included an action guide with practical recommendations for taking your generative AI security strategy to the next level.
Learn more about the joint IBM-AWS study and how organizations can protect their AI pipeline.