Generative artificial intelligence (Gen AI) and its use by businesses to improve operations and profits are at the center of innovation across virtually every sector and industry. Gartner predicts that global spending on AI software will grow from $124 billion in 2022 to $297 billion in 2027. Companies are upskilling their teams and hiring expensive experts to implement new use cases, new ways of exploiting data and new ways of using open-source tools and resources. What they have failed to look at as carefully is AI security.
The IBM Institute for Business Value (IBV) interviewed executives to learn more about their awareness of and approach to AI security. The survey found that only 24% of Gen AI projects have a security component. These findings show that AI implementations are proliferating while AI security and governance controls lag behind.
This worrying statistic is likely not limited to AI implementations. As with any security program, organizations that lack foundational security are often ill-prepared to deal with threats and attacks that can impact their business, including their Gen AI applications.
Growing concerns about disruption and impact on data
The same IBV study found that most executives are aware of the threats and incidents that can impact AI initiatives. Respondents expressed concerns about their adoption of Gen AI, with more than half (56%) saying they fear an increased risk of business disruption. A whopping 96% said that adopting Gen AI makes a security breach likely in their organization within the next three years.
As the likelihood of attackers targeting AI initiatives increases, breaches and disruptions are likely to catch organizations by surprise unless concerns about risks are translated into actionable plans.
Incident response planning and drilling can benefit from extra attention, even for technical teams. Only about one in five companies have response plans in place, and the numbers are lower for those that also have executive-level strategic plans. These statistics are alarming given that the stakes of breaches involving AI-related data are higher, mainly due to the volume of data involved, its sensitive classification, and the expanded attack surface when interacting with third-party platforms, models, and infrastructure.
Significant threats are already here
Do you expect AI security to involve exotic threats that will take attackers a while to master? Impactful AI threats already exist. While it’s true that a fundamentally new set of threats to organizations’ Gen AI initiatives is emerging, most are tied to existing threats or to the same issues with new vulnerabilities and exposures to consider. As such, costly disruptions can come from familiar places in the technology stack.
Unprotected infrastructure, applications, application programming interfaces (APIs) and data are common places where attackers target organizations daily. These targets promise high rewards for stealing sensitive data, personally identifiable information (PII), and intellectual property. High-impact attacks too often occur due to supply chain compromise and collateral damage, and are more likely to occur when using external data sources, third-party and open-source models and APIs.
For an attacker, successfully compromising shared models is a “hack once, hit all” jackpot without requiring sophisticated skills to succeed.
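As a minimal illustration of supply chain hygiene for shared models, the Python sketch below refuses to load a third-party model artifact unless its hash matches a vetted digest. The file name, digest, and allowlist are hypothetical; in practice, a signed manifest or an internal model registry would supply the expected hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of vetted third-party model artifacts and their
# published SHA-256 digests. In practice this would come from a signed
# manifest or an internal model registry.
TRUSTED_DIGESTS = {
    "vendor-llm-7b.safetensors": "9f2c1a...",  # placeholder digest
}

def verify_model_artifact(path: Path) -> bool:
    """Return True only if the artifact exists and matches its vetted digest."""
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None or not path.is_file():
        return False  # unknown or missing artifact: untrusted by default
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

artifact = Path("models/vendor-llm-7b.safetensors")
if not verify_model_artifact(artifact):
    raise RuntimeError(f"Refusing to load unverified model artifact: {artifact}")
```

Failing closed on unknown or mismatched artifacts is the point: a compromised upstream model should never reach production silently.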
Is your security team prepared for an AI compromise?
Gen AI can be very beneficial to the business, and innovation tends to take priority over AI security. In most cases, Gen AI initiatives are not secured, and attacks are more likely than ever to impact AI implementations.
Given these factors, is your security team prepared to handle compromises that could impact your AI workloads? What kind of thinking goes into detecting attacks that could impact your models? What about data that is transferred and used by your Gen AI initiatives in a public cloud? Beyond controls, are response plans in place to contain, root cause, and remediate an AI compromise?
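Detection does not have to start with anything exotic. The sketch below, a deliberately minimal Python example, tracks a single model-output signal (response length) against a baseline and flags drift; real monitoring would feed many such signals (refusal rates, toxicity scores, data egress volume) into a SIEM pipeline. The class name, baseline values, and three-sigma threshold are assumptions for illustration.

```python
from collections import deque
import statistics

class OutputDriftMonitor:
    """Flags when a rolling window of one output signal drifts off baseline."""

    def __init__(self, baseline_mean: float, baseline_stdev: float, window: int = 100):
        self.baseline_mean = baseline_mean
        self.baseline_stdev = baseline_stdev
        self.samples = deque(maxlen=window)

    def observe(self, response_length: int) -> bool:
        """Record a sample; return True once the full window has drifted."""
        self.samples.append(response_length)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet
        window_mean = statistics.mean(self.samples)
        # Flag when the rolling mean sits more than 3 standard deviations
        # from the baseline established during normal operation.
        return abs(window_mean - self.baseline_mean) > 3 * self.baseline_stdev

# Example: response lengths that suddenly jump well above the baseline.
monitor = OutputDriftMonitor(baseline_mean=220.0, baseline_stdev=10.0, window=5)
for length in [215, 224, 400, 410, 395, 405, 420]:
    if monitor.observe(length):
        print("Output drift detected: escalate to the SOC")
```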
When it comes to AI initiatives, let’s not forget that much of this is happening in the cloud, where visibility can be siloed and where security and response are a shared responsibility with vendors. Can you say that security teams and the other teams working on AI initiatives know how these two aspects will be handled in the event of an AI-related breach? Are there clear plans, involving the relevant stakeholders, for each type of active use case? Being prepared to activate your third-party support in the event of an AI crisis will be critical.
The view from the executive suites
Let’s assume that the technical and security teams involved in AI implementations have plans in place to detect, contain, and recover from an AI compromise. How prepared is management to do the same in other aspects of the business? A major compromise of AI models, data, or infrastructure can cause significant disruption with no clear timeline for recovery. Such an attack can quickly escalate into a crisis-level event that will require leadership to step up and lead the response.
The impact of AI is as varied as the myriad use cases organizations are implementing and can differ across industries. Consider the implications for AI-powered industrial operations, web services that use AI-powered assistants, or AI-enhanced fraud detection. A cyberattack may initially disrupt these programs, but the resulting business impact requires executive decisions about managing the cyberattack, prioritizing recovery based on real-time impact analyses, and implementing executive intent throughout the event.
Cyberattacks are known to have damaging consequences due to unauthorized access to sensitive data. Organizations may seem well prepared to deal with them, but just as some threats present new AI-related challenges, so do the response requirements. Are your data protection officer and compliance team equipped with plans to address new regulatory requirements specific to AI? Imagine a scenario where a core model has been poisoned by an attacker, causing unintended bias against specific groups of individuals: what strategic thinking is being done to detect, remediate, support, and compensate affected communities?
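As one illustration of what early detection of such poisoning could look like, the Python sketch below computes a coarse parity gap in a model’s positive-outcome rates across groups; a sudden widening of this gap on otherwise stable input data could be one signal worth investigating. The group labels and the 0.1 threshold are assumptions for the sketch, not a recommended fairness standard.

```python
def parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Return the spread between the highest and lowest positive-outcome rates."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values() if v]
    return max(rates) - min(rates)

# Hypothetical binary decisions (1 = favorable outcome) observed per group.
observed = {
    "group_a": [1, 0, 1, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0, 0],
}

if parity_gap(observed) > 0.1:
    print("Parity gap exceeds threshold: escalate for investigation")
```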
And what about other response scenarios? For example, controlled dismantling of AI-based services or operations, adapting the legal eDiscovery process to the data used by AI systems, communicating about the breach in a way that preserves customer loyalty and reputation, assessing specific legal ramifications, etc. Security teams are often not skilled at handling these types of issues.
Prepare for Gen AI disruption at every level of the organization
There is much to learn in terms of leadership and cross-functional support for AI-related breaches. Decisions about high-level direction and policies are best aligned with business goals when made from a top-down perspective.
Let’s say your organization is already implementing Gen AI use cases. In this case, the leadership team and board members should also focus on AI-related cyber crisis preparedness. Getting on track doesn’t have to be complicated. It starts with reviewing organizational governance around major cyberattacks and gaining senior management support for an AI preparedness and response plan.
Figure 1: IBM Framework for Securing Generative AI
Once plans are in place to define thresholds, crisis management, integration of response flows and coverage of the most likely scenarios, the next step is to simulate a high-impact AI compromise and test the organization’s ability to mount an effective response.
According to the Cost of a Data Breach Report, organizations that regularly test their incident response can significantly reduce the duration and costs of breaches and better withstand disruptions.
Strengthening preparedness by planning for AI-related disruptions
A large-scale cyberattack, especially one involving your AI implementations, can quickly escalate and significantly impact your operations, brand, reputation, and financial position. It can even threaten the very existence of the company. Preparedness is a critical and recurring activity that can help reduce the impact of crisis-level attacks.
Start by developing plans for your technical and executive teams, with corresponding roles and action plans as well as shared escalation paths and criteria. Once the plan is in place, develop playbooks for your most impactful AI-related disruption scenarios to guide teams through activities to detect, contain, eradicate, and launch the most effective recovery strategy.
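One way to make such playbooks drillable is to capture them as structured, versionable data rather than static documents. The Python sketch below shows one possible shape for that idea; the scenario, owner, and steps are hypothetical examples, not a prescribed methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """A minimal, versionable representation of one AI-incident scenario."""
    scenario: str
    owner: str
    detect: list[str] = field(default_factory=list)
    contain: list[str] = field(default_factory=list)
    eradicate: list[str] = field(default_factory=list)
    recover: list[str] = field(default_factory=list)

poisoned_model = Playbook(
    scenario="Suspected poisoning of a production model",
    owner="AI incident commander",
    detect=["Review output-drift and fairness alerts", "Confirm with model owners"],
    contain=["Route traffic to the last known-good model version"],
    eradicate=["Audit training data and retrain from vetted sources"],
    recover=["Staged rollout with enhanced monitoring", "Executive status briefing"],
)
```

Stored this way, playbooks can be diffed, reviewed, and exercised on a schedule like any other operational asset.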
Remember to build key performance indicators into your plans and develop a feedback process that is robust enough to provide lessons learned across the organization. These lessons learned can serve as a solid benchmark for evolving plans over time.
IBM X-Force is here to help. Our proactive experts can help you develop plans that align with industry standards and best practices. You can count on X-Force’s extensive experience gained from countless engagements across all industries.
Detailed plans for technical teams: IBM X-Force Incident Response specializes in incident preparation, detection, response, and recovery. Through planning and testing, our goal is to reduce the business impact of a breach and improve resilience to attacks. Schedule a Discovery Briefing with our X-Force team to discuss planning the technical response to an AI-related compromise.
Strategic Planning for Management Teams: If your team wants to develop executive plans and playbooks for AI-related compromise, check out X-Force Cyber Crisis Management.