Innovation in generative artificial intelligence (GenAI) over the past 18 months has been unprecedented. Gartner predicts that by 2026, more than 80% of enterprises will have deployed GenAI applications in production environments and/or used GenAI application programming interfaces or models, up from less than 5% in 2023.
But the development and deployment of GenAI applications are fraught with security risks, and security has not kept pace with innovation. This is part of a broader trend in AI security: 82% of respondents to a recent IBM Institute for Business Value study acknowledged that safe and reliable AI is now critical to their business's success, yet 69% said that innovation takes precedence over security.
How can organizations benefit from developing and deploying GenAI applications without compromising security?
Risks and Opportunities of Building GenAI Applications
Enterprises can benefit the most from customizing AI models with proprietary data. Using a generic, off-the-shelf model provides limited value. Gartner predicts that by 2027, more than 50% of GenAI models used by enterprises will be specific to an industry or business function, up from about 1% in 2023.
Many organizations are using the Retrieval-Augmented Generation (RAG) architecture for their GenAI applications. RAG retrieves an organization's proprietary data at query time and inserts it into the prompt sent to a large language model (LLM), grounding the model so it generates accurate, desired results without retraining. This approach allows teams to develop an application specific to their organization and its unique needs.
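To make that flow concrete, here is a minimal RAG sketch in Python. Everything in it is illustrative: the toy bag-of-words embedding and in-memory document list stand in for a real embedding model and vector store, and the assembled prompt is simply printed rather than sent to an LLM.

```python
# Minimal, illustrative RAG flow. embed() is a toy stand-in for a real
# embedding model; DOCS stands in for a vector store of proprietary data.
from collections import Counter
import math

DOCS = [
    "Refund policy: enterprise customers may cancel within 30 days.",
    "On-call rotation: platform team pages escalate after 15 minutes.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a trained model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Proprietary data enters the LLM only through the prompt, at query time.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?"))
```

The property the sketch makes visible is that proprietary data reaches the model only through the prompt, which is exactly why prompt traffic becomes the surface to protect.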
But this creates a risk that confidential data will leave the organization and even be used to train other models. Proprietary data is provided to the LLM in the form of a prompt, which can then be subject to attack.
The risk of exfiltration of sensitive or confidential data is a major concern when using these models, but it is a manageable one. Implementing governance and security controls at the ingress and egress levels addresses the main security concerns. Fine-grained controls are needed to regulate data and traffic arriving from the internet and other external sources, as well as traffic leaving the GenAI application, so that sensitive data does not leave the enterprise in an unregulated manner.
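As one illustration of an egress-level control, the sketch below screens an outbound response for patterns that look like secrets or personal data before it leaves the application boundary. The pattern list and fail-closed behavior are assumptions for the example, not a complete data-loss-prevention policy.

```python
# Illustrative egress guardrail: block responses that appear to contain
# sensitive data before they leave the application boundary. The regexes
# are simplistic placeholders, not a production DLP ruleset.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def egress_check(response: str) -> str:
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(response)]
    if findings:
        # Fail closed: refuse to emit the response rather than risk a leak.
        raise PermissionError(f"Egress blocked, matched: {findings}")
    return response

print(egress_check("The quarterly summary is attached."))  # passes
```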
Tensions Are Rising Between Developers and Security Teams
Companies looking to develop and deploy GenAI applications need to empower their developers and give them the freedom to experiment. But platform engineers and security teams tasked with avoiding and mitigating security risks want to establish as many controls as possible. This creates tension between these two equally important groups.
We need to build security into every layer of the GenAI development and deployment lifecycle to avoid potentially devastating consequences. Today, teams are finding it challenging to implement governance and security guardrails. Platform and/or security engineers need to implement granular controls at the GenAI application level, on both the input and output sides, to regulate what can go in and what can go out. Without guardrails, it’s impossible to empower developers to experiment and innovate safely.
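To show what those granular, application-level controls might look like, the sketch below wraps a model call with ingress rules on the prompt and egress rules on the output. The rule functions and the guarded_llm_call helper are hypothetical; a real deployment would combine checks like these with network-layer policy.

```python
# Illustrative input/output guardrails wrapped around an LLM call.
# The rule functions are naive placeholders for real policy checks.
from typing import Callable

def deny_injection(prompt: str) -> None:
    # Placeholder check for a common prompt-injection phrase.
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("Ingress rule rejected prompt")

def deny_internal_hosts(output: str) -> None:
    if ".internal.example.com" in output:
        raise ValueError("Egress rule rejected output")

def guarded_llm_call(prompt: str,
                     llm: Callable[[str], str],
                     ingress_rules=(deny_injection,),
                     egress_rules=(deny_internal_hosts,)) -> str:
    for rule in ingress_rules:   # regulate what can go in
        rule(prompt)
    output = llm(prompt)
    for rule in egress_rules:    # regulate what can go out
        rule(output)
    return output

# Usage with a stubbed model:
print(guarded_llm_call("Summarize our release notes.", lambda p: "Done."))
```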
To foster mutually beneficial relationships, security leaders must include developers in their security efforts, allow them to participate in defining their own security controls, and provide them with the appropriate tools to achieve their goals.
What Place for Open Source?
Open source creates additional opportunities and risks. Assuming organizations have multiple GenAI applications, each composed of multiple services, they likely use a fair amount of open source, both in their applications and in some off-the-shelf open source models.
While teams can use off-the-shelf models, these come with risks because there is no way to guarantee they have not been compromised. It is essential to enforce some level of multi-tenancy or isolation between applications, at the application or even the namespace level, so that if one application is compromised, boundaries are in place to ensure that others are not. These safeguards are preventative: in the event of a compromise, they contain the breach to a limited blast radius.
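As a sketch of namespace-level isolation, the example below uses the official Kubernetes Python client to create a dedicated namespace for one GenAI application and apply a default-deny NetworkPolicy, so that a compromise there cannot spread laterally. It assumes a configured kubeconfig, and the namespace name is illustrative.

```python
# Illustrative namespace isolation with the Kubernetes Python client:
# one namespace per GenAI app, locked down with a default-deny policy.
# Assumes `pip install kubernetes` and a configured kubeconfig.
from kubernetes import client, config

config.load_kube_config()
ns = "genai-app-a"  # hypothetical tenant namespace

client.CoreV1Api().create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=ns))
)

# An empty pod selector plus both policy types with no allow rules denies
# all ingress and egress by default within the namespace.
deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny", namespace=ns),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress", "Egress"],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(ns, deny_all)
```

With default-deny in place, each allowed flow (for example, to the model-serving endpoint) is added explicitly, keeping the blast radius of any single compromised application small.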
The excitement and energy around GenAI is incredible, but we can't sacrifice security for all this additional innovation. A decade ago, we saw an explosion of SaaS companies, and we're starting to see something similar: thousands of GenAI companies are being created to tackle different problems and address niche categories.
But these companies must themselves put security controls and safeguards in place to prevent incidents and protect data, so that the organizations using their services are not exposed to security issues. Now is the time for everyone to prioritize security: it is critical to developing and deploying GenAI applications successfully and securely.
Ratan Tipirneni, Chairman and CEO of Tigera