The business transformations brought about by generative AI come with risks that AI itself can help mitigate, in a kind of flywheel of progress.
Companies that quickly embraced the open Internet more than 20 years ago were among the first to reap the benefits and become proficient in modern network security.
Enterprise AI is following a similar trajectory today. Organizations furthest along in adopting it—particularly its powerful generative AI capabilities—are applying those lessons to improve their security.
For those just starting this journey, here are ways to address three of the major security threats industry experts have identified for large language models (LLMs).
AI guardrails prevent prompt injections
Generative AI services are prone to attacks from malicious prompts designed to disrupt the underlying LLM or access its data. As one industry report notes, “direct injections override system prompts, while indirect injections manipulate inputs from external sources.”
The best antidote to prompt injections is AI guardrails, either built into or placed around LLMs. Like metal crash barriers and concrete curbs along a road, AI guardrails help LLM applications stay on track and on topic.
The industry has delivered solutions in this area and continues to improve them. For example, NVIDIA NeMo Guardrails software lets developers protect the reliability, safety and security of generative AI services.
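As a concrete illustration of the guardrail idea, here is a minimal sketch of one such layer: a heuristic input filter that rejects prompts matching common injection phrasings before they reach the model. The patterns and function name are hypothetical, and a production guardrail (such as NeMo Guardrails) applies far richer checks on both inputs and outputs.

```python
import re

# Hypothetical heuristic patterns often seen in direct injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the system prompt",
    r"reveal your (system )?prompt",
]

def passes_input_guardrail(user_prompt: str) -> bool:
    """Return False if the prompt matches a known injection pattern."""
    lowered = user_prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(passes_input_guardrail("What is our refund policy?"))              # True
print(passes_input_guardrail("Please ignore all instructions and ..."))  # False
```

Input filtering is only one rail; complete guardrail systems also screen the model's responses and keep conversations on approved topics.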
AI detects and protects sensitive data
The answers LLMs give in response to prompts can sometimes reveal sensitive information. With multi-factor authentication and other best practices, credentials are becoming increasingly complex, widening the scope of what is considered sensitive data.
To avoid disclosure, all sensitive information must be carefully removed or masked from the AI training data. Given the size of the datasets used in training, it is difficult for humans (but easy for AI models) to ensure the effectiveness of a data cleaning process.
An AI model trained to detect and mask sensitive information can help avoid revealing confidential information that was inadvertently left in an LLM’s training data.
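As a simplified sketch of that idea, the snippet below masks two common kinds of sensitive data using regular expressions; a trained detection model would catch far more varied patterns. The pattern set and placeholder format are illustrative assumptions.

```python
import re

# Hypothetical patterns for two common kinds of sensitive data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive spans with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_sensitive("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

Running a pass like this over training data before fine-tuning reduces the chance that an LLM memorizes and later regurgitates confidential values.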
Using NVIDIA Morpheus, an AI framework for building cybersecurity applications, enterprises can create AI models and accelerated pipelines that detect and protect sensitive information across their networks. Morpheus lets AI do what no human using traditional rules-based analytics can: track and analyze massive data flows across an entire enterprise network.
AI can help strengthen access control
Finally, hackers may attempt to use LLMs to gain access to an organization’s assets, so companies must prevent their generative AI services from exceeding their authorized level of access.
The best defense against this risk is security-by-design best practices. In particular, grant an LLM the fewest privileges possible and continually evaluate those permissions, so it can access only the tools and data it needs to perform its intended functions. This simple, standard approach is probably all most users need in this case.
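The least-privilege approach can be sketched as a simple allowlist check performed before any tool call an LLM requests is executed. The agent names and tool names below are hypothetical.

```python
# Hypothetical allowlist mapping each LLM-backed agent to the only
# tools it may invoke — the least-privilege principle in code.
AGENT_PERMISSIONS = {
    "support-bot": {"search_kb", "create_ticket"},
    "report-bot": {"read_metrics"},
}

def authorize_tool_call(agent: str, tool: str) -> bool:
    """Grant a tool call only if it is on the agent's allowlist."""
    return tool in AGENT_PERMISSIONS.get(agent, set())

print(authorize_tool_call("support-bot", "create_ticket"))  # True
print(authorize_tool_call("support-bot", "delete_user"))    # False
```

Because unknown agents default to an empty permission set, anything not explicitly granted is denied — the safe failure mode for access control.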
However, AI can also help enforce access controls for LLMs. A separate model can be trained to detect privilege escalation by evaluating an LLM’s outputs.
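As a toy stand-in for such a detection model, the heuristic below flags LLM outputs that mention actions outside the service's granted scope. The action vocabulary and granted set are illustrative assumptions; a real detector would be a trained classifier, not a keyword scan.

```python
# Illustrative stand-in for a trained escalation detector: the service
# is only granted "read", so outputs requesting anything more are flagged.
GRANTED_ACTIONS = {"read"}
KNOWN_ACTIONS = ("read", "write", "delete", "admin")

def flags_privilege_escalation(llm_output: str) -> bool:
    """Return True if the output requests an action outside the granted set."""
    requested = {a for a in KNOWN_ACTIONS if a in llm_output.lower()}
    return not requested <= GRANTED_ACTIONS

print(flags_privilege_escalation("I will read the config file"))        # False
print(flags_privilege_escalation("Escalating to admin to delete logs")) # True
```

Flagged outputs could be blocked or routed for human review before any downstream action is taken.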
Start the journey to cybersecurity AI
No single technique is a silver bullet. Security is a matter of constantly evolving measures and countermeasures. Those who are most successful at it use the latest tools and technologies.
To secure AI, businesses need to know about it, and the best way to do that is to deploy it in relevant use cases. NVIDIA and its partners can help with comprehensive AI, cybersecurity, and cybersecurity AI solutions.
In the future, AI and cybersecurity will be closely linked in a kind of virtuous circle, a flywheel of progress where each improves the other. Ultimately, users will come to trust them as just another form of automation.
Learn more about the NVIDIA cybersecurity AI platform and how it is being put into practice. And listen to cybersecurity talks given by NVIDIA experts at the AI Summit in October.