Editor’s Note: The following is a guest post by Anton Chuvakin, Security Advisor in the Office of the CISO at Google Cloud.
AI presents a fascinating paradox for security leaders — It is a powerful technology that promises immense benefits, but it also brings many risks, both new and old. To fully realize its potential, these persistent risks must be proactively addressed through effective risk management strategies.
By implementing safeguards combining human monitoring, strong underlying security architecture and technical controls — supported by a carefully refined cyber strategy — Organizations can reap greater benefits from AI. Let’s take a closer look at each of these strategies.
1. Build safeguards to ensure secure and compliant AI
To begin, organizations should use existing risk and governance frameworks as a basis for creating AI-specific safeguards.
Security teams must review and refine existing security policies, identify and mitigate new threat vectors introduced by generative AI, refine the scope of risk monitoring, and update training programs to keep pace with rapid advances in AI capabilities.
Essentially, a critical review of current security capabilities can provide the foundation for required AI policies.
Human involvement remains essential to oversee the AI systems that the organization builds and operates, and to establish effective frameworks. A “human in the loop” approach can help mitigate risks and promote responsible use of AI in three key areas:
- Assessing the risks associated with the use of AI: Classify AI use case risks by categorizing them based on factors such as data sensitivity, impact on individuals, and importance of mission-critical functions, which can help assess the implications and uncertainties of business decisions around AI.
- Technical or operational triggers: Once risks are identified and classified, security teams must implement technical or operational triggers that require human intervention to make critical decisions.
- Do’s and Don’ts of AI: To mitigate the risk of unauthorized use of generative AI tools (such as “shadow AI”), organizations should create an acceptable use policy, establishing an agreement on the “dos and don’ts” of how an organization and its employees will use AI in the work environment.
2. Prioritize security architecture and technical controls to support AI
Implementing secure AI requires infrastructure and application-level controls that support AI security and data protection. This involves prioritizing security architecture and using technical controls using the infrastructure/application/model/data approach:
- Building a secure infrastructure: Strengthen security with traditional measures such as network and endpoint controls, and prioritize updates to address vulnerabilities throughout the AI supply chain.
- Prioritize application security: Integrate secure development practices into your workflow, use modern analytics tools, and enforce strong authentication authorization measures. While some focus on AI-specific issues like fast injection, a classic SQL injection can “solve” them all in the same way.
- Securing the AI model: Train models to resist adversarial attacks, detect and mitigate bias in training data, and perform regular analysis AI red team exercises to identify issues. Models are also highly portable and very expensive to create, and prone to theft by malicious actors. Test the model, then protect it.
- Data security implementation: Apply strong protocols including encryption and data masking, maintain detailed data records to ensure integrity, and enforce strict access controls to protect sensitive information. Focus on training data provenance, model inputs/outputs, and other related datasets.
By prioritizing and implementing these measures, organizations can help ensure the security of their AI systems and data.
3. Expand your security strategy to protect AI from cyber threats
A live and constantly refined strategy is essential to mitigate cybersecurity threats against AI, as the field is evolving so quickly. That’s why it’s important to build strong and resilient defenses, as Google points out Secure AI Framework.
When developing a resilient cyber strategy to cover AI systems, organizations must understand the risks associated with AI — including prompt attacks, training data theft, model manipulation, adversarial examples, data poisoning, and data exfiltration.
They should also consider using AI for security purposes, such as for their threat detection and response initiatives.
Finally, organizations should strengthen their cyber resilience by developing a comprehensive incident response plan that addresses AI-specific issues and sets clear protocols for detecting, containing, and eradicating security incidents involving AI. This will ensure organizations have the right training and tools to protect their AI deployments from evolving cyber threats.
To navigate the complex AI landscape, security leaders must balance rapid technological advancements with increased risks. By adopting a multi-layered approach that combines robust safeguards, human oversight, technical security controls, and a proactive threat defense strategy, organizations can prepare for a secure and innovative future.