The rapid evolution of generative AI is sparking interest in finding ways to prevent these technologies from creating more harm than good. This is a major concern in cybersecurity as organizations grapple with the role GénAI could play a role in creating or supporting a security breach.
One way to combat such attacks is to establish a cybersecurity policy that includes AI. Let’s discuss some key security issues presented by GenAI and take a look at what to include in a generative AI security policy.
How does AI affect cybersecurity measures?
AI, and GenAI in particular, introduces a number of cybersecurity risks. Cyber adversaries use GenAI to convince social engineering and phishing scams, including deepfakes. Organizations unable to manage the risks associated with AI face data loss, system access by unauthorized users, and malware and ransomware attacks, among other things.
GenAI is also subject to rapid injection attacks, in which malicious actors use specially crafted inputs to bypass the normal restrictions of a large language model, as well as data poisoning attacks, in which attackers modify or corrupt the data of an LLM.
Organizations must also be aware of others challenges related to GenAIincluding employee exposure to sensitive data, use of shadow AI, vulnerabilities in AI tools, and breaches of compliance obligations.
AI standards and frameworks
Standards and frameworks play a key role in helping organizations develop and deploy secure AI. Each of the following ISO standards addresses AI risks to varying degrees:
- ISO/IEC 22989:2022 — Information technology: Artificial intelligence: Artificial intelligence concepts and terminology.
- ISO/IEC 23053:2022 – Framework for artificial intelligence (AI) systems using machine learning (ML).
- ISO/IEC 23984:2023 — Information technology: Artificial intelligence: Risk management guidelines.
- ISO/IEC 42001:2023 — Information technology: Artificial intelligence: Management system.
NIST developed the Artificial Intelligence Risk Management Frameworkan essential document for organizations developing and deploying secure and trusted AI.
Dozens of independently developed frameworks, many addressing risk and cybersecurity, have also been developed to help organizations build and deploy AI-based systems.
Consult the references and tools above when developing an AI-based system, especially when cybersecurity is a concern. These guidelines and their recommended activities can result in controls that can be incorporated into a GenAI security policy document. These can then be translated into detailed AI-based cyber threat management procedures.
What does an AI security policy contain?
The first decision is whether to update an existing cybersecurity policy to include AI, use a separate AI cybersecurity policy that includes GenAI, or develop a separate GenAI cybersecurity policy. For our purposes, the goal is to develop a GenAI security policy.
If the policy defines a comprehensive approach to managing AI security, it could be called the “AI Security Policy”. If it focuses on GenAI, the name could be “Generative AI Security Policy.” Logically, the former could also include GenAI.
Next, consider the following four conditions:
- Accept this security breaches occur.
- Establish procedures to identify and address suspicious activity and its source.
- Collaborate with departments such as legal services and human resources.
- Initiate activities and procedures to reduce the likelihood of such security events occurring and mitigate their severity and impact on the organization.
The following sections identify people, processes, technology, and other issues to consider in a GenAI security policy.
People
- Establish a process for identifying suspicious activities that may be associated with GenAI activities.
- Work with HR to establish procedures to identify and address employees suspected of GenAI-based security exploits.
- Work with Legal to determine how to pursue GenAI security violations.
- Identify how the company responds to such activities – for example, reprimand or termination – based on HR policies.
- Determine the legal implications if the perpetrators fight the lawsuits.
- Identify external expertise (e.g. legal, insurance) that can help with GenAI security attacks.
Process
- Review existing procedures for recovering and restoring disrupted IT operations to see if they can be used for AI-based breaches.
- Review existing disaster recovery (DR) and incident response plans to see if they can be used to recover operations from GenAI-based events.
- Develop or update existing procedures to recover, replace and reactivate IT systems, networks, databases and data affected by GenAI-based security breaches.
- Expand or update existing procedures to address the business impact (e.g., loss of revenue, reputational damage) of GenAI-based security breaches.
- Consider bringing in external experts to assist with GenAI-based events.
- Determine whether any standards or regulations have been violated by GenAI-based cyberattacks, as well as how to restore compliance.
Technology Operations
- Examine technology that can identify and track cybersecurity activities with suspected GenAI signatures, whether within the company’s IT infrastructure or with external companies, for example cloud services.
- Establish methods to stop GenAI-based activities once they are detected and verified. Quarantine affected resources until the issues are resolved.
- Review and update existing network security policies and procedures following GenAI-based attacks.
- Update or replace existing cybersecurity software and systems to be more effective in GenAI-based cyberattacks.
- Repair or replace hardware devices that have been damaged by attacks.
- Repair or replace systems and data affected by attacks.
- Ensure systems, data, network services and other critical assets are backed up.
- Ensure encryption of data at rest and in motion is in force.
- Recover IT operations, applications and systems that may have been affected by GenAI-based attacks.
- If additional expertise is needed, consider bringing in external vendors or consultants.
Security Operations
- Establish and regularly test procedures to address physical and logical violations caused by GenAI-based security events.
- Establish and regularly test procedures to prevent the theft of intellectual property and personally identifiable information.
- Establish and regularly test procedures to respond to attacks on physical security systems (e.g., closed-circuit television cameras, building access systems) from GenAI-based attacks.
- Establish and regularly test an incident response plan that addresses all types of cybersecurity events, including those resulting from GenAI-based breaches.
- If additional expertise is needed, consider bringing in external vendors or consultants.
Operation of facilities
- Develop, document and regularly test procedures to repair, replace and reactivate data center and other facilities that may have been disrupted by GenAI-based security breaches.
- Establish and regularly test procedures to deal with attacks on physical security systems from GenAI-based attacks.
- Establish and regularly test a disaster recovery plan that responds to all types of cybersecurity events, including those related to GenAI-based attacks.
- If additional expertise is needed, consider bringing in external vendors or consultants.
Financial performance
- Develop and regularly review procedures to assess the impact of GenAI-based security attacks on general financial and business operations.
- Define potential legal and regulatory penalties for non-compliance with specific regulations following GenAI-based security breaches.
- Identify potential insurance implications of GenAI-based cybersecurity attacks with the company’s insurance provider(s).
- If additional expertise is needed, consider bringing in external vendors or consultants.
Business Performance
- Develop procedures to repair potential reputational and other damage caused by GenAI-based cyberattacks.
- Develop procedures to respond to media inquiries regarding reported AI-based security violations.
Policy template
A generative AI security policy template that covers GenAI-based attacks incorporates many of the same elements as a standard template. cybersecurity policy. It also recognizes that the organization must be able to identify security vulnerabilities that have signatures indicating something other than a “normal” attack.
Use the accompanying model as a starting point for creating policy to combat GenAI-based attacks and exploits. Again, the result could be a standalone policy, or AI content could be added to an established cybersecurity policy.
Paul Kirvan is an independent consultant, IT auditor, technical writer, editor and educator. He has over 25 years of experience in business continuity, disaster recovery, security, enterprise risk management, telecommunications and IT auditing.