Last year, Executive Order 14110 ("Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence") stated that artificial intelligence (AI) "holds extraordinary potential for both promise and peril." In response to this reality, the U.S. Department of Homeland Security (DHS) recently issued guidelines to help owners and operators of critical infrastructure improve AI safety and security.
The DHS guidance draws on CISA's cross-sector analysis of AI risk assessments conducted by relevant Sector Risk Management Agencies (SRMAs) and independent regulatory agencies. DHS combined that analysis with existing U.S. government policy to develop specific safety and security guidance for mitigating AI-related risks to critical infrastructure.
"Based on CISA's expertise as National Coordinator for Critical Infrastructure Security and Resilience, the DHS guidance is the agency's first cross-sector analysis of AI-specific risks for critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI-related risks," said Jen Easterly, Director of CISA, in a report.
Cross-industry AI security threats
The DHS guidelines highlight three categories of system-level AI risk, which CISA developed in its cross-sector analysis. The categories are:
- Attacks using AI: Refers to the use of AI to automate, improve, plan or scale physical attacks or cyberattacks against critical infrastructure. Common attack vectors include AI-based cyber compromises, automated physical attacks, and AI-based social engineering.
- Attacks targeting AI systems: Covers attacks on the AI systems that support critical infrastructure. Common attack vectors include adversarial manipulation of AI algorithms, evasion attacks, and service disruption attacks (a minimal evasion-probe sketch follows this list).
- Failures in AI design and implementation: Refers to deficiencies in the planning, structure, implementation, execution, or maintenance of an AI tool or system, which can result in malfunctions or other unintended consequences affecting critical infrastructure operations. Common failure modes include autonomy, brittleness, and inscrutability.
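To make the evasion-attack category concrete, here is a minimal, hypothetical sketch of the kind of robustness probe an operator might run before trusting a model's outputs: apply a small, bounded perturbation to an input and check whether the prediction flips. The toy linear model, the epsilon budget, and the evasion_probe helper are illustrative assumptions, not part of the DHS guidance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deployed model: a linear scorer whose sign is the class.
weights = rng.normal(size=16)

def predict(x: np.ndarray) -> int:
    return int(np.dot(weights, x) > 0)

def evasion_probe(x: np.ndarray, epsilon: float = 0.1) -> bool:
    """Return True if a small, bounded perturbation flips the prediction.

    For a linear scorer, shifting each feature by epsilon against the
    weight signs is the worst-case bounded perturbation (the FGSM direction).
    """
    baseline = predict(x)
    direction = 1.0 if baseline == 1 else -1.0
    x_adv = x - epsilon * direction * np.sign(weights)
    return predict(x_adv) != baseline

# Probe a batch of inputs and report how many are evadable at this budget.
samples = rng.normal(size=(100, 16))
flipped = sum(evasion_probe(s) for s in samples)
print(f"{flipped}/100 inputs evadable at epsilon=0.1")
```

A production version of this probe would target the actual deployed model and the perturbation bounds that matter for its input domain, but the pass/fail structure stays the same.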
The four main functions of the DHS guidelines
The new DHS guidance also incorporates the NIST AI Risk Management Framework (AI RMF), which comprises four key functions that help organizations address the risks of AI systems:
- Govern: This function supports the establishment of policies, processes, and procedures to anticipate, identify, and manage the benefits and risks of AI throughout the AI lifecycle. It follows a "secure by design" philosophy, prioritizing safety and security when building organizational structures.
- Map: Establishes the foundational context for assessing and mitigating AI risks, including an inventory of all current or proposed AI use cases. Mapping begins with documenting context- and industry-specific AI risks, including attacks using AI, attacks targeting AI, and AI design and implementation failures.
- Measure: Refers to repeatable methods and metrics for measuring and monitoring AI risks and impacts. Critical infrastructure owners and operators can develop their own context-specific testing, evaluation, verification, and validation (TEVV) processes to inform AI use and risk management decisions. Measurement should include continuous testing of AI systems for errors or vulnerabilities, including cybersecurity and compliance vulnerabilities (see the sketch after this list).
- Manage: Defines risk management controls and best practices to increase the benefits of AI systems while reducing the likelihood of harm. This requires regularly allocating resources and applying mitigation measures, as indicated by governance processes, to mapped and measured AI risks.
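As a concrete illustration of the Measure function, below is a minimal, hypothetical sketch of a recurring TEVV-style check: the deployed model is re-evaluated against a fixed validation suite, and any drop below an agreed accuracy floor is surfaced so the Manage function's mitigations can kick in. The tevv_gate helper, the ACCURACY_FLOOR threshold, and the toy model are illustrative assumptions, not prescribed by DHS or NIST.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative accuracy floor; in practice this is set by governance processes.
ACCURACY_FLOOR = 0.90

def evaluate(model, suite_x: np.ndarray, suite_y: np.ndarray) -> float:
    """Fraction of validation cases the model still classifies correctly."""
    predictions = np.array([model(x) for x in suite_x])
    return float(np.mean(predictions == suite_y))

def tevv_gate(model, suite_x, suite_y) -> bool:
    """Return True if the model passes the gate; callers alert on False."""
    accuracy = evaluate(model, suite_x, suite_y)
    print(f"TEVV check: accuracy={accuracy:.2%} (floor={ACCURACY_FLOOR:.0%})")
    return accuracy >= ACCURACY_FLOOR

# Toy model and validation suite so the gate runs end to end.
toy_model = lambda x: int(x.sum() > 0)
suite_x = rng.normal(size=(200, 8))
suite_y = np.array([int(x.sum() > 0) for x in suite_x])

if not tevv_gate(toy_model, suite_x, suite_y):
    print("ALERT: model fell below the floor; trigger Manage-function mitigations.")
```

Run on a schedule or on every model update, a gate like this turns the Measure function's repeatable methods and metrics into an enforceable check rather than a one-time audit.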
Strengthening AI Cybersecurity
Amid a flurry of activity to establish national AI cybersecurity safeguards, DHS's new AI guidance coincides with CISA's role as National Coordinator for Critical Infrastructure Security and Resilience.
Additionally, DHS recently established a new Artificial Intelligence Safety and Security Board. The board will develop AI security recommendations for critical infrastructure organizations such as transportation providers, pipeline and power grid operators, and internet service providers. Meanwhile, the NIST GenAI program aims to create generative AI benchmarks to resolve the thorny question of whether content is human- or AI-generated.
All of these efforts are crucial as the country strengthens its cyber defenses in the age of AI.