This article is part of a series of written products inspired by discussions at the R Street Institute’s Cybersecurity and Artificial Intelligence Working Group sessions. Visit the group’s webpage for additional information and perspectives on this series.
Rapid advances in artificial intelligence (AI) highlight the need for a nuanced governance framework that actively engages stakeholders in the definition, assessment and management of AI risks. A comprehensive understanding of risk tolerance is essential. This involves delineating the risks deemed acceptable in the continued pursuit of AI’s capabilities and benefits, identifying the entities responsible for defining these risks, and clarifying the processes by which risks can be assessed and then accepted or mitigated.
The risk tolerance assessment exercise also creates the necessary space for stakeholders to question and evaluate the extent to which regulatory interventions are necessary compared to less restrictive, alternative or additional solutions, such as issuing recommendations, sharing guidance on best practices and launching awareness campaigns. The clarity gained from this exercise also sets the stage for our evaluation of three risk-based AI approaches to cybersecurity: implementing risk-based AI frameworks; creating safeguards in the design, development and deployment of AI; and advancing AI accountability by updating legal standards.
1. Implementing risk-based AI frameworks
Risk-based cybersecurity frameworks provide a structured and systematic approach for organizations to identify, assess and manage the evolving risks associated with AI systems, models and data. The National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (NIST AI RMF) is a notable example of a risk-based AI framework that builds on established cybersecurity and privacy frameworks to help organizations responsibly design, develop, deploy and use AI systems. By describing how AI risks differ from traditional software risks, such as in the scale and complexity of AI systems, the NIST AI RMF helps organizations prepare for and navigate the changing landscape of AI cybersecurity with greater confidence, coordination and precision. The voluntary nature of the NIST AI RMF also gives organizations the flexibility to tailor the framework to their specific needs and risk profiles. Congress has already taken steps to integrate the NIST AI RMF into federal agencies and AI procurement through its bipartisan, bicameral introduction of the Federal Artificial Intelligence Risk Management Act.
The NIST AI RMF is purpose-built for agility, which is critical to keeping pace with technological innovation and ensuring that safety and security protocols evolve in tandem with the growing role of AI. Complementing the NIST AI RMF, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence highlights the importance of continuous improvement and adaptation in AI governance by extending the framework’s reach and robustness. Initiatives like the new U.S. AI Safety Institute and the AI Safety Institute Consortium are instrumental in expanding the NIST AI RMF’s core purpose by strengthening the framework’s ability to address safety and security challenges in the AI domain. By promoting collaboration and innovation, these initiatives illustrate the proactive steps taken to ensure the NIST AI RMF remains responsive to the dynamic nature and implications of AI.
2. Creating safeguards in the design, development and deployment of AI
Safeguards ensure that AI systems operate within defined ethical, safety and security boundaries. Some AI companies have already voluntarily committed to incorporating safeguards such as rigorous internal and external security testing before public release. This strategy is essential to maintaining user trust and ensuring the responsible deployment and use of AI technologies.
However, acquiring the resources necessary to implement these safeguards may prove difficult for some organizations. Creating and implementing safeguards throughout AI development and deployment can also delay key innovation milestones. In addition, the risk that safeguards may be bypassed or removed highlights a significant challenge in ensuring that these protective measures remain effective and durable. Meeting these challenges requires a mix of safeguard strategies that must be leveraged, evaluated and continually adapted to keep pace with the evolving AI technology landscape. Integrating traditional cybersecurity principles, such as security by design and by default, into AI systems can also improve the effectiveness of safeguard strategies.
3. Advancing AI accountability by updating legal standards
The ongoing debate over AI liability reflects the desire of some to update legal standards to address the complexities of AI-induced risks and to encourage stakeholders to proactively mitigate cybersecurity and safety risks. Most recently, the National Telecommunications and Information Administration published its AI Accountability Policy Report, which calls for, among other recommendations, increased transparency of AI systems and independent evaluations. However, some skeptics express concerns, citing the need for balance and the potential harm that could occur if these efforts evolved into a broad, top-down regulatory regime that would impose significant compliance and innovation costs.
Three proposed policy actions include:
- Licensing regime. Implementing a licensing regime would require organizations to obtain licenses or certifications demonstrating compliance with specified standards before working on AI systems and models. “High-risk” AI applications like facial recognition would require companies to obtain a government license certifying that they have rigorously tested their AI models for potential risks before deployment, disclosed instances of harm, and allowed audits of their AI models by an independent third party. For example, the Food and Drug Administration’s approval process for AI-based medical devices requires rigorous pre-market evaluation and ongoing monitoring to ensure devices meet safety and effectiveness standards. This approach could strengthen AI accountability by increasing transparency and oversight, requiring AI systems to meet strict security standards before deployment. Nevertheless, licensing regimes could stifle innovation by introducing bureaucratic delays and compliance costs, making it more difficult for U.S. small businesses and new entrants to succeed.
- Corporate liability regime. This approach would hold AI companies liable if their systems and models cause harm or can be exploited to inflict harm. For example, Congress could hold AI companies accountable through enforcement actions and private rights of action if their models or systems violate privacy. Increasing corporate liability could encourage companies to prioritize AI safety, responsible AI and cybersecurity considerations; advance accountability; and ensure compensation for harms caused by AI systems. Critics argue that rushing to implement corporate liability frameworks could introduce regulatory barriers that stifle AI innovation and development and risk being exploited for financial gain. Members of Congress have also proposed preemptively eliminating Section 230 immunity protections for generative AI technology. While supporters of this approach argue that it would give consumers tools to protect themselves against harmful content created by generative AI technology, critics maintain it would inhibit free speech, impede algorithmic innovation, and inflict devastating economic consequences on the United States.
- Tiered liability and responsibility regime. Building on ideas advanced in existing national cybersecurity strategies, this proposed update involves establishing a legal framework that recognizes the differing degrees of risk and liability associated with different applications of AI. In such a regime, companies would face different levels of liability and responsibility depending on the nature and severity of the harm caused by their AI systems. For example, a company developing AI-based medical diagnostic systems could face higher accountability standards and reporting requirements, given the potentially life-threatening consequences of misdiagnosis, than a company deploying AI for personalized advertising. Although a tiered liability regime provides flexibility and proportionality in the allocation of responsibility, it can also lead to reduced transparency and to ambiguity or inconsistency in how the law is applied. Additionally, larger companies may gain an unfair advantage over new entrants and smaller companies.
While these proposed legal updates to advance AI accountability aim to push businesses to prioritize cybersecurity and AI safety considerations, each has drawbacks. These complexities highlight the need for continued debate and informed decision-making among policymakers.
Conclusion
It is imperative to ensure that proposed and emerging policy measures to mitigate the potential risks of AI do not inadvertently stifle innovation or erode U.S. leadership in technological innovation. AI systems exist only in real-world settings, and “when [they] go rogue, the implications are multidimensional.” To mitigate AI’s potential to impose amplified or new cybersecurity threats, policymakers should view AI systems holistically: as a technology inextricably linked to and integrated with ethical and legal frameworks that are both disparate and overlapping. Integrating risk tolerance principles into AI regulatory and governance solutions is essential to ensure we are equipped to balance the considerable benefits of AI against its potential risks.