Bill Summary
Federal Artificial Intelligence Risk Management Act of 2023/2024 (S. 3205 / H.R. 6936). Establishes guidelines for the federal government's use of artificial intelligence (AI) to mitigate the associated risks. Referred to the Senate Committee on Homeland Security and Governmental Affairs, and to the House Committee on Oversight and Accountability and the House Committee on Science, Space, and Technology.
Cybersecurity Score Assessment
Note: Cyber positive. This bill has the potential to improve the safety and security of AI technologies deployed within the federal government. (Last update: February 22, 2024)
Key provisions
- Requires the Office of Management and Budget (OMB) to direct federal agencies to adopt the Artificial Intelligence Risk Management Framework (RMF) developed by the National Institute of Standards and Technology (NIST) regarding the use of AI.
- Specifies appropriate cybersecurity strategies and the deployment of effective cybersecurity tools to improve the security of AI systems
- Establishes an initiative to deepen AI expertise among the federal workforce
- Ensures federal agencies purchase AI systems that comply with the framework
- Requires NIST to develop sufficient testing, evaluation, verification, and validation capabilities for AI acquisitions
Background
Federal agencies use AI systems for a variety of purposes, from resolving cybersecurity vulnerabilities to automating redundant processes to improving healthcare outcomes. However, given the pace of adoption of new technologies and the lack of universally applied standards for safety and security, the federal government’s use of this technology faces challenges and risks, including:
- How to best mitigate the privacy and data security risks associated with data collected and processed about Americans;
- How to address the challenges associated with the lack of transparency in AI decision-making; and
- How to reduce or eliminate potential negative outcomes resulting from the use of false or unverified data.
In 2023, NIST released the first iteration of the AI RMF, a set of voluntary best practices that individuals, organizations, and society can use to better manage the risks associated with AI. The RMF has two main components. The first frames AI risks and discusses the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The second describes four specific functions (Govern, Map, Measure, and Manage) for addressing the risks of AI systems. The RMF has been touted as a “rights-preserving, non-sector-specific” framework that is adaptable to organizations of all types and sizes; the framework is also interoperable with international standards.
Given the opacity of some AI systems and the potential inconsistencies in their results, the risks posed by AI are unique. The NIST AI RMF provides a structured methodology to ensure that organizations can formulate internal processes and tools to address risks that can cause harm. President Joe Biden’s 2023 Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sought to integrate the AI RMF into federal agency guidelines and best practices (sections 4.1(a)(i)(A) and 4.3(a)(iii)) and to promote the AI RMF as a worthy global technical standard (sections 11(b) and 11(c)).
Rating: Cyber positive
Key takeaways
A legislative approach can codify and provide statutory support for some of the directives set forth in President Biden’s EO while avoiding some typical EO pitfalls (e.g., the risk that a future administration rescinds some or all of the EO, or concerns about overreach of executive power). The fact that these bills are a bipartisan, bicameral effort indicates that there is broad consensus around their merits and that there is political will for their passage. This would also be one of the first times that adoption or use of a NIST framework would be required for the federal government and private sector vendors. These bills would notably bring a number of improvements to AI security and cybersecurity, including:
- Requiring vendors to attest to their compliance with the RMF in order to be eligible for the award of a federal AI contract;
- Strengthening public sector resilience to AI abuse and risks and improving the harmonization of technical and security standards across federal agencies; and
- Ensuring consistent engagement with, review of, and updates to standards for the testing, evaluation, verification, and validation of AI acquisitions.