The U.S. Department of Commerce has published a notice of proposed rulemaking in the Federal Register that would amend the Bureau of Industry and Security's (BIS) Industrial Base Surveys—Data Collections regulations. The amendment would establish reporting requirements for the development of advanced artificial intelligence (AI) models and computing clusters under the Executive Order of October 30, 2023, entitled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (EO 14110). The measure also emphasizes stronger cybersecurity reporting requirements to strengthen national security and innovation.
The agency has requested that stakeholders submit comments on all aspects of this proposed rule to BIS by October 11, 2024. BIS recognizes that the information collected under these reporting requirements is extremely sensitive, and to help it determine the priority to give to the security of respondent data, it welcomes comments on how that data should be collected and stored.
BIS also welcomes comments on the technical parameters. The proposed rule would require reporting when the training of a dual-use foundation model uses more than 10^26 computational operations, and it defines a large-scale computing cluster as a set of machines transitively connected by a network of more than 300 Gbit/s with a theoretical peak performance greater than 10^20 computational operations (for example, integer or floating-point operations) per second (OP/s) for AI training, without sparsity.
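The two thresholds above can be expressed as a simple check. The following is an illustrative sketch only; the function names, constants, and simplified logic are assumptions for clarity and are not part of the proposed rule's text:

```python
# Illustrative sketch of the reporting thresholds described above.
# Names and structure are hypothetical; they do not come from the rule itself.

MODEL_TRAINING_THRESHOLD_OPS = 1e26        # total computational operations in a training run
CLUSTER_NETWORK_THRESHOLD_GBPS = 300       # transitive network connectivity between machines
CLUSTER_PEAK_THRESHOLD_OPS_PER_SEC = 1e20  # theoretical peak for AI training, without sparsity

def model_training_reportable(total_training_ops: float) -> bool:
    """A model training run triggers reporting if it exceeds 10^26 operations."""
    return total_training_ops > MODEL_TRAINING_THRESHOLD_OPS

def cluster_reportable(network_gbps: float, peak_ops_per_sec: float) -> bool:
    """A cluster is covered if its machines are transitively connected by
    networking above 300 Gbit/s AND its theoretical peak exceeds 10^20 OP/s.
    Both conditions must hold."""
    return (network_gbps > CLUSTER_NETWORK_THRESHOLD_GBPS
            and peak_ops_per_sec > CLUSTER_PEAK_THRESHOLD_OPS_PER_SEC)
```

Note that the cluster definition is conjunctive: a fast network alone, or high peak performance alone, would not meet the definition as described.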
The proposed rule outlines a notification and reporting process for companies that develop, or plan to develop, dual-use foundation AI models, and for companies, individuals, or other organizations or entities that acquire, develop, or possess computing clusters meeting the technical conditions set by the Department. These entities would be required to report the requested information to BIS on a quarterly basis, covering activities that occurred during that quarter or are anticipated to occur within six months of it.
BIS is seeking information from U.S. companies that are developing, planning to develop, or have the hardware necessary to develop dual-use foundation models. AI models are rapidly becoming integral to many U.S. industries critical to national defense. For example, manufacturers of military equipment are using AI models to improve the maneuverability, accuracy, and effectiveness of their equipment. Similarly, manufacturers of signals intelligence equipment are using AI models to improve how those devices capture signals and eliminate noise.
Additionally, as a final example, developers of cybersecurity software, which can be applied to protect various systems and infrastructures critical to national defense, are using AI models to increase the speed at which such software detects and responds to cyberattacks.
Integrating AI models into the defense industrial base also requires that the U.S. government take steps to ensure that dual-use foundation models operate safely and reliably. As the development and deployment of AI technology advances in the coming years, the number of covered U.S. persons participating in it will also increase. However, as required by EO 14110, the Secretary will update the technical conditions that trigger the reporting requirements over time, which may limit the number of additional entities affected.
Products incorporating these models may behave unpredictably or unreliably, which could lead to dangerous accidents. A lack of reliability would make it difficult for the U.S. government to use these products where the margin for error is small, such as in defense-related activities, where accidents could result in injury or loss of life. The U.S. government therefore requires information about how companies that develop dual-use foundation models train their models to respond to different types of inputs, as well as how those companies have tested their models for safety and reliability.
This information will enable the U.S. government to determine the extent to which certain dual-use foundation models can be used by the defense industrial base and whether measures are needed to ensure that the defense industrial base produces the safest and most reliable products and services in the world.
For similar reasons, the U.S. government must minimize the vulnerability of dual-use foundation models to cyberattacks. Dual-use foundation models can potentially be disabled or manipulated by hostile actors, and it will be difficult for the U.S. government to rely on a particular model unless it can determine that the model is robust against such attacks.
The United States therefore needs information about the cybersecurity measures that developers of dual-use foundation models use to protect those models, as well as those companies' cybersecurity resources and practices. The government also needs to prepare the defense industrial base for the possibility that foreign adversaries or non-state actors could use dual-use foundation models for activities that threaten national defense, including the development of weapons and other dangerous technologies.
The United States requires information on the safety and reliability of AI models, including any potentially dangerous capabilities that developers of dual-use foundation models have identified in those models. This includes the results of reliability testing, as well as the results of any red-team testing the company has conducted regarding: lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and the development of associated exploits; the use of software or tools to influence real or virtual events; and the potential for self-replication or propagation.
This information will enable the government to determine whether investments in the defense industrial base are necessary to ensure that the United States has access to safe and reliable AI systems, to counter identified dangerous capabilities, and to ensure that adequate safeguards are in place to prevent the theft or misuse of dual-use foundation models by foreign adversaries or non-state actors.
In short, dual-use foundation models will likely lead to significant advances in many sectors on which national defense depends. These advances require BIS to conduct an ongoing assessment of the AI sector to ensure that the government has the most accurate and up-to-date information when making policy decisions about the international competitiveness of the industrial base and its ability to support national defense.
Last week, the Department of Homeland Security's (DHS) Science and Technology (S&T) Directorate published a request for information (RFI) directed at commercial port operators to advance the Directorate's Seaport Resilience and Security Research Testbed project, which studies vulnerabilities in U.S. ports and the effectiveness of current protections and mitigation measures. Based on the information received, S&T will provide practical cybersecurity recommendations that the maritime port industry can implement to ensure safe and efficient maritime trade.