Part routine 90-day regulatory update, part campaign rhetoric, the Biden administration on Monday reported progress on the Executive Order (EO) on artificial intelligence (AI), with the focus largely on the safety of AI models and on national security.
The EO was first announced on October 30. Here are four important takeaways from today’s announcement:
AI developers must now report their test results to the government
Biden’s AI EO invoked Defense Production Act authorities to require AI developers to report AI safety test results to the Department of Commerce. These companies must now share that information for the most powerful AI systems, and must account for the large computing clusters capable of training them.
Cloud providers must report ‘malicious’ activity
If finalized as proposed by the Commerce Department under the original EO, a new rule would require cloud providers to alert the government when foreign customers train the most powerful AI models that could potentially be used for “malicious” activity.
Top federal agencies have taken steps to ensure AI security in critical infrastructure
Nine federal agencies, including the Department of Defense, the Department of Transportation, the Treasury Department, and the Department of Health and Human Services, have submitted risk assessments to the Department of Homeland Security. These reports aim to ensure the United States gets a head start on integrating AI security into critical infrastructure.
The “AI Talent Surge” is underway
The AI and Tech Talent Task Force created by the Biden EO on AI has launched an aggressive effort to recruit AI talent. For example, the Office of Personnel Management has granted federal agencies flexible hiring authorities to bring on AI talent. And organizations such as the Presidential Innovation Fellows, the US Digital Corps, and the US Digital Service have intensified their recruitment of AI talent.
Over the past 90 days, the administration also launched its EducateAI initiative, which aims to provide educational opportunities for K-12 through undergraduate students and to advance the National Science Foundation’s work to meet the country’s AI workforce needs.
Industry reaction to the AI executive order
“It’s good to see the focus on developing local talent by creating K-12 opportunities,” said Morgan Wright, chief security advisor at SentinelOne.
Wright pointed out that in a previous article for SC Media, he wrote that the best time to start closing the knowledge gap was twenty years ago, and the next best time is today.
“Ensuring clear regulation for AI is another good goal, one that has so far eluded the federal government,” Wright added.
While AI companies and cloud providers have their marching orders, other industry professionals still have questions about the language used in the EO.
John Allison, public sector director at Checkmarx, said he wanted to know the details of what the government would do with the additional information submitted by the industry.
“I would also like to have a definition of what foreign AI activities qualify as ‘malicious,’ so that AI vendors can report accurately,” Allison said. “Like most things with AI, changes are happening at the speed of light, and security and compliance are catching up with the technology. I don’t think anyone can argue that if AI isn’t developed in a safe and secure manner, the consequences could be catastrophic. We’re already seeing AI being used by bad actors, and there’s no reason to think this will ever stop.”
Craig Burland, chief information security officer at Inversion6, said that while we’re still a long way from any real control of AI, the administration has laid another stepping stone on the path. Burland said that, unsurprisingly, the government started by focusing on national security and critical infrastructure, areas it can influence directly without triggering an avalanche of litigation.
“However, the testing requirement is limited to models that pose a ‘serious risk to national security, national economic security, or national public health and safety,’ which restricts who will be subject to the new rule,” explained Burland. “This is a fairly subjective scope that could see agents in dark suits and sunglasses appear in the lobby, or miss the next OpenAI altogether. AI continues to be a double-edged sword, promising benefits in innovation, design and efficiency, but bringing with it an alarming potential for abuse and chaos.”
Mona Ghadiri, senior director of product management at BlueVoyant, added that having the National Institute of Standards and Technology (NIST) lead the framework for AI security testing makes perfect sense, because the agency already creates this type of framework for cybersecurity. Testing practices used to meet other government requirements, such as automobile crash testing, can also be leveraged.
“I hope we get to the point where every ‘car’ has windshield wipers and a seat belt,” Ghadiri said. “AI is not like that yet. The interesting part will be how groups will be certified to test their own AI, or become certified testers, and what the actual assessment by an external third party will look like in terms of duration. Introducing these types of third-party reviews can slow down development and prevent rapid prototyping, but we really need them.”
Gal Ringel, co-founder and CEO of Mine, said the executive order is a significant step forward, especially since comprehensive AI legislation from Congress is likely not on the horizon.
In the coming months, the focus on AI governance should be on establishing transparent working relationships with the technology companies behind the most powerful generative AI models, especially since the capability threshold at which AI models must undergo these tests and security checks is so high, Ringel continued.
“The EU AI Act is not expected to be formally adopted until at least May, so there is no urgency to immediately institute risk assessment or data protection requirements on generative AI, although that time will come,” Ringel said. “As we are only in the early days of this technological shift, ensuring that the government can establish a working relationship with big tech on this issue and lay the groundwork for how these security tests will take place may not be a glamorous goal for the next few months, but it’s a critical time. Government and big tech never aligned on data privacy issues until it was too late and the government’s hand was forced by broad public support. There can be no repeat of that failure, because the consequences could be infinitely more damaging when it comes to AI.”