California lawmakers on Wednesday passed a bill aimed at preventing catastrophic damage from artificial intelligence software. The legislation, known as the Safe and Secure Innovation Act for Advanced Artificial Intelligence Models, requires certain AI companies doing business in California to adopt safety assessments and other measures aimed at preventing mass loss of life or damages exceeding $500 million.
Most of the world’s largest AI companies are based in the Golden State, and nearly all of those that aren’t do business there. As a result, the bill will have far-reaching, perhaps even global, implications if Gov. Gavin Newsom signs it in the coming weeks. The bill, passed by both the State Assembly and the State Senate, now awaits his signature.
The bill has been the subject of heated debate and went through nine rounds of amendments in exchanges between lawmakers and the AI industry. It has also divided the industry itself: some companies support the bill, if hesitantly, while others say it would stifle innovation and discourage small companies and investors from developing AI products. Open-source advocates have also expressed concern that the bill would impose onerous requirements on those who publish AI models so that others can build on them freely.
Yoshua Bengio, a computer scientist and self-described “godfather of AI,” said the potential for both extremely good and extremely bad consequences requires a balance. Speaking at a news conference Monday hosted by California state Sen. Scott Wiener, the bill’s sponsor, Bengio said the foreseeable risks call for action.
“We should aim for a path that allows for innovation but also protects us in the plausible scenarios that scientists have identified,” said Bengio, who supports the bill.
Regulating a technology with unknown power
At stake is the future of a technology with revolutionary potential. As programmers create software that replicates aspects of human intelligence, the potential grows for automating, and significantly accelerating, tasks that require advanced cognitive abilities.
The inherent possibilities of AI mean that governments should adopt a “moonshot mentality” to support the technology’s development, Fei-Fei Li wrote in an essay for Forbes. Li, a computer scientist often referred to as the “godmother of AI,” also wrote that an earlier version of the bill ran into trouble because it made the original developer of AI software liable for misuse by a third party (the bill also holds the third party liable). Following Li’s criticism, Wiener made several rounds of amendments aimed at easing the burden on original developers.
The implications of AI for business, military and government are difficult to predict, but both promoters and concerned observers agree that widespread use of the technology will be transformative.
Concerns about AI include apocalyptic scenarios, like the creation of a bioweapon, as well as the amplification of more mundane horrors, like identity theft (think hackers stealing and selling your personal information at an ever-increasing rate). Then there’s the specter of human prejudices being amplified by software that approves mortgages, schedules job interviews or decides whether someone accused of a crime should be released on bail.
Wednesday’s bill aims to limit the most catastrophic consequences of AI, applying to models with a level of computing power beyond that of current models and that cost more than $100 million to train. It allows the California attorney general to seek a court injunction against companies offering software that does not meet the bill’s safety requirements, and authorizes the office to take legal action if AI leads to mass death or to cyberattacks on infrastructure that cause $500 million or more in damages.
Why California’s Law Affects the Entire AI Industry
As a state that often puts itself at the forefront of emerging policy issues, California is in a unique position to set AI safeguards. Its laws have a long history of influencing regulation across the United States, sometimes serving as proof of concept but also defining how companies must operate if they want to do business in the state.
For example, egg producers around the world must keep their chickens in cage-free systems if they want to sell their products in California’s market of more than 39 million consumers. In the tech space, companies must give California residents some level of control over their personal data. Many companies said they would extend these rights to all U.S. users when the state’s privacy regulations took effect, because it’s costly and complicated to offer two different levels of control based on where users live. It’s also not always possible to tell whether a user is a California resident logging in from somewhere else.
Some legislators, including Rep. Nancy Pelosi, and some AI companies have pushed instead for a federal solution, fearing that a state-by-state approach would create a complex patchwork of regulations. But Wiener said the state has an obligation to act. In the absence of regulation from the U.S. Congress, he said, it is up to California to turn AI companies’ voluntary commitments into legal requirements.
At a news conference Monday, Wiener said the risks posed by AI demanded action. “We should be trying to get ahead of these risks,” he said, “instead of playing catch-up.”
Open Source Concerns
Some open-source advocates say the bill risks discouraging programmers from releasing AI software openly, despite amendments meant to address their concerns. Ben Brooks, a fellow at the Berkman Klein Center for Internet & Society, worries that the updated bill still requires original developers to track what their models do once they’re in the hands of other users.
These requirements, he said, are “simply not compatible with the open distribution of this technology.”
Wiener argued that the amendments to the bill focus enforcement on the user of a given AI model.
Geoffrey Hinton, another so-called AI godfather, said in a statement Wednesday that the bill balances critics’ concerns with the need to protect humanity from abuse.
“I remain passionate about the potential of AI to save lives through advances in science and medicine,” he said, “but it is essential that we have truly bold legislation to address the risks.”