Lawmakers on both sides of the Atlantic are racing to establish regulations on artificial intelligence (AI), with California set to vote on strict AI oversight while the US Congress considers a “regulatory sandbox” for financial services.
At the same time, the European Union’s AI law is expected to transform the health technology landscape, highlighting the complex balance between promoting innovation and ensuring public safety when it comes to using AI.
California set to vote on AI regulation
California lawmakers are set to vote Thursday (August 15) on a landmark bill that could reshape the AI industry, as tech giants and startups grapple with its potential implications.
The legislation, SB 1047, must be approved by the Assembly’s Appropriations Committee before moving to a vote in the full Assembly. It would be the first law of its kind in the country to impose sweeping restrictions on the development and deployment of AI.
At the heart of the bill are provisions requiring companies to conduct safety tests on AI systems before they are released. It would also give the California attorney general the power to sue companies whose technologies cause serious harm, such as mass loss of life or significant property damage.
The bill has sparked heated debate in Silicon Valley and beyond. Supporters say it is a necessary safeguard against the unchecked proliferation of artificial intelligence, while critics warn it could stifle innovation in a field seen as critical to future economic growth.
“Senate Bill 1047 is fundamentally flawed because it targets AI technologies rather than their applications, posing a significant threat to the competitiveness of U.S. AI companies, particularly smaller ones and open source projects,” Vipul Ved Prakash, CEO and founder of Together AI, told PYMNTS. “We believe this bill will stifle innovation and unfairly burden startups. Open source AI, which is essential for responsible, sustainable and safe AI advancements, would suffer greatly.”
Tech companies, venture capitalists and AI researchers are scrambling to understand the implications of the bill, with some predicting that if passed, it could push AI development out of California.
Governor Gavin Newsom has not yet indicated whether he would sign the bill if it reaches his desk, adding another layer of uncertainty.
As Thursday’s vote approaches, California lawmakers could set a precedent for AI regulation that would reverberate beyond the state’s borders.
Congress considers ‘regulatory sandbox’ for AI in financial services
A new bill introduced in the US Senate aims to spur AI innovation in the financial sector by creating “regulatory sandboxes” that would allow companies to experiment with AI technologies under relaxed regulatory oversight.
The Unleashing AI Innovation in Financial Services Act would require federal financial regulators to establish programs that allow regulated entities to test AI-powered financial products and services without fear of enforcement action, provided certain conditions are met.
Under the proposed legislation, financial institutions could apply to conduct “AI test projects” for products that make extensive use of AI and may be subject to federal regulation. Applicants would have to demonstrate that their projects serve the public interest, improve efficiency or innovation, and do not pose systemic risks or national security concerns.
If approved, businesses would receive temporary relief from certain regulations for up to one year, with the possibility of extension. Regulators would have 90 days to review applications, with automatic approval if no decision is made within that time frame.
The bill would require regulators to coordinate joint applications and establish procedures to amend approved projects, manage confidentiality and address noncompliance. Annual reports to Congress on the projects’ results would also be required.
Supporters of the measure say it will help the United States maintain its competitive edge in financial technology. Critics, meanwhile, worry about consumer protection and financial stability.
The legislation reflects growing interest in balancing innovation and regulation as AI rapidly advances. It remains to be seen how the bill will fare in Congress and what amendments may be proposed during the legislative process.
European AI law shakes up healthcare technology landscape
The European Union’s landmark AI law, which came into force August 1, is set to reshape the medical AI industry, according to a new report in Nature. The legislation, the first of its kind, aims to promote “human-centered and trustworthy AI” while preserving public health and safety.
The law introduces a risk-based tiered approach, prohibiting practices deemed “unacceptable” while imposing strict requirements on high-risk systems. For the healthcare sector, this means that most AI solutions will be subject to greater scrutiny.
“Most current solutions will be classified as high risk,” the study says, signaling a sea change for medical device manufacturers. The authors predict an increase in “regulatory complexity and costs” that could have a disproportionate impact on smaller players.
Critics fear the law will stifle innovation, particularly among startups and SMEs. “Small and medium-sized enterprises with fewer resources are likely to suffer from the regulatory burden,” the researchers noted.
But proponents of the regulation say it is necessary to ensure patient safety in an era of rapid technological advances. The paper stresses the need to “continuously reassess and refine” AI regulation to keep pace with innovation.
As the EU positions itself as a global leader in AI governance, the healthtech sector is preparing for significant disruption. With time running out for implementation, companies are scrambling to adapt to this new regulatory landscape.