The adoption of the European Union’s (EU) law on artificial intelligence (AI) this year sparked speculation that a “Brussels effect” could follow. The Brussels effect occurs when European regulation has a global impact, either because companies adopt EU rules to simplify their international operations or because other jurisdictions model new laws on the EU’s approach. The way the General Data Protection Regulation (GDPR), the EU’s data privacy rules, has influenced state-level legislation and corporate self-governance in the US is a prime example of how this can happen, particularly when federal legislation is blocked and states take the lead, as is the case with AI governance in the US today.
So far, there is little evidence that states are following the EU’s lead in developing their own AI legislation. There is, however, strong evidence of lobbying of state legislators by the tech industry, and legislators appear unwilling to adopt EU-style rules, instead pushing for less stringent legislation that minimizes compliance costs but ultimately provides fewer protections for individuals. Bills passed in Colorado and Utah, and bills introduced in Oklahoma and Connecticut, among others, illustrate this phenomenon.
The main difference between the state bills and the AI Act is their scope. The AI Act takes a broad approach to protecting fundamental rights and establishes a risk-based system in which certain uses of AI, such as “social scoring” people based on factors like family ties or education, are prohibited. High-risk AI applications, such as those used in law enforcement, are subject to the strictest requirements, while low-risk systems have fewer or no requirements.
By contrast, the state bills are narrower in scope. Colorado’s legislation is directly modeled on Connecticut’s, and both include a risk-based framework, albeit one more limited than the AI Act’s. The framework covers similar areas, including education, employment, and government services, but only systems that make “consequential decisions” affecting consumers’ access to those services are considered “high risk”, and there are no prohibitions on specific AI uses. (Connecticut’s bill would prohibit the dissemination of political deepfakes and explicit nonconsensual deepfakes, for example, but not their creation.) The definitions of AI also differ between the US bills and the AI Act.
While there is some overlap between the Connecticut and Colorado bills and the AI Act in the documentation they require companies to create when developing high-risk AI systems, the two states’ bills bear a much stronger resemblance to a model AI bill created by the US software company Workday, which develops workforce and financial management systems. Workday’s document, which was shared in a news report, includes requirements for AI developers and deployers and regulates the systems used in consequential decisions, as do the Colorado and Connecticut bills. Indeed, the documentation those bills require AI developers to produce is similar in scope and wording to an impact assessment that Workday’s bill suggests be produced alongside AI system proposals. Workday’s document also contains language similar to that in bills introduced in California, Illinois, New York, Rhode Island, and Washington. A Workday spokesperson said the company has been transparent in stating that it plays “a constructive role in advancing workable policies that balance consumer protection and fostering innovation,” including by “providing input in the form of technical language” informed by “policy conversations with legislators” globally.
The power of the tech industry as a whole, however, may extend beyond this kind of passive inspiration. The Connecticut bill contained a section on generative AI inspired by part of the AI Act, but it was removed after concerted lobbying by the industry. And although the bill later gained support from some major tech companies, industry associations argued that it would stifle innovation, prompting Connecticut Governor Ned Lamont to threaten to veto it. Its progress is frozen, as is that of many other, more comprehensive AI bills being considered by various states. The Colorado bill is expected to be amended to avoid stifling innovation before it takes effect.
One explanation for the lack of a Brussels effect, and for the strong “big-tech effect” on state laws, is that, compared with the data-protection debate at the time of the GDPR, the legislative debate on AI is more advanced at the US federal level, with a Senate policy roadmap and active engagement by industry players and lobbyists. Another explanation lies behind Lamont’s hesitation: in the absence of unified federal laws, states fear that strong legislation could drive local tech companies to states with weaker regulation, a risk that was less pronounced for data-protection legislation.
For these reasons, lobby groups say they prefer unified national AI regulation to state-level fragmentation, a position echoed in public by big tech companies. In private, however, some advocate flexible, voluntary rules for all, signaling their aversion to both national and state AI legislation. If neither form of regulation emerges, AI companies will have preserved the status quo: a bet that two divergent regulatory environments in the EU and the US, with a more flexible regime in the latter, favor them more than a harmonized but heavily regulated system would.
As with the GDPR, there may be cases in which complying with EU rules makes sense for US companies, but the US would still be less regulated overall, leaving individuals less protected from AI abuses. While Brussels has faced its share of lobbying and compromise, the core of the AI Act has remained intact. Whether US state laws will hold up remains to be seen.
Competing interests
The author declares no conflicts of interest.