There is no firm commitment to a bill yet, but a pathway to AI legislation was formally announced in the King’s Speech, as Keir Starmer’s Labour government moves away from the wait-and-see approach of the previous Conservative administration.
During the election campaign, Labour promised legislative measures to ensure the safe development of AI. The King’s confirmation of that promise at the State Opening of Parliament is the first concrete step towards a new regulatory regime for AI in the UK.
What do we know so far?
In the King’s Speech, Charles III said the government would “seek to establish appropriate legislation to impose requirements on those working to develop the most powerful models of artificial intelligence.”
This is not an explicit commitment to introduce an AI bill, as had previously been expected. However, it does indicate that Labour intends to push forward in a complex area that will affect UK AI startups.
Labour’s manifesto briefly alluded to the party’s plans for artificial intelligence. Ahead of the general election, it said it intended to introduce “binding regulation for the handful of companies developing the most powerful artificial intelligence models.”
The manifesto also said Labour would ban the “creation of sexually explicit deepfakes”.
Technology Secretary Peter Kyle has expanded on Labour’s position on AI. In an interview with BBC News given while he was still shadow technology secretary, Kyle said a Labour government would impose a “statutory code” requiring companies developing AI to share safety-testing data with the government and its AI Safety Institute.
This would be a tougher approach than that of the previous government, which relied on a voluntary, non-binding agreement on AI safety from tech companies.
Under the Conservatives, the AI Safety Institute received information from some AI developers. However, there is no legal requirement for companies such as OpenAI and Microsoft to give it, or the rest of the government, access to their safety information.
In February, Kyle spoke at a policy event hosted by industry body techUK, where he said Labour would create a “regulatory innovation office” to help regulators respond to new technologies with greater speed and adaptability.
Chief Secretary to the Treasury Darren Jones has previously said that the UK’s existing regulators “lack the capacity” to oversee AI regulation and lack “formal coordination.”
Many questions remain about the details, including whether the government will back open-source requirements for AI models and what the legislative timeline for the bill will be.
How does the EU AI Act fit in?
Details of the proposed UK AI legislation remain limited. However, officials will likely look closely at the EU’s AI Act, which was approved in March and sets binding rules for AI developers.
The EU AI Act sorts AI systems into four risk levels: minimal, limited, high, and unacceptable. Uses of AI in the unacceptable category, including intentional disinformation, social scoring, and the scraping of facial images from the web, are banned outright.
The UK has existing legal frameworks for some areas covered by recent EU legislation, including the use of facial recognition technology outside law enforcement.
It is possible that the UK’s AI bill will draw on elements of the EU AI Act, such as requiring developers to keep detailed logs of safety testing to share with regulators.
What does the tech industry think?
Much of the tech industry, including the AI sector, is pleased with the legislative progress that has been made, although few expect legislation to be implemented quickly.
“It’s clear that we don’t have immediate answers and there is some level of risk. However, the fact that we’re talking about it, that research into AI explainability is ongoing, and that legislation is being developed is encouraging,” said Jennifer Belissent, principal data strategist at Snowflake.
However, as with any regulation of advanced technology, some fear that sweeping rules could stifle innovation. Eleanor Lightbody, CEO of Luminance, argued that the “multifaceted” nature of AI means blanket regulations will not be effective.
“There are a multitude of AI technologies and many applications of large language models. A one-size-fits-all approach to AI regulation risks being inflexible and, given the pace of AI development, quickly becoming obsolete,” Lightbody said.
Ekaterina Almasque, general partner at venture capital firm OpenOcean, stressed that while the previous government’s “light-touch” approach had its merits, the UK now needs legislation of its own as other jurisdictions develop their regulatory frameworks.
According to Almasque, if the UK aligns its legislation to some extent with that of the EU and the US, “this can promote interoperable reporting systems and provide a clear roadmap for AI companies operating in the UK.”