The UK government has suspended £1.3 billion in funding that had been earmarked for AI and technology innovation. This includes £800 million for the creation of an exascale supercomputer at the University of Edinburgh and £500 million for the AI Research Resource (AIRR) — a set of supercomputer installations including Isambard at the University of Bristol and Dawn at the University of Cambridge.
The funding was originally announced by the then Conservative government as part of the Autumn Statement in November. However, on Friday, a spokesperson for the Department for Science, Innovation and Technology (DSIT) revealed to the BBC that the Labour government, which came to power in early July, was redistributing the funds.
The Labour administration claims the money had been promised by the previous government but never allocated in its budget. In a statement, a DSIT spokesperson said: “The government is making difficult and necessary spending decisions across all departments in the face of billions of pounds of unfunded commitments. This is essential to restoring economic stability and delivering our national growth mission.”
“We have launched the AI Opportunities Action Plan, which will identify how we can strengthen our IT infrastructure to better meet our needs and will examine how AI and other emerging technologies can best support our new industrial strategy.”
A £300 million grant for the AIRR has already been committed and will continue as planned. Some of this investment has already gone into the first phase of the Dawn supercomputer. However, the second phase, which would improve its speed tenfold, is now under threat, according to The Register. The BBC reported that the University of Edinburgh had already spent £31 million building housing for its exascale project, and that the project was considered a priority by the previous government.
“We are absolutely committed to building a technology infrastructure that delivers growth and opportunity to people across the UK,” the DSIT spokesperson added.
The AIRR and exascale supercomputers were expected to enable researchers to analyze advanced AI models for greater safety and make breakthroughs in areas such as drug discovery, climate modeling and clean energy, according to The Guardian. The University of Edinburgh’s Principal and Vice-Chancellor, Professor Sir Peter Mathieson, is urgently seeking a meeting with the Technology Secretary to discuss the future of the exascale project.
The funding cut goes against commitments made in the government’s AI Action Plan
The suspended funds appear to run counter to Science, Innovation and Technology Secretary Peter Kyle’s statement on 26 July, where he said he was “putting AI at the heart of the government’s agenda to drive growth and improve our public services”.
He made this statement as part of the announcement of the new AI Action Plan, which, once developed, will set out the best way to grow the country’s AI sector.
Next month, Matt Clifford, one of the main organizers of November’s AI Safety Summit, will publish his recommendations on how to accelerate development and drive adoption of useful AI products and services. An AI Opportunities Unit, comprised of experts who will implement those recommendations, will also be created.
The government’s announcement identifies infrastructure as one of the “key enablers” of the Action Plan. With the necessary funding, exascale and AIRR supercomputers would provide the immense processing power required to run complex AI models, accelerating AI research and application development.
AI bill to focus on continued innovation, despite funding changes
Although the UK Labour government has cut investment in supercomputers, it has taken some steps to support AI innovation.
On July 31, Kyle told executives from Google, Microsoft, Apple, Meta and other major tech players that the Artificial Intelligence Bill will focus on the large ChatGPT-style foundation models created by only a handful of companies, according to The Financial Times.
He reassured the tech giants that this would not become a “Christmas tree bill” in which new regulations are added on throughout the legislative process. Limiting AI innovation in the UK could have a significant economic impact: a Microsoft report found that adding five years to the timeframe for AI deployment could cost more than £150 billion, while, according to the IMF, AI could deliver annual productivity gains of 1.5%.
Sources told the FT that Kyle confirmed the AI bill would focus on two things: making existing voluntary agreements between companies and the government legally binding, and transforming the AI Safety Institute into an independent government body.
AI Bill, Item 1: Make voluntary agreements between the government and big tech companies legally binding
At the AI Safety Summit, representatives from 28 countries signed the Bletchley Declaration, committing them to jointly manage and mitigate AI risks while ensuring safe and responsible development and deployment.
Eight companies involved in AI development, including ChatGPT creator OpenAI, voluntarily agreed to work with the signatories, allowing them to evaluate their latest models before release so that the declaration can be upheld. These companies also voluntarily signed up to the Frontier AI Safety Commitments at May’s Seoul AI Summit, which include halting the development of AI systems that pose serious, unmitigated risks.
According to the FT, UK government officials want to make these agreements legally binding so that companies cannot walk away from them if they become commercially inconvenient.
AI Bill, Item 2: Transform the AI Safety Institute into an independent government body
The UK AISI was launched at the AI Safety Summit with three main objectives: to assess existing AI systems for risks and vulnerabilities, to conduct fundamental research into AI safety, and to share information with other national and international stakeholders.
A government official told the FT that making the AISI an independent body would reassure businesses that they did not have the government “on their heels”, while strengthening the institute’s position.
UK government’s position on regulating AI versus innovation remains unclear
The Labour government has shown that it is both limiting and supporting the development of AI in the UK.
Alongside the redistribution of AI funds, the government has hinted that it will place tight restrictions on AI developers. It was announced in July’s King’s Speech that the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”
The speech was in line with Labour’s pre-election manifesto, which pledged to introduce “binding regulation on the handful of companies developing the most powerful AI models”. Following the speech, Prime Minister Keir Starmer also told the House of Commons that his government would “harness the power of artificial intelligence as we look to strengthen safety frameworks”.
On the other hand, the government has promised tech companies that the AI bill will not be too restrictive, and it has apparently put the bill’s introduction on hold: it had been expected to be among the bills named in the King’s Speech.