OpenUK CEO Amanda Brock said the new government should “learn from the recent past” and not let AI end up being controlled “in the hands of a few”.
With the Labour Party winning a landslide victory in the UK general election today (July 5), many are wondering what this shift in power means for the country’s thriving tech sector.
The UK became the first major country to host an AI safety summit last November, aimed at promoting greater international collaboration on the emerging technology. The Bletchley Declaration – an agreement between countries including the United States, China, India, Ireland and the United Kingdom – committed signatories to work together on addressing some of the risks associated with rapid advances in AI.
In February, the United Kingdom set aside more than £100 million to support the creation of nine new research hubs across the country focused on developing responsible AI. Two months earlier, Microsoft pledged to invest £2.5 billion in the UK over three years to expand its AI data centre footprint and drive AI research.
“An opportunity to stop the slide”
Today, some prominent voices in the tech industry believe it is time to redouble efforts to consolidate the progress made so far. Marc Warner, CEO and co-founder of Faculty, an AI company that has worked extensively with the UK government under Conservative rule, believes that with Labour now in charge, it is time for the UK to “release the handbrake and fully embrace” the benefits of what he calls “safe and narrow AI”.
“For too long, governments have accepted a managed decline in public services, with increasingly poor outcomes eroding trust in institutions. AI offers the opportunity to reverse this decline and create experiences for citizens similar to those they receive in the private sector,” Warner said.
Founded in 2014, Faculty works to introduce AI technology to various sectors, including defence, life sciences and the public sector. The company gained prominence when it was hired to work with Dominic Cummings on the UK’s Vote Leave campaign, and it has won a significant number of UK government contracts in a short period.
“Remember that AI has been used successfully and safely for decades, from predicting train arrivals to preventing bank fraud. So Starmer must shamelessly embrace narrow AI tools, with specific, predetermined goals, that have been proven to be both safe and effective.”
Amanda Brock, CEO of open source non-profit OpenUK, said the new government should “learn from the recent past” and not let AI end up being controlled by a few.
“To protect the UK’s leadership in AI, Labour must seek to open up AI wherever possible… but it must do so with a thoughtful understanding of what it means to open up every component of it, from models to data, and what it means to be partially or fully open,” Brock said.
“It is complex, yes, but we expect our leaders to be able to understand complex tasks and to tune out the noise created by those who can shout the loudest. The biggest risk the UK faces today from AI is that our leaders fail to learn the lessons of the last 20 years of technology and promote the openness of AI.”
In May, the United Kingdom launched its own safety testing platform to help organisations around the world develop safe AI models. Known as Inspect, the open-source platform is a software library that allows testers such as startups, researchers and governments to assess the capabilities of AI models and produce scores for various criteria based on the results.
Lack of a concrete cybersecurity plan
As AI systems rapidly grow more capable, attacks by malicious actors exploiting the technology are also expected to become more frequent, according to data protection platform Protegrity, meaning the new government will need to treat cybersecurity as a priority.
“As AI is disruptive and advances the ability to process logic in new ways, it is attracting the attention of businesses and consumers alike, creating a potential risk to their data,” the company said in a statement.
“At the same time, the cybercrime industry will rapidly adopt AI technologies, enabling more innovative AI-based attacks. Through 2024, AI-based attacks are likely to keep increasing until businesses and government agencies can implement robust and ethical AI cybersecurity measures. The priority at this stage will be to employ safe data practices so that private information is always protected.”
Spencer Starkey, vice president EMEA at cybersecurity firm SonicWall, said that as hacking tactics become “more sophisticated”, so too must the UK’s national cybersecurity strategy – but that the lack of “concrete cyber plans” from political parties is worrying.
“Governments hold vast amounts of sensitive data, and a successful cyberattack could have serious consequences, including identity theft, espionage or disruption of essential services and critical infrastructure,” Starkey said.
“In addition, governments set cybersecurity standards and policies that the private sector often follows, so inadequate regulation could leave both the public and private sectors vulnerable. Therefore, emphasizing a robust and forward-looking cyber strategy should be a top priority for the current government and its potential successors. This will ensure national security and instill public confidence in the digital age.”
Keir Starmer and his wife Victoria Starmer voted in the UK general election on 4 July 2024. Image: Labour Party/Flickr (CC BY-NC-ND 2.0)