The recent approval of the EU's AI Act marks a pivotal moment for Europe and its emerging status as a global reference point in artificial intelligence. It also serves as a wake-up call for the UK. Despite a strong start, hosting the first global AI Safety Summit in November last year, the UK has since fallen quiet compared with its continental counterpart and risks losing its leading position in AI innovation.
The discourse around AI in 2023 largely revolved around the threat it poses, with many warning of the "existential risk" of the technology. This year the focus has become much more practical, looking at regulation, adoption and the frameworks we can create to enable businesses to see the tangible positive impacts AI can bring.
Europe is making progress thanks to comprehensive regulatory frameworks and the rapid growth of new AI companies such as France's Mistral AI. It is time for the UK to face the facts: its initial momentum may not be enough to maintain its leadership in a rapidly changing landscape.
Without proactive measures to strengthen its position and encourage safe innovation, we risk losing our competitive advantage to those who actively define the trajectory of AI development and deployment.
Startup ecosystems
The UK, and particularly London, remains one of the best destinations in the world for technology companies and startups. According to recent rankings, London is tied with New York as the world's second-best startup ecosystem for tech companies, just behind Silicon Valley.
London also continues to be Europe’s largest tech hub, with its startups having raised almost as much investment in 2023 as the next three European cities – Paris, Stockholm and Berlin – combined. Additionally, the UK remains the third-largest tech economy in the world, behind the US and China, boosting its appeal as a hub for international pioneers and small businesses.
However, maintaining this status requires continuous adaptation. Success, both domestically and globally, requires a delicate balance between innovation and regulatory agility. Robust frameworks are invaluable for ensuring the responsible deployment of generative AI, but we must also be careful not to stifle the very innovation that propels us forward.
Last year, the UK government highlighted the importance of an innovation-friendly attitude towards AI, avoiding restrictive rules or standards that might stifle progress rather than encourage it. However, AI is advancing at an unprecedented speed, and the UK must ensure that its light-touch approach to regulation does not become a completely hands-off approach.
Rather than slowing down progress, as some fear, regulation can create a safe environment for businesses to implement and experiment with emerging technologies. Regulating AI is not only about constraining it, but also about laying the solid foundations on which businesses can build.
Responsible innovation
While excessive restrictions can hamper the creativity and potential of AI-driven companies, a lack of oversight risks undermining public trust and fueling ethical concerns. Success is not only about paving the way for the big tech giants, but also about creating and nurturing an environment in which these new disruptive models can be used by small businesses and startups to generate value and gain a competitive advantage.
In this context, regulation should be seen as an enabler of responsible innovation – as a means of ensuring that humans remain in charge. By implementing clear guidelines and standards, government can provide businesses and organizations with the confidence and certainty they need to invest in emerging tools and technologies.
The UK government's overview of its regulatory framework for AI, published in February, highlights a "pro-innovation" approach. It is promising to see this awareness of the need for guidelines, but the non-legislative principles focus on the main players involved in AI development.
To encourage inclusive innovation, the UK needs to involve as many voices as possible when shaping the regulatory landscape. This means actively engaging not only with large tech companies, but also with startups, educational institutions, and small businesses.
As we've already seen, targeted investment in research and development can drive the creation of AI tools that tackle some of the world's most pressing challenges, from reducing pollution to optimizing workplaces to supporting healthcare. Recently we saw an NHS AI tool identify tiny cancers missed by human doctors, and the development of a new tool that may predict which breast cancer patients are most at risk of post-treatment side effects.
Next-generation talent
Another key element to ensuring success and growth is investing in the next generation of talent. The government’s recent announcement of over £1 billion in funding for PhD students in future technology areas is a step in the right direction.
Moving forward, we must continue to facilitate this collaboration between academia and industry, ensuring that students gain practical experience in these increasingly important sectors and that businesses have access to the talent of tomorrow to protect and grow our economy.
Ultimately, finding the balance between regulation and innovation will require close collaboration between policymakers, industry leaders and academic institutions. The UK cannot afford to rest on its laurels or underestimate the challenges ahead. The EU’s adoption of its groundbreaking AI law is a wake-up call for the UK – a reminder that we need to adapt and evolve our guidelines to stay ahead of the curve.
The ideal scenario is a framework that fosters growth and invests in local talent and future potential, to further cement our position as a global leader in AI innovation.
Gavin Poole is CEO of Here East.