The AI debate rages and skepticism is high. But AI is here to stay. While some headlines criticize tech giants for their AI-driven social networks or questionable consumer tools, AI itself is becoming indispensable.
Very soon, AI will be as integral to our lives as electricity – powering our cars, shaping our healthcare, securing our banks, and keeping the lights on. The big question is: are we ready for what’s next?
The public debate around AI has largely focused on ethics, misinformation and the future of work. But one vital issue has gone largely unnoticed: the safety of AI systems themselves. With AI integrated into almost every area of society, we are creating massive, interconnected systems with the power to shape – or, in the wrong hands, destroy – our daily lives. Are we prepared for the risks?
As we give AI control over more tasks – from diagnosing diseases to managing physical access to sensitive sites – the consequences of a cyberattack grow exponentially. Worryingly, some AI is as fragile as it is powerful.
There are two main ways to attack AI systems. The first is to steal data, compromising everything from personal health records to sensitive corporate secrets. Hackers can trick models into divulging protected information, whether by exploiting medical databases or coaxing chatbots into bypassing their own safety guardrails.
The second is to sabotage the models themselves, skewing their outputs in dangerous ways. An AI-powered car tricked into misreading a “Stop” sign as “70 mph” illustrates how real the threat can be. And as AI develops, the list of possible attacks will only grow.
Yet abandoning AI because of these risks would be the biggest mistake of all. Sacrificing competitiveness for security would leave organizations dependent on third parties, lacking experience and control over technology that is quickly becoming essential.
So how can we reap the benefits of AI without falling victim to its risks? Here are three critical steps:
Choose AI wisely. Not all AI is equally vulnerable to attack. Large language models, for example, are particularly exposed because they rely on vast data sets and statistical methods. Other types of AI, such as symbolic or hybrid models, consume less data and operate according to explicit rules, making them harder to subvert.
Deploy proven defenses. Tools like digital watermarking, cryptography and tailored training can harden AI models against emerging threats. For example, Thales’ Battle Box allows cybersecurity teams to stress-test AI models, detecting and fixing vulnerabilities before hackers can exploit them.
Improve organizational cybersecurity. AI does not operate in isolation: it is part of a larger information ecosystem, so traditional cybersecurity measures must be strengthened and adapted for the AI era. It starts with employee training; after all, human error remains the Achilles heel of any cybersecurity system.
Some might think that the battle over AI is just another chapter in the ongoing conflict between bad actors and unwitting victims. But this time, the stakes are higher than ever. If AI safety is not prioritized, we risk ceding control to those who would use its power for harmful purposes.
In the UK, Thales has also invested in a state-of-the-art facility in Ebbw Vale, South Wales, to carry out pioneering work in cybersecurity and its real-world applications, including AI.
Situated on the former site of one of Europe’s largest steelworks, the site was first opened in 2019 thanks to investment from the company, academia and the Welsh Government.
The facility grew from a single project into a cyber campus, including the creation of the Global Operational Technology Competence Center. It also serves as a platform to test and secure AI-based systems critical to the UK’s infrastructure.
It showcases real-world examples of how secure AI can prevent disruptions, such as fraudulent financial transactions or faulty automated braking systems.
Ebbw Vale is also home to Thales’ UK Cyber Range, test and reference benches, workshops and an autonomous vehicle test track, as well as an immersive customer experience centre.
The facility also specializes in the resilience of autonomous vehicles and systems: how it can be measured, quantified and ultimately “proven” as part of safety-critical systems.
Patrice Caine, Chairman and CEO of Thales