In 2017, WannaCry caused significant disruption to the British public and private sectors, exposing vulnerabilities in business and government systems, particularly in the UK. It affected hospitals, health facilities and social services, leading to cancelled, delayed or postponed operations and admissions.
The attack revealed a lack of robust cybersecurity measures and failures in basic IT administration, and highlighted the importance of investing in strong defenses to protect critical public infrastructure. It sparked renewed interest in cybersecurity in the UK and spurred efforts to build resilience against future cyber threats.
Globally, the estimated cost of recovering from the WannaCry attack is between $4 billion and $8 billion.
Shortly after the WannaCry attack, businesses and governments around the world were hit by a similar but more devastating attack known as NotPetya. This attack used some of the same exploits to spread between devices and encrypt data on, and attached to, those devices. Its automated propagation spread far beyond its initial target, making it the first malware attack reported to have cost more than $10 billion.
The costs and impacts of automated attacks are therefore significant, particularly when they spread beyond the limits and targets their creators intended.
What would the next generation of cyberattacks look like?
Attacks based on artificial intelligence (AI) would likely possess greater ability to adapt and evade. They could continually learn from their environment, dynamically adjust their attack vectors, and use advanced obfuscation techniques to bypass security systems. They could also have the ability to analyze defensive measures and detect weaknesses in real time, making them very difficult to detect and mitigate.
AI-based attacks could have even more profound global impacts than WannaCry or NotPetya due to their increased sophistication and adaptability. They could target multiple critical sectors simultaneously, causing cascading outages and disrupting essential services on a larger scale.
This may seem fanciful or futuristic, especially since AI, in the form of ChatGPT, has only just been launched and only responds to textual questions or requests. However, AI has been around for a very long time, and has repeatedly come into fashion by showing great promise, only to struggle to deliver on it. As with many things, it is very often the lesser-known, longer-term developments that hold the most promise.
The great challenge of cyberspace
In 2016, a year before the NotPetya attack, an event took place at the popular Defcon cybersecurity conference in Las Vegas, Nevada. The event was the final of a competition launched in 2013 by the US Defense Advanced Research Projects Agency (DARPA). The Cyber Grand Challenge, as it was called, offered a $2 million first prize and featured no human competitors. Instead, the seven finalists were shiny, seven-foot-tall machine racks representing the culmination of three years of research and development by teams made up of some of the brightest (human) minds in AI.
Security at the event was tight, with only authorized referees allowed into the arena. At the start of the competition, a network cable was symbolically cut, isolating the competitors from their creators and the outside world.
For the competition itself, DARPA had created an entirely new operating system, along with several applications and services running on it. These systems were all created by humans, but contained subtle flaws and vulnerabilities similar to those previously identified in various legitimate production systems.
Building an entirely new operating system and applications meant that every competitor faced the challenge with zero prior knowledge of the systems on which it would be evaluated.
Scoring was based on three measures:
Defense: Competitors had to defend their systems against attacks from other competitors, dynamically inserting code to prevent others from exploiting the vulnerabilities they discovered.
Functionality: Competitors lost points if their fixes affected functionality, degraded performance, or took systems offline.
Attack: Competitors had to identify vulnerabilities in other competitors’ systems, configuration, and code, then successfully craft and deploy exploits against them.
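The interplay of these three measures can be illustrated with a toy scoring function. To be clear, the weights and structure below are hypothetical, invented for illustration; DARPA’s actual Cyber Grand Challenge formula differed, but it rewarded the same three behaviors.

```python
def round_score(defended_attacks: int, total_attacks: int,
                availability: float, functionality: float,
                successful_exploits: int) -> float:
    """Hypothetical per-round score for a Cyber Grand Challenge-style event.

    availability and functionality are fractions in [0, 1]; the other
    arguments are raw counts observed during the round.
    """
    # Defense: fraction of incoming attacks that were blocked.
    defense = defended_attacks / total_attacks if total_attacks else 1.0
    # Functionality: patches that degrade service or take it offline
    # shrink this multiplier toward zero.
    service = availability * functionality
    # Attack: one point per proven exploit against another competitor.
    offense = float(successful_exploits)
    # A multiplicative service term means an offline system scores nothing,
    # mirroring why keeping systems online mattered so much.
    return (defense + offense) * service
```

The multiplicative service term is the interesting design choice: a competitor that takes a service offline to patch it forfeits the points its defense and offense would otherwise have earned that round.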
To assist observers, “pew pew” visualization maps and human commentators were on hand to recount each competitor’s actions and how they factored into the overall score. The atmosphere, at least among observers and the competitors’ teams, was tense and full of drama, especially when, about halfway through the multi-round event, “Mayhem”, the competitor that had gradually built up a significant lead over its opponents, stopped working. The development team behind Mayhem asked to reboot their machine, but the organizers declined.
By the time Mayhem stopped running, it had accumulated a lead of about 10,000 points, while the gap between third and fourth place, the cut-off for prize money, was less than 1,000 points.
All was not lost for Mayhem. Even though it had stopped attacking and defending, it still earned points for functionality and for keeping its systems online. Other competitors cut into Mayhem’s lead, but more slowly than expected.
It is unclear why Mayhem stopped working, or why it started working again in the final stages of the competition. What we do know is that once all was said and done and the competition was over, Mayhem still came out on top.
The lead Mayhem had built up in the early stages of the competition had been enough to see it through. Mayhem had built that lead not by being significantly better at offense or defense, but through the strategic engine guiding its decisions. Mayhem did not take its systems offline to patch them the moment it discovered a vulnerability. Instead, it only took a system offline and patched the vulnerability when one of its competitors discovered the same flaw and attempted to exploit it. While many competitors lost points by taking systems offline to fix them, Mayhem gained points by keeping its systems online, performant and functional.
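Mayhem’s patch-on-demand strategy can be sketched as a simple decision rule. This is a hypothetical reconstruction for illustration only; the real engine’s internals were far more sophisticated, and the class and method names here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DefendedService:
    """Illustrative model of deferring a patch until a flaw is probed."""
    name: str
    known_vulns: set = field(default_factory=set)  # flaws we found ourselves
    patched: set = field(default_factory=set)

    def observe_attack(self, vuln_id: str) -> bool:
        """Patch only when an opponent actually targets a known flaw.

        Eager patching costs availability points, so the patch is
        deferred until the flaw is genuinely at risk. Returns True
        if a patch was deployed in response to this attack.
        """
        if vuln_id in self.known_vulns and vuln_id not in self.patched:
            self.patched.add(vuln_id)  # brief, targeted downtime
            return True
        return False  # no action: stay online and keep scoring
```

For example, a service created with `known_vulns={"flaw-1"}` stays online and unpatched until the first `observe_attack("flaw-1")`, which triggers the patch; subsequent probes of the same flaw change nothing.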
On the attack side, Mayhem dominated the field by creating distractions: making other competitors believe it had found a vulnerability in one aspect of a system when in reality it had not, while it more stealthily exploited another. This distraction technique was so successful that Mayhem even tricked the human commentators into announcing that Mayhem had found a vulnerability where none existed.
While this story of the Cyber Grand Challenge and Mayhem is fascinating in its own right, it’s worth remembering that the event and challenge as a whole were created by DARPA. Not surprisingly, among the audience were several senior officials from the U.S. military’s various cyber commands. At least one of them openly said, “I want one.”
Cyber Command’s new approach
In 2018, two years after the Cyber Grand Challenge and a year after the WannaCry and NotPetya attacks, Plan-X, a US Army project aimed at automating cyber operations, merged with similar US Air Force projects under an obscure and secretive Pentagon department, the Strategic Capabilities Office, to create Project Ike. The collaboration between the various entities of the US military and the Pentagon is known as Joint Cyber Command and Control (JCC2). Beyond year-over-year funding increases, very little is known about Project Ike’s progress, other than that updates from its developers are released every three weeks.
In August 2020, Paul Nakasone (Commander of U.S. Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) and Michael Sulmeyer (Senior Advisor to the Commander of U.S. Cyber Command) wrote an article titled “Cyber Command’s New Approach”, in which they say:
“It’s not hard to imagine an AI-powered worm that could disrupt not only personal computers but also mobile devices, industrial machines and much more.”
The shape of things to come
Although many point out that the US and UK are allies, conduct joint operations against shared adversaries, and are unlikely to directly target, attack or impact UK interests, it is worth remembering that the enablers for WannaCry and NotPetya were originally tools developed, and then lost or stolen, by US intelligence agencies.
In July 2023, Lindy Cameron, CEO of the UK’s National Cyber Security Centre (NCSC), spoke to an audience on the topic of AI and machine learning (ML). In a social media post about the event, the NCSC summarized the speech: “These technologies will shape our future and that of the UK. But our adversaries seek to exploit AI for their purposes.”
A month earlier, in June 2023, Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency (CISA), noted that Chinese cyber operations (China being one of the largest investors in, developers of, and leaders in AI) had “moved from espionage activities to targeting infrastructure and societal disruption”, while adding worrying context and a comparison to the weaponization of AI:
“If we can have conversations with our adversaries about nuclear weapons, I think we should probably think about having those conversations with our adversaries about AI, which, after all, in my opinion, will be the most powerful weapon of this century.”
While the idea that AI is the exclusive preserve of large multinational corporations and state actors may comfort some, it is worth keeping in mind that AI and ML performance accelerators are widely available, ranging in price from around $30 to several thousand dollars, and that at least one of the Cyber Grand Challenge competitors has committed to keeping its system, and its future developments, open source.