“Peace is the virtue of civilization. War is its crime. Yet it is often in the furnace of war that the sharpest instruments of peace are forged.” – Victor Hugo.
In 1971, an ominous message began appearing on several computers that made up the ARPANET, the precursor to what we know today as the Internet. The message, which read “I am the Creeper: Catch me if you can,” was the result of a program called Creeper, which was developed by famed programmer Bob Thomas while he was working at BBN Technologies. Although Thomas’ intentions were not malicious, the Creeper program represents the advent of what we now call a computer virus.
The appearance of Creeper on the ARPANET paved the way for the first antivirus software. Although not confirmed, it is believed that Ray Tomlinson, best known for inventing email, developed Reaper, a program designed to remove Creeper from infected machines. Reaper, a tool built to track down and remove malware, is often considered the birth of the field of cybersecurity. It reflects an early recognition of both the potential power of a cyberattack and the need for defensive measures.
The need for cybersecurity should come as no great surprise, since cyberspace is nothing more than an abstraction of the natural world. Just as we progressed from fighting with swords and spears to bombs and airplanes, so has the war for cyberspace progressed. It all started with a crude virus called Creeper, a brazen harbinger of digital doom. The discovery of weaponized electronic systems necessitated antivirus solutions such as Reaper, and as attacks became more sophisticated, so did defensive solutions. With the advent of networked attacks, digital battlefields began to take shape. Firewalls emerged to replace vast city walls, load balancers act as generals directing resources so that no single point is overwhelmed, and intrusion detection and prevention systems replace sentinels in watchtowers. This is not to say that these systems are perfect; there is always the existential fear that a globally favored benevolent rootkit we call an EDR solution could contain a null pointer dereference acting as a Trojan horse, capable of crashing tens of millions of Windows devices.
Leaving aside catastrophic, and even accidental, scenarios, there remains the question of what comes next. This is where offensive AI, the most dangerous cyber weapon to date, comes in. In 2023, Foster Nethercott published a white paper with the SANS Technology Institute detailing how malicious actors with minimal technical capabilities could leverage ChatGPT to create new malware that can evade traditional security controls. Numerous other papers have also examined the use of generative AI to create advanced worms such as Morris II and polymorphic malware such as BlackMamba.
The seemingly paradoxical solution to these growing threats is to continue the development and research of more sophisticated offensive AI. Plato’s adage, “Necessity is the mother of invention,” perfectly characterizes today’s cybersecurity, where new threats generated by AI are driving the innovation of more advanced security controls. While the development of more sophisticated offensive AI tools and techniques is far from morally laudable, it continues to emerge as an unavoidable necessity. To effectively defend against these threats, we must understand them, which requires further development and study.
The logic of this approach is based on a simple truth. It is impossible to defend against a threat that we do not understand, and without the development and research of these new threats, we cannot hope to understand them. The unfortunate reality is that malicious actors are already exploiting offensive AI to innovate and deploy new threats. Trying to refute this idea would be misguided and naive. That is why the future of cybersecurity lies in the development of offensive AI.
If you are interested in learning more about offensive AI and gaining hands-on experience implementing it in penetration testing, I invite you to attend my upcoming workshop at SANS 2024 Network Security: Offensive AI for Social Engineering and Deep Fake Development, on September 7th in Las Vegas. This workshop will be a great introduction to my new course, SEC535: Offensive AI – Attack Tools and Techniques, which will be released in early 2025. The event will also be a great opportunity to meet several leading AI experts and learn how AI is shaping the future of cybersecurity. You can get the event details and the full list of bonus activities here.
Note: This article was expertly written by Foster Nethercott, a U.S. Marine Corps and Afghanistan veteran with nearly a decade of experience in cybersecurity. Foster owns the security consulting firm Fortisec and is an author for the SANS Technology Institute, which is currently developing the new course SEC535: Offensive AI – Attack Tools and Techniques.