As we continue to delegate more infrastructure operations to artificial intelligence (AI), quantum computers are moving toward Q-day, the day when a quantum computer can break current encryption methods. This could compromise the security of digital communications, as well as the autonomous control systems that rely on AI and machine learning (ML) to make decisions.
As AI and quantum computing converge to enable extraordinary new technologies, they will also combine to produce new threat vectors, including quantum cryptanalysis.
Convergence of AI and quantum computing
The evolution of post-LLM AI will require quantum advances, because we are now reaching the hard energy and computing limits of GPU-powered data centers. Efficiency improvements aside, doubling the power of a digital computer requires roughly twice as many transistors.
For quantum computers, just one additional logical qubit doubles their computing power, which justifies the vast investments and the global arms race to produce them. The implications are profound: a few hundred logical qubits represent more computing power than all the digital computers that could ever be built at any scale. This opens the door to otherwise inaccessible resources and algorithms for AI and for many scientific fields. Although Shor's algorithm is now most famous for breaking encryption, many more quantum algorithms are still to come.
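To make the doubling concrete, here is a back-of-envelope calculation in Python; the 300-qubit figure and the ~10^80 atom count are illustrative assumptions, not numbers from any specific machine:

```python
# Back-of-envelope arithmetic behind the doubling claim (illustrative only).
# A classical register's capacity grows linearly with transistors, while a
# quantum register's state space doubles with each added logical qubit.
logical_qubits = 300
state_space = 2 ** logical_qubits   # doubles with every additional qubit
atoms_in_universe = 10 ** 80        # common order-of-magnitude estimate

print(f"{float(state_space):.2e} basis states for {logical_qubits} logical qubits")
print(state_space > atoms_in_universe)  # True: more states than atoms to build with
```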
As these hybrid computers grow, so will their data needs. AI-generated content may have already surpassed human-generated content, and the loss of a reliably human-dominated data pool likely began around 2020. A glance at LinkedIn reveals charts and uniform vignettes written in generic language that resembles summaries rather than original thought. AI researchers have described the symptoms of a chronic illness, variously characterized as model collapse or model autophagy ("madness"), in which the AI's primary source of nutrition is junk food generated by other AIs, euphemistically known as synthetic data. A better analogy and name is kuru, but it is not just a disease; it is an omen.
Hackers and intelligence agencies have turned every computing advance to their advantage, and the same will happen with AI long before any big-name AI company turns a profit.
Nearly everyone has now received a perfectly written phishing email with none of the telltale signs. It is increasingly difficult for AI-detection tools to differentiate between human- and AI-generated content.
Attacks on systems controlled entirely by AI will look more like Stuxnet and less like WannaCry, where the attack was apparent. Data will be targeted not for theft but for the corruption, influence and exploitation of AI systems, and these attacks will be among the hardest to detect and remediate because they mimic the way AI is trained today: often on synthetic data that presents the same statistical characteristics as the authentic original.
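As a minimal sketch of what such statistical mimicry could look like, the following Python snippet is illustrative only; the two-feature dataset and the label-inversion objective are assumptions for the example, not a documented attack. It generates poisoned data that matches the mean and covariance of a clean dataset while teaching the opposite decision rule:

```python
# Minimal sketch of statistical-mimicry poisoning (illustrative assumption,
# not an attack described in the article).
import numpy as np

rng = np.random.default_rng(0)

# "Authentic" training data: two features, labels from a simple rule.
clean_X = rng.normal(loc=0.0, scale=1.0, size=(10_000, 2))
clean_y = (clean_X[:, 0] + clean_X[:, 1] > 0).astype(int)

# Poisoned synthetic data: sampled to match the clean feature statistics...
mean, cov = clean_X.mean(axis=0), np.cov(clean_X, rowvar=False)
poison_X = rng.multivariate_normal(mean, cov, size=10_000)
# ...but labeled with the attacker's objective (here, the rule inverted).
poison_y = (poison_X[:, 0] + poison_X[:, 1] <= 0).astype(int)

# First- and second-order statistics are nearly identical, so a screen that
# only checks distributional fit would pass the poisoned batch.
print(np.abs(poison_X.mean(axis=0) - mean))          # ~0
print(np.abs(np.cov(poison_X, rowvar=False) - cov))  # ~0
```

The point of the sketch is that the poisoned batch is indistinguishable by the simple distributional checks a data pipeline might apply, even though it trains the model toward the attacker's goal.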
Data contamination and poisoning have already begun, but the most secure networks maintain their integrity through strongly encrypted channels and the cryptographic discipline established over the past two decades. This, however, will be insufficient against cryptographically relevant quantum computers.
How far away is this threat?
The transition to post-quantum cryptography (PQC) will take at least a decade for large businesses and governments, and probably much longer.
The scale of networks and data has exploded since the last upgrade of encryption standards, and that same growth has given rise to large language models (LLMs) and the specialized technologies likely to accompany them. While the generic versions are interesting and even fun, powerful AI will be trained on carefully selected data for specific tasks. It will quickly consume all the research and historical information ever produced and deliver deep insights and innovations at an accelerated pace. This will augment human ingenuity rather than replace it, but it will bring a period of disruption for cybersecurity.
If a cryptographically relevant quantum computer arrives before PQC is fully deployed, the consequences in the AI era are unknowable. Ordinary hacking, data loss and even social media misinformation will seem like fond memories of the good old days before AI controlled by bad actors became the largest producer of cyber-carcinogens. When AI models themselves are compromised, the cumulative impact of feeding tailored data to AI-controlled systems with malicious intent will become a global concern.
Debate is already raging in Silicon Valley and in government circles over whether AI should be allowed to carry out lethal military actions. Whatever the current difficulties, this is unquestionably the future.
Defensive actions, however, are clear and urgent for most networks and business operations. Critical infrastructure architectures and networks must evolve rapidly, with much stronger security, to meet both the AI and the quantum threat. The comforting simplicity of upgrading libraries such as TLS will not be enough with so much at stake and with unknowable new combined AI-quantum attacks.
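Still, it is worth seeing what that baseline library upgrade looks like in practice. The sketch below shows the hybrid (classical plus post-quantum) key-exchange pattern behind PQC migrations of protocols like TLS: two secrets feed one key derivation, so the session stays safe unless both halves are broken. This is a minimal illustration, assuming the liboqs-python (`oqs`) bindings and the `cryptography` package are installed; the mechanism name "ML-KEM-768" depends on the installed liboqs build and is not taken from this article.

```python
# Hedged sketch of a hybrid classical + post-quantum key exchange.
# Assumes liboqs-python ("oqs") and the "cryptography" package.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman.
client_ecdh = X25519PrivateKey.generate()
server_ecdh = X25519PrivateKey.generate()
classical_secret = client_ecdh.exchange(server_ecdh.public_key())

# Post-quantum half: an ML-KEM encapsulation (name varies by liboqs version).
with oqs.KeyEncapsulation("ML-KEM-768") as server_kem:
    kem_public = server_kem.generate_keypair()
    # The client encapsulates against the server's public key...
    with oqs.KeyEncapsulation("ML-KEM-768") as client_kem:
        ciphertext, pq_secret_client = client_kem.encap_secret(kem_public)
    # ...and the server decapsulates the same shared secret.
    pq_secret_server = server_kem.decap_secret(ciphertext)

assert pq_secret_client == pq_secret_server

# Both secrets feed one KDF, so an attacker must break BOTH the
# classical and the post-quantum primitive to recover the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-demo",
).derive(classical_secret + pq_secret_client)
```

Concatenating both secrets before the KDF is the standard hedge during the transition: a future quantum break of X25519, or an unforeseen flaw in the newer ML-KEM, alone does not expose the session.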
Internet 1.0 was built on outdated assumptions and parameters from the 1970s, predating modern cloud technology and its massive redundancy. The next version must be exponentially better and must anticipate the unknown, on the assumption that our current security estimates are wrong. Cybersecurity should not be caught off guard by the AI version of Stuxnet; the last go-around had warning signs years earlier.