Denis Mandich, CTO of Qrypt, a quantum cybersecurity company; founding member of the Quantum Economic Development Consortium and CQT.
With the flood of content about generative AI (GenAI), many people now believe that the oft-repeated misconception that “no one understands quantum mechanics” also applies to GenAI, thinking that no one knows how GenAI works. Applied in this way, the pithy quote from physicist Richard Feynman is false, even though it has been relayed as fact by many industry leaders.
This perspective is forgivable because AI is a new and empirically evolving field, unlike quantum physics, a science established over a century ago. LLM “hallucinations” are a euphemism for “wrong,” “incompetent,” “lying,” “incoherent,” or “stupid.”
This is a good reminder of how far we are from artificial general intelligence (AGI), however defined, and that the statistical sequencing of words by an LLM is not correlated with intelligence.
Even amid this confusion, cybersecurity marketing is plagued by the habit of attaching the latest buzzword to every product without substantive innovation or differentiation, rendering the term meaningless. Anomaly detection, zero trust, and every flavor of XYZ detection and response have all been rebranded as AI.
Whatever the nature of AI, some version of near-fully automated systems is now essential; humans on their own are too inefficient and incapable of dealing with today’s cyberattacks.
This is why AI cybersecurity is a difficult question to address, let alone solve. Could the best AI-based anomaly detection tools have flagged the errors generated by AI tools themselves? No, because they rely on vast training data scraped from the internet, much of which is wrong. AI is incapable of making intelligent decisions in the face of novel cyberattacks because, by definition, it has learned nothing that was not already in its training data.
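To make the point concrete, here is a minimal sketch with hypothetical numbers: a simple statistical anomaly detector trained on “benign” request rates flags a crude flood, but misses a novel attack crafted to blend into the distribution it learned.

```python
import statistics

# Hypothetical training data: benign request rates (requests/sec)
# observed during normal operation.
benign_rates = [100, 102, 98, 101, 99, 103, 97, 100]
mu = statistics.mean(benign_rates)
sigma = statistics.stdev(benign_rates)

def is_anomalous(rate, threshold=3.0):
    """Flag a rate whose z-score against the learned baseline exceeds the threshold."""
    return abs(rate - mu) / sigma > threshold

# A crude volumetric attack is far outside the baseline and gets flagged.
print(is_anomalous(500))   # True

# A novel "low-and-slow" attack that mimics benign statistics sails through:
# the detector has learned nothing that would distinguish it.
print(is_anomalous(101))   # False
```

The detector is only as good as its baseline: any attack engineered to look like the training distribution is, by construction, invisible to it.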
The Challenge of AI in Cybersecurity
Even with countless machine learning and AI models, datasets, and applications in open source communities, AI can only grow incrementally without a bigger quantum computer. The next advances will require a hybrid quantum solution. Until then, difficult-to-quantify questions remain regarding AI.
Failing to secure AI as quickly as we advance it will create cybersecurity challenges, as the new attacks are truly insidious. The goal is shifting from data theft and reconnaissance to modifying AI control systems and training data. An example is the security flaw at Hugging Face, where access keys and tokens for authenticated services were compromised. No information about the attackers or their methods was disclosed.
There are similarities here to the Storm-0558 compromise of Microsoft’s master signing key, which allowed hackers “to gain complete access to virtually any Exchange Online account anywhere in the world,” according to the Cyber Safety Review Board. Leaving aside the fact that a God Key shouldn’t exist, a year has passed and we still don’t know how it happened.
How AI Can Bridge the Gap with Quantum Computing
It is wrong to claim that the LLMs behind AI are not well understood. The premise is absurd: it would mean that AI-powered cybersecurity companies want you to use their tools even though they don’t know how or why they work.
That said, security measures are not built into AI systems, and it is not rational to expect an AI expert to have the skills to understand how to implement them. Cybersecurity companies’ products fail regularly, so it would be unrealistic to expect an AI developer to anticipate the sophisticated intrusion methods used by nation states.
That doesn’t mean the industry can’t work to bridge the gap. There was recent outrage over OpenAI’s appointment of former NSA director Paul Nakasone to its board of directors. Despite the concerns, this should be seen as a rational step in the right direction. Few can argue that Nakasone doesn’t know the adversary’s capabilities and malicious goals when it comes to AI.
As we automate and add AI decision-making tools to our critical infrastructure, we need extraordinary security assurances, not amazing AI features with questionable security measures. These assurances should include third-party evaluation and validation of AI tools used for cybersecurity, which should be accompanied by some level of government regulation.
Virtually every major industry is subject to compliance requirements that can result in financial consequences, industry sanctions, or decertification. AI can be no exception when given so much responsibility. Demand must be driven by the companies that adopt a technology, not the producers and advocates of that technology.
Toward Quantum Computing
While fears of deepfakes dominate headlines, the real threat is silent versions of the same methodology, deployed by external adversaries.
Think of Stuxnet, where false sensor data fooled an automated nuclear control system into destroying itself. The Log4j zero-day also rated 10 out of 10 in CVSS severity because it allowed remote control and code execution on vulnerable systems.
There is no equivalent scoring system when the compromised AI already has control by design. Triggering a desired response may simply require that certain conditions be met from the data or sensor feeds informing the AI’s control plane.
These are the invisible, machine-readable equivalents of headline-grabbing celebrity deepfakes, but with far greater impact. The basic methodology is the same: craft a corrupted imitation of what a human or AI “sees” to precipitate the toxic response and achieve the attacker’s goals.
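The Stuxnet pattern described above can be sketched in miniature (all names and values here are hypothetical): an automated controller trusts its sensor feed, so an attacker who can corrupt that feed replays “normal” readings while the real process drifts out of bounds, and the controller never intervenes.

```python
# Hypothetical safe operating limit for a piece of rotating equipment.
SAFE_MAX_RPM = 1200

def controller_action(reported_rpm):
    """Shut down when the *reported* speed exceeds the safe limit.
    The controller has no independent view of the true process state."""
    return "shutdown" if reported_rpm > SAFE_MAX_RPM else "continue"

true_rpm = [1000, 1300, 1500, 1700]      # the real, escalating process state
spoofed_feed = [1000, 1050, 990, 1010]   # replayed "normal" readings from the attacker

for real, reported in zip(true_rpm, spoofed_feed):
    # The controller sees only the spoofed value, so it keeps running
    # even as the real speed passes the safe limit.
    print(real, reported, controller_action(reported))
```

The flaw is architectural, not algorithmic: no scoring system or smarter model helps when the compromised channel is the controller’s only source of truth.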
When larger quantum computers become available, these channels could be compromised with attacker-crafted data even when it is strongly encrypted, because the encryption itself can be broken by brute force. This quantum threat deserves a fancy but apocalyptic CVSS severity score of 11, because it affects virtually every digital system deployed today. There should be an AI-era name that alludes to digital mind control à la MKUltra while paying homage to its grandfather, Stuxnet. Stultra? Muxnet?
The problem is hard and the solutions are complex, but they are manageable if we understand that no new buzzword will solve them. The race to deploy AI with built-in quantum security must be a major issue in discussions about critical systems.
These technologies are too important to our economy to ignore, but security comes first. Building security into the core of the architecture is far less expensive than patching it later. Start by hiring a team or assigning existing staff to develop expertise in both areas before deploying these technologies.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs, and technology leaders.