Even without taking AI into account, data breaches seem bad enough these days. Almost every week, a new high-profile breach makes the news. Some are very costly, like the recent breach of UnitedHealth Group’s Change Healthcare unit, which is expected to cost the company up to $1.6 billion.
Now imagine pouring AI onto this fire, and there is good reason to be concerned. Cyberattacks have always required a certain degree of intelligence and patience from hackers. With AI, however, the danger is a sharp increase in the frequency and scale of the attacks we are already seeing.
The channel will of course be tempted to sell AI-powered cybersecurity solutions – ‘fight AI with AI’, as they say. But that solution doesn’t address the root cause of breaches. In fact, AI is not even the biggest threat companies should worry about: social engineering is.
The Rise of AI in Social Engineering
Social engineering is the main cause of cyberattacks. To get an idea of its prevalence, 68% of cyberattacks involve a human element. What do these attacks look like in practice? You receive an email from your HR department asking for your password to access a retirement platform.
Maybe it arrives right after your company has announced a change in pension provider. The timing checks out, so what’s the harm, right? The problem is that the “HR person” is not actually in HR. They’re an impersonator who only learned about the pension provider change because they saw your company talking about it on LinkedIn.
These attacks continue to succeed, ridiculous as that may seem, and generative AI will likely catapult their scale to stratospheric volumes. Generative AI tools like WormGPT, also known as “hackbot as a service,” now enable cybercriminals to craft more convincing phishing campaigns and impersonations, while reducing the time and cost required to launch these attacks. A teenager in a basement in Ohio could theoretically carry out a large number of social engineering attacks in a single day.
Want to Stop Social Engineering? Don’t Fragment Employee Identity
The reason these breaches keep happening is not the sophistication of the scheme, nor some elaborate software vulnerability or exploit. People are the problem, or rather, the secrets they leave behind open the door for malicious actors to penetrate modern infrastructure and pivot through its resources.
Most breaches involve cybercriminals targeting some form of privilege, usually credentials like passwords, browser cookies, or API keys. These credentials are everywhere: they appear in 86% of security breaches involving web applications and platforms. Many organizations even have credentials hard-coded into their codebases.
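To make the hard-coded-credentials problem concrete, here is a minimal sketch in Python of the alternative (reading a secret from the environment at runtime) alongside a naive scanner that flags suspicious lines. The environment variable name and the regex are illustrative assumptions, not a real product; production teams would use a dedicated secret scanner and a secrets manager.

```python
import os
import re

def get_api_key() -> str:
    """Read the secret from the environment instead of committing it to code.
    PAYMENTS_API_KEY is a hypothetical variable name for illustration."""
    key = os.environ.get("PAYMENTS_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key

# Naive pattern for lines that look like hard-coded credentials.
SECRET_PATTERN = re.compile(
    r"""(password|api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str) -> list[int]:
    """Return 1-based line numbers that appear to embed a secret."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]
```

A scan like this catches only the most obvious cases, which is exactly the point: even trivially detectable secrets routinely survive code review.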
The extent to which bad actors rely on AI for phishing campaigns is not the issue to focus on. The real battle is preventing employees and companies from leaving secrets, like credentials, where they shouldn’t be. It’s that simple: social engineering will never go away. That’s why the modern security imperative must be to eliminate secrets, now scattered like plastic across an ocean basin through many disparate layers of the technology stack: Kubernetes, servers, cloud APIs, specialized dashboards, databases, and much more.
These layers all handle security in different ways, and the consequence for our industry is a multitude of silos, each of which opens up new attack vectors for malicious actors. Adding AI to workflows will inevitably create yet another silo, but it doesn’t have to be that way.
Consolidate AI With the Rest of Your Identity
Data transparency has long been a tricky problem for generative AI. If a leak occurs, you want to know quickly what data the AI agent had access to and who had access to the agent itself. Not all companies handle data the same way, so finding the source of truth is rarely easy.
To reduce friction, companies must resist the urge to treat AI agents as a separate technology silo. They should consolidate the identities of their AI agents with all of their other assets, including servers, laptops, microservices, and so on, into a single inventory that provides one source of truth for identity and access relationships. To further streamline things, companies should apply the same rules and policies to AI as they do to everything else.
Additionally, no company should ever base employee identities on knowledge factors like passwords or usernames. It’s time for every company hosting modern infrastructure to secure identities with cryptography. That means basing access not on passwords but on physical-world attributes like biometric authentication, and enforcing access with short-lived privileges granted only for the individual tasks that need to be performed.
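The idea of short-lived, task-scoped privileges can be sketched in a few lines. This is an illustrative toy, not a production design: the signing key, claim names, and five-minute TTL are assumptions, and a real system would keep the key in a TPM/HSM and use a standard format such as short-lived X.509 certificates or signed JWTs.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # illustration only; real deployments keep this in a TPM/HSM

def issue_short_lived_token(user: str, task: str, ttl_seconds: int = 300) -> str:
    """Grant a privilege scoped to a single task, valid for a few minutes."""
    claims = {"sub": user, "task": task, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str, task: str) -> bool:
    """Reject tokens that are forged, expired, or scoped to a different task."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
    return claims["task"] == task and claims["exp"] > time.time()
```

Because every credential expires in minutes and is bound to one task, a stolen token is worth far less to an attacker than a long-lived password.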
A cryptographic identity for employees can consist of three key elements: the machine identity of the device used, the employee’s biometrics, and a personal identification number (PIN). The goal of this approach is to significantly reduce the attack surface that threat actors can exploit with social engineering tactics. If you need an example of this security model, it already exists: it’s called the iPhone. It uses facial recognition for biometric authentication, a PIN, and a dedicated security chip (Apple’s Secure Enclave) inside the phone that anchors its “machine identity.” This is why you rarely hear about iPhones being hacked.
This doesn’t mean the channel won’t succeed in selling AI-based cybersecurity tools. Clearly, they have their uses for analyzing threat activity and detecting anomalies in infrastructure. But they don’t address human error. With or without AI, social engineering will always rely on human error, and that’s where anti-malware and virus remediation tools won’t cut it, no matter how much AI you throw at them. People will still leave their passwords lying around on an unlocked laptop in a coffee shop. So companies can sing the praises of AI, but they need to get rid of passwords first.