When ChatGPT went public in late 2022, it sparked a wave of Skynet references. While today's AI is still a far cry from the dystopian neural network of the Terminator films, it poses unique security challenges that need to be addressed. Skynet, after all, was handed complete control of the U.S. ballistic missile arsenal without proper security and authorization controls, and used that access to launch a nuclear war against humanity.
As AI entities become more autonomous and gain access to more sensitive data and systems, CISOs risk facing their own cybersecurity crisis: current security technologies and practices are not equipped to deal with AI. It won’t be Armageddon, but no company wants its most valuable assets to be leaked as a result of using AI.
Not human, but more than a machine
Chatbots and other AI entities are difficult to classify in terms of IT security. They are not human identities, which are (at least in principle) responsible for protecting their own corporate access keys and passwords. Nor are they machine identities: the software, devices, virtual machines, APIs, and bots that operate within a network. AI entities are inherently different from both, and they require security oversight that accounts for their unique attributes.
AI is a fusion of human-guided learning and machine autonomy. It can take vast amounts of unstructured data and structure it for human consumption, and AI models are designed to hand that information over without question; that is their entire purpose. AI sits somewhere between humans, who (at least in theory) know how to protect their passwords, and machines, which store credentials that can be compromised and stolen. It has more autonomy than a machine but less than a human: it needs access to other systems to do its job, yet it lacks the judgment to know when to apply limits.
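One way to make that concrete: since the model itself won't apply limits, the limits have to be enforced outside it. The sketch below is a minimal, hypothetical example (the identities, resources, and allowlist are invented for illustration), showing a deny-by-default policy gate through which every system call an AI entity attempts must pass.

```python
# Minimal sketch of an external policy gate for AI identities.
# All names (identities, resources, actions) are hypothetical examples.

ALLOWLIST = {
    # AI identity -> set of (resource, action) pairs it may use
    "ai-support-bot": {("crm_tickets", "read"), ("kb_articles", "read")},
    "ai-report-writer": {("sales_db", "read")},
}

def is_allowed(identity: str, resource: str, action: str) -> bool:
    """Return True only if this AI identity was explicitly granted the access."""
    return (resource, action) in ALLOWLIST.get(identity, set())

def guarded_call(identity: str, resource: str, action: str):
    """Gate every system call the AI attempts; deny by default."""
    if not is_allowed(identity, resource, action):
        raise PermissionError(f"{identity} may not {action} {resource}")
    print(f"{identity}: {action} on {resource} permitted")

guarded_call("ai-support-bot", "crm_tickets", "read")   # explicitly granted
try:
    guarded_call("ai-support-bot", "sales_db", "read")  # never granted, so denied
except PermissionError as e:
    print("Blocked:", e)
```

The point of the pattern is that the scope of the AI's access is decided by the organization, not negotiated with the model.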
Investment is outpacing security
Enterprise spending on AI is taking off, with growing investment in AI servers and applications as well as in cloud infrastructure leased to train large language models (LLMs). A recent survey found that companies plan to spend an average of $11.8 million on AI this year, and last year Accenture announced a $3 billion investment in its data and AI practice over three years.
Investment in AI is overshadowing security efforts. The average security professional struggles to keep up with their current tasks and isn’t investing time in securing AI workloads. Current security solutions like access controls and least privilege policies also aren’t easily ported to AI systems. Even with machine identities, organizations don’t always understand the risks they pose or follow security best practices.
In fact, machine identities are often overlooked. The Identity Security Threat Landscape Report 2024 found that 68% of respondents said up to half of their machine identities access sensitive data, yet only 38% of organizations include machine identities with access to sensitive data in their definition of privileged users.
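A practical response to that gap is to treat any machine or AI identity with sensitive-data access as privileged by definition. The sketch below, over an invented inventory, shows roughly how such an audit might flag identities whose access has outgrown their classification.

```python
# Hedged sketch: flag machine/AI identities that touch sensitive data
# but are not classified as privileged. The inventory data is invented.

inventory = [
    {"name": "ci-runner",     "type": "machine", "sensitive_access": True,  "privileged": False},
    {"name": "ai-summarizer", "type": "ai",      "sensitive_access": True,  "privileged": False},
    {"name": "metrics-agent", "type": "machine", "sensitive_access": False, "privileged": False},
]

def audit(identities):
    """Yield identities with sensitive-data access that lack privileged controls."""
    for ident in identities:
        if ident["sensitive_access"] and not ident["privileged"]:
            yield ident["name"]

for name in audit(inventory):
    print(f"Reclassify as privileged: {name}")
```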
Data Leaks and Compromises in the Cloud
While AI security risks aren't unique, their scope and scale could be. Constantly loaded with fresh training data from across an organization, LLMs become high-value targets for attackers the moment companies build them. Because they can't be trained on dummy test data, the data inside them is current and can reveal intellectual property, financial details, and other highly sensitive information. And because AI systems are designed to be trusting and helpful, they are at significant risk of being tricked into disclosing information they shouldn't.
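Since an LLM will hand back whatever its training data or context contains, one last line of defense is filtering its output before it reaches the requester. The sketch below uses deliberately simple, assumed patterns (real DLP tooling is far more thorough) to redact strings that look like credentials or card numbers from a model response.

```python
import re

# Minimal output-filter sketch; the patterns are illustrative, not exhaustive.
REDACTION_PATTERNS = {
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(response: str) -> str:
    """Replace likely-sensitive substrings in a model response before returning it."""
    for label, pattern in REDACTION_PATTERNS.items():
        response = pattern.sub(f"[REDACTED {label}]", response)
    return response

print(redact("Use sk-abcdef1234567890abcd to call the billing API."))
# -> Use [REDACTED api_key] to call the billing API.
```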
Cloud attacks on AI systems enable lateral movement, as well as jailbreaks that can trick systems into serving false information to the public. Cloud identity and account compromises are already common: a string of recent attacks using stolen credentials has caused untold damage to some of the world's biggest brands across the technology, banking, and consumer sectors.
AI could also be used to power attacks. For example, it could let an attacker rapidly evaluate every permission associated with a given role, making it far easier to move laterally within an organization.
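Defenders can run that same analysis first. The sketch below, over an invented role-to-permission map, scores roles by how many high-risk permissions they carry; a role whose grants span many systems is exactly the pivot point an attacker's AI would surface.

```python
# Hedged sketch: score roles by high-risk permission count to find
# likely lateral-movement pivots. Role and permission names are invented.

HIGH_RISK = {"iam:CreateKey", "db:Export", "secrets:Read", "vm:Exec"}

roles = {
    "reporting-bot": {"db:Read"},
    "ops-agent":     {"vm:Exec", "secrets:Read", "db:Export"},
    "ai-assistant":  {"kb:Read", "secrets:Read"},
}

def risky(perms: set) -> set:
    """Return the subset of a role's permissions considered high-risk."""
    return perms & HIGH_RISK

# Review the broadest roles first, before an attacker enumerates them.
for name, perms in sorted(roles.items(), key=lambda r: -len(risky(r[1]))):
    hits = risky(perms)
    if hits:
        print(f"{name}: {len(hits)} high-risk permissions -> {sorted(hits)}")
```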
So where does this leave us? The use of AI and LLMs in organizations is so new that it will take time for security best practices to emerge. In the meantime, CISOs can't sit back and do nothing; they need to start developing strategies to protect AI identities before they're forced to, whether by a cyberattack or by regulation. AI already falls under compliance standards, regardless of where the data is stored: a GDPR complaint has been filed in Europe against OpenAI, claiming that ChatGPT's responses provided false data about consumers.
Building an AI security culture
While there is no silver bullet for AI security, there are steps organizations can take to address these issues. The following will help CISOs improve their AI identity security posture as the market evolves.
- Look for overlap: Identify areas where existing security practices and policies can extend to AI. Leverage existing controls such as access management and least privilege where possible (a minimal sketch follows this list).
- Secure the environment: Understand and protect the environment in which AI will live. There is no need to purchase an AI security platform; just secure the environment in which AI activity takes place.
- Create an AI security culture: Foster an AI security mindset. Include security representatives in AI think-tank and skunkworks efforts, and enlist team members who can bring resources and skills to bear on mitigating risk. This cultural shift extends to how data is handled and how the LLM is trained.
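As a minimal illustration of the first point, existing credential hygiene ports directly: instead of a static key, an AI workload can be issued a short-lived, narrowly scoped token. Everything in this sketch (the issuer, scopes, and lifetime) is a hypothetical stand-in for whatever secrets-management tooling an organization already runs.

```python
import secrets
import time

# Hedged sketch: short-lived, scoped credentials for an AI workload.
# The token format and scopes are invented; real deployments would use
# an existing secrets manager or STS rather than this toy issuer.

def issue_token(identity: str, scopes: set, ttl_seconds: int = 900) -> dict:
    """Mint a narrowly scoped token that expires instead of living forever."""
    return {
        "identity": identity,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
        "value": secrets.token_urlsafe(32),
    }

def check(token: dict, scope: str) -> bool:
    """Accept the token only if it is unexpired and grants this scope."""
    return time.time() < token["expires_at"] and scope in token["scopes"]

token = issue_token("ai-report-writer", {"sales_db:read"})
print(check(token, "sales_db:read"))   # True while the token is fresh
print(check(token, "sales_db:write"))  # False: scope was never granted
```

The design choice here is the same one already applied to human and machine identities: limit both what a credential can do and how long it lives.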
The AI at the heart of Skynet is very different from the AI that helps us write, code, and harness data to improve business operations today, but the story carries a security lesson that applies directly to generative AI and LLMs. We can't let AI entities fall through the identity cracks just because they're neither human nor machine and play by different rules. Start planning for AI security today with the resources you have, and avoid AI identity security oversights.