There is no doubt that the rapid democratization and accessibility of artificial intelligence (AI) resources pose a threat. Most information security professionals don’t have the time to keep up with the lightning-fast developments in AI, so in this article I aim to give a taste of the current state of the field (January 2024) – and how AI tools could be used and exploited for both attack and defense.
Nearly five years ago, I decided to learn and document the inner workings of AI. From this body of work, one of the most important lessons is that in the age of AI, there is a need for perpetual learning. We live in a time where today’s news will almost certainly be out of date next month. Here are some of the AI capabilities that have changed over the past 12 months.
Voice cloning
From a cybercriminal’s perspective, the first low-hanging fruit is that many biometric authentication methods have become extremely inexpensive to defeat. Take a three-second audio clip of someone, perhaps from a video, phone call or podcast, run it through a free or low-cost AI voice cloning service and, just like that, you can make it appear that anyone is saying anything, in real time, in any emotion, style or language.
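To illustrate how little effort this now takes, here is a minimal sketch using the open-source Coqui TTS package and its XTTS v2 zero-shot cloning model – one of several freely available options, and the exact model name and API may differ between releases. The reference clip, text and output path are placeholders.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS package (pip install TTS).
# Illustrative only: run this kind of test solely against voices you have consent to use.
from TTS.api import TTS

# Load a multilingual zero-shot voice-cloning model (model name assumed here;
# check the library's current model list, as names change between releases).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio is enough to reproduce the speaker's voice.
tts.tts_to_file(
    text="This is a demonstration of how quickly a voice can be cloned.",
    speaker_wav="reference_clip.wav",   # placeholder: short clip of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```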
Consider what this means. Any entity that still relies on voice authentication, for example, is at high risk. Certainly, it should now be relatively easy to argue that money or other assets obtained via voice authentication cannot be reliably and robustly attributed to the account holder. It also means that any phone conversation you think you’re having with someone you know might actually be with a nefarious actor.
This voice cloning capability has been around for over a year now, and there have been many cases of people being called by fake versions of close relatives demanding immediate money. This was highlighted in a warning issued by the US Federal Trade Commission (FTC) in March 2023.
A year ago, this form of voice cloning required a longer voice sample and longer processing time. Over the past few months, the effort required for a voice attack has been reduced to roughly the same level as the time it takes to write a decent phishing email.
Image generation
The situation is now similar for facial recognition. AI image generators have been shown to be able to create not only faces, but also images of identity documents such as driver’s licenses. In recent weeks, these image generators have also demonstrated their ability to produce completely convincing, but completely fake, images of real people holding up their driver’s licenses.
Fingerprints are harder to forge, mainly because they are less easily shared or available online (for now). However, even for fingerprints, if the print itself can be imaged, fake prints can be fabricated that will pass most basic sensors.
All of this means security teams will likely need to strengthen authentication quickly by requiring additional layers of information capable of verifying access more deeply: geographic location, IP address, time of day, behavior patterns, installed security certificates – and even custom challenge/response questions.
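As a rough illustration of what that layering might look like, the hypothetical sketch below scores a login attempt against several independent signals so that no single factor (such as a voice match) can authorize access on its own. The signal names, weights and threshold are invented for illustration, not taken from any particular product.

```python
# Hypothetical risk-scoring sketch for layered (multi-signal) authentication.
# All signal names, weights and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    voice_match: bool          # passed voice biometrics (no longer sufficient on its own)
    geo_matches_profile: bool  # request originates from a usual country/region
    ip_previously_seen: bool   # IP address seen for this account before
    within_usual_hours: bool   # time of day consistent with past behavior
    device_cert_valid: bool    # registered device / security certificate present
    challenge_answered: bool   # custom challenge/response question answered correctly

WEIGHTS = {
    "voice_match": 1,
    "geo_matches_profile": 2,
    "ip_previously_seen": 2,
    "within_usual_hours": 1,
    "device_cert_valid": 3,
    "challenge_answered": 3,
}
APPROVAL_THRESHOLD = 9  # illustrative: requires several layers to agree

def risk_score(attempt: LoginAttempt) -> int:
    """Sum the weights of the signals that check out for this attempt."""
    return sum(weight for name, weight in WEIGHTS.items() if getattr(attempt, name))

def is_authorised(attempt: LoginAttempt) -> bool:
    return risk_score(attempt) >= APPROVAL_THRESHOLD

# Example: a cloned voice alone scores 1 and is rejected without the other layers.
spoofed = LoginAttempt(True, False, False, False, False, False)
print(is_authorised(spoofed))  # False
```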
Creating AI entities
On the other hand, there is an emerging capability that can potentially be exploited by both security teams and cybercriminals: the ability to create and train your own autonomous AI entity. Whereas just a year ago running an AI with the skills of a large language model (LLM) would have required a reasonably sized data center, anyone with a decent personal computer and graphics processing unit (GPU) can now run and train their own AI.
Where such an undertaking until recently required considerable effort, the website jan.ai now offers the ability to install and run LLMs on a PC in your own home. From a purely technical perspective, I can create my own large language model and train it to do anything I want, such as turning it into a virtual CISO or training it as a nefarious cybercriminal (although I’m sure some uses would violate the terms and conditions – not to mention the laws in force in many territories).
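For a sense of how low the barrier has become, the sketch below loads a locally downloaded, quantized model with the llama-cpp-python bindings – one common option alongside desktop tools such as jan.ai. The model file path, generation parameters and prompt are placeholders, and prompting a model this way is steering rather than full training, but it shows how easily a local LLM can be put to work on ordinary hardware.

```python
# Running a local LLM on a consumer PC/GPU with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder - any GGUF-format model downloaded locally will do.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # placeholder path to a quantized model
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload as many layers as possible to the GPU, if one is present
)

# Steer the local model toward a "virtual CISO" style role via the prompt.
response = llm(
    "You are an assistant to a CISO. List three controls that reduce the risk "
    "of voice-cloning fraud against a finance team.",
    max_tokens=256,
)
print(response["choices"][0]["text"])
```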
Leveraging AI Opportunities in Cybersecurity
What is the benefit for security teams? Is there one? In short – yes.
In 2024, as we delve deeper into the skills and possibilities of AI, knowledge is no longer difficult to acquire. The right investments in freely available AI tools and insights can provide security teams with what they need to stay ahead of AI-enabled cybercriminals.
You don’t need to be an AI expert to navigate the tools that are emerging – you just need the time and desire to understand and exploit them. Even if cybercriminals train their own LLMs, nothing (except time and money) stops companies that are serious about security from freeing up people and resources to assemble their own AI expertise and tools.
Yet a recent ISACA survey shows that 54% of organizations do not provide AI training, even to teams directly impacted by AI.
In the past, it tended to be only the organizations with the largest security control gaps that were taken down. What will happen now is that malicious use of AI will breach all kinds of once-secure organizations through the smallest chains of vulnerabilities. The solution could be to allow your security team and personnel to invest the time and training needed to build your own AI defense capability.
It has never been more urgent to invest in freeing up and training security resources to understand and use AI for countermeasures. This effort is crucial both to understanding emerging risks quickly and to building the most robust AI-based defenses.
Right now, most of these defensive tools are easy to access – but in this rapidly evolving era of AI, that could change within months. Keeping up with these changes isn’t just about staying informed: it’s an essential strategy for maintaining security in an increasingly AI-dominated landscape.