Cybersecurity has always been a game of cat and mouse between the “good guys” and the “bad guys.” With the increasing prevalence of AI, including new forms like generative AI, this ongoing chess match has only become more intense – and it is increasingly clear that AI will serve as a powerful “queen” that can tip the game in favor of whoever wields that piece most effectively.
Cyberattacks are more sophisticated than ever
Bad actors have wasted no time finding ways to integrate generative AI into their operations. They have taken their phishing efforts to a whole new level: messages now arrive as subtle, flowing prose, free of spelling or grammatical errors.
A clever scammer can even “trick” generative AI models into adopting a persona to make a phishing email more convincing; for example, “Make this email look like it’s from the accounting department of a Fortune 500 company” or “Imitate the writing style and mannerisms of Executive X.” With this type of highly targeted, AI-powered phishing attack, bad actors increase their chances of stealing an employee’s login credentials – and with them, access to highly sensitive information such as a company’s email and finances.
Threat actors are also developing their own malicious versions of traditional GPT tools. DarkGPT, for example, is able to access every corner of the Dark Web, making it easier to collect information and resources that can be used for nefarious purposes. There is also FraudGPT, which allows cybercriminals to create malicious code and viruses with just a few keystrokes. The result? Devastatingly effective ransomware attacks, easier than ever to launch, with a lower barrier to entry.
Unfortunately, as long as these illicit activities yield results, there will always be bad actors looking for creative ways to use new technologies like generative AI for sinister purposes. The good news is that businesses can leverage these same capabilities to strengthen their own security posture.
Context is key
In the same way that DarkGPT and FraudGPT can spread harmful resources faster than ever, a responsibly deployed GPT tool can serve useful ones – providing the context needed to fend off potential attacks and respond more effectively to any threats that do get through.
For example, let’s say a security professional notices irregular activity or abnormal behavior in their environment, but they don’t know the next steps for proper investigation or remediation. Generative AI can very quickly extract relevant information, best practices and recommended actions from the collective intelligence of the security domain. Having this complete context allows practitioners to quickly understand the nature of the attack, as well as the appropriate actions to take.
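To make that idea concrete, here is a minimal sketch of what such an assist could look like. The call_llm helper, the alert fields and the prompt wording are hypothetical placeholders rather than any specific vendor’s API; the point is simply that a raw alert plus a well-framed question can return investigation and remediation guidance in seconds.

```python
# Hypothetical sketch: enriching a raw security alert with LLM-generated context.
# `call_llm` stands in for whichever approved generative AI endpoint your SOC uses;
# the alert fields and prompt wording are illustrative, not a vendor API.

import json

def call_llm(prompt: str) -> str:
    """Placeholder for a call to your organization's generative AI provider."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def triage_alert(alert: dict) -> str:
    prompt = (
        "You are assisting a security analyst. Given this alert, summarize the "
        "likely attack technique, list relevant best practices, and recommend "
        "immediate investigation and remediation steps.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )
    return call_llm(prompt)

# Invented example alert, just to show the shape of the input.
example_alert = {
    "source": "EDR",
    "host": "finance-ws-042",
    "observation": "powershell.exe spawned by outlook.exe with an outbound connection to an unknown domain",
    "severity": "high",
}

# print(triage_alert(example_alert))
```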
This ability becomes particularly powerful when security teams can look at their environment holistically and analyze all available data.
Before, it was common to observe a single system for normal behavior – or, perhaps more importantly, for unnatural behavior. It is now possible to examine multiple systems and configurations, including how they interact with one another, to provide a much more detailed picture of what is happening in the environment. As a result, professionals gain a much deeper contextual understanding of the unfolding situation and can make better, more informed decisions.
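As a rough illustration of that baseline-versus-anomaly idea, the sketch below compares each host’s latest activity against its own recent history. The hosts, metrics and threshold are invented for the example; a real deployment would correlate many signals across systems rather than a single count per host.

```python
# Minimal sketch of baselining "normal" behavior across several systems using a
# simple z-score on one metric per host. All values below are made up.

from statistics import mean, stdev

# Hourly outbound-connection counts observed per host (hypothetical history).
history = {
    "web-01":   [120, 115, 130, 118, 125],
    "db-01":    [12, 10, 11, 13, 12],
    "hr-ws-07": [30, 28, 33, 29, 31],
}

# Latest observation for each host.
latest = {"web-01": 127, "db-01": 11, "hr-ws-07": 410}

def is_anomalous(samples: list[int], value: int, threshold: float = 3.0) -> bool:
    """Flag a value that sits more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return sigma > 0 and abs(value - mu) / sigma > threshold

for host, value in latest.items():
    if is_anomalous(history[host], value):
        print(f"{host}: {value} connections deviates sharply from its baseline")
```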
Additionally, generative AI not only helps security professionals make better decisions, it also helps them make faster decisions – with less manual effort.
Today, a great deal of work goes into gaining visibility into the organization’s technology stack and digital footprint, gathering the data and trying to understand what is happening. Given the scale and complexity of today’s technology environments and the volumes of data involved, it has long been nearly impossible to provide comprehensive security coverage or identify every blind spot – and this is largely what benefits malicious actors.
Generative AI not only brings all this data together, it also democratizes it. This allows security professionals to analyze enormous amounts of information in near real time and identify potential threats based on changes in the landscape that previously might only have been discovered by chance. This alone can cut response times to bad actors from days to just minutes – a significant advantage for the good guys.
There is reason to be optimistic
As automobiles became more common in the early 1900s, it was customary for someone to walk ahead of the car carrying a red flag, warning other travelers that something new and unexpected was coming and that they should be aware of their surroundings.
Obviously, society has long since become accustomed to the presence of vehicles on the road. They have simply become part of the world we live in, even as they have grown increasingly sophisticated and powerful.
When it comes to AI, we are at a critical juncture: we must proceed thoughtfully and carefully. Whether it’s cars or AI, there is always risk. But just as we’ve added enhanced safety features to vehicles and tightened regulations, we can do the same with AI.
Ultimately, there are reasons for optimism here. The cat and mouse game between hackers and defenders will continue, as always. But by using AI, and generative AI in particular, to strengthen their overall security posture and shore up their defenses, the good guys will be able to up their game and keep the bad guys where they belong: in check.