Dr Ian Pratt, Global Head of Security at HP, explains how recent advances in AI are becoming essential tools in detecting, responding to and exploiting threats.
Recent advances in AI provide cybersecurity defenders and threat actors with new tools and capabilities. Cybercriminals have already begun exploring how AI can scale up attacks and are targeting businesses with a new generation of fast-moving threats. But AI can also be put to good use: security teams use it to strengthen threat detection and make remediation more effective. This couldn’t come at a better time, with a report four millions cybersecurity professionals are needed globally.
This year, we expect AI and its impact to become prevalent in cybersecurity, strengthening phishing lures, identifying weak points in defenses, and reducing the time it takes to develop and respond to attacks. This year, cybersecurity teams will also prioritize AI, preparing for the new wave of AI-based threats and using this technology to their advantage.
Three ways AI is expected to impact the cybersecurity landscape:
1. AI will boost social engineering
Cybercriminals will leverage AI to scale social engineering attacks on an unprecedented scale, generating convincing, hard-to-spot phishing lures in seconds. These hypotheses will likely be very plausible, as cybercriminals automate personalized lures using data collected from social media or compromised mailboxes. As a result, lures will be very difficult for employees to spot, even after phishing training. Attackers will also use AI to generate more lures in minority languages, which will appear even more legitimate.
We will also likely see AI-generated mass campaigns increase on key dates. For example, 2024 is expected to see the most people in history vote in the elections. Using AI, cybercriminals will be able to easily create localized lures targeting specific regions. Likewise, major annual events, such as year-end tax filings, sporting events like the Paris Olympics and the UEFA Euro 2024 tournament, as well as retail events like Black Friday and Singles’ Day, will also give cybercriminals hooks to deceive users.
As fake emails have become indistinguishable from legitimate emails, businesses can no longer rely solely on employee training. To protect against AI-based social engineering attacks, organizations must create a virtual safety net for their users. Micro-virtualization creates disposable virtual machines isolated from the PC’s operating system, so they remain protected even if users unintentionally click on something they shouldn’t.
See more : Cybersecurity and AI/ML, before the new era of AI: a recap and a look to the future
2. LLMs bring both opportunities and challenges
Local Large Language Models (LLM) are expected to come to PCs this year, with “AI PCs” revolutionizing the way people interact with their devices. These LLMs will increase user efficiency and productivity and provide several security and privacy benefits by leveraging internet-agnostic AI. These personalized assistants and chatbots will reduce the security risks of sending and storing personal data in the cloud. However, with even more data collected by these local models, the endpoint will become a prime target for threat actors.
As organizations look to use LLM chatbots for convenience, security teams will have another system to defend. These chatbots could serve as a gateway to previously unavailable data. By using targeted prompts to fool corporate chatbots and bypass controls, bad actors could socially engineer corporate LLMs to access confidential data.
3. AI Reduces Barriers to Harmful Attacks on Firmware and Hardware
Advances in cybersecurity technologies will make it more difficult for attackers to access systems and evade detection. But with AI putting powerful technology in the hands of more people, sophisticated capabilities will become more accessible. This availability will allow attackers to innovate and continue to increase attacks against the firmware and hardware layer, where security teams have less visibility on a daily basis. Historically, access to the operating system (OS) required in-depth technical knowledge. But AI will make attacks targeting lower levels of the tech stack more accessible.
We expect to see an increase in the number of advanced cyberattacks, which are harder to detect and more damaging. Cyber events will become more common as attackers use AI to find and exploit vulnerabilities and gain a foothold under the operating system. To defend against this, organizations must now invest more in hardware and firmware security.
A new era for cybersecurity
AI is set to have a significant impact on the threat landscape. However, there is an equal opportunity for security teams to leverage AI to improve threat detection and response and ease pressure on security teams. AI co-pilots will also help defend users with automated analysis to identify targeted phishing lures that attempt to trick employees into making bank transfers or sharing sensitive data.
The arrival of AI PCs in 2024 will bring huge security benefits, allowing users to operate AI more securely on their devices without the security risk of sending and retaining data in the cloud. They will also bring a new level of data privacy, such as automatically locking a device without a driver or launching a privacy screen when a device is monitored.
To securely use AI to their advantage, organizations urgently need an integrated approach to security that prioritizes protection over detection and deploys zero trust principles. Partnering with trusted AI security providers will ensure customers maximize the benefits of AI while being protected against new security and privacy threats.
What new AI protections has your organization adopted against cybersecurity risks? Let us know on Facebook, XAnd LinkedIn. We would love to hear from you!
Image source: Shutterstock