Mikko Hyppönen has spent decades on the front lines of the fight against malware. The 54-year-old has defeated some of the world’s most destructive computer worms, tracked down the creators of the first PC virus, and has been selling his own software since he was a teenager in Helsinki.
In the years that followed, he earned a Vanity Fair profile, a place among the world’s top 100 foreign policy thinkers, and the role of director of research at WithSecure, the largest cybersecurity company in the Nordic countries.
The pony-tailed Finn is also the curator of the online Malware Museum. Yet the entire history of its archives could be eclipsed by a new technological era: the era of artificial intelligence.
“AI changes everything,” Hyppönen told TNW in a video call. “The AI revolution is going to be bigger than the Internet revolution.”
As a self-proclaimed optimist, the hacker hunter expects the revolution to leave a positive impact. But he also worries about the cyber threats this will trigger.
As 2024 dawns, Hyppönen revealed his five most pressing concerns for the year ahead. They don’t come in any particular order, although there is one that causes the most sleepless nights.
Researchers have long described deepfakes as the most alarming criminal use of AI, but synthetic media has still not fulfilled those predictions. Not yet, anyway.
In recent months, however, their fears have begun to come true. Deepfake scam attempts increased by 3,000% in 2023, according to research from Onfido, an identity verification unicorn based in London.
In the world of information warfare, fabricated videos are also making headway. The crude deepfakes of Ukrainian President Volodymyr Zelensky from the early days of Russia’s full-scale invasion have recently been replaced by more sophisticated media manipulation.
Deepfakes are also now appearing as simple inconveniences. The most notable example was discovered in October, when a video appeared on TikTok that purported to show MrBeast offering new iPhones for just $2.
A lot of people are getting this fraudulent ad of me… are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem pic.twitter.com/llkhxswQSw
– MrBeast (@MrBeast) October 3, 2023
Yet financial scams that exploit convincing deepfakes remain rare. Hyppönen has only seen three so far – but he expects that number to proliferate quickly. As deepfakes become more refined, accessible and affordable, their scale could increase rapidly.
“This is not happening on a large scale yet, but it will become a problem in a very short time,” says Hyppönen.
To reduce the risk, he suggests an old-fashioned defense: safe words.
Imagine a video call with colleagues or family members. If someone requests sensitive information, such as a money transfer or a confidential document, ask for the safe word before fulfilling the request.
“At the moment it seems a bit ridiculous, but we should still do it,” advises Hyppönen.
“Putting a safe word in place now is very cheap insurance against the day this starts happening on a large scale. This is what we should remember now for 2024.”
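The safe-word defence can be sketched as a simple verification gate. This is an illustrative helper, not anything Hyppönen prescribes; the word, names, and functions here are hypothetical, and in practice the word would be agreed out of band and never stored anywhere an attacker could reach:

```python
import hmac

# Example only — agree on your own word, offline, never over the channel
# you are trying to protect.
AGREED_SAFE_WORD = "periwinkle"

def verify_caller(claimed_word: str) -> bool:
    """Return True only if the caller knows the pre-agreed safe word.

    hmac.compare_digest avoids leaking information through timing
    differences when the comparison fails.
    """
    return hmac.compare_digest(claimed_word.strip().lower(), AGREED_SAFE_WORD)

def handle_request(claimed_word: str, request: str) -> str:
    # Refuse sensitive actions (transfers, documents) without the word.
    if not verify_caller(claimed_word):
        return f"REFUSED: {request}"
    return f"APPROVED: {request}"
```

The point is not the code but the protocol: a deepfaked face and voice can be generated on demand, while a secret agreed in person cannot.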
Despite the similar-sounding name, deep scams don’t necessarily involve manipulated media. In their case, the “deep” refers to the massive scale of the scam. This is achieved through automation, which can expand the pool of targets from a handful to infinity.
These techniques can boost all kinds of scams. Investment scams, phishing scams, real estate scams, ticket scams, romance scams… wherever there is manual labor, there is room for automation.
Remember the Tinder Swindler? The scammer stole around $10 million from women he met online. Imagine if he had been equipped with large language models (LLMs) to spread his lies, image generators to add apparent photographic evidence, and translation tools to localise his messages. The pool of potential victims would be enormous.
“You could scam 10,000 victims at the same time instead of three or four,” says Hyppönen.
Airbnb scammers can also reap the rewards. Currently, they typically use images stolen from real listings to convince vacationers to make a reservation. It’s a laborious process that can be foiled with a reverse image search. With GenAI, these barriers no longer exist.
“With Stable Diffusion, DALL-E, and Midjourney, you can simply generate an unlimited number of totally plausible Airbnbs that no one will be able to find.”
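The reason reverse image search stops working can be sketched in a few lines. Real reverse image search uses perceptual hashing rather than the exact hashing below, but the principle is the same, and it is why stolen photos can be caught while generated ones cannot (all names and byte strings here are illustrative):

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for an image fingerprint (real systems use perceptual hashes)."""
    return hashlib.sha256(image_bytes).hexdigest()

# An index of fingerprints from legitimate listings already on the web.
known_listings = {fingerprint(b"photo-of-real-apartment")}

def looks_stolen(image_bytes: bytes) -> bool:
    """True if this image already appears somewhere in the index."""
    return fingerprint(image_bytes) in known_listings

# A scammer reusing a real listing photo is caught...
assert looks_stolen(b"photo-of-real-apartment")
# ...but a freshly generated image matches nothing in any index.
assert not looks_stolen(b"unique-ai-generated-apartment")
```

Matching can only ever find images someone has seen before; a generator that produces a unique, plausible apartment for every listing leaves the index with nothing to match against.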
AI already writes malware. Hyppönen’s team discovered three worms that use LLMs to rewrite their code every time the malware replicates. None have been found on real networks yet, but they have been published on GitHub – and they work.
Using an OpenAI API, the worms leverage GPT to generate different code for each target they infect. This makes them difficult to detect. OpenAI can, however, blacklist the malware’s behavior.
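Why per-target rewriting makes detection hard can be shown with a benign sketch. The two snippets below do the same thing, yet every byte-level signature differs, which is exactly the problem polymorphic, LLM-rewritten code creates for signature matching (the snippets are illustrative, not real malware):

```python
import hashlib

# Two functionally identical variants, as an LLM rewrite might produce.
variant_a = "def f(x):\n    return x * 2\n"
variant_b = "def double(value):\n    result = value + value\n    return result\n"

def signature(code: str) -> str:
    """Byte-level signature, as classic hash-based detection would compute."""
    return hashlib.sha256(code.encode()).hexdigest()

# Signature-based detection sees two unrelated files...
assert signature(variant_a) != signature(variant_b)

# ...while behaviour-based detection sees the same behaviour.
scope_a, scope_b = {}, {}
exec(variant_a, scope_a)   # defines f
exec(variant_b, scope_b)   # defines double
assert scope_a["f"](21) == scope_b["double"](21) == 42
```

This is also why the blacklisting Hyppönen describes happens at the API: the provider can refuse the *behaviour* (the rewrite request) even though every resulting sample looks different.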
“This is achievable with the most powerful code-writing generative AI systems because they are closed,” says Hyppönen.
“If you could download the entire large language model, you could run it locally or on your own server. They couldn’t blacklist you anymore. This is the advantage of closed generative AI systems.”
The advantage also applies to image generating algorithms. Provide open access to the code and watch your restrictions on violence, pornography and deception be dismantled.
With this in mind, it’s no surprise that OpenAI is less open than its name suggests. Well, that and all the revenue it would lose to copycat developers, of course.
Another emerging concern is zero-day exploits, which are discovered by attackers before developers have created a solution to the problem. AI can detect these threats, but it can also create them.
“It’s great to be able to use an AI assistant to find zero days in your code so you can fix them,” says Hyppönen. “And it’s horrible when someone else uses AI to find zero days in your code so they can exploit you.
“We’re not exactly there yet, but I believe it will be a reality – and probably a reality in the short term.”
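The dual-use nature of AI-assisted zero-day hunting can be sketched as follows. `ask_model` is a placeholder for any LLM endpoint (OpenAI, a local model, etc.); no real API, prompt, or tooling from WithSecure is assumed here, and the vulnerable snippet is a textbook SQL-injection example:

```python
# Hedged sketch: the same prompt serves a defender auditing their own code
# and an attacker probing someone else's — only the intent differs.

REVIEW_PROMPT = (
    "You are a security auditor. List any exploitable flaws "
    "(memory safety, injection, auth bypass) in this code:\n\n{code}"
)

def build_review_prompt(code: str) -> str:
    return REVIEW_PROMPT.format(code=code)

def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    raise NotImplementedError("wire up your LLM provider here")

# Classic injection bug: user input concatenated straight into SQL.
suspect = 'query = "SELECT * FROM users WHERE name = \'" + username + "\'"'
prompt = build_review_prompt(suspect)
```

Nothing in the pipeline knows whether the findings will be patched or weaponised, which is the asymmetry Hyppönen is pointing at.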
A student working at WithSecure has already demonstrated the threat. As part of a thesis project, they were given regular user rights and command-line access on a Windows 11 computer. The student then fully automated the process of scanning for vulnerabilities to become the local administrator. WithSecure decided not to publish the thesis.
“We didn’t think it was responsible to publish this research,” says Hyppönen. “It was too good.”
WithSecure has been integrating automation into its defenses for decades. This gives the company an advantage over attackers, who still rely largely on manual operations. For criminals, there is an obvious way to close the gap: fully automated malware campaigns.
“It would turn the game into good AI versus bad AI,” says Hyppönen.
This game should start soon. When this happens, the results could be alarming. So alarming that Hyppönen ranks fully automated malware as the number one security threat for 2024. Yet an even bigger threat lurks around the corner.
Hyppönen has a well-known hypothesis about IoT security. Known as Hyppönen’s Law, it states that whenever a device is described as “smart,” it is vulnerable. If this law extends to superintelligent machines, we could be in serious trouble.
Hyppönen expects to witness the impact.
“I think we will become the second most intelligent being on the planet in my lifetime,” he says. “I don’t think it will happen in 2024. But I think it will happen in my lifetime.”
This lends urgency to fears about artificial general intelligence. To maintain human control over AGI, Hyppönen argues for aligning it with our goals and needs.
“The things we build must have an understanding of humanity and share our long-term interests… The benefits are enormous – greater than ever – but the disadvantages are also greater than ever.”