RSA CONFERENCE 2024 – San Francisco – Everyone is talking about deepfakes, but most of the AI-generated synthetic media circulating today will seem antiquated compared with the sophistication and volume of what is about to happen.
Kevin Mandia, CEO of Mandiant at Google Cloud, says it will likely be a few months before the next generation of more realistic and convincing fake audio and video is mass-produced with AI technology. “I don’t think deepfake content is good enough yet,” Mandia said here in an interview with Dark Reading. “We’re right before the storm of synthetic media, where it’s actually a massive manipulation of people’s hearts and minds.”
The election year is, of course, a factor in the expected boom in deepfakes. The good news is that, to date, most audio and video deepfakes have been fairly easy to spot, either by existing detection tools or by savvy humans. Voice identity security provider Pindrop claims it can identify and stop most fake audio clips, and many AI imaging tools infamously fail to render realistic human hands – some generate hands with nine fingers, for example – a dead giveaway of a fake image.
Security tools that detect synthetic media are just now emerging in the industry, including one from Reality Defender, a startup that detects AI-generated media and was named Most Innovative Startup of 2024 here this week in the RSA Conference Innovation Sandbox competition.
Mandia, who is investing in Real Factors, a startup working on fraud detection in AI-generated content, says the main way to keep deepfakes from misleading users and drowning out authentic content is for content creators to embed “watermarks.” Microsoft Teams and Google Meet clients, for example, would watermark their output, he says, with immutable metadata, signed files, and digital certificates.
“You’re going to see a huge increase in this phenomenon, at a time when there’s an emphasis on privacy,” he notes. “Identity will improve and provenance will be much better,” he says, ensuring authenticity on both ends.
“I think this watermark could reflect the policies and risk profiles of each company that creates content,” says Mandia.
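Mandia doesn't spell out an implementation, but the signed-metadata-plus-certificates approach he describes resembles content-provenance schemes such as C2PA. Below is a minimal, hypothetical Python sketch of the general idea, using the `cryptography` library: the creating tool signs a hash of the media bytes together with its provenance metadata, and any verifier holding the public key can detect tampering with either. All names here are illustrative assumptions, not Microsoft's, Google's, or any standard's actual mechanism.

```python
# Illustrative sketch of signed-provenance "watermarking" (assumption, not
# any vendor's real format): sign hash(media) + metadata together, so a
# change to either invalidates the signature.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_media(media: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Bind provenance metadata to media by signing both together."""
    payload = {
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "metadata": metadata,  # e.g., creator, tool, timestamp, policy
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(blob).hex()}


def verify_media(media: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Re-hash the media and check the creator's signature over hash+metadata."""
    payload = manifest["payload"]
    if hashlib.sha256(media).hexdigest() != payload["media_sha256"]:
        return False  # media was altered after signing
    blob = json.dumps(payload, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), blob)
        return True
    except InvalidSignature:
        return False  # metadata or signature was tampered with


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...raw meeting recording bytes..."
    manifest = sign_media(video, {"tool": "ExampleMeet"}, key)  # hypothetical tool name
    print(verify_media(video, manifest, key.public_key()))                       # True
    print(verify_media(video + b"injected frame", manifest, key.public_key()))   # False
```

Signing the hash and the metadata as one payload is what makes the metadata effectively immutable, which matches Mandia's framing; a real deployment would also anchor the public key in a certificate chain so verifiers can trust who signed.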
Mandia warns that the next wave of AI-generated audio and video will be particularly difficult to detect as fake. “What if you had a 10-minute video and two milliseconds of it were wrong? Will there ever be technology effective enough to say, ‘This is wrong’? We’re going to have the famous arms race, and defense loses in an arms race.”
Make cybercriminals pay
Overall, cyberattacks have become more costly financially and reputationally for victim organizations, Mandia says. So it’s time to flip the equation and make things riskier for the threat actors themselves by doubling down on efforts to share attribution information and name names.
“We’ve actually gotten good at threat intelligence. But we’re not good at attributing threat intelligence,” he says. The model of continually putting the onus on organizations to strengthen their defenses does not work. “We are imposing costs on the wrong side of the pipe,” he says.
Mandia believes it’s time to revisit treaties with cybercriminal safe harbors and redouble efforts to go after the individuals behind the keyboard and share attribution data during attacks. Take international law enforcement’s sanctioning and naming of the leader of the prolific LockBit ransomware group this week, he said. Australian, British, and American officials joined forces to impose sanctions against Russian national Dmitry Yuryevich Khoroshev, 31, of Voronezh, Russia, for his alleged role as leader of the cybercrime organization. They offered a $10 million reward for information on him and released his photo, a move Mandia hails as the right strategy for raising the risk to the bad guys.
“I think it matters. If you’re a criminal and all of a sudden the whole world has your picture, that’s a problem for you. It’s a deterrent, and a lot more of a deterrent than ‘raising the cost’ for an attacker,” Mandia asserts.
Law enforcement, governments, and the private sector need to rethink how to effectively identify cybercriminals, he says, emphasizing that the big challenge to unmasking them lies in the differing privacy and civil liberties laws across countries. “We need to start solving this problem without affecting civil liberties,” he says.