LONDON — AI fakery is quickly becoming one of the biggest problems we face online. Misleading images, videos and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools.
With AI deepfakes appearing almost every day, depicting everyone from Taylor Swift to Donald Trump, it is increasingly difficult to distinguish what is real from what is not. Video and image generators like OpenAI's DALL-E and Sora, along with Midjourney, make it easy for people without any technical skills to create deepfakes: just type a request and the system spits one out.
These fake images may seem harmless. But they can be used to commit fraud and identity theft or to carry out propaganda and electoral manipulation.
Here’s how to avoid being fooled by deepfakes:
HOW TO SPOT A DEEPFAKE
In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have flagged images with obvious errors, such as six-fingered hands or glasses with differently shaped lenses.
But as AI has improved, it has become much more difficult. Some widely shared advice — like looking for unnatural blink patterns among people in deepfake videos — no longer holds up, said Henry Ajder, founder of consultancy Latent Space Advisory and a leading expert in generative AI.
Still, there are some things to watch out for, he said.
Many AI-generated deepfake photos, especially of people, have an electronic sheen, "an aesthetic sort of smoothing effect" that leaves skin "looking incredibly polished," Ajder said.
He cautioned, however, that creative prompting can sometimes eliminate this and many other signs of AI manipulation.
Check the consistency of shadows and lighting. Often the subject is clearly in focus and looks realistic, but elements in the background may not be as realistic or polished.
LOOK AT THE FACES
Face swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial tone match the rest of the head or body? Are the edges of the face sharp or blurry?
If you think a video of someone speaking has been doctored, look at their mouth. Do their lip movements match the sound perfectly?
Ajder suggests looking at the teeth. Are their outlines clear, or are they blurry and inconsistent with how they look in real life?
Cybersecurity company Norton says algorithms may not yet be sophisticated enough to generate individual teeth, so the lack of outlines for individual teeth could be a clue.
THINK OF THE BIGGER PICTURE
Sometimes context matters. Take the time to determine whether what you see is plausible.
The Poynter journalism site advises that if you see a public figure doing something that seems “exaggerated, unrealistic, or out of character,” it could be a deepfake.
For example, would the Pope actually wear a luxury puffer jacket, as depicted in a notorious fake photo? If he did, wouldn't there be additional photos or videos posted by legitimate sources?
USING AI TO FIND FAKES
Another approach is to use AI to fight AI.
Microsoft has developed an authentication tool that can analyze photos or videos to determine whether they have been manipulated. Chipmaker Intel's FakeCatcher uses algorithms to analyze an image's pixels to determine whether it is real or fake.
There are online tools that promise to detect fakes if you upload a file or paste a link to the suspicious material. But some, like Microsoft's authenticator, are available only to selected partners and not to the public. This is because researchers do not want to tip off bad actors and give them a greater advantage in the deepfake arms race.
Open access to detection tools could also make people feel like they are "godlike technologies that can outsource the critical thinking for us," when instead we need to be aware of their limitations, Ajder said.
OBSTACLES TO FINDING FAKES
That being said, artificial intelligence is advancing at breakneck speed, and AI models are being trained on internet data to produce increasingly high-quality content with fewer flaws.
This means that there is no guarantee that this advice will still be valid even a year from now.
Experts say it could even be dangerous to put the burden on ordinary people to become digital Sherlocks, because it could give them a false sense of confidence at a time when it is increasingly difficult for even trained eyes to spot deepfakes.
___
Swenson reported from New York.
___
The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. Learn more about AP's democracy initiative here. The AP is solely responsible for all content.