Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort out what is real and what is not.
Meta said Tuesday that it is working with industry partners on technical standards that will make it easier to identify images, and potentially video and audio, generated by artificial intelligence tools.
It remains to be seen how well this will work in an age when it is easier than ever to create and distribute AI-generated images that can cause harm, from electoral disinformation to nonconsensual fake nudes of celebrities.
“It’s kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, assistant professor of information science at Cornell University. It could be “quite effective” at flagging a lot of AI-generated content made with commercial tools, but it probably won’t catch everything, she said.
Meta’s president of global affairs, Nick Clegg, did not say Tuesday when the labels would appear, but said it would be “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”
“As the difference between human and synthetic content blurs, people want to know where the line lies,” he said in a blog post.
Meta already puts the “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.
A number of collaborations with the tech industry, including the Adobe-led Content Authenticity Initiative, have worked to establish standards. A push for watermarking and labeling AI-generated content was also part of an executive order that US President Joe Biden signed in October.
Clegg said Meta would work to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans to add metadata to images created by their tools.”
Google said last year that AI labels would come to YouTube and its other platforms.
“In the coming months, we’ll introduce labels that inform viewers when the realistic content they’re seeing is synthetic,” YouTube CEO Neal Mohan reiterated Tuesday in a blog post about the year ahead.
A potential concern for consumers is that technology platforms will get better at identifying AI-generated content from a set of major commercial providers but miss what is made with other tools, creating a false sense of security.
“A lot will depend on how platforms communicate this information to users,” said Cornell’s Vidan. “What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?”