Meta, the parent company of Facebook, Instagram and Threads, announced Tuesday that it will increase transparency around artificial intelligence-generated images as the tech giant prepares for the November election.
Meta plans to start labeling AI-generated images with a note reading “Imagined with AI” to identify photos created with its Meta AI feature, part of an effort to remain transparent with its users, the company said in a blog post.
AI-generated images are pictures created by computer software that can look like real photographs.
The tech giant said it is working with other companies across the industry to develop “common technical standards” to better detect AI-generated content.
“Being able to detect these signals will allow us to label AI-generated images that users post on Facebook, Instagram and Threads. We are building this capability now, and in the coming months we will begin applying labels in all languages supported by each app,” Nick Clegg, Meta’s president of global affairs, wrote in the blog post. “We will follow this approach over the next year, during which a number of important elections will take place around the world.”
Steps Meta Takes to Identify AI-Generated Images
When photos are created using Meta’s AI functionality, they include:
- Visible markers: Labels on users’ posts that are visible on the images themselves.
- Invisible markers: These are not immediately visible; instead, invisible watermarks and metadata are embedded in the image file, the blog post states.
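To give a concrete sense of how the metadata half of these invisible markers could be inspected, here is a minimal, illustrative Python sketch using the Pillow library. It simply scans an image file’s embedded text metadata (EXIF tags and format-specific fields) for AI-related hints. The key names and strings it checks are assumptions for illustration only; the standards Meta and its partners are actually aligning on (such as IPTC and C2PA signals) are richer and cryptographically verifiable, and pixel-level invisible watermarks require separate detection tools not shown here.

```python
# Minimal sketch: inspect an image file's embedded metadata for an
# AI-generation hint. The strings checked below are illustrative
# placeholders, not the exact fields Meta or its partners use.
from PIL import Image
from PIL.ExifTags import TAGS

AI_HINTS = ("trainedAlgorithmicMedia", "Imagined with AI", "AI generated")

def looks_ai_labeled(path: str) -> bool:
    """Return True if any embedded text metadata mentions an AI-generation hint."""
    img = Image.open(path)

    # Format-specific metadata (e.g., PNG text chunks) lives in img.info.
    candidates = [str(value) for value in img.info.values()]

    # EXIF tags (JPEG/TIFF): collect them as human-readable "name=value" strings.
    exif = img.getexif()
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, str(tag_id))
        candidates.append(f"{tag_name}={value}")

    return any(hint.lower() in text.lower()
               for text in candidates for hint in AI_HINTS)

if __name__ == "__main__":
    print(looks_ai_labeled("example.png"))  # hypothetical file path
```

This only looks at metadata that travels with the file; real detection systems combine such metadata with invisible watermarks precisely because metadata can be stripped when an image is re-saved or screenshotted.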
Additionally, Meta is working with other companies such as Adobe, Google, Microsoft, Midjourney, OpenAI and Shutterstock as they implement plans to add metadata to images created by their tools. This will help Meta detect and label images created with those tools when they are published to any of its platforms.
Labels on audio and video content
Although AI-generated content is most common in photos, it also has a significant presence in audio and video. Meta said it is working on strategies to help identify content where it may be harder to tell whether it was created by a human or by AI.
“While companies are starting to include signals in their image generators, they haven’t yet started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect these signals and label this content from other companies,” Clegg wrote in the post. “While the industry works toward this capability, we are adding a feature to allow users to disclose when they share AI-generated video or audio so we can add a label to it.”
Meta is asking users to use this disclosure and labeling tool when posting digitally created or altered audio and video content. Users who fail to do so may face penalties, the company warned.