Next month, the Federal Trade Commission’s new rules on fake reviews will go into effect as part of a crackdown on deceptive marketing and fabricated testimonials, whether written by humans or generated by AI models.
The rules are part of the FTC’s broader actions against AI-related companies. Earlier this week, the agency separately announced legal actions against five companies as part of a wider crackdown on deceptive marketing and claims about AI products and services.
“Using AI tools to deceive, mislead, or defraud people is illegal,” FTC Chair Lina Khan said in a statement. “The FTC’s enforcement actions make it clear that there are no AI exemptions from existing laws. By cracking down on unfair or deceptive practices in these markets, the FTC ensures that honest businesses and innovators can get a fair chance and that consumers are protected.”
As for the agency’s new regulations on fake reviews, the updates were approved last month and will go into effect on October 24. The rules update the existing ban on fake reviews to clarify and strengthen consumer protections and add new penalties. In addition to addressing fake reviews from celebrities, employees, everyday people and fake identities, the changes also address fake reviews created with generative AI tools, which pose a risk to consumers, businesses and online information in general.
The rules apply to company websites, independent review sites, social media platforms, advertising content and other types of marketing materials. Here’s a look at what the changes address, what they don’t address, and other information to know before enforcement begins in a few weeks.
What types of reviews are prohibited
The new rules outline several types of reviews prohibited by the FTC. Businesses may not create, buy or sell consumer or celebrity reviews that misrepresent a reviewer’s experience. The rules also prohibit offering incentives for positive or negative reviews, bar companies from publishing reviews by executives or employees without proper disclosure, and restrict reviews solicited from relatives. Businesses also cannot misrepresent the reviews section of their website as independent, cannot threaten reviewers, and cannot buy or sell fake indicators of social media influence, such as followers or view counts, for commercial purposes.
“AI tools make it easier for bad actors to pollute the review ecosystem by quickly and inexpensively generating large numbers of realistic but false reviews that can then be widely distributed across multiple platforms,” reads a footnote to the finalized text of the rules. “AI-generated reviews are covered by the final rule, which the Commission hopes will deter the use of AI for these illicit purposes.”
The FTC’s updates also clarify and consolidate existing rules into a more cohesive whole, said Mary Engle, vice president of policy at BBB National Programs. A former member of the FTC’s advertising practices division, Engle said it appears the agency is trying to distinguish “which particular practices would always be illegal and which would not always be illegal or would be more difficult to trace.” As a result, the rules target behavior that is clearly illegal or deceptive.
When it comes to AI-generated content, large language models make it harder to identify whether something is part of a globally orchestrated network of fake reviews. Appropriate disclosures also remain essential for reviews, advertising and other types of endorsements.
Although some might assume that reputable businesses avoid fake reviews to protect their brand, that is not always the case. Still, Engle believes the risk of higher fines and reputational damage could motivate companies and reviewers to comply and maintain goodwill with customers. “One of the benefits of the Internet is that the truth about fakes is discovered, which can then provoke a backlash,” she said.
“The reason these rules are important is because everyone relies on reviews,” Engle said. “You need them to be valuable and legitimate. But since everyone relies on reviews, there is a strong incentive to fake them or inflate them in some way. I think the FTC is trying to counteract the incentives so that this happens less frequently.”
Endorsements, enforcement and AI risks
Companies like Google and Yelp have endorsed the changes, but some experts believe the updates, while a step in the right direction, don’t go far enough. Rather than regulating social media and e-commerce giants, the rules simply bar companies from creating or posting fake reviews. The fact that much of the fake review industry operates beyond the reach of U.S. law also creates regulatory challenges.
“The market is already saturated with fake reviews online,” said former criminal investigator Kay Dean, now founder of the monitoring site Fake Review Watch. “With the advent of AI, I predict the problem will only get worse. I experimented with AI-generated reviews and wasn’t surprised to see how easy it was to spit out content quickly. However, it is not easy to determine whether fake reviews are written by a real person or generated by AI.”
Some say the FTC may have chosen not to regulate third-party platforms in order to avoid Section 230 questions about how far the government can go in regulating social media. However, Dean said the FTC could have taken other actions. For example, she said it could have required platforms to show users how many false or misleading reviews a platform has removed from a given business’s page, to identify all reviewers more thoroughly, and to give users access to all reviews, including deleted ones.
“These recommendations, along with other specific recommendations I have provided to the FTC, would provide much more transparency to consumers and allow them to see what is really happening,” she said. “Wouldn’t you want to know that Google or Yelp removed dozens of fake reviews for a contractor you were considering hiring to complete a $50,000 kitchen remodel?”
Matt Schwartz, a policy analyst at Consumer Reports, said his watchdog group generally supports the new rules. Although the finalized rules removed some of the initially proposed text on issues like backdoor reviews on Amazon, he believes transparency could still be improved, and higher penalties could help deter bad actors.
“The whole question of enforcement is the key,” Schwartz said.
Prompts and Products – AI News and Announcements
- Google announced new updates for Gemini and highlighted how a number of companies are using the large language model in various products and services.
- Several key members of OpenAI have left the company, including CTO Mira Murati and senior members of the research team. The startup is reportedly moving away from its nonprofit model to become a for-profit company. (It also rolled out its new advanced voice feature, which was the subject of controversy this spring.)
- Meta debuted a range of new AI updates at its annual Meta Connect event, including Meta AI updates, a new Llama 3.2 model and new features for Ray-Ban Meta smart glasses.
- Deepfake detection startup Reality Defender and Intel are tracking AI-generated disinformation related to elections.
- Apple CEO Tim Cook and late night host Jimmy Fallon took a walk in Central Park as part of Apple’s efforts to market the new iPhone 16 and its Apple Intelligence features.
- More than 100 Hollywood actors and producers signed a letter urging the governor of California to sign the state’s AI safety legislation.
- During UN Climate Week, a nonprofit unveiled a new art installation in New York’s Bryant Park to raise awareness of the vast amounts of energy and water needed to power AI models.
- Notion, the productivity app, announced new generative AI features for search, analytics, content generation and other tools.
- Open source AI platform Hugging Face announced a new macOS app.
- More than 100 companies have signed the new EU AI Pact to promote trustworthy and safe AI.
Other AI stories this week:
- Business Insider used an AI-powered paywall strategy to increase conversions by 75%. (Digiday)
- A new report from 404 Media claims that images of mushrooms generated by Google’s AI could spread misleading and dangerous information.
- “An outsider criticized Meta’s smart glasses. Now she’s the one in charge.” (Bloomberg)
- EU antitrust chief Margrethe Vestager spoke to Axios about the Google adtech antitrust case, AI and other issues facing Big Tech.
- Perplexity is reportedly in talks with major brands to introduce ads in the fourth quarter. (FT)
- The San Francisco Chronicle debuted a new AI chatbot, the “Kamala Harris News Assistant,” to inform readers about the candidate and the presidential election.
- “Hacker plants fake memories in ChatGPT to steal user data in perpetuity.” (Ars Technica)