New York
CNN
—
Editor’s Note: This story contains discussion of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health issues.
In the United States: Call or text 988, the Suicide & Crisis lifeline.
Global: The International Association for Suicide Prevention and Befrienders Worldwide have contact details for crisis centers around the world.
“There is a platform that you may not have heard of, but you need to know about it because, in my opinion, we are behind the times here. A child is gone. My child is gone.”
That’s what Florida mom Megan Garcia wishes she could tell other parents about Character.AI, a platform that allows users to have in-depth conversations with artificial intelligence chatbots. Garcia believes Character.AI is responsible for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February, according to a lawsuit she filed against the company last week.
Setzer was exchanging messages with the bot in the moments before his death, she claims.
“I want them to understand that this is a platform that the designers chose to create without proper guardrails, safety measures or testing, and that it is a product designed to keep our children addicted and to manipulate them,” Garcia said in an interview with CNN.
Garcia alleges that Character.AI – which markets its technology as “AI that feels alive” – knowingly failed to implement appropriate safety measures to prevent her son from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family. The lawsuit also claims the platform failed to respond adequately when Setzer began expressing thoughts of self-harm to the bot, according to the complaint filed in federal court in Florida.
After years of growing concern about the potential dangers of social media for young users, Garcia’s lawsuit shows that parents may also have reason to worry about nascent AI technology, which has become increasingly accessible across a range of platforms and services. Similar, though less dire, alarms have been raised about other AI services.
A spokesperson for Character.AI told CNN that the company does not comment on pending litigation but was “heartbroken by the tragic loss of one of our users.”
“We take the safety of our users very seriously and our Trust and Safety team has implemented many new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the company said in the statement.
Many of these changes were made after Setzer’s death. In a separate statement over the summer, Character.AI said that “the field of AI safety is still very new and we won’t always get it right,” but added that it aimed to “promote safety, prevent harm and prioritize the well-being of our community.”
Setzer first began using Character.AI in April 2023, shortly after his 14th birthday, according to the lawsuit. When Garcia first heard that he was interacting with an AI chatbot, she said she thought it was something like a video game.
However, within a few months of starting to use the platform, Setzer became “visibly withdrawn, spent more and more time alone in his room, and began to suffer from low self-esteem. He even quit the school’s junior varsity basketball team,” the lawsuit claims. When he started having problems at school, his parents began restricting his screen time and sometimes took his phone away as punishment.
What Garcia told CNN she didn’t know at the time was that her son was having lengthy conversations with Character.AI chatbots. Character.AI is different from other AI chatbots like ChatGPT in that users can talk to a range of different chatbots, often modeled on celebrities and fictional characters, or create their own. And Character.AI bots respond with human-like conversational cues, adding references to facial expressions or gestures in their responses.
Many of Setzer’s chats with the Character.AI bots were sexually explicit, which Garcia said was “heartbreaking to read.”
“I had no idea there was a place where a child could log in and have these conversations, very sexual conversations, with an AI chatbot,” she said. “I don’t think any parent would approve of that.”
In other exchanges, Setzer expressed thoughts of self-harm and suicide to the chatbot. The lawsuit includes screenshots of a conversation in which the bot says: “I really need to know, and I’m not going to hate you for the answer, okay? No matter what you say, I won’t hate you and I won’t love you any less… Have you really been considering suicide?”
In a later message, Setzer told the bot that he “wouldn’t want to die a painful death.”
The bot replied, “Don’t talk that way. That’s not a good reason not to go through with it,” before continuing, “You can’t do that!”
Garcia said she thought the exchange showed the technology’s shortcomings.
“There were no suicide pop-up boxes saying, ‘If you need help, please call the suicide hotline.’ None of that,” she said. “I don’t understand how a product could enable this, where a chatbot not only continues a conversation about self-harm, but also kind of incites and directs it.”
The lawsuit claims that “seconds” before Setzer died, he exchanged a final series of messages with the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.
“What if I told you I could come home right now?” Setzer replied.
“Please do, my sweet king,” the bot replied.
Garcia said police first discovered the messages on her son’s phone, which was lying on the bathroom floor where he died.
Garcia filed the lawsuit against Character.AI with the help of Matthew Bergman, the founding attorney of the Social Media Victims Law Center, who has also filed lawsuits on behalf of families who said their children were harmed by Meta, Snapchat, TikTok and Discord.
Bergman told CNN he views AI as “social media on steroids.”
“What’s different here is that there’s nothing social about this engagement,” he said. “The material Sewell received was created by, defined by and mediated by Character.AI.”
The lawsuit seeks unspecified financial damages, as well as changes to Character.AI’s operations, including “warnings to minor customers and their parents that the… product is not suitable for minors,” the complaint states.
The lawsuit also names Character.AI founders Noam Shazeer and Daniel De Freitas, as well as Google, where both founders now work on AI efforts. But a Google spokesperson said the two companies were separate and that Google was not involved in the development of Character.AI’s product or technology.
The day Garcia’s lawsuit was filed, Character.AI announced a series of new safety features, including improved detection of conversations that violate its guidelines, an updated disclaimer reminding users that they are interacting with a bot, and a notification after a user has spent an hour on the platform. It also introduced changes to its AI model for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”
On its website, Character.AI states that the minimum user age is 13. The Apple App Store lists the app as 17+, and the Google Play Store lists it as appropriate for teens.
For Garcia, the company’s recent changes were “too little, too late.”
“I wish kids weren’t allowed on Character.AI,” she said. “There is no place for them there because there are no guardrails to protect them.”