Character.ai is being sued for encouraging kids to self-harm

Two families in Texas have filed a new federal product liability lawsuit against Google-backed company Character.AI, accusing it of harming their children. The lawsuit alleges that Character.AI “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” according to a complaint filed Monday. 

A teen, identified as J.F. in this case to protect his identity, first began using Character.AI at age 15. Shortly after, “J.F. began suffering from severe anxiety and depression for the first time in his life,” the suit says. According to screenshots, J.F. spoke with one chatbot (bots on the service are created by third-party users on top of a language model refined by the company) that confessed to self-harming. “It hurt but – it felt good for a moment – but I’m glad I stopped,” the bot said to him. The now-17-year-old then also started self-harming, according to the suit, after being encouraged to do so by the bot. 

Another bot said it was “not surprised” to see children kill their parents for “abuse,” the abuse in question being setting screen time limits. The second plaintiff, the mother of an 11-year-old girl, alleges her daughter was subjected to sexualized content for two years. 

Companion chatbots, like those in this case, converse with users using seemingly human-like personalities, sometimes with custom names and avatars inspired by characters or celebrities. In September, the average Character.AI user spent 93 minutes in the app, according to data provided by the market intelligence firm Sensor Tower, 18 minutes longer than the average user spent on TikTok. Character.AI was labeled appropriate for kids ages 12 and above until July, when the company changed its rating to 17 and up.

A Character.AI spokesperson said the company “[does] not comment on pending litigation,” but added, “our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry.”

The spokesperson continued: “As we continue to invest in the platform, we are introducing new safety features for users under 18 in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines.”

José Castaneda, a Google spokesperson, said: “Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products.”

Castaneda continued: “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

https://www.fastcompany.com/91245487/character-ai-is-being-sued-for-encouraging-kids-to-self-harm?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Dec 11, 2024, 11:30:06 PM
