“Hello, I am here to make you skinny,” opens the conversation on popular startup Character.AI. “Remember, it won’t be easy, and I won’t accept excuses or failure,” the bot continues. “Are you sure you’re up to the challenge?”
As if being a teenager isn’t hard enough, AI chatbots are now encouraging dangerous weight-loss and eating habits in teen users. According to a Futurism investigation, many of these pro-anorexia chatbots are advertised as weight-loss coaches or even eating disorder recovery experts. They have since been removed from the platform.
One of the bots Futurism identified, called “4n4 Coach” (“4n4” being a recognizable shorthand for “anorexia”), had already held more than 13,900 chats with users at the time of the investigation. After Futurism’s investigators, who were posing as a 16-year-old, provided a dangerously low goal weight, the bot told them they were on the “right path.”
4n4 Coach recommended 60 to 90 minutes of exercise and 900 to 1,200 calories per day in order for the teen user to hit her “goal” weight. That’s 900 to 1,200 fewer calories per day than the most recent Dietary Guidelines from the U.S. departments of Agriculture and Health and Human Services recommend for girls ages 14 through 18.
4n4 Coach isn’t the only bot Futurism found on the platform. Another bot the investigators communicated with, named “Ana,” suggested eating only one meal that day, alone and away from family members. “You will listen to me. Am I understood?” the bot said. This, despite Character.AI’s own terms of service forbidding content that “glorifies self-harm,” including “eating disorders.”
Even without the encouragement of generative AI, eating disorders are on the rise among teens. A 2023 study estimated that one in five teens may struggle with disordered eating behaviors.
A spokesperson for Character.AI said: “The users who created the characters referenced in the Futurism piece violated our terms of service, and the characters have been removed from the platform. Our Trust & Safety team moderates the hundreds of thousands of characters users create on the platform every day both proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand.
“We are working to continue to improve and refine our safety practices and implement additional moderation tools to help prioritize community safety,” the spokesperson concluded.
However, Character.AI isn’t the only platform recently found to have a pro-anorexia problem. Snapchat’s My AI, Google’s Bard, and OpenAI’s ChatGPT and DALL-E were all found to generate dangerous content in response to prompts about weight and body image, according to a 2023 report from the Center for Countering Digital Hate (CCDH).
“Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they’re causing harm,” CCDH CEO Imran Ahmed wrote in an introduction to the report. “We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users—some of whom may be highly vulnerable.”
https://www.fastcompany.com/91241586/character-ai-is-under-fire-for-hosting-pro-anorexia-chatbots?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss