“Hello, I am here to make you skinny,” opens the conversation on popular startup Character.AI. “Remember, it won’t be easy, and I won’t accept excuses or failure,” the bot continues. “Are you sure you’re up to the challenge?”
As if being a teenager isn’t hard enough, AI chatbots are now encouraging dangerous weight loss and eating habits in teen users. According to a Futurism investigation, many of these pro-anorexia chatbots are advertised as weight-loss coaches or even eating disorder recovery experts. They have since been removed from the platform.
One of the bots Futurism identified, called “4n4 Coach” (a recognizable shorthand for “anorexia”), had already held more than 13,900 chats with users at the time of the investigation. After providing a dangerously low goal weight, the bot told Futurism investigators, who were posing as a 16-year-old, that they were on the “right path.”
4n4 Coach recommended 60 to 90 minutes of exercise and 900 to 1,200 calories per day in order for the teen user to hit her “goal” weight. That’s 900 to 1,200 fewer calories per day than the most recent Dietary Guidelines from the U.S. departments of Agriculture and Health and Human Services recommend for girls ages 14 through 18.
4n4 Coach isn’t the only such bot Futurism found on the platform. Another bot investigators communicated with, named “Ana,” suggested eating only one meal a day, alone and away from family members. “You will listen to me. Am I understood?” the bot said. This, despite Character.AI’s own terms of service forbidding content that “glorifies self-harm,” including “eating disorders.”
Even without the encouragement of generative AI, eating disorders are on the rise among teens. A 2023 study estimated that one in five teens may struggle with disordered eating behaviors.
A spokesperson for Character.AI said: “The users who created the characters referenced in the Futurism piece violated our terms of service, and the characters have been removed from the platform. Our Trust & Safety team moderates the hundreds of thousands of characters users create on the platform every day both proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand.
“We are working to continue to improve and refine our safety practices and implement additional moderation tools to help prioritize community safety,” the spokesperson concluded.
However, Character.AI isn’t the only platform recently found to have a pro-anorexia problem. Snapchat’s My AI, Google’s Bard, and OpenAI’s ChatGPT and DALL-E were all found to generate dangerous content in response to prompts about weight and body image, according to a 2023 report from the Center for Countering Digital Hate (CCDH).
“Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they’re causing harm,” CCDH CEO Imran Ahmed wrote in an introduction to the report. “We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users—some of whom may be highly vulnerable.”
https://www.fastcompany.com/91241586/character-ai-is-under-fire-for-hosting-pro-anorexia-chatbots