Meta AI feature on Facebook and Instagram undermines the point of online communities

A parent asked a question in a private Facebook group in April 2024: Does anyone with a child who is both gifted and disabled have any experience with New York City public schools? The parent received a seemingly helpful answer that laid out some characteristics of a specific school, beginning with the context that “I have a child who is also 2e,” meaning twice exceptional.

On a Facebook group for swapping unwanted items near Boston, a user looking for specific items received an offer of a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.”

Both of these responses were lies. That child does not exist and neither do the camera or air conditioner. The answers came from an artificial intelligence chatbot.

According to a Meta help page, Meta AI will respond to a post in a group if someone explicitly tags it or if someone “asks a question in a post and no one responds within an hour.” The feature is not yet available in all regions or for all groups, according to the page. For groups where it is available, “admins can turn it off and back on at any time.”
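The trigger rule the help page describes can be sketched as a simple condition. In this sketch the one-hour window comes from Meta's page, but the post fields, function name, and data shapes are hypothetical, not Meta's actual implementation:

```python
# Hypothetical sketch of the reply rule described on Meta's help page.
# Only the one-hour window is from the source; all names are invented.
from datetime import datetime, timedelta

UNANSWERED_WINDOW = timedelta(hours=1)

def should_bot_reply(post, now=None):
    """Reply if the bot was tagged, or a question went unanswered for an hour."""
    now = now or datetime.utcnow()
    if post.get("tags_meta_ai"):
        return True
    is_question = post.get("is_question", False)
    has_replies = bool(post.get("replies"))
    return (is_question
            and not has_replies
            and now - post["created_at"] >= UNANSWERED_WINDOW)
```

Under this reading, an admin toggle would simply gate the whole check for a given group.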

Meta AI has also been integrated into search features on Facebook and Instagram, and users cannot turn it off.

As a researcher who studies both online communities and AI ethics, I find the idea of uninvited chatbots answering questions in Facebook groups to be dystopian for a number of reasons, starting with the fact that online communities are for people.

Human connections

In 1993, Howard Rheingold published the book “The Virtual Community: Homesteading on the Electronic Frontier” about the WELL, an early and culturally significant online community. The first chapter opens with a parenting question: What to do about a “blood-bloated thing sucking on our baby’s scalp.”

Rheingold received an answer from someone with firsthand knowledge of dealing with ticks and had resolved the problem before receiving a callback from the pediatrician’s office. Of this experience, he wrote, “What amazed me wasn’t just the speed with which we obtained precisely the information we needed to know, right when we needed to know it. It was also the immense inner sense of security that comes with discovering that real people – most of them parents, some of them nurses, doctors, and midwives – are available, around the clock, if you need them.”

This “real people” aspect of online communities continues to be critical today. Imagine why you might pose a question to a Facebook group rather than a search engine: because you want an answer from someone with real, lived experience or you want the human response that your question might elicit – sympathy, outrage, commiseration – or both.

Decades of research suggest that the human component of online communities is what makes them so valuable for both information-seeking and social support. For example, fathers who might otherwise feel uncomfortable asking for parenting advice have found a haven in private online spaces just for dads. LGBTQ+ youth often join online communities to safely find critical resources while reducing feelings of isolation. Mental health support forums provide young people with belonging and validation in addition to advice and social support.

My own lab has reported similar findings about LGBTQ+ participants in online communities, as well as about Black Twitter. Two more recent studies, not yet peer-reviewed, have also emphasized the importance of the human aspects of information-seeking in online communities.

One, led by Ph.D. student Blakeley Payne, focuses on fat people’s experiences online. Many of our participants found a lifeline in access to an audience and community with similar experiences as they sought and shared information about topics such as navigating hostile healthcare systems, finding clothing and dealing with cultural biases and stereotypes.

Another, led by Ph.D. student Faye Kollig, found that people who share content online about their chronic illnesses are motivated by the sense of community that comes with shared experiences, as well as the humanizing aspects of connecting with others to both seek and provide support and information.

Faux people

The most important benefits of these online spaces as described by our participants could be drastically undermined by responses coming from chatbots instead of people.

As a type 1 diabetic, I follow a number of related Facebook groups that are frequented by many parents newly navigating the challenges of caring for a young child with diabetes. Questions are frequent: “What does this mean?” “How should I handle this?” “What are your experiences with this?” Answers come from firsthand experience, but they also typically come with compassion: “This is hard.” “You’re doing your best.” And of course: “We’ve all been there.”

A response from a chatbot claiming to speak from the lived experience of caring for a diabetic child, offering empathy, would be not only inappropriate but borderline cruel.

However, it makes complete sense that these are the types of responses a chatbot would offer. Put simply, large language models function more like autocomplete than like search engines. For a model trained on the millions and millions of posts and comments in Facebook groups, the “autocomplete” answer to a question in a support community is one that invokes personal experience and offers empathy – just as the “autocomplete” answer in a Buy Nothing Facebook group might be to offer someone a gently used camera.
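The autocomplete comparison can be made concrete with a toy model. The sketch below is not how a large language model actually works internally – it is a simple bigram model trained on a few invented support-group replies – but it shows the core mechanic: the model continues a prompt with whatever words most often followed in its training text, which is why a model trained on community posts produces first-person, empathetic-sounding replies.

```python
# Toy "autocomplete" illustration: a bigram model over invented replies.
# All training text here is hypothetical, made up for the example.
from collections import Counter, defaultdict

training_replies = [
    "i have a child with diabetes and this is hard",
    "i have a child who is also newly diagnosed you are doing your best",
    "we have all been there this is hard and you are doing your best",
]

# Count which word follows which across the training replies.
following = defaultdict(Counter)
for reply in training_replies:
    words = reply.split()
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1

def autocomplete(start, length=8):
    """Greedily extend `start` with the most frequent next word."""
    words = start.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i have"))
# Continues with "a child ..." because that is the most common continuation
# in the training text -- a fluent first-person claim, with no child behind it.
```

Real models predict over vast vocabularies with learned weights rather than raw counts, but the failure mode is the same: the statistically likely reply, not a truthful one.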

Keeping chatbots in their lanes

This isn’t to suggest that chatbots aren’t useful for anything – they may even be quite useful in some online communities, in some contexts. The problem is that in the midst of the current generative AI rush, there is a tendency to think that chatbots can and should do everything.

There are plenty of downsides to using large language models as information retrieval systems, and these downsides point to inappropriate contexts for their use. One is any context where incorrect information could be dangerous – an eating disorder helpline or legal advice for small businesses, for example.

Research is pointing to important considerations in how and when to design and deploy chatbots. For example, one paper recently published at a large human-computer interaction conference found that LGBTQ+ people who lacked social support sometimes turned to chatbots for help with mental health needs, but those chatbots frequently failed to grasp the nuances of LGBTQ+-specific challenges.

Another found that though a group of autistic participants found value in interacting with a chatbot for social communication advice, that chatbot was also dispensing questionable advice. And yet another found that though a chatbot was helpful as a preconsultation tool in a health context, patients sometimes found expressions of empathy to be insincere or offensive.

Responsible AI development and deployment means not only auditing for issues such as bias and misinformation, but also taking the time to understand the contexts in which AI is appropriate and desirable for the humans who will be interacting with it. Right now, many companies are wielding generative AI as a hammer, and as a result, everything looks like a nail.

Many contexts, such as online support communities, are best left to humans.

Casey Fiesler is an associate professor of information science at the University of Colorado Boulder.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
