Why experts say we don’t need warning labels for social media platforms

Warning labels are put on products that can seriously harm our health, like alcohol and tobacco. Now the U.S. surgeon general wants one for social media. Dr. Vivek Murthy has argued that a label would be justified by the impact social media supposedly has on teenage users’ mental health, and he has called on legislators to pass rules requiring that users be warned about the potential effects of spending time on the platforms. “In an emergency, you don’t have the luxury to wait for perfect information,” he wrote in a New York Times op-ed this week.

It’s a compelling case. But some experts say it’s also entirely wrong and misguided. 

“Social media shouldn’t be presented as something to fear, but as something to be understood,” says Jess Maddox, a digital media assistant professor at the University of Alabama.

At present, much of the conversation around social media seems uninterested in understanding it, at least not through a rigorous, science-based process. The New York Times, in its reporting on Murthy’s plans, notes that “the science on the harmful effects of social media is not settled.” That is in many ways an understatement. The drumbeat insisting that action must be taken to counter social media’s purported pernicious effects has grown louder in recent months, thanks to the release of an eye-catching book by social psychologist Jonathan Haidt.

His book, The Anxious Generation, became an instant bestseller when it was released in March. But critics say the book’s premise is built on a misrepresentation of past scientific research.

One analysis by Reason examined the 476 studies cited in Haidt’s book and found that only 22 of the papers actually address heavy social media use and serious mental health issues. (One study’s text says the opposite of what Haidt claims it does, the analysis argues, while other studies he relies on heavily are fundamentally flawed.)

Yet the book’s eye-catching title and outlandish claims have captured the attention of the public and policymakers alike.

Murthy’s Times op-ed never mentions Haidt, but it does point to research suggesting adolescents who spent more than three hours a day on social media were twice as likely to suffer symptoms of anxiety and depression, and that around half of adolescents said social media negatively affected their body image. What he doesn’t say, however, is that more recent research shows using smartphones and social media can improve teenagers’ mood.

“I think this sets a dangerous precedent,” writes Pete Etchells, a science communication professor at Bath Spa University, in an email. “We simply don’t have clear and consistent evidence that social media (however you define that) causes poorer mental health (however you define that).”

Not everyone agrees with Etchells’ assessment, though. Andy Burrows, an adviser at the Molly Rose Foundation, says tech companies have brought Murthy’s plans on themselves. “Whether or not warning labels would be an effective intervention, it’s appropriate to be considering all options to ensure tech companies do more to tackle inherently preventable harm, and to draw attention to the nuanced but increasingly clear evidence of risks to children’s safety and well-being,” he says. 

Murthy’s justification for adding a warning label to social media platforms appears to be that the tech companies running them haven’t done enough themselves to protect users from the harms many of them indubitably experience. And in that sense, the University of Alabama’s Maddox agrees with Murthy. “In the U.S., we should demand that these platforms be reined in,” she tells Fast Company.

She believes slapping a warning label on social media and effectively blocking younger users from accessing it is tantamount to giving up. “We should not be shrugging our shoulders and saying, ‘Well, there’s nothing we can do,’” she says.

Etchells also worries that Murthy’s attempt to foist warning labels on us all could do more harm than good. “It’s a shame because it feels like a missed opportunity,” he says. “I’d like to see lawmakers having conversations about how we boost digital literacy skills in a consistent and supportive way, and how we get tech companies to put user well-being at the core of their design processes.”

https://www.fastcompany.com/91143312/vivek-murthy-warning-labels-for-social-media-platforms-experts-weigh-in

Created Jun 19, 2024, 4:30:08 AM
