Katie Couric, Rashad Robinson, and Chris Krebs say it’s time to pull immunity for social media platforms

Misinformation hit a crescendo during the pandemic, sowing distrust in COVID-19 vaccines and inciting riots at the Capitol. Now a coalition of experts on misinformation and disinformation is making a specific set of recommendations to lawmakers on how to fix the issue, and Big Tech might not be so happy. Most notably, the proposal calls for changes to Section 230, the controversial part of the 1996 Communications Decency Act that protects online platforms from being sued over user-generated content.

Research center and think tank the Aspen Institute brought together a who's who commission of experts on disinformation to illuminate the problem and offer strategic steps to address it. The commission's chairs include journalist Katie Couric; civil rights leader and Color of Change president Rashad Robinson; and Chris Krebs, the former director of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency.

Spread via the internet, disinformation and its close cousin misinformation have contributed to a series of public harms over the past decade, including interference in the 2016 U.S. elections, disruption of pandemic-related public health efforts, fomentation of genocide in Myanmar, and the January 6 siege of the Capitol. Disinformation, intentionally misleading information, is engineered to go viral, taking advantage of social media algorithms that favor outrageous perspectives. Misinformation, false information with no clear intent to deceive, similarly keeps slipping past social media companies' efforts to curtail it.

Last month, Facebook whistleblower Frances Haugen helped explain why those mitigation strategies fail. The former Facebook product manager, a member of the company's civic integrity group, called out the social network for misleading the public about how much it actually does to protect users from harmful content.
“The thing I saw at Facebook, over and over again, was there were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen told 60 Minutes. “And Facebook, over and over again, chose to optimize for its own interests, like making more money.”

For years, Facebook has ducked responsibility for content on its platform, assuring regulators and the public that it is doing its best to balance free speech while reining in misinformation and speech that incites violence and hate. Haugen's account suggests that is not the full picture. Social media companies have not yet been held to account, shielded by Section 230. Legislators have threatened to change the law (rhetoric that reached a fever pitch after the Capitol riots), but so far haven't touched it.

The Aspen Institute's Commission on Information Disorder Final Report suggests removing this immunity for content that advertisers have paid to promote, as well as for any content that has gone viral because of a platform's recommendation algorithms. The authors also note that while free speech is a constitutional right, private platforms are not the public square, and companies have the right to restrict speech.

The commission's recommendations are thorough, going much further than simply suggesting amendments to Section 230. The report faults the federal government for failing to understand the issues and create meaningful rules that protect the public. (“Congress…remains woefully under-informed about the titanic changes transforming modern life,” the authors write.) The commission also notes that despite Big Tech's pleas to be regulated, industry leaders have “outsized influence in shaping legislative priorities favorable to its interests.” To guide future legislative efforts, the commission suggests the government force social media platforms to be more transparent through data audits.
One of the biggest hurdles to understanding both the effects of disinformation and the magnitude of the problem has been a lack of cooperation from the platforms themselves. Researchers often struggle to get the depth of information they need. (Facebook has been known to outright ban researchers who attempt to get this information without the company's explicit participation.)

The report says social platforms should be required to disclose “categories of private data to qualified academic researchers, so long as that research respects user privacy, does not endanger platform integrity, and remains in the public interest.” It also says there should be federal protections for researchers and journalists who investigate social platforms in the public's interest (even if they violate a platform's terms and conditions in the process). It suggests that Congress require social media companies to publish transparency reports that include content, source accounts, reach, and impression data for posts that reach large audiences, and to offer regular disclosures on key data points about digital ads and paid posts that run on their platforms. And it calls for clear content moderation practices, as well as an archive of moderated content that researchers can access.

In addition to these transparency measures, the commission asks the federal government to establish a strategic approach to countering misinformation and disinformation, along with an independent organization devoted to developing well-informed countermeasures. This could include efforts to educate the public on misinformation and how to distinguish fact from propaganda online.

Finally, the report calls for investment in local newsrooms and diversity measures, both in newsrooms and at social media companies. To support newsrooms, the report points to the creation of a digital advertising tax, much like the one Maryland passed.
The report says some of those proceeds should go toward struggling local newsrooms to bolster reputable reporting, and suggests incentivizing donations to local news operations through tax credits. It also recommends platforms hire diverse workforces to ensure that a broad spectrum of experiences is considered when companies design rules and content mitigation strategies. Rashad Robinson, president of Color of Change and one of the report's commissioners, says the government could play a role here. “Diversity should be part of how the government evaluates these companies,” especially their efforts to protect users, he says. Robinson has worked for years on civil rights issues related to the web and has spent a fair amount of time talking to regulators. “These are recommendations that I fundamentally believe are actionable,” he says.

https://www.fastcompany.com/90696655/katie-couric-rashad-robinson-and-chris-krebs-say-its-time-to-pull-immunity-for-social-media-platforms?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created Nov 15, 2021, 11:21:32


