3 lingering questions about Twitter’s new ‘crisis misinformation’ policy

Twitter announced on Thursday the creation of new safeguards to stop disinformation and misinformation from spreading on its platform in times of crisis—that is, any “armed conflict, public health emergencies, and large-scale natural disasters.” Twitter’s head of trust and safety, Yoel Roth, wrote in a blog post Thursday that the new policy “will guide our efforts to elevate credible, authoritative information, and will help to ensure viral misinformation isn’t amplified or recommended by us during crises.”

The policy shift is designed to put the brakes on false—often politically motivated—content that can spill out of the internet and make situations in the real world far worse. (For example, Twitter found itself awash in disinformation and fraudulent accounts around the beginning of the war in Ukraine.) But roughing out a new content-moderation policy in a blog post is one thing; actually enforcing it is another. Some important questions remain about how the company will go about it.

How will this actually work in practice?

To determine whether a tweet is mis- or disinformation, Twitter says it will rely on evidence from “conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.” Interestingly, Twitter says it won’t target tweets that contain “strong commentary, efforts to debunk or fact check, and personal anecdotes or first person accounts.” The company seems to favor a narrow definition of potentially dangerous tweets, perhaps targeting those that flatly present false information as if it were truth. But it’s hard to predict what kind of tweets might go viral and do real-world damage. “Strong commentary” on Facebook led to widespread COVID denial, and even to the mass killing of Rohingya people in Myanmar, Reuters reports.

The policy also focuses on Twitter accounts with large potential reach, such as those with sizable followings, including state-affiliated media accounts (Sputnik and RT, for example) and official government accounts. If Twitter finds tweets containing falsehoods about an emerging crisis, it plans to stop recommending or amplifying them on its platform, including in home feeds, search results, and Explore. It will also slap warning labels on tweets that have been judged false.

Now, there are lots of widely followed accounts. Will Twitter keep constant watch on all of them, all the time? Big social media companies rely heavily on machine learning models to identify violative content, and viral tweets can come from accounts with 2 followers or 2 million. So the same old question arises: Will Twitter’s AI be able to locate and stop the sharing of harmful content fast enough? In a recent example, links to the Buffalo mass murderer’s live video and manifesto were reposted on Facebook and Twitter, and the two companies scrambled to remove them in the wake of last weekend’s attack.

Why now?

The company says it’s been working on the policy since last year. It’s possible the work was inspired by the January 6, 2021, attack on the Capitol: former CEO Jack Dorsey told Congress his platform had played a role, and Twitter was a major sounding board for the baseless “Stop the Steal” conspiracy theory that fueled the insurrection. Twitter might also see that with white supremacy and extreme partisan rancor still at a boil in the U.S., future “crisis” situations are possible and even likely.

Will this policy survive when/if Elon Musk takes over the company?
Twitter itself is in crisis, as its board of directors recently approved a sale of the company to Elon Musk. Musk is a self-styled troll-provocateur and a loud crusader for more “free speech” and less content moderation on Twitter. If he is serious about his purist views on social speech, he may well invalidate Twitter’s new policy and allow people to spew whatever nonsense they want, even during “times of crisis.” Eventually, Musk will have to confront the question of whether “freedom of speech” equals “freedom of reach.” Global social networks are different from all earlier telecommunications networks because they offer massive reach for viral content—that is, they can deliver popular content to billions of people around the world in milliseconds. Is limiting that reach a violation of Twitter users’ right to free expression? Musk seems to think so. But Twitter’s current leadership clearly does not. With the new crisis-moderation policy, Twitter doesn’t prevent people from publishing untrue content on the platform, but it does reserve the right to limit that content’s reach.

https://www.fastcompany.com/90753983/three-questions-twitter-crisis-misinformation-policy?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
