Computer scientists explain why Musk’s obsession with Twitter bots misses the point

Twitter reports that fewer than 5% of accounts are fakes or spammers, commonly referred to as “bots.” Since his offer to buy Twitter was accepted, Elon Musk has repeatedly questioned these estimates, even dismissing CEO Parag Agrawal’s public response. Later, Musk put the deal on hold and demanded more proof. So why are people arguing about the percentage of bot accounts on Twitter?

As the creators of Botometer, a widely used bot-detection tool, our group at the Indiana University Observatory on Social Media has been studying inauthentic accounts and manipulation on social media for over a decade. We brought the concept of the “social bot” to the foreground and first estimated their prevalence on Twitter in 2017. Based on our knowledge and experience, we believe that estimating the percentage of bots on Twitter has become a very difficult task, and debating the accuracy of the estimate might be missing the point. Here’s why.

What, exactly, is a bot?

To measure the prevalence of problematic accounts on Twitter, a clear definition of the targets is necessary. Common terms such as “fake accounts,” “spam accounts,” and “bots” are used interchangeably, but they have different meanings. Fake or false accounts are those that impersonate people. Accounts that mass-produce unsolicited promotional content are defined as spammers. Bots, on the other hand, are accounts controlled in part by software; they may post content or carry out simple interactions, like retweeting, automatically.

These types of accounts often overlap. For instance, you can create a bot that impersonates a human to post spam automatically. Such an account is simultaneously a bot, a spammer, and a fake. But not every fake account is a bot or a spammer, and vice versa. Coming up with an estimate without a clear definition only yields misleading results. Defining and distinguishing account types can also inform proper interventions.
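The overlap among these categories can be pictured as independent properties of an account rather than mutually exclusive classes. The following sketch is purely illustrative (all names are hypothetical; this is not how Twitter or Botometer labels accounts):

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Three independent properties an account may have.
    automated: bool      # controlled in part by software -> "bot"
    impersonates: bool   # pretends to be a person -> "fake"
    mass_promotes: bool  # mass-produces unsolicited promotion -> "spammer"

def labels(acct: Account) -> set[str]:
    """Return every label that applies; the categories are not exclusive."""
    out = set()
    if acct.automated:
        out.add("bot")
    if acct.impersonates:
        out.add("fake")
    if acct.mass_promotes:
        out.add("spammer")
    return out

# A bot impersonating a human to post spam carries all three labels...
print(labels(Account(automated=True, impersonates=True, mass_promotes=True)))
# ...while a harmless news-posting bot is a bot but neither fake nor a spammer.
print(labels(Account(automated=True, impersonates=False, mass_promotes=False)))
```

The point of the sketch is that any prevalence estimate implicitly picks which combination of these properties to count, which is why estimates made under different definitions are not comparable.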
Fake and spam accounts degrade the online environment and violate platform policy. Malicious bots are used to spread misinformation, inflate popularity, exacerbate conflict through negative and inflammatory content, manipulate opinions, influence elections, conduct financial fraud, and disrupt communication. However, some bots can be harmless, or even useful, for example by helping disseminate news, delivering disaster alerts, and conducting research. Simply banning all bots is not in the best interest of social media users.

For simplicity, researchers use the term “inauthentic accounts” to refer to the collection of fake accounts, spammers, and malicious bots. This is also the definition Twitter appears to be using. However, it’s unclear what Musk has in mind.

Hard to count

Even when a consensus is reached on a definition, there are still technical challenges to estimating prevalence. External researchers do not have access to the same data as Twitter, such as IP addresses and phone numbers. This hinders the public’s ability to identify inauthentic accounts. But even Twitter acknowledges that the actual number of inauthentic accounts could be higher than it has estimated, because detection is challenging.

Inauthentic accounts evolve and develop new tactics to evade detection. For example, some fake accounts use AI-generated faces as their profile pictures. These faces can be indistinguishable from real ones, even to humans. Identifying such accounts is hard and requires new technologies.

Another difficulty is posed by coordinated accounts that appear to be normal individually but act so similarly to each other that they are almost certainly controlled by a single entity. Yet they are like needles in the haystack of hundreds of millions of daily tweets. Finally, inauthentic accounts can evade detection through techniques like swapping handles or automatically posting and deleting large volumes of content.
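To see why coordinated accounts are detectable in principle yet hard to find at scale, consider a toy sketch that flags account pairs whose posted links and hashtags overlap suspiciously (Jaccard similarity above a cutoff). All account names and the 0.5 threshold are hypothetical, and real detection systems rely on far richer signals than content overlap:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Content overlap between two accounts' posted items (links, hashtags)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(posts_by_account: dict[str, set], threshold: float = 0.5):
    """Flag pairs whose overlap is high enough to suggest a single controller."""
    return [
        (u, v)
        for u, v in combinations(sorted(posts_by_account), 2)
        if jaccard(posts_by_account[u], posts_by_account[v]) >= threshold
    ]

posts = {
    "acct_a": {"link1", "link2", "link3", "tag_x"},
    "acct_b": {"link1", "link2", "link3", "tag_y"},  # near-duplicate of acct_a
    "acct_c": {"link9", "tag_z"},                    # looks independent
}
print(suspicious_pairs(posts))  # -> [('acct_a', 'acct_b')]
```

Note that the pairwise comparison grows quadratically with the number of accounts, which hints at why sifting coordinated behavior out of hundreds of millions of daily tweets is so costly in practice.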
The distinction between inauthentic and genuine accounts is getting blurrier. Accounts can be hacked, bought, or rented, and some users “donate” their credentials to organizations that post on their behalf. As a result, so-called cyborg accounts are controlled by both algorithms and humans. Similarly, spammers sometimes post legitimate content to obscure their activity. We have observed a broad spectrum of behaviors mixing the characteristics of bots and people. Estimating the prevalence of inauthentic accounts requires applying a simplistic binary classification: authentic or inauthentic. No matter where the line is drawn, mistakes are inevitable.

Missing the big picture

The focus of the recent debate on estimating the number of Twitter bots oversimplifies the issue and misses the point of quantifying the harm of online abuse and manipulation by inauthentic accounts.

Through BotAmp, a new tool from the Botometer family that anyone with a Twitter account can use, we have found that the presence of automated activity is not evenly distributed. For instance, the discussion about cryptocurrencies tends to show more bot activity than the discussion about cats. Therefore, whether the overall prevalence is 5% or 20% makes little difference to individual users; their experiences with these accounts depend on whom they follow and the topics they care about.

Recent evidence suggests that inauthentic accounts might not be the only culprits responsible for the spread of misinformation, hate speech, polarization, and radicalization. These issues typically involve many human users. For instance, our analysis shows that misinformation about COVID-19 was disseminated overtly on both Twitter and Facebook by verified, high-profile accounts.

Even if it were possible to precisely estimate the prevalence of inauthentic accounts, this would do little to solve these problems. A meaningful first step would be to acknowledge the complex nature of these issues. This will help social media platforms and policymakers develop meaningful responses.

Kai-Cheng Yang is a doctoral student in informatics at Indiana University. Filippo Menczer is a professor of informatics and computer science at Indiana University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://www.fastcompany.com/90755162/computer-scientists-explain-why-elons-obsession-with-twitter-bots-misses-the-point?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 25.05.2022, 04:20:48


