How Abeba Birhane is cleaning up AI’s dirty data

One day in 2020, Abeba Birhane found herself on Wikipedia, scouring a list of slurs. At the time, Birhane was pursuing a PhD in cognitive science at University College Dublin and was trying to see how many of those slurs appeared in the image descriptions of a massive data set that's often used to train AI systems.

She had already turned up plenty of matches on the obvious filth, but Birhane was running out of ideas for what to search next. “The reason I went to Wikipedia is because I couldn’t think of enough slur words,” she says. 

As the list of terms grew, so did Birhane’s findings, until she had amassed enough evidence to co-author a paper detailing just how rampant derogatory terms were within this important bit of technological infrastructure. That paper prompted the Massachusetts Institute of Technology, which housed the data set, to take it offline, and cemented Birhane’s position as a leading auditor of the data sets that feed the world’s increasingly sophisticated AI models. Now Birhane is continuing that work under a newly launched independent research lab of her own, called the AI Accountability Lab.

Birhane’s research focuses on the fact that AI models are trained on massive quantities of unfiltered data scraped from the open internet, much of which consists of hateful 4chan boards and misogynistic porn sites. Without proper safeguards in place, those AI models can end up replicating the same hate and misogyny when people prompt them for answers later on. In one recent paper, Birhane and her co-authors found that the bigger data sets get, the more likely the AI models trained on them are to produce biased results, like classifying Black people as criminals.

“We are not evaluating systems for some hypothetical, potential risks in the future,” Birhane says. “These audits are uncovering actual real issues, real problems, whether it’s racism, sexism, or encoding of stereotypes and historical injustices and so on.” 

Birhane, who is from Ethiopia, says these questions about where data comes from and how it translates into biased outputs were not always top of mind in the research labs where she worked. “Traditional computer scientists tend to be male, white, or Asian. They would not think about how is Africanness represented? How are Black women represented?” she says. “My experience and background has effects in how I approach my audits.”

Her work couldn’t be more timely. There are already plenty of examples of flawed AI systems wreaking havoc on people’s lives. In the U.K., the government used an algorithmic grading tool to approximate students’ grades after their exams were canceled due to the pandemic, and wound up giving students from disadvantaged schools worse grades than those from affluent ones. In the Netherlands, the Dutch government used an algorithm to predict people’s risk of defrauding the child benefits system and ended up penalizing tens of thousands of lower-income people, some of whom had their children taken away from them.

“In all of these examples, you find that the people who go to jail, the people who are disfranchised, the people who are dying, the people who are negatively impacted are often people at the very margins of society,” Birhane says. “This is the dire cost of not evaluating algorithmic systems before we deploy them.”

<hr class="wp-block-separator is-style-wide"/>

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

<hr class="wp-block-separator is-style-wide"/>

https://www.fastcompany.com/91238006/how-abeba-birhane-is-cleaning-up-ais-dirty-data

Created Dec 10, 2024, 12:40:06

