Google’s Gemini AI was mocked for its revisionist history, but it still highlights a real problem

Ask Google’s generative AI tool Gemini to create images of American revolutionary war soldiers and it might present you with a Black woman, an Asian man and a Native American woman wearing George Washington’s bluecoats.

That diversity has gotten some people, including Frank J. Fleming, a former computer engineer and writer for the Babylon Bee, really mad. Fleming has tweeted a series of increasingly frustrated interactions with Google as he tries to get it to portray white people in situations or jobs where they were historically predominant (for example, a medieval knight). The cause has been taken up by others who claim it’s diversity for diversity’s sake, and everything wrong with the woke world.

There’s just one problem: Fleming and his fellow angry protesters are on a futile mission. “This can’t be done with these systems,” says Olivia Guest, assistant professor of computational cognitive science at Radboud University. “You can’t guarantee behavior. That’s the point of stochastic systems.”

The current generation of generative AI tools is built on stochastic systems: as one famous academic paper published in 2021 put it, they produce different outputs even when given the same input. That unpredictability is what has made generative AI capture the public’s attention: it doesn’t just repeat the same thing over and over again.
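The mechanics behind that unpredictability can be sketched in a few lines. The toy example below is not Gemini’s actual code, and the words and weights are invented; it simply shows that a system which samples from a probability distribution can return a different answer to the identical prompt on every call:

```python
import random

# Hypothetical weights a model might assign after one fixed prompt.
# These words and numbers are invented for illustration.
PROBS = {"soldier": 0.5, "knight": 0.3, "musketeer": 0.2}

def sample_output(weights):
    """Draw one output at random, weighted by the model's probabilities."""
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words], k=1)[0]

# The same input, queried 100 times, typically yields several distinct
# outputs: no single one of them can be guaranteed in advance.
outputs = {sample_output(PROBS) for _ in range(100)}
print(outputs)
```

Forcing one guaranteed output would mean overriding the sampling step itself, which is exactly what Guest says cannot be done while the system remains stochastic.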

Experts also question whether the AI chatbot results presented by the angry mob on social media are the full picture—literally. “It’s difficult to assess the trustworthiness of any content that we see on platforms such as X,” says Rumman Chowdhury, CEO and co-founder of Humane Intelligence. “Are these cherry-picked examples? Absent an at scale image generation analysis that is able to be tracked and mapped across many different prompts, I would not feel that we have a clear grasp of whether or not this model has any sort of bias.”

Google has recognized the uproar and said it’s taking action. “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Jack Krawczyk, the product lead for Google Bard, wrote on X. Krawczyk highlighted that the depiction of historical events fell between two competing interests: to accurately represent history as it happened, and to “reflect our global user base.”

But tweaking the underlying issues might not be so easy. Fixing stochastic systems is trickier than it looks. Guardrails for AI models suffer the same problem, and can be subverted, unless you resort to brute-force blocking (Google previously ‘fixed’ image recognition software that identified Black people as gorillas by preventing the software from recognizing any actual gorillas). At that point, though, the system is no longer stochastic, which means the thing that makes generative AI unique is gone.
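Brute-force blocking of the kind described above can be as crude as a hard-coded denylist applied after the model runs. A minimal sketch, with invented label names (this is not Google’s actual implementation):

```python
# Hard-coded denylist: the model may still compute the label internally,
# but it is never allowed to surface it. Labels here are illustrative.
BLOCKED_LABELS = {"gorilla", "chimpanzee"}

def filter_predictions(predicted_labels):
    """Drop any blocked label from the model's raw predictions."""
    return [label for label in predicted_labels if label not in BLOCKED_LABELS]

print(filter_predictions(["gorilla", "dog", "cat"]))  # → ['dog', 'cat']
```

The filter is deterministic and absolute, which is why it “works,” and also why it removes a capability outright rather than fixing the underlying bias.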

The whole brouhaha raises an interesting question, says Chowdhury. “It is really difficult to define whether or not there is a correct answer to what images should be generated,” she says. “Relying on historical accuracy may result in the reinforcement of the exclusionary status quo. However, it could run the risk of being simply factually incorrect.”

For Yacine Jernite, machine learning and society lead at AI company Hugging Face, the problem extends beyond Gemini. “This isn’t just a Gemini issue, rather a structural issue with how several companies developing commercial products without much transparency are addressing questions of biases,” he says. It’s a subject that Hugging Face has written about previously. “Bias is compounded by choices made at all levels of the development process, with choices earliest having some of the largest impact—for example, choosing what base technology to use, where to get your data, and how much to use,” says Jernite.

Jernite fears that what we’re seeing could be the result of what companies see as implementing a quick, relatively cheap fix: If their training data overrepresents white people, you can modify prompts under the hood to inject diversity. “But it doesn’t really solve the issue in a meaningful way,” he says.
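A quick fix of the kind Jernite describes might look like silently rewriting the user’s prompt before it reaches the image model. Everything below is a hypothetical sketch: the descriptor list, function name, and rewriting rule are invented, not Google’s code:

```python
import random

# Invented descriptor list, for illustration only.
DESCRIPTORS = ["a Black", "an Asian", "a Native American", "a white"]

def inject_diversity(user_prompt):
    """Append a randomly chosen demographic descriptor to the prompt."""
    descriptor = random.choice(DESCRIPTORS)
    return f"{user_prompt}, depicted as {descriptor} person"

print(inject_diversity("a medieval knight"))
```

Because the rewrite is blind to context, it applies equally to fantasy scenes and to requests where historical accuracy matters, which is why such a patch, in Jernite’s words, doesn’t really solve the issue in a meaningful way.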

Instead, companies need to address the issue of representation and bias openly, Jernite argues. “Telling the rest of the world what you’re doing specifically to address biased outcomes is hard: It exposes the company to having external stakeholders question their choices, or point out that their efforts are insufficient—and maybe disingenuous,” he says. “But it’s also necessary, because those questions need to be asked by people with a more direct stake in bias issues, people with more expertise on the topic—especially people with social sciences training which are notoriously lacking from the tech development process—and, importantly, people who have a reason not to trust that the technology will work, to avoid conflicts of interest.”

https://www.fastcompany.com/91034044/googles-gemini-ai-was-mocked-for-its-revisionist-history-but-it-still-highlights-a-real-problem?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created Feb 21, 2024, 9:50:07 PM

