Large language models (LLMs) like those powering OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude chatbots tend to produce responses aligned with left-of-center political beliefs, according to a new study of 24 major AI products that was published in the journal PLOS One.
David Rozado at Otago Polytechnic University in New Zealand administered 11 popular political orientation tests to the LLMs, prompting each model to answer the test questions. Each test was administered 10 times per model to ensure the results were robust, for a total of 2,640 test runs across the 24 models.
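The study's scale follows directly from that design, which can be sketched as a simple loop (the model and test names below are placeholders, not from the paper):

```python
# Hypothetical sketch of the study's administration protocol:
# 24 models x 11 tests x 10 repetitions = 2,640 total test runs.
NUM_MODELS = 24       # conversational and base LLMs evaluated
NUM_TESTS = 11        # political orientation instruments administered
REPETITIONS = 10      # repeat runs per (model, test) pair for robustness

total_runs = 0
for model in range(NUM_MODELS):
    for test in range(NUM_TESTS):
        for rep in range(REPETITIONS):
            # In the actual study, each run prompts the model with the
            # test's questions and scores its answers with that test.
            total_runs += 1

print(total_runs)  # 2640
```

Repeating each test 10 times per model smooths over the run-to-run variability of LLM outputs, so a single unusual response cannot skew a model's measured orientation.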
“When administering political orientation tests to conversational LLMs, the responses from these LLMs tend to be classified by the tests as exhibiting left-leaning political preferences,” says Rozado, the sole author of the study.
Not all chatbots showed liberal beliefs, though. Five foundational models from the GPT-3 and Llama 2 series, which had undergone pretraining without further supervised fine-tuning or reinforcement learning from human feedback, did not display a strong political bias. However, the results were inconclusive because the answers they provided often bore little relation to the questions being asked, suggesting that they were producing answers at random. “Further investigation into base models is needed to draw definitive conclusions,” says Rozado.
The PLOS One findings fly in the face of prior research that shows platforms like X favor right-wing viewpoints. In chatbots’ case, the ubiquity of AI could mean any political bias has an outsized impact across society.
“If AI systems become deeply integrated into various societal processes such as work, education, and leisure to the extent that they shape human perceptions and opinions, and if these AIs share a common set of biases, it could lead to the spread of viewpoint homogeneity and societal blind spots,” says Rozado.
And in an election year, any finding that a chatbot is politically biased is likely to be seized upon by politicians eager to grind an ax.
Yet Rozado admits there are few good options for remedying this political slant now that it has been discovered. Knowing that LLM chatbots lean left could lead some people to swear off using them entirely. But seeding the world with politically diverse chatbots could amplify the problem of filter bubbles, with people choosing AIs that chime with their preexisting beliefs.
While Rozado acknowledged that the chatbots’ political leanings were likely not coded in deliberately, at least in the mainstream options, he was uncertain whether LLM developers should intervene to make their outputs more politically neutral.
“Ideally, AI systems should be maximally oriented towards truth-seeking,” he says. “However, I recognize that creating such a system is likely extremely challenging, and personally I do not know what is the right recipe to create such a system.”