AI chatbots have telltale quirks. Researchers can spot them with 97% accuracy

There’s a cat-and-mouse game between those using generative AI chatbots to produce text undetected and those trying to catch them. Many believe they know the telltale signs—though as a journalist fond of the word “delve” and prone to em-dashes, I’m not so sure.

Researchers at four U.S. universities, however, have taken a more rigorous approach, identifying linguistic fingerprints that reveal which large language model (LLM) produced a given text.

“All these chatbots are coming out every day, and we interact with them, but we don’t really understand the differences between them,” says Mingjie Sun, a researcher at Carnegie Mellon University and lead author of the study, which was posted to arXiv, the preprint server hosted by Cornell University. “By training a machine learning classifier to do this task, and by looking at the performance of that classifier, we can then assess the difference between different LLMs.”

Sun and his colleagues developed a machine learning classifier that analyzed the outputs of five popular LLMs and distinguished between them with 97.1% accuracy. In the process, the classifier surfaced distinct verbal quirks unique to each model.
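For readers curious how that kind of classifier works in practice, here is a minimal, hypothetical sketch of the general approach: represent each chatbot response as word n-grams and train a simple linear classifier to predict which model wrote it. The sample texts, model labels, and scikit-learn pipeline below are illustrative assumptions, not the data or architecture used in the paper.

# Toy sketch: classify which LLM produced a piece of text.
# The example sentences and labels are invented; a real experiment
# would use thousands of genuine responses per model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

texts = [
    "Certainly! Here is a summary of the key points.",
    "To utilize this feature, first open the settings panel.",
    "Essentially, the argument rests on two assumptions.",
    "According to the text, the author makes three claims.",
    "Remember: it is not only faster but also cheaper.",
] * 20  # repeated only so the toy train/test split has enough samples
labels = ["DeepSeek", "GPT-4o", "Gemini", "Claude", "Grok"] * 20

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

# Word-level n-gram features feed a simple linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real data in place of these toy sentences, inspecting which features the classifier weights most heavily is one way to surface each model's telltale words.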

ChatGPT’s GPT-4o model, for instance, tends to use “utilize” more than other models. DeepSeek is partial to saying “certainly.” Google’s Gemini often prefaces its conclusions with the word “essentially,” while Anthropic’s Claude overuses phrases like “according to” and “according to the text” when citing its sources.

xAI’s Grok stands out as more discursive and didactic, often reminding users to “remember” key points while guiding them through arguments with “not only” and “but also.”

“The writing, the word choices, the formatting are all different,” says Yida Yin, a researcher at the University of California, Berkeley, and a coauthor of the paper.

These insights can help users select the best model for specific writing tasks—or aid those trying to catch AI-generated text masquerading as human work. So, remember: according to this study, if a model utilizes certain words, it’s certainly possible to identify it.

https://www.fastcompany.com/91286162/ai-chatbots-have-telltale-quirks-researchers-can-spot-them-with-97-accuracy?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
