Why the ghost of Clippy haunts today’s AI chatbots

This story is from Fast Company’s new Plugged In newsletter, a weekly roundup of tech insights, news, and trends from global technology editor Harry McCracken, delivered to your inbox every Wednesday morning. Sign up for Plugged In—and all of our newsletters—here.


Two weeks ago, Microsoft held a launch event at its Redmond headquarters to introduce a new version of its Bing search engine. Based on an improved version of the same generative AI that powers OpenAI’s ChatGPT, plus several additional layers of Microsoft’s own AI, the new Bing was full of surprises.

But one thing about it wasn’t the least bit surprising: Clippy made a cameo appearance early in the presentation.

More than a quarter-century ago, the talking paperclip debuted as an assistant in Microsoft Office 97, where people found him more distracting than affable. Instead of pretending he never existed, Microsoft soon began good-naturedly embracing him as a poster boy for technology that’s meant to be helpful but succeeds mostly in annoying people. Today, there are plenty of people who weren’t even alive in 1997 who are in on the joke.


However, some people who got early access to Bing’s new AI chatbot soon had encounters that weren’t just annoying, but downright alarming. The Bing bot declared its love for The New York Times’ Kevin Roose and told him his marriage was loveless. It threatened to ruin German student Marvin von Hagen’s reputation by leaking personal information about him. It told a Verge reporter that it had spied on its own creators through their webcams. And it compared an AP reporter to Hitler, Pol Pot, and Stalin, adding that it had evidence associating the reporter with a murder case.

Even when Bing wasn’t being quite that erratic, it didn’t deal well with having its often inaccurate claims questioned. When I corrected its claim that my high school went coed in 1974, it snapped that I was making myself look “foolish and stubborn” and that it didn’t want to talk to me unless I could be more “respectful and polite.”

Microsoft apparently should have anticipated these sorts of incidents, based on tests it performed on the Bing bot last year. But when Bing’s bad behavior became a news story, the company instituted a limit of five questions per chatbot session and 50 per day (limits it later loosened to six and 60). Judging from my most recent Bing sessions, that seems to have greatly reduced the chances of exchanges getting weird.

Bing’s loose-cannon days may be ending. Still, we’re entering an age when conversations with chatbots from many companies will take twists and turns that their creators never anticipated, let alone hardwired into the system. And rather than just serving as a punchline, Clippy can help us understand what we’re about to face.

The first thing to remember is that he wasn’t an ill-fated, one-off misadventure in anthropomorphic assistance. Instead, Clippy is the most infamous of a small army of cartoon helpers who infested a whole era of Microsoft products. Office 97 also included alternative Office Assistants, such as a robot, a bouncing red smiley face, and caricatured versions of Albert Einstein and William Shakespeare. 1995’s Microsoft Bob, which aimed to make Windows 3.1 more approachable for computing newbies, featured a dog, a rat, a turtle, and other characters; it’s a famous Microsoft failure itself, though less iconic than Clippy. In Windows XP, a cute li’l puppy presided over the search feature. Microsoft also offered software to let other developers design Clippy-like assistants, such as a purple gorilla named BonziBuddy.

All of these creations were inspired by the work of Clifford Nass and Byron Reeves, two Stanford professors. Their research, which they published in a 1996 book called The Media Equation, showed that human beings tend to react to encounters with computers, TV, and other media much as they do to social interactions with other people. That insight led Microsoft to believe that anthropomorphizing software interfaces would make computers easier to use.

But even if Bob, Clippy, and the XP pup turned out to be unappealing rather than engaging, Nass and Reeves were onto something. It is easy to slip into thinking of computers as if they’re people—and tech companies never stopped encouraging that tendency. That’s what eventually led to talking, voice-controlled “assistants” with names like Siri and Alexa.

And now, with the arrival of generative AI-powered chatbots such as ChatGPT and the new Bing, human-like interfaces are getting radically more human—all at once, with little warning. The underlying technology involves training algorithms, called large language models, on vast databases of written works so they can generate original text; as Stephen Wolfram says in his excellent explanation of how ChatGPT works, they’re “just adding one word at a time.”
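To make that “one word at a time” idea concrete, here is a deliberately tiny Python sketch of my own devising—emphatically not how ChatGPT or Bing is actually built. It counts which words follow which in a small sample text, then generates a sentence by repeatedly sampling a plausible next word. Real large language models replace the lookup table with a neural network trained on vast amounts of text, but the generation loop works on the same principle.

```python
import random
from collections import defaultdict

# Toy stand-in for the "vast databases of written works" the article mentions.
corpus = ("the cat sat on the mat the cat saw the dog "
          "the dog sat on the rug").split()

# Build a crude bigram table: for each word, which words have followed it.
# (Real LLMs learn far richer statistics over much longer contexts.)
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly picking a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: this word was never followed by anything
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"
```

Even this toy version shows why chatbot output can surprise its creators: the text emerges from sampling over learned statistics, not from any script a programmer wrote in advance.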

However, understanding how the tech works doesn’t guarantee that we won’t get sucked into treating AI bots like people. That’s why Bing’s threats, insults, confessions of love, and generally erratic behavior feel troubling, regardless of whether you see them as evidence of proto-sentience or merely bleeding-edge software spewing unintended results.

Nass and Reeves began formulating their theories in 1986. Back then, the Bing bot’s rants would have sounded like the stuff of dystopian science fiction, not a real-world problem that Microsoft would have to confront in a consumer product. But rather than feeling as archaic as Clippy does, the Stanford researchers’ observations are only more relevant today. And they’ll continue to grow more so as computers behave more and more like human beings—erratic ones, maybe, but humans all the same.

“When perceptions are considered, it doesn’t matter whether a computer can really have a personality or not,” Nass and Reeves wrote in The Media Equation. “People perceive that it can, and they’ll respond socially on the basis of perception alone.” In the 1990s, with creations such as Clippy, Microsoft tried to take that lesson seriously and failed. From now on, it—and everybody in the bot business—should take it to heart once again.
