How philosopher Shannon Vallor delivered the year’s best critique of AI

A few years ago, Shannon Vallor found herself in front of Cloud Gate, Anish Kapoor’s hulking mercury drop of a sculpture, better known as the Bean, in Chicago’s Millennium Park. Staring into its shiny mirrored surface, she noticed something. 

“I was seeing how it reflected not only the shapes of individual people, but big crowds, and even larger human structures like the Chicago skyline,” she recalls, “but also that these were distorted—some magnified, others shrunk or twisted.” 

To Vallor, a professor of philosophy at the University of Edinburgh, this was reminiscent of machine learning, “mirroring the patterns found in our data, but in ways that are never neutral or ‘objective,’” she says. The metaphor became a popular part of her lectures, and with the advent of large language models (and the many AI tools they power), has gathered more potency. AI’s “mirrors” look and sound a lot like us because they are reflecting their inputs and training data, with all of the biases and peculiarities that entails. And whereas other analogies for AI might convey a sense of living intelligence (think of the “stochastic parrot” of the widely cited 2021 paper), the “mirror” is more apt, says Vallor: AI isn’t sentient, just a flat, inert surface, captivating us with its fun-house illusions of depth.

The metaphor becomes Vallor’s lens in her recent book The AI Mirror, a sharp, witty critique that shatters many of the prevailing illusions we have about “intelligent” machines and turns some precious attention back on us. In anecdotes about our early encounters with chatbots, she hears echoes of Narcissus, the hunter in Greek mythology who fell in love with the beautiful face he saw when he looked in a pool of water, thinking it was another person. Like him, says Vallor, “our own humanity risks being sacrificed to that reflection.”

She’s not anti-AI, to be clear. Both individually and as codirector of BRAID, a U.K.-wide nonprofit devoted to integrating technology and the humanities, Vallor has advised Silicon Valley companies on responsible AI. And she sees some value in “narrowly targeted, safe, well-tested, and morally and environmentally justifiable AI models” for tackling hard health and environmental problems. But as she’s watched the rise of algorithms, from social media to AI companions, she admits her own connection to technology has lately felt “more like being in a relationship that slowly turned sour. Only you don’t have the option of breaking up.”

For Vallor, one way to navigate—and hopefully guide—our increasingly uncertain relationships with digital technology is to tap into our virtues and values, like justice and practical wisdom. Being virtuous, she notes, isn’t about who we are but what we do, part of a “struggle” of self-making as we experience the world, in relation with other people. AI systems, on the other hand, might reflect an image of human behavior or values, but, as she writes in The AI Mirror, they “know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains.” At the same time, the algorithms, trained on historical data, quietly limit our futures, with the same thinking that left the world “rife with racism, poverty, inequality, discrimination, [and] climate catastrophe.” How, she wonders, will we deal with emergent problems that have no precedent? “Our new digital mirrors point backward.”

As we rely more heavily on machines, optimizing for certain metrics like efficiency and profit, Vallor worries we risk weakening our moral muscles, too, losing track of the values that make living worthwhile.

As we discover what AI can do, we’ll need to focus on cultivating uniquely human capacities, like context-driven reasoning and moral judgment. You know, like contemplating a giant bean sculpture and coming up with a powerful metaphor for AI. “We don’t need to ‘defeat’ AI,” she says. “We need to not defeat ourselves.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Dec 11, 2024
