How philosopher Shannon Vallor delivered the year’s best critique of AI

A few years ago, Shannon Vallor found herself in front of Cloud Gate, Anish Kapoor’s hulking mercury drop of a sculpture, better known as the Bean, in Chicago’s Millennium Park. Staring into its shiny mirrored surface, she noticed something. 

“I was seeing how it reflected not only the shapes of individual people, but big crowds, and even larger human structures like the Chicago skyline,” she recalls, “but also that these were distorted—some magnified, others shrunk or twisted.” 

To Vallor, a professor of philosophy at the University of Edinburgh, this was reminiscent of machine learning, “mirroring the patterns found in our data, but in ways that are never neutral or ‘objective,’” she says. The metaphor became a popular part of her lectures, and with the advent of large language models (and the many AI tools they power), has gathered more potency. AI’s “mirrors” look and sound a lot like us because they are reflecting their inputs and training data, with all of the biases and peculiarities that entails. And whereas other analogies for AI might convey a sense of living intelligence (think of the “stochastic parrot” of the widely cited 2021 paper), the “mirror” is more apt, says Vallor: AI isn’t sentient, just a flat, inert surface, captivating us with its fun-house illusions of depth.

The metaphor becomes Vallor’s lens in her recent book The AI Mirror, a sharp, witty critique that shatters many of the prevailing illusions we have about “intelligent” machines and turns some precious attention back on us. In anecdotes about our early encounters with chatbots, she hears echoes of Narcissus, the hunter in Greek mythology who fell in love with the beautiful face he saw when he looked in a pool of water, thinking it was another person. Like him, says Vallor, “our own humanity risks being sacrificed to that reflection.”

She’s not anti-AI, to be clear. Both individually and as codirector of BRAID, a U.K.-wide nonprofit devoted to integrating technology and the humanities, Vallor has advised Silicon Valley companies on responsible AI. And she sees some value in “narrowly targeted, safe, well-tested, and morally and environmentally justifiable AI models” for tackling hard health and environmental problems. But as she’s watched the rise of algorithms, from social media to AI companions, she admits her own connection to technology has lately felt “more like being in a relationship that slowly turned sour. Only you don’t have the option of breaking up.”

For Vallor, one way to navigate—and hopefully guide—our increasingly uncertain relationships with digital technology is to tap into our virtues and values, like justice and practical wisdom. Being virtuous, she notes, isn’t about who we are but what we do, part of a “struggle” of self-making as we experience the world, in relation with other people. AI systems, on the other hand, might reflect an image of human behavior or values, but, as she writes in The AI Mirror, they “know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains.” At the same time, these algorithms, trained on historical data, quietly limit our futures, encoding the same thinking that left the world “rife with racism, poverty, inequality, discrimination, [and] climate catastrophe.” How, she wonders, will we deal with emergent problems that have no precedent? “Our new digital mirrors point backward.”

As we rely more heavily on machines, optimizing for certain metrics like efficiency and profit, Vallor worries we risk weakening our moral muscles, too, losing track of the values that make living worthwhile.

As we discover what AI can do, we’ll also need to cultivate our distinctly human capacities, like context-driven reasoning and moral judgment. You know, like contemplating a giant bean sculpture and coming up with a powerful metaphor for AI. “We don’t need to ‘defeat’ AI,” she says. “We need to not defeat ourselves.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.
