How philosopher Shannon Vallor delivered the year’s best critique of AI

A few years ago, Shannon Vallor found herself in front of Cloud Gate, Anish Kapoor’s hulking mercury drop of a sculpture, better known as the Bean, in Chicago’s Millennium Park. Staring into its shiny mirrored surface, she noticed something. 

“I was seeing how it reflected not only the shapes of individual people, but big crowds, and even larger human structures like the Chicago skyline,” she recalls, “but also that these were distorted—some magnified, others shrunk or twisted.” 

To Vallor, a professor of philosophy at the University of Edinburgh, this was reminiscent of machine learning, “mirroring the patterns found in our data, but in ways that are never neutral or ‘objective,’” she says. The metaphor became a popular part of her lectures, and with the advent of large language models (and the many AI tools they power), it has gained even more potency. AI’s “mirrors” look and sound a lot like us because they reflect their inputs and training data, with all the biases and peculiarities that entails. And whereas other analogies for AI might convey a sense of living intelligence (think of the “stochastic parrot” of the widely cited 2021 paper), the “mirror” is more apt, says Vallor: AI isn’t sentient, just a flat, inert surface, captivating us with its fun-house illusions of depth.

The metaphor becomes Vallor’s lens in her recent book The AI Mirror, a sharp, witty critique that shatters many of the prevailing illusions we have about “intelligent” machines and turns some precious attention back on us. In anecdotes about our early encounters with chatbots, she hears echoes of Narcissus, the hunter in Greek mythology who fell in love with the beautiful face he saw when he looked in a pool of water, thinking it was another person. Like him, says Vallor, “our own humanity risks being sacrificed to that reflection.”

She’s not anti-AI, to be clear. Both individually and as codirector of BRAID, a U.K.-wide nonprofit devoted to integrating technology and the humanities, Vallor has advised Silicon Valley companies on responsible AI. And she sees some value in “narrowly targeted, safe, well-tested, and morally and environmentally justifiable AI models” for tackling hard health and environmental problems. But as she’s watched the rise of algorithms, from social media to AI companions, she admits her own connection to technology has lately felt “more like being in a relationship that slowly turned sour. Only you don’t have the option of breaking up.”

For Vallor, one way to navigate—and hopefully guide—our increasingly uncertain relationships with digital technology is to tap into our virtues and values, like justice and practical wisdom. Being virtuous, she notes, isn’t about who we are but what we do, part of a “struggle” of self-making as we experience the world, in relation with other people. AI systems, on the other hand, might reflect an image of human behavior or values, but, as she writes in The AI Mirror, they “know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains.” At the same time, the algorithms, trained on historical data, quietly limit our futures, with the same thinking that left the world “rife with racism, poverty, inequality, discrimination, [and] climate catastrophe.” How, she wonders, will we deal with emergent problems that have no precedent? “Our new digital mirrors point backward.”

As we rely more heavily on machines, optimizing for certain metrics like efficiency and profit, Vallor worries we risk weakening our moral muscles, too, losing track of the values that make living worthwhile.

As we discover what AI can do, we’ll need to focus on leveraging uniquely human traits, too, like context-driven reasoning and moral judgment, and on cultivating our distinctly human capacities. You know, like contemplating a giant bean sculpture and coming up with a powerful metaphor for AI. “We don’t need to ‘defeat’ AI,” she says. “We need to not defeat ourselves.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published Dec. 11, 2024, 11:50:04

