How AI is steering the media toward a ‘close enough’ standard

The nonstop cavalcade of announcements in the AI world has created a kind of reality distortion field. There is so much buzz, and even more money, circulating in the industry that it feels almost sacrilegious to doubt that AI will make good on its promises to change the world. Deep research can do 1% of all knowledge work! Soon the internet will be designed for agents! Infinite Ghibli!

And then you remember AI screws things up. All. The. Time.

Hallucinations—when a large language model essentially spits out information created out of whole cloth—have been an issue for generative AI since its inception. And they are doggedly persistent: Despite advances in model size and sophistication, serious errors still occur, even in so-called advanced reasoning or thinking models. Hallucinations appear to be inherent to generative technology, a by-product of AI’s seemingly magical quality of creating new content out of thin air. They’re both a feature and a bug at the same time.

In journalism, accuracy isn’t optional—and that’s exactly where AI stumbles. Just ask Bloomberg, which has already hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories back in January this year, and it’s already had to correct more than 30 of them, according to The New York Times.

The intern that just doesn’t get it

AI is occasionally described as an incredibly productive intern, since it knows pretty much everything and has a superhuman ability to create content. But if you had to issue 30-plus corrections for an intern’s work in three months, you’d probably tell that intern to start looking at a different career path.

Bloomberg is hardly the first publication to run headfirst into hallucinations. But the fact that the problem is still happening, more than two years after ChatGPT debuted, pinpoints a primary tension when AI is applied to media: To create novel audience experiences at scale, you need to let the generative technology create content on the fly. But because AI often gets things wrong, you also need to check its output with “humans in the loop.” At scale, you can’t do both.

The typical approach thus far is to slap a disclaimer onto the content. The Washington Post’s Ask the Post AI is a good example, warning users that the feature is an “experiment” and encouraging them to “Please verify by consulting the provided articles.” Many other publications have similar disclaimers.

It’s a strange world where a media company introduces a new feature with a label that effectively says, “You can’t rely on this.” Providing accurate information isn’t a secondary feature of journalism; it’s the whole point. This contradiction is one of the strangest manifestations of AI’s application in media.

Moving to a “close enough” world

How did this happen? Arguably, media companies were forced into it. When ChatGPT and other large language models first began summarizing content, we were so blown away by their mastery of language that we weren’t as concerned about the fine print: “ChatGPT can make mistakes. Check important info.” And it turns out that for most users that was good enough. Even though generative AI often gets facts wrong, chatbots have seen explosive user growth. “Close enough” appears to be what the world is settling on. 

It’s not a standard anyone sought out, but the media is slowly adopting it as more publications launch generative experiences with similar disclaimers. There’s an “If you can’t beat ’em, join ’em” aspect to this, certainly: As more people turn to AI search engines and chatbots for information, media companies feel pressure to either sign licensing deals to have their content included, or match those AI experiences with their own chatbots. Accuracy? There’s a disclaimer for that. 

One notable holdout, however, is the BBC. So far, the BBC hasn’t signed any deals with AI companies, and it’s been a leader in pointing out the inaccuracies that AI portals create, publishing its own research on the topic earlier this year. It was also the BBC that ultimately convinced Apple to dial back its shoddy notification summaries on the iPhone, which were garbling news to the point of making up entirely false narratives.

In a world where it’s looking increasingly fashionable for media companies to take licensing money, the BBC is charting a more proactive course. Somewhere along the way, whether out of financial self-interest or by falling into Big Tech’s reality distortion field, many media companies began to buy into the idea that hallucinations were either not that big a problem or something that will inevitably be solved. After all, “Today is the worst this technology will ever be.”

Think of the pollution from coal plants: an ugly side effect, but one that doesn’t stop the business from thriving. That’s how hallucinations function in AI: clearly flawed, occasionally harmful, yet tolerated, because the growth and money keep coming.

But those false outputs are deadly to an industry whose primary product is accurate information. Journalists should not sit back and expect Silicon Valley to simply solve hallucinations on its own, and the BBC is showing there’s a path to being part of the solution without evangelizing or ignoring the problem. After all, “Check important info” is supposed to be the media’s job.

https://www.fastcompany.com/91310978/ai-steers-the-media-toward-a-close-enough-standard

Created 16h | Apr 4, 2025, 09:40:02

