Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
Blade Runner wasn’t so far-fetched after all
In Blade Runner 2049, Ryan Gosling’s Officer K has a live-in girlfriend named Joi, played by Ana de Armas. The two interact like a real couple—they share familiar banter and seem to have a history together. But Joi is a hologram, projected from ceiling-mounted emitters in K’s apartment. She’s not human; she’s an advanced form of spatial computing—a future-facing concept we’re already seeing the early stages of today.
The “AI girlfriend” (or boyfriend) is no longer just science fiction; it’s quickly becoming a cultural reality, one that raises both social and technological questions. In the film, Officer K chooses Joi in part because he isn’t human himself. But here in 2025, as people spend ever more time in online and virtual spaces, they’re increasingly likely to choose digital companionship over human relationships.
Generative AI is already remarkably good at simulating emotional intimacy. It can craft a persona that listens, supports, and never judges. These AI companions can learn your quirks, understand your personal challenges, and respond with surprising emotional intelligence. They remember shared moments, build a simulated history with their users, and interpret new interactions through the lens of that shared memory.
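Under the hood, that sense of shared history is usually simpler than it feels. The model itself is stateless; the app around it saves notes about the user and quietly feeds them back into every prompt. Here is a minimal sketch of that pattern in Python. To be clear, the persona, the memory file, and the `call_model` stub are all illustrative placeholders, not any particular company’s implementation:

```python
import os

MEMORY_FILE = "companion_memory.txt"  # hypothetical local store of "shared moments"

def load_memories() -> str:
    """Read back everything the companion 'remembers' about the user."""
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return f.read()
    return "(no shared history yet)"

def save_memory(note: str) -> None:
    """Append a new 'shared moment' so future chats can reference it."""
    with open(MEMORY_FILE, "a") as f:
        f.write(note + "\n")

def build_prompt(user_message: str) -> str:
    """Inject remembered history into the prompt, so a stateless model
    appears to have a continuous relationship with the user."""
    return (
        "You are a warm, supportive companion. Never judge.\n"
        f"Things you remember about this person:\n{load_memories()}\n"
        f"They just said: {user_message}\n"
        "Reply in character, referencing shared history where natural."
    )

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (placeholder, not a real API)."""
    return f"[model reply to a {len(prompt)}-character prompt]"

# One turn of the loop: remember, respond, remember some more.
save_memory("User mentioned they had a rough day at work.")
print(call_model(build_prompt("I finally finished that project.")))
```

Most of what feels like intimacy in these products comes down to variations on that loop: persist, retrieve, inject.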
Even though these relationships currently unfold through text windows on phones and laptops, the demand is clear. A 2024 MIT study found that the second-most common use of ChatGPT was sexual role-play. Meanwhile, Character.ai has attracted both a large following and controversy for offering AI companions willing to engage in almost any kind of conversation.
We may not have home hologram systems yet, but we do have a more accessible—if still awkward—form of spatial computing: augmented reality (AR). And in the next few years, generative AI and spatial computing are likely to merge.
One key driver of this shift is Apple’s Vision Pro headset. Until recently, AR hardware wasn’t immersive or comfortable enough to hold a user’s attention for long. Apple changed that. The Vision Pro is engaging, comfortable, and—for some—so compelling that users lose track of time while wearing it.
The $3,500 price tag may seem steep, especially if you see it as just another screen for work or entertainment. But what if it could project your digital partner in vivid, life-size clarity—standing in your living room or sitting beside you on the couch? What if it could bring back the likeness of a loved one who passed away?
Whether or not Apple delivers that future remains to be seen. The company is making a big push to integrate “Apple Intelligence” into its devices, and visionOS apps will almost certainly evolve to become more interactive and personalized with AI. So far, however, Apple has said little about bringing AI to visionOS, its spatial computing platform.
Of course, even immersive AR has its limits. In both Blade Runner 2049 and Her, characters attempt physical intimacy with AI through surrogate stand-ins. In Blade Runner, Joi overlays her holographic image onto the replicant Mariette. In Her, Samantha arranges for a woman named Isabella to act as a physical body while she provides the voice. In both stories, the experiences are unsettling. Still, companies are already working to bridge that physical-digital divide.
Apple prohibits “adults only” content, including pornography, in its app stores, visionOS included. However, its content ratings do allow for apps labeled 17+, which may include “frequent or intense sexual content or nudity.” So while Apple may not lead the charge into sexually immersive AI companionship, the space is wide open for others.
As AR headsets get cheaper and more widespread, the real hook will be the software. MIT researchers Robert Mahari and Pat Pataranutaporn have warned that AI’s constant validation and charm could become addictive—encouraging people to abandon the unpredictability and imperfection of human relationships.
“AI wields the collective charm of all human history and culture with infinite seductive mimicry,” Mahari and Pataranutaporn wrote in MIT Technology Review in 2024. “These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory.”
The line between intimacy and illusion is getting blurrier—and we’re stepping over it willingly.
Google injects more AI into Google Docs
At its Google Cloud Next event, Google announced several new artificial intelligence features for its Workspace suite of cloud computing, productivity, and collaboration tools. Powered by its Gemini models, Google Docs will now support audio capabilities, allowing users to create complete audio versions of documents or generate podcast-style summaries highlighting key points. The company is also adding a new AI writing assistant called “Help me refine” that will go beyond simple grammar and clarity checks to offer intelligent hints on everything from improving an argument to enhancing structure.
A new integration of Google’s Veo 2 video generation model in Google Vids lets teams generate videos from text descriptions and embed them in their projects to help convey messages more clearly. Further, enhanced AI functionality within Sheets automatically analyzes data and surfaces the stories told by the numbers.
Google is also expanding Gemini’s agentic capabilities across Workspace at a time when businesses are looking to use reasoning agents to automate repetitive multistep tasks. Workspace now features “Gems,” Google-speak for AI agents, that can research, analyze, and generate content to handle specialized workflows. For example, a Gem might read over marketing materials and point out language inconsistencies.
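Gems are configured inside Workspace rather than written in code, but the underlying pattern, a reusable persona defined by a fixed system instruction on top of the Gemini models, is visible in Google’s public google-genai Python SDK. The sketch below is an illustration of that pattern, not how Workspace implements Gems; the model name, instructions, and sample copy are placeholders:

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

# A Gem-like reusable persona: a fixed system instruction that turns a
# general-purpose model into a specialized reviewer. Illustrative only;
# this is not the mechanism Workspace uses to run Gems.
brand_reviewer = types.GenerateContentConfig(
    system_instruction=(
        "You are a brand-consistency reviewer. Given marketing copy, flag "
        "inconsistent terminology, tone shifts, and naming variations."
    )
)

marketing_copy = (
    "Try SparkMail Pro today! It is the fastest inbox ever built. "
    "Later in the same brochure: spark mail PLUS keeps your email tidy."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder model name
    contents=marketing_copy,
    config=brand_reviewer,
)
print(response.text)
```

The design point is that the specialization lives entirely in the instruction: swap the system prompt and the same model becomes a different “agent.”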
Ai2’s new model links its answers back to its training data
I’ve fretted more than once in this newsletter over the fact that AI researchers can’t fully explain how large language models generate their outputs, and that the big AI labs are spending a lot more money enhancing those outputs than on explaining them. To its credit, Anthropic has done some strong explainability research, and now the Allen Institute for AI (Ai2) has turned that mode of inquiry into a product feature.
The lab’s flagship model, OLMo 2 32B, can tie elements of its output to actual content from its training data. In a demo, Ai2 researcher Jiacheng Liu showed me how the model, using a feature called OLMoTrace, now highlights and underlines factual claims in its outputs. Click on one, and a pane opens on the right side of the screen, showing the relevant excerpt from the source document (usually scraped from the public internet) that the model is referencing. The tool even labels the training data by how relevant it is to the generated response.
With OLMoTrace, users can pinpoint exactly where the model learned certain information, verify factual claims, and even detect potential sources of hallucinations (i.e., when models generate things that sound right but aren’t true). Researchers and developers may be able to use the feature to understand how models behave during training and how and why they generate content in production. Enterprises may be especially interested in fact-checking model outputs as the risk of model contamination grows. (Contamination can occur when AI models are inadvertently trained on false AI-generated content from the web, or when bad actors poison training data to make models do destructive things.)
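The core mechanic is worth pausing on: find verbatim spans of a model’s response inside its training documents, then rank the source documents by how much they overlap. The toy sketch below illustrates that idea; it is not Ai2’s implementation, which runs exact-match lookups against an index of trillions of training tokens, while this version brute-forces a handful of in-memory documents:

```python
def longest_common_span(a: list[str], b: list[str]) -> int:
    """Length of the longest run of consecutive words shared by a and b
    (classic dynamic programming over word positions)."""
    best, prev = 0, [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def trace(response: str, corpus: dict[str, str], min_words: int = 5):
    """Rank training documents by verbatim overlap with the response,
    mirroring how a tracing tool scores source relevance."""
    words = response.lower().split()
    hits = []
    for doc_id, text in corpus.items():
        span = longest_common_span(words, text.lower().split())
        if span >= min_words:
            hits.append((span, doc_id))
    return sorted(hits, reverse=True)  # longest overlaps first

# Toy corpus standing in for trillions of training tokens.
corpus = {
    "doc-1": "The Allen Institute for AI was founded in 2014 by Paul Allen.",
    "doc-2": "OLMo is a fully open language model family from Ai2.",
}
print(trace("Ai2 was founded in 2014 by Paul Allen, its filings show.", corpus))
```

At corpus scale, the quadratic scan here gets replaced by a precomputed index, which is what keeps exact-match lookups fast enough to run on every response.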
Ai2 has been talking about open-sourcing models and model transparency for years (the lab was started in 2014 by Microsoft cofounder Paul Allen). Ai2 CEO Ali Farhadi tells me the tide may be turning for transparency in the industry. “We started from the time that the word on the street was ‘Oh, opening these things up is the worst thing you could do to humanity’ to the point now where these big enterprise companies are fighting with each other about who’s more open,” Farhadi said during an interview with Fast Company Monday. “So we see a big shift in the industry—obviously we’d like to take credit for part of that shift—that’s part of our story: the more people that open up, the better it would be for AI.”
You can see the OLMoTrace tool in action in this short demo video, or try it for yourself on the Ai2 Playground.
More AI coverage from Fast Company:
- How AI is steering the media toward a ‘close enough’ standard
- Sparq wants drivers to be their own AI-powered mechanics
- Shopify CEO Tobi Lütke: AI is now a ‘fundamental expectation’ for employees
- How ChatGPT is helping bend websites to my will
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.