Helen King’s job is all about keeping AI safe as Google scales

More than a dozen years ago, U.K.-based game producer Helen King was about to take a new job at Ubisoft in Canada. She had already boxed up her possessions and was getting her visa in order when an opportunity at a London-based AI startup unexpectedly presented itself. King found it irresistible, even though she had no background in the technology—she’d even opted out of AI courses as a computer science student.

“Some of my friends still haven’t forgiven me,” she jokes. “Because I then had to say, ‘I’m not moving to Canada, and I’m joining a company that’s in stealth mode, so I can’t actually tell you what I’m doing.’”

The company in question was AI startup DeepMind, and its founders, Demis Hassabis, Shane Legg, and Mustafa Suleyman, enthralled King with the audacity of their vision. “They talked about wanting to solve intelligence and the opportunities that would come with that,” such as breakthroughs in cancer research, she remembers. As a program manager for research, she chipped in on everything from recruitment to running conferences.

Of course, DeepMind didn’t stay tiny and stealthy forever. King was there when it was acquired by Google in 2014 and made headlines with research breakthroughs such as AlphaGo and AlphaFold. In 2023, Google merged DeepMind with another of its AI research arms, Google Brain, to form Google DeepMind (GDM). It put Hassabis in charge of the combined operation, whose technologies include the Gemini large language models at the heart of many of Google’s AI advances. Work also continues on longstanding projects such as AlphaFold, whose protein-prediction AI is now at the heart of Alphabet’s drug discovery startup Isomorphic Labs.

Today, King is GDM’s senior director of responsibility and a strategic advisor to research. They’re particularly weighty jobs given Google’s massive scale: Six of its products have two billion users apiece, nine have a billion, and 15 have half a billion. Offerings such as Google Search and Gmail have been part of the fabric of life and work for many years, magnifying AI’s potential benefit but also the impact of any glitches.

Unlike AI startups, Google also has a reputation to preserve and paying customers who prize dependability above raw innovation, raising the stakes even further. “The benefit of being known as being trustworthy and safe, and all of these things, comes with the challenge of an expectation that it carries through, even when it’s experimental and in early products,” says King.

Though the torrid pace of Google’s recent AI announcements may feel like a response to the era OpenAI unleashed by introducing ChatGPT two years ago, the company has been preparing for this moment for far longer. Back in 2018, DeepMind formed a Safety and Responsibility Council, whose membership included senior leaders from across the organization. The group continues to play a core role at GDM, providing input on new research efforts from the start. King says that the goal is to encourage ongoing conversations between Council members and technologists working on particular projects, giving everyone involved sufficient time to think matters through: “It’s not a black box experience for the team, and I think that really helps bring them on the journey.”

A commitment to openness also explains why Google DeepMind shares its learnings about AI ethics in research papers, such as a recent one on advanced assistants that lists King among its authors. “We see ourselves as leaders in safety and responsibility, and providing that sort of thought leadership is also important,” she says. “It’s not just how do we internally ensure safe and responsible models, but also how do we ensure in the broader research community that that’s happening.”

None of the safety measures Google has in place have prevented a few embarrassing AI-related mishaps, such as the AI Overviews in Search getting some facts really, really wrong. In part, that reflects the many moving parts associated with turning GDM’s research and underlying technologies into working products, a process that spreads responsibility among many stakeholders. King’s team focuses on “anything that is a GDM project or a GDM product,” she says. “That is where we tend to be involved. Not in the search algorithms themselves.”

Still, she acknowledges that the people who rely on Google’s AI-infused tools don’t draw distinctions between the myriad teams behind them and their varying roles in keeping them safe. Indeed, the human-like veneer of current and future AI experiences may cause users to think of them as personifications of Google in a way that’s new.

“The LLMs are implicitly being seen as the voice of whichever company the LLM is [associated] with,” she says. “I don’t think that was ever an intended behavior, in the same way I don’t think someone says Google search represents the voice of Google. But it’s really interesting that for LLMs, that’s where everyone in the world has gone.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91235232/helen-king-google-deepmind-safety-responsibility

Published Dec 12, 2024

