Google DeepMind CEO Demis Hassabis used to get some of his most fulfilling work done between midnight and 3 a.m. A self-described “nocturnal person,” the Londoner devoted the solitude of his overnight shift to catching up on scientific papers, coming up with new ideas, and just plain thinking.
Lately, however, even Hassabis’s wee hours are in demand. Much of his team is now based at Google’s home turf in Silicon Valley, where it’s eight hours earlier. As a consequence, video meetings for him often stretch into the new day. “I had a really good routine until about 18 months ago,” he says wistfully.
His cherished quiet time was disrupted by the creation of Google DeepMind, often called GDM within the company. The organization is the result of the April 2023 merger of two existing Google AI research arms: DeepMind, which Hassabis cofounded with Mustafa Suleyman and Shane Legg in 2010 and sold to Google in 2014, and Google Brain, which Google itself formed in 2011. Google Brain chief Jeff Dean became Google’s chief scientist, while Hassabis was named to head the combined research entity—arguably AI’s most formidable brain trust.
It’s an enormous responsibility. Though Google has talked about being an AI-first company for years, it was OpenAI that set off the current generative AI frenzy with the November 2022 launch of ChatGPT. Google—for all its technical wherewithal—has been scrambling since then to establish itself as a leader in turning the emerging technology into actual products. It’s racing against innumerable others, including Microsoft, OpenAI’s flagship partner (and Suleyman’s current employer; the company appointed him to head a new consumer AI unit in March, in a deal that also involved paying $650 million to license technology created by Inflection AI, the startup he cofounded after leaving Google in 2022).
Google’s game plan bets heavily on Gemini, a large language model named for the twin Google AI labs that came together to create it. Various versions of the Gemini technology power a raft of new features, from Google Search’s “AI Overview” summaries to Gmail’s ability to draft emails for users. Gemini has also replaced Google Assistant as Android’s default voice AI, is available as a stand-alone chatbot, and can be utilized by other developers in their products via the Google Cloud platform.
Gemini still hasn’t dislodged ChatGPT in the public consciousness as the AI chatbot that defines the category; ChatGPT, with new features such as built-in search, is taking on Google even more directly. And a few of Google’s AI launches have turned into PR nightmares as their rough edges became obvious: The company’s image generator was initially capable of troubling anachronisms such as depicting Nazi soldiers as Black people, and one AI Overview helpfully recommended using glue to keep cheese stuck to pizza. The company’s willingness to spread AI that can get things wrong marks a distinct shift from its earlier, more cautious mindset, before ChatGPT showed that the technology’s current rawness didn’t stifle adoption: “Candidly, I never would have believed that society would have tolerated this many hallucinations, but it has,” says GDM COO Lila Ibrahim.
But Google is finally benefiting from its biggest competitive advantage: the ability to deploy artificial intelligence to vast numbers of people with the flip of a switch. “Every billion-user product at Google, at this point, has Gemini integrations,” says Eli Collins, GDM’s VP of product. “And we have nine billion-user products.”
As Hassabis puts it: “The way it’s turned out is that we’re now the engine room of Google.” Yet GDM’s mission is even bigger than that. Like OpenAI, Anthropic, and others, it’s working toward achieving artificial general intelligence, or AGI.
Though definitions of precisely what AGI is vary, everyone agrees it involves AI becoming more competent across more fields than anything that exists today. More than 20 years ago, Legg helped popularize the term when he suggested it as the title for a book by fellow AI scientist Ben Goertzel. Now GDM’s chief AGI scientist, Legg defines it as “something that can at least match human capability in the sorts of cognitive tasks that people can typically do.” Achieving it is what prompted him, along with Hassabis and Suleyman, to start DeepMind 14 years ago. Hassabis’s guess as to when some entity—not necessarily Google—will attain that epoch-shifting feat leaves plenty of wiggle room in both directions. He sees “a 50% chance in 10 years,” but says, “I wouldn’t be surprised if it happened earlier.”
Unassuming and affable in person, Hassabis speaks calmly but rapidly, with an accent evocative of his boyhood in North London. A lifetime overachiever, he has recently been racking up the world’s loftiest honors. Knighted last March in the U.K. “for services to artificial intelligence,” he is now Sir Demis. (“I rarely use that title, and I’d rather other people didn’t.”) The same month, he was appointed to the Vatican’s Pontifical Academy of Sciences. (“I’m not Catholic myself, but they’re very open-minded about discussing the philosophical implications of what’s happening.”) In October, I spoke with him hours after he and GDM director of research John Jumper were named Nobel Prize laureates in chemistry for GDM’s AlphaFold, a breakthrough in AI-assisted protein research that could revolutionize drug discovery—an honor they shared with University of Washington scientist David Baker for his independent protein research. (“Surreal, but I hope it’ll sink in in the next few days.”)
Despite all this, Hassabis’s career is entering a new and fraught phase. For years, he has been motivated by AI’s potential to solve some of humanity’s biggest challenges, a vision Google shared and allowed him to pursue with great autonomy. But now he must square that aim with the growing pressure to pump out new technologies that will keep Google’s biggest products relevant. Success could hinge on his ability to balance scientific idealism with commercial reality.
Google DeepMind’s blocky, modernist 11-story office looms large in London’s Knowledge Quarter. A thriving tech district near the British Library, the area is also home to outposts of AstraZeneca, Meta, and Samsung, along with research facilities such as the Francis Crick Institute, one of Europe’s largest biomedical labs. When I visit in early October, the skies have cleared after several days of drizzle, and sunlight streams into the conference room where Hassabis is explaining to me how hard he’s worked to ensure that GDM’s workplace transcends the hermetically sealed atmosphere endemic in Silicon Valley.
This being a technology company, there’s nothing unexpected about the fact that the room is named and decorated in tribute to Nikola Tesla, or that others honor Ada Lovelace, Alan Turing, and other eminences of technological history. But there are also rooms celebrating philosophers Baruch Spinoza and Ludwig Wittgenstein. And Mary Shelley, whose best-known work, Frankenstein, is a 200-year-old tale of artificial intelligence gone awry. One of two on-site cafés is named after science fiction author Isaac Asimov, whose 1942 Three Laws of Robotics—number one is “a robot may not injure a human being or, through inaction, allow a human being to come to harm”—are a prescient first draft of ethical guidelines for AI.
The GDM library is stocked with printed books on diverse subjects: Hassabis waxes particularly enthusiastic about The Fabric of Reality, a 1997 tome on quantum theory by physicist David Deutsch. There’s tech-inspired art on the premises, too, including two massive glass-and-steel polyhedrons in the lobby. The structures, by British American sculptor Anthony James, are meant—according to a nearby placard—to “bring a rigid and gleaming tangibility to the abstraction of the numerical calculation of flawless coherence.”
This multidisciplinary atmosphere reflects London’s rich cultural history as well as Hassabis’s own interests, which span “philosophy and arts and humanities” and beyond, he says. “I think the same in terms of values and society, too. I feel like the world needs to input what it wants out of [AI]—not just a hundred square miles of a patch in California.”
Another influence, and Hassabis’s most foundational one: games. Born in London in 1976 to a Greek Cypriot father and a Singaporean mother, he exhibited early signs of brilliance by trouncing his dad and uncle at chess as a 4-year-old. By age 8, he’d earned enough money in competitive chess to buy his first PC; later, he owned a Commodore Amiga 500, an innovative computer that remains so resonant in his memory that—after learning I’d owned one, too—he excitedly tells me that GDM had recruited one of its creators from elsewhere inside Google.
By 17, Hassabis was developing video games professionally, including an amusement park simulator that sold millions of copies. In 1997, as he was graduating from the University of Cambridge with a degree in computer science—and the year before he founded his own game studio—IBM’s Deep Blue chess-playing computer beat world champion Garry Kasparov. Hassabis was enthralled. But that landmark moment was also “a weird dead end,” he says. An example of an AI construct known as an expert system, Deep Blue was hardwired for chess mastery, period. It couldn’t be trained to play additional games, let alone perform other kinds of jobs.
Hassabis found himself drawn to another approach to AI: neural networks. By mimicking the workings of the human brain, software built on this model could learn to do many things, just as people do. The technology had been constrained by limits in computing power, but Hassabis—along with Suleyman (whom he’d first known as his younger brother’s best friend) and Legg (a fellow researcher at University College London)—was convinced it could make strides as supercomputers grew ever more capable.
In 2010, the three founded DeepMind, so stealthy a startup that its original website consisted of a logo and nothing else. Early on, it created software that could learn to play 1970s Atari video games—not because the world needed Atari-playing computers, but as a starting point for crafting artificial intelligence that could figure out rules and goals on its own rather than having them painstakingly coded in. (Vintage Atari game cartridges are still stacked high in the GDM game room.)
Having cracked Space Invaders and Breakout, the company turned its attention to a far greater challenge: Go, the 2,500-year-old board game so complex that many thought a computer might never master it. In 2016, DeepMind’s AlphaGo software beat legendary Go master Lee Sedol—a bigger moment for AI than the vanquishing of Kasparov 19 years earlier. The following year, a new version called AlphaGo Zero taught itself to be an even better Go player without being trained using data from human games, another major advance toward AI that learns the way people do, except way faster.
By then, DeepMind’s short life as an independent company was over. In January 2014, Google had acquired the startup for a price reportedly between $400 million and $650 million—a pittance compared to OpenAI’s current valuation of $157 billion, but not bad by 2014 standards. At the time, investors were so indifferent to AI that, as a startup, DeepMind had found it “very hard to raise even $10 million,” recalls Hassabis, who bonded with Google cofounders Larry Page and Sergey Brin over a shared dedication to developing AI.
Selling to Google “meant Demis didn’t have to spend his time talking to investors all the time,” says Helen King, one of DeepMind’s first employees and now GDM’s senior director of responsibility, overseeing the safety of its work, from mitigating biases in Gemini to grappling with the potential risks of AGI and other future technologies. “He could focus on actually moving the research forward.”
Even while AlphaGo’s match against Sedol was in progress, Hassabis’s mind raced toward AI research involving more substantial stuff than games. “In my mind, the next step was to come out of the lab and apply it to a real-world problem,” he says. “And protein folding was top of my list.” Identifying how proteins fold into 3D structures could provide critical insights for drug discovery and other biotechnology fields, but it was so arduous a process that only around 100,000 such structures had been determined out of billions of possibilities. The idea that AI might help had been planted in Hassabis’s brain in the ’90s by a college friend, who’d talk about it obsessively as they played foosball at a pub.
DeepMind launched an effort to automate the process and called it AlphaFold. The first version won a biennial protein-structure-prediction competition in 2018; the second, which won in 2020, was an even more extraordinary advance for both biotech and AI. It also proved that DeepMind’s research-first culture could thrive within Google. AlphaFold co-creator Jumper, who came to DeepMind after writing his University of Chicago PhD thesis on using machine learning for protein prediction, told me in 2021 that working there was “like being at an AI conference every day.”
“Please welcome, for the first time on the I/O stage, Sir Demis.”
When Google CEO Sundar Pichai ushers Hassabis onstage at the company’s annual Google I/O developer conference in May, held at the Shoreline Amphitheatre