Former OpenAI leader blasts company for ignoring ‘safety culture’

Not all the departures from OpenAI have been on the best of terms. Jan Leike, co-lead of the company's superalignment team, left the company Wednesday amid a growing series of departures. He has taken to X to explain his decision, and he has some harsh words for his former employer.

Leike said leaving OpenAI was “one of the hardest things I have ever done because we urgently need to figure out how to steer and control AI systems much smarter than us.” However, he said, he chose to depart the company because he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Leike left OpenAI within hours of the announcement that cofounder and chief scientist Ilya Sutskever was departing. Among Leike’s roles was ensuring the company’s AI systems aligned with human interests. (He had been named as one of Time magazine’s 100 most influential people in AI last year.)

In the lengthy thread, Leike accused OpenAI and its leaders of neglecting "safety culture and processes" in favor of "shiny products." (Leike's problems with CEO Sam Altman seemingly go back to before the board's attempt to remove Altman from the company last November. While many employees objected to the board's actions and wrote an open letter threatening to leave the company and go work with Altman elsewhere, Leike's name was not among the signatories.)

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity.”

Bloomberg, on Friday, reported that OpenAI has dissolved the superalignment team, folding remaining members into broader research efforts at the company. Leike and Sutskever were the lead members of that team.

Fears over AI destroying humanity or the planet might seem like something pulled from Terminator, but Leike and other prominent AI scientists say the concept isn't as absurd as it seems. Geoffrey Hinton, one of the most notable names in AI, says there's a 10% chance AI will wipe out humanity in the next 20 years. Yoshua Bengio, another noted AI scientist, puts those odds at 20%. Leike has been even more fatalistic in the past, putting his p(doom) (probability of doom), a score that runs from zero to 100, between 10 and 90.

“We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence],” Leike wrote. “We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

Read the complete thread here.

Altman responded on X, saying he was “super appreciative” of Leike’s contributions to the company’s safety culture. “He’s right,” Altman wrote. “We have a lot more to do; we are committed to doing it.” He added that he would follow up soon with a longer post.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡 https://t.co/t2yexKtQEk

— Sam Altman (@sama) May 17, 2024

Leike did not respond to queries asking him to expound further on his thoughts.

Leike’s comments, however, raise questions about the status of the pledge OpenAI made in July 2023 to dedicate 20% of its computational resources to superaligning its AI models as part of its quest to develop responsible AGI.

An AI system is considered “aligned” if it attempts to do the things humans ask of it. “Unaligned” AI attempts to do things outside of human control.

Leike ended his missive with a plea to his former coworkers, saying, “Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you. The world is counting on you.”

https://www.fastcompany.com/91127491/former-openai-leader-jan-leike-blasts-company-for-ignoring-safety-culture?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 17.05.2024, 21:40:08

