Former OpenAI leader blasts company for ignoring ‘safety culture’

Not all the departures from OpenAI have been on the best of terms. Jan Leike, coleader of the company’s superalignment team and one of a growing series of departures, left the company Wednesday and took to X to explain his decision. He has some harsh words for his former employer.

Leike said leaving OpenAI was “one of the hardest things I have ever done because we urgently need to figure out how to steer and control AI systems much smarter than us.” However, he said, he chose to depart the company because he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Leike left OpenAI within hours of the announcement that cofounder and chief scientist Ilya Sutskever was departing. Among Leike’s roles was ensuring the company’s AI systems aligned with human interests. (He had been named as one of Time magazine’s 100 most influential people in AI last year.)

In the lengthy thread, Leike accused OpenAI and its leaders of neglecting “safety culture and processes” in favor of “shiny products.” (Leike’s problems with CEO Sam Altman seemingly predate the board’s attempt to remove Altman from the company last November. While many employees objected to the board’s actions and signed an open letter threatening to leave the company and work with Altman elsewhere, Leike’s name was not among the signatories.)

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity.”

Bloomberg, on Friday, reported that OpenAI has dissolved the superalignment team, folding remaining members into broader research efforts at the company. Leike and Sutskever were the lead members of that team.

Fears over AI destroying humanity or the planet might seem like something pulled from Terminator, but Leike and other prominent AI scientists say the concept isn’t as absurd as it seems. Geoffrey Hinton, one of the most notable names in AI, says there’s a 10% chance AI will wipe out humanity in the next 20 years. Yoshua Bengio, another noted AI scientist, puts those odds at 20%. Leike has been even more pessimistic in the past, putting his p(doom), the probability of doom, expressed as a percentage from 0 to 100, somewhere between 10 and 90.

“We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence],” Leike wrote. “We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

Read the complete thread here.

Altman responded on X, saying he was “super appreciative” of Leike’s contributions to the company’s safety culture. “He’s right,” Altman replied. “We have a lot more to do; we are committed to doing it.” He added that he would follow up soon with a longer post.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡 https://t.co/t2yexKtQEk

— Sam Altman (@sama) May 17, 2024

Leike did not respond to queries asking him to elaborate on his comments.

Leike’s comments, however, raise questions about the status of the pledge OpenAI made in July of 2023 to dedicate 20% of its computational resources toward the effort to superalign its AI models as part of its quest to develop responsible AGI.

An AI system is considered “aligned” if it attempts to do what its human operators intend; an “unaligned” system pursues goals outside of human control.

Leike ended his missive with a plea to his former coworkers, saying, “Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you. The world is counting on you.”

https://www.fastcompany.com/91127491/former-openai-leader-jan-leike-blasts-company-for-ignoring-safety-culture?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 11mo | May 17, 2024, 21:40:08

