OpenAI cofounder Ilya Sutskever’s new AI startup is fundraising with a $30 billion valuation

A new artificial intelligence company from one of the cofounders of OpenAI is quickly becoming one of the most highly valued AI firms in an increasingly crowded marketplace. Ilya Sutskever’s Safe Superintelligence (SSI) is in the process of raising in excess of $1 billion with a valuation topping $30 billion. Bloomberg reports San Francisco-based Greenoaks Capital Partners is leading the deal and plans to invest $500 million itself. Greenoaks did not reply to a request for comment about the investment.

$30 billion might be well short of the $340 billion valuation OpenAI boasts, but it’s still well above many others in the space, including Perplexity, which has a $9 billion valuation. The new figure is significantly higher than SSI’s $5 billion valuation in its last round, held this past September, when it raised $1 billion from investors including Sequoia Capital and Andreessen Horowitz. 

SSI was founded by Ilya Sutskever, Daniel Gross, and Daniel Levy last June, just one month after Sutskever departed OpenAI. Very little is known about the company so far, aside from its stated goal of building . . . well, a “safe superintelligent” AI system. The company does not yet have a product on the market.

“We approach safety and capabilities in tandem as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company’s website reads. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. . . . We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

Ilya Sutskever, born in Russia but raised in Jerusalem, studied with AI pioneer Geoffrey Hinton, who has warned about the dangers of AI. A short stint at Google led to his meeting and ultimately working with cofounders Sam Altman, Greg Brockman, and Elon Musk, on the organization that would become OpenAI. (Musk would later call Sutskever the “linchpin” to OpenAI’s success.)

Sutskever was one of the board members who led the push to remove Altman from the CEO role at OpenAI for a short period at the end of 2023. Sutskever and Altman reportedly clashed over the pace at which generative AI was being commercialized.

Days after helping orchestrate the coup, Sutskever reversed course, signing onto an employee letter demanding Altman’s return and expressing regret for his “participation in the board’s actions.” He was removed from the board after Altman returned. (Sutskever isn’t the only OpenAI alum working on his own AI project. On Tuesday, former chief technology officer Mira Murati officially announced Thinking Machines Lab, her AI startup.)

When Sutskever left OpenAI, he posted on X that he was working on a new project “that is very personally meaningful to me about which I will share details in due time.”

Even with the subsequent announcement of SSI’s creation last June, those details remain scant. SSI and Sutskever have dropped a few hints, however, saying that they plan to create a single product with one focus and one goal. And SSI has made it clear that it intends to resist pressure from markets or investors to rush that product to release.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the website reads.

Sutskever is widely respected as one of the world’s top AI researchers, which makes this reported funding round less surprising (even if the company’s valuation is higher than expected). Still, he has eschewed the spotlight for much of his career, rarely giving interviews, though he speaks about AI’s potential for both good and harm when he does.

“AI is a great thing. It will solve all the problems that we have today. It will solve unemployment . . . disease . . . poverty,” he said in iHuman, a 2020 documentary from filmmaker Tonje Hessen Schei. “But it will also create new problems,” Sutskever continued. “The problem of fake news is going to be a million times worse. Cyberattacks will become much more extreme. We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships.”


https://www.fastcompany.com/91280395/openai-cofounder-ilya-sutskever-new-ai-startup-fundraising-30-billion-valuation

Published Feb 19, 2025

