OpenAI cofounder Ilya Sutskever’s new AI startup is fundraising with a $30 billion valuation

A new artificial intelligence company from one of the cofounders of OpenAI is quickly becoming one of the most highly valued AI firms in an increasingly crowded marketplace. Ilya Sutskever’s Safe Superintelligence (SSI) is in the process of raising in excess of $1 billion with a valuation topping $30 billion. Bloomberg reports San Francisco-based Greenoaks Capital Partners is leading the deal and plans to invest $500 million itself. Greenoaks did not reply to a request for comment about the investment.

A $30 billion valuation is well short of the $340 billion OpenAI boasts, but it's still well above many others in the space, including Perplexity, which has a $9 billion valuation. The new figure is significantly higher than SSI's $5 billion valuation in its last round, held this past September, when it raised $1 billion from investors including Sequoia Capital and Andreessen Horowitz.

SSI was founded by Ilya Sutskever, Daniel Gross, and Daniel Levy last June, just one month after Sutskever departed OpenAI. Very little is known about the company so far, aside from its stated goal of building . . . well, a “safe superintelligent” AI system. The company does not yet have a product on the market.

“We approach safety and capabilities in tandem as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company’s website reads. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. . . . We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

Ilya Sutskever, born in Russia but raised in Jerusalem, studied with AI pioneer Geoffrey Hinton, who has warned about the dangers of AI. A short stint at Google led to his meeting and ultimately working with cofounders Sam Altman, Greg Brockman, and Elon Musk on the organization that would become OpenAI. (Musk would later call Sutskever the "linchpin" of OpenAI's success.)

Sutskever was one of the board members who led the push to remove Altman from the CEO role at OpenAI for a short period at the end of 2023. Sutskever and Altman reportedly clashed over the pace at which generative AI was being commercialized.

Days after helping orchestrate the coup, Sutskever reversed course, signing onto an employee letter demanding Altman’s return and expressing regret for his “participation in the board’s actions.” He was removed from the board after Altman returned. (Sutskever isn’t the only OpenAI alum working on his own AI project. On Tuesday, former chief technology officer Mira Murati officially announced Thinking Machines Lab, her AI startup.)

When Sutskever left OpenAI, he posted on X that he was working on a new project “that is very personally meaningful to me about which I will share details in due time.”

Even with the subsequent announcement about SSI’s creation last June, those details remain scant. SSI and Sutskever have dropped a few hints, however, saying that they plan on creating a single product with one focus and one goal. And SSI has made it clear that it plans to ignore pressure from markets or investors to release its product.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the website reads.

Sutskever is widely respected as one of the world's top AI researchers, which makes this possible funding round less surprising (even if the company's valuation is higher than expected). Even so, he has eschewed the spotlight for much of his career, giving few interviews, though when he does speak publicly he addresses AI's potential for both good and ill.

“AI is a great thing. It will solve all the problems that we have today. It will solve unemployment . . . disease . . . poverty,” he said in iHuman, a 2020 documentary from filmmaker Tonje Hessen Schei. “But it will also create new problems,” Sutskever continued. “The problem of fake news is going to be a million times worse. Cyberattacks will become much more extreme. We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships.”

