Sam Altman offers clues about where OpenAI is headed

On Sunday, OpenAI cofounder and CEO Sam Altman published a blog post titled “Reflections” about his company’s progress, and the speed bumps along the way, during its first nine years. Altman’s words are important because OpenAI has a very good chance of being first to reach AGI, or artificial general intelligence (machines that are generally as smart as or smarter than humans), and then of progressing toward superintelligent systems (which are far smarter than humans). These systems, when applied in the real world, could affect all of us in profound ways. Read alongside some added context, Altman’s comments shed a bit more light on where that transition is headed.

First off, the blog post was spurred by an interview Altman recently did with Bloomberg. According to Bloomberg, the OpenAI PR team suggested an interview in which Altman would “review the past two years, reflect on some events and decisions, to clarify a few things.”

Sam Altman says OpenAI has shifted to a “next paradigm” of models

Altman appears to be referring to the new o1 and o3 models, which take a different approach to intelligence than the earlier models that power ChatGPT. Those earlier models relied on massive amounts of training data and computing power during pre-training. But o1 and o3 apply more computing power at “inference time” (or “test time”), when the model is actually working on a complex problem for a user.
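
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of one generic way to spend extra compute at inference time: sampling several candidate answers from an already-trained model and keeping the best-scoring one. The toy_model and toy_scorer functions are hypothetical stand-ins, and OpenAI has not said that o1 or o3 work this way.

    import random

    def toy_model(prompt):
        # Hypothetical stand-in for a call to an already-trained language model.
        return f"candidate answer #{random.randint(0, 9)}"

    def toy_scorer(prompt, answer):
        # Hypothetical stand-in for a verifier that rates a candidate answer.
        return random.random()

    def answer_with_inference_time_compute(prompt, n_samples=8):
        # "More inference-time compute" here just means drawing more samples from
        # the same trained model and returning the one the scorer likes best.
        candidates = [toy_model(prompt) for _ in range(n_samples)]
        return max(candidates, key=lambda c: toy_scorer(prompt, c))

    # Doubling n_samples doubles the compute spent at inference time, with no new training.
    print(answer_with_inference_time_compute("Work through this puzzle...", n_samples=16))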

How ChatGPT came about

Sam Altman describes the run-up to the event that changed everything for OpenAI: the public launch of the ChatGPT chatbot on November 30, 2022. “We had been watching people use the playground feature of our API and knew that developers were really enjoying talking to the model,” Altman writes. “We thought building a demo around that experience would show people something important about the future and help us make our models better and safer.” The playground feature he refers to was at the time called “Chat With GPT-3.5.” He tells Bloomberg’s Josh Tyrangiel in a new interview that “the rest of the company was like, ‘Why are you making us launch this? It’s a bad decision. It’s not ready.’ I don’t make a lot of ‘we’re gonna do this thing’ decisions, but this was one of them.”

Altman sees the world through the eyes of an entrepreneur

The first effect of the explosion of ChatGPT that Altman mentions is, interestingly, about growth and financial reward. “The launch of ChatGPT kicked off a growth curve like nothing we have ever seen . . . We are finally seeing some of the massive upside . . . ” Altman studied computer science, including AI, as an undergraduate, but he is not an AI researcher. He’s spent most of his career as an expert in funding and growing technology startup companies. He was president of Y Combinator, the prestigious startup accelerator, from 2014 to 2019.

Altman puts some context around his November 2023 firing

After Altman’s surprise firing, the OpenAI board of directors cited trust issues and concerns over the CEO’s handling of AI safety measures. Board member Helen Toner (an AI safety expert) said Altman gave inaccurate information about safety processes, and did not inform the board before launching ChatGPT. (Employees and VCs with financial interest in the company revolted and Altman was quickly reinstated as CEO.) Altman says the turmoil was partly the result of rapid change happening within the company at the time. “We had to build an entire company almost from scratch (around ChatGPT) . . . ” he writes. “Moving at speed in uncharted waters is an incredible experience, but it is also immensely stressful for all the players . . . conflicts and misunderstanding abound . . . ” He adds that the last two years have been the most “unpleasant years of my life so far.”

Altman says the old board, and he himself, were to blame

Altman calls the members of the former board, which included OpenAI cofounder and AI mastermind Ilya Sutskever, well-meaning, and takes responsibility for the November 2023 blowup. But he also implies that the former board lacked the perspective to govern a company with OpenAI’s unique technology, challenges, and goals. “The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included . . . I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges . . . ”

Some new color on Altman’s reinstatement as CEO

The longest part of the blog post is a footnote about legendary investor Ron Conway and Airbnb cofounder Brian Chesky, both of whom are longtime friends of Altman. The full story of what went on behind the scenes after Altman was fired has never been thoroughly reported. But Altman suggests that Conway and Chesky may have done more than just “support and advise.”

“I am reasonably confident OpenAI would have fallen apart without their help . . . ” Altman writes. “They used their vast networks for everything needed and were able to navigate many complex situations.” Conway and Chesky may have played a role in rallying OpenAI employees and investors around Altman, and against the board that fired him.

Sam Altman tries to explain the “brain drain” at OpenAI

This is perhaps the second major issue OpenAI hoped to address in the Bloomberg interview: the growing number of smart people who have left OpenAI over the past year, including then-CTO Mira Murati and cofounder Ilya Sutskever. “Teams tend to turn over as they scale, and OpenAI scales really fast . . . At OpenAI numbers go up by orders of magnitude every few months,” Altman writes. “When any company grows and evolves so fast, interests naturally diverge.” Altman suggests that researchers will naturally depart as the company’s research priorities shift. There’s truth in that. And OpenAI’s research priorities did hit a big bend in the road during 2024 with the o1 models.

Why new products and growth are so important to OpenAI

When the main thrust of AI research is figuring out how to apply more computing power to AI models, being an AI startup is an extremely capital-intensive business. OpenAI’s founders didn’t see that coming, Altman says. OpenAI and its investors are already spending billions on computing power to train and operate frontier AI models. They’re spending more on acquiring new training data. In the future, OpenAI’s pursuit of superintelligence will require much bigger server clusters, and more capital expense to find and buy the electricity needed to power them. “There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now,” Altman writes.

OpenAI believes it knows how to build AGI

Altman suggests that his company has either already developed systems that can be described as AGI, or that such systems are squarely within its sights. He’s referring to “agentic” systems that can reason through complex tasks and control external systems. It’s important to note, however, that OpenAI changed its definition of AGI in 2018. Originally, the company defined it as a system with the learning and reasoning powers of a human mind. Now its charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work . . . ”    

OpenAI’s next frontier is superintelligence

“Superintelligence” means systems capable of far greater intelligence than humans in a broad array of fields. While AGI could make a big difference in terms of human productivity, superintelligence could bring answers to problems that humans currently can’t solve (curing cancer, for example). “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” Altman writes. But it would also mean the beginning of an era in which humans are no longer the smartest entities in our environment.

https://www.fastcompany.com/91255808/sam-altman-offers-clues-about-where-openai-is-headed?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
