Why OpenAI needs Microsoft on its board

The OpenAI management saga, which began two weeks ago when the organization’s board unexpectedly fired CEO Sam Altman, and which has since featured a series of Succession-worthy twists and turns, seems to be nearing an end. It’s still not entirely clear why Altman was fired in the first place. (Many accounts suggested the board was concerned that Altman was being reckless in his race to commercialize OpenAI’s artificial-intelligence technology, but board members and the company’s interim CEO have said that was not the problem.) But what is clear is that he has emerged on top: He was restored as CEO after 95% of the company’s employees threatened to quit, and in a memo to employees last week wrote that he is “so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.”

Three of the four board members who voted to oust Altman, meanwhile, are gone, with the new board currently consisting of just three people—chair Bret Taylor, Larry Summers, and holdover Adam D’Angelo. And those three are, according to a memo from Taylor, planning to re-make the governance structures at OpenAI, including expanding the board and, most strikingly, giving Microsoft—OpenAI’s most important partner and owner of a 49% stake in the organization’s profit-making subsidiary—a seat as a “non-voting observer.”

The inclusion of Microsoft on the board, even as a non-voting observer, is a dramatic change for OpenAI. It was founded in 2015 as a nonprofit devoted to the creation of artificial general intelligence for the benefit of “humanity as a whole.” And though it started that profit-making subsidiary in 2019 to accelerate the work of developing AI, OpenAI’s mission statement was explicit about the fact that its fiduciary duty is not to investors but rather “to humanity.” In other words, its mission was to develop AGI, but only if it could be done safely. That’s why the subsidiary was put under the complete control of the nonprofit, which was run by a board of independent directors that included no outside investors or partners who might put commercial interests ahead of the organization’s mission.

Given that, it’s easy to see the decision to add Microsoft to the board as a sign that, ultimately, the money men have won, and that, in the development of AI, commercial imperatives will now trump any safety concerns. But there’s another way of looking at the move, namely that it’s remedying a flaw in the way OpenAI was set up. Not having Microsoft or other investors represented on the board may have seemed like a logical way to insulate the organization from commercial pressure. But over time, it actually made it harder for the organization to fulfill its mission.

Why? The simple answer is that, at this point, OpenAI can’t stop AI development; it can only guide it. If, for instance, it decided to shut down the profit-making subsidiary because it was being reckless in its approach, that would not make the problem go away. Instead, what would happen is what nearly happened when Altman was fired: The staff and technology would simply migrate elsewhere, either to startups or to Microsoft, where the only fiduciary duty managers have is to the bottom line, and there is no mission statement that requires people to take safety into account.

In other words, shutting the company down would not make the future development of AI safer; by throwing that development entirely into the commercial realm, it might well make it more dangerous. And since commercial development is inevitable, what OpenAI needs is a board that isn’t indifferent to it, but rather one that strikes a balance between the speed and scope of development and the demands of safety. And, paradoxically, a board that included investors and corporate partners would be more likely to strike that balance.

That’s not just because the nonprofit would have a better understanding of what partners like Microsoft want. It’s also because Microsoft would get a better sense of the nonprofit’s safety concerns. That doesn’t mean it would always take those concerns seriously, but at least it would be aware of them.

Of course, there’s no guarantee that OpenAI’s new board will find a way to balance development and safety. (In fact, it may be that, in the absence of government regulation, that balance is impossible to strike.) But what it should be able to do, at least, is avoid the problem OpenAI just faced, namely the risk of blowing the organization up and having Microsoft and others ready to swoop in and pick up the pieces. That would be a much worse outcome, from the perspective of OpenAI’s mission, than keeping the organization intact, even if it means Microsoft exerts more influence. As Michael Corleone put it, keep your friends close, but your enemies—or, in this case, your frenemies—closer.

https://www.fastcompany.com/90987753/why-openai-needs-microsoft-on-its-board?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y | Dec. 4, 2023, 13:40:06

