OpenAI's leaders nearly blew up their $80 billion company. But was the issue they were reportedly arguing over really the right one?


By Chris Stokel-Walker

On Tuesday night, OpenAI announced that it had reached an agreement in principle for cofounder and recently ousted CEO Sam Altman to return to the company. As part of the arrangement, the $80 billion company will also have a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo, the last of whom is the sole holdover from the previous board. Former president Greg Brockman, who quit in protest of Altman’s firing, will also return to his old post. 

That seemingly puts a lid on one of the wilder corporate dramas in Silicon Valley history: After the board got rid of Altman on Friday, the departed CEO (alongside Brockman) wound up agreeing to a new role heading up AI at Microsoft (which has invested $13 billion in OpenAI). In response to the firing, 700-plus OpenAI employees (more than 95% of the company) threatened to quit unless Altman was reinstated and the board replaced. (Bizarrely, one of the signatories was cofounder and chief scientist Ilya Sutskever, who was on the original board and supported Altman’s exit.) And now here we are, with Altman back at the helm of OpenAI and a seemingly more supportive board behind him.

The exact reasons behind OpenAI’s self-immolation this weekend still aren’t clear—even now that Altman is coming back. But they’re thought to revolve around a dispute between Altman and Sutskever about the future direction of the artificial intelligence startup. Altman has reportedly been criticized by some within OpenAI for trying to push the company’s development of AI tech too quickly. Specifically, there’s a push to achieve artificial general intelligence, or AGI, which refers to AI that can learn to accomplish any intellectual task that human beings can perform.

It’s worth noting that Sutskever leads OpenAI’s superalignment team, which was created to stop AI from going rogue and ignoring human orders. And it’s perhaps telling that Altman’s replacement for around 36 hours was former Twitch CEO Emmett Shear, a vocal advocate for cautious AI development. (Shear himself replaced Mira Murati, OpenAI’s chief technology officer, whose stint as interim CEO lasted an even shorter time.)

So, while the OpenAI board remains silent about the real reason for the fissure within the firm, all signs point toward a schism over the perceived risks of superintelligence.

Which much of the public, and many researchers, don’t believe is a realistic risk at all.

“It’s hard to see how this leads to a change about bias, toxicity, hallucinations, ChatGPT causing headaches for teachers, stealing writers’ work, and the general aspiration of this tech as something for bosses to replace workers with,” says Willie Agnew, a researcher at the University of Washington studying AI ethics and critical AI. All of these are proven problems already plaguing OpenAI.


Campaigns tied to the Effective Altruism movement, whose adherents believe the risk of runaway AI is all too real, have captured the public’s attention. The Future of Life Institute made headlines in March 2023 for its open letter calling for a pause in AI development. A subsequent May 2023 statement organized by another group, the Center for AI Safety, warned that the risk of extinction from AI should be treated as a global priority alongside pandemics and nuclear war. Yet there’s no reliable data suggesting a consensus among researchers that AI is indeed an existential threat. One ballyhooed survey of AI researchers had a 4% response rate for a question about AI extinction.

A recent survey suggests 66% of Americans think it’s highly unlikely AI will cause the end of the human race in the next decade. The public is ten times more likely to think the world will end in a nuclear war, twice as likely to think humanity will be struck down by a pandemic, and four times as likely to believe climate change will end humanity.

“For every other existential risk we consider—nuclear weapons, pandemics, global warming, asteroid strikes—we have some evidence that it can happen and an idea of the mechanism,” says Julian Togelius, an associate professor in AI at New York University. “For superintelligence we have none of that.” Togelius adds that “the idea of AGI is not even conceptually coherent.”

Sasha Luccioni, a researcher in ethical and sustainable AI at the company Hugging Face, echoes that sentiment. “If AGI is really the issue behind all of this drama, then OpenAI definitely has more pressing concerns.” Luccioni is worried that people are relying on ChatGPT to assist in fields like mental health therapy—a job for which it’s wholly unqualified—and fears the impact of a torrent of AI-generated disinformation that could poison the well ahead of upcoming elections. “These are all things that should be addressed now, instead of the theoretical threat of AGI, which is wholly hypothetical.”

Perhaps, then, the issue isn’t whether Altman was pursuing AGI at a pace that worried the board but rather that OpenAI has already moved too quickly with ChatGPT—and the result is staring us in the face.

Fast Company
