Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse
OpenAI’s new boss is the same as the old boss. But the company, and the artificial intelligence industry, may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations and bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.
As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology that he himself was building, he portrayed OpenAI’s unusual for-profit-within-a-nonprofit structure as a firebreak against the irresponsible development of powerful AI. Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If, in the board’s view, he started acting dangerously or against the interests of humanity, the group could eject him. “The board can fire me, I think that’s important,” Altman told Bloomberg in June.
“It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University and a prominent voice among those who warn that AI could pose an existential risk to humanity.