Sam Altman: You should not trust Sam Altman

By Mark Sullivan

Large generative AI models may be the biggest technology revolution in history, and the leader of the biggest developer of such models may be among the most powerful people in the world. That leader, OpenAI CEO Sam Altman, spoke at the Bloomberg Technology Summit in San Francisco Thursday morning, and he brought his characteristic quotability with him.

Below, we’ve rounded up seven of his most notable quotes from the event:

On existential threat posed by future AI:

“We are on an exponential curve, and a relatively steep one, and human intuition for exponential curves is really bad. In general, it clearly was not that important in our evolutionary history. And so, given that we all have that weakness, I think we have to really push ourselves to say, Okay, GPT-4 [is] not a risk like you’re talking about there, but how sure are we that GPT-9 won’t be? And if it might be, even if there’s a small percentage chance of it being really bad, like that deserves attention.”

On why we shouldn’t just stop developing such potentially dangerous technology:

“I think that the upsides here are tremendous, that you know opportunity for everyone on Earth to have a better quality education than basically anyone can get today. . . . Medical care, and making that available truly globally. That’s going to be transformative. The scientific progress we’re going to see—I’m a big believer that, like, real sustainable improvements in quality of life come from scientific technological progress . . . I think it’d be good to end poverty. Maybe you think it’d be good to stop a technology that can do that; I personally don’t.”

On his call for regulations on AI, and charges that OpenAI only wants regulations in order to protect its current leadership position:

“We . . . don’t think small startups and open-source models below a certain very high capability threshold should be subject to a lot of regulation. We’ve seen what happens to countries that try to overregulate tech. I don’t think that’s what we want here.”

“You could, like, point out that we’re trying to do regulatory capture here or whatever, but I just think that’s [a] transparently, intellectually dishonest response.” 

“I think that people training models that are way above any model scale that we have today, but above some certain capability threshold, I think you should need to go through a certification process for that. I think there should be external audits and safety tests. We do this for, like, lots of other industries where we care about safety.” 

On working together with China on safety rules for AI development:

“I think this thing that often gets said in the U.S., which is like, It’s impossible to cooperate with China, it’s just totally off the table, is asserted as fact, and people are trying to will it into existence; but it’s not clear to me that it’s true. I suspect it’s not.” 

On bias in OpenAI’s models:

“There was a recent study that GPT-4, the model that is released, is less biased on implicit-bias tests than humans, even humans who think they’ve really trained themselves to be good at this. . . . If you look at the progress from model to model, even some of our biggest critics are like, wow, they’ve gotten quite a lot of the bias out of the model. So I think models like this can be a force for reducing bias in the world, not for enhancing it.”

On owning no equity in OpenAI, and his own personal motivations:

“One of the takeaways I’ve learned is that this concept of having enough money is not an idea that’s easy to get across to people. I have enough money. What I want more of is, like, an interesting life, impact, access to be in the conversation. So I still get a lot of selfish benefit from this. What else am I going to do with my time? This is really great. I cannot imagine a more interesting life than this one and a more interesting thing to work on.”

On whether the world should trust Sam Altman:

“You shouldn’t. No one person should be trusted here. I don’t have super voting shares. The board can fire me. I think that’s important. [We] think this technology, the benefits, the access to it, the governance of it, belongs to humanity as a whole. If this really works, it’s quite a powerful technology and you should not trust one company and certainly not one person.” 

Fast Company