On Thursday, I had dinner with Sam Altman, a few other OpenAI executives, and a small group of reporters in San Francisco. Altman answered our questions for hours. No topic was off limits, and everything, with the exception of what was said over dessert, was on the record.
It’s uncommon to have such an extended, wide-ranging interview with a major tech CEO over a meal. But there’s nothing common about the situation Altman finds himself in. ChatGPT has quickly become one of the most widely used, influential products on earth. Now, Altman is plotting an aggressive expansion into consumer hardware, brain-computer interfaces, and social media. He’s interested in buying Chrome if the US government forces Google to sell it. Oh, and he wants to raise trillions of dollars to build data centers.
But first, he’s focused on the response to last week’s rollout of GPT-5. About an hour before the dinner started, OpenAI pushed an update to bring back the “warmth” of 4o, its previous default model for ChatGPT. It was Altman who made the call to quickly bring back 4o as an option for paying subscribers after some protested its disappearance on Reddit and X.
“I think we totally screwed up some things on the rollout,” he said. “On the other hand, our API traffic doubled in 48 hours and is growing. We’re out of GPUs. ChatGPT has been hitting a new high of users every day. A lot of users really do love the model switcher. I think we’ve learned a lesson about what it means to upgrade a product for hundreds of millions of people in one day.”
He pegged the percentage of ChatGPT users who have unhealthy relationships with the product at “way under 1 percent,” but acknowledged that OpenAI employees are having “a lot” of meetings about the topic. “There are the people who actually felt like they had a relationship with ChatGPT, and those people we’ve been aware of and thinking about. And then there are hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way, and would validate certain things, and would be supportive in certain ways.”
“You will definitely see some companies go make Japanese anime sex bots because they think that they’ve identified something here that works,” he said in a not-so-subtle dig at Grok. “You will not see us do that. We will continue to work hard at making a useful app, and we will try to let users use it the way they want, but not so much that people who have really fragile mental states get exploited accidentally.”
Altman wants ChatGPT to feel as personal as possible but not necessarily play to a specific ideology or political view. "I don’t think our products should be woke. I don’t think they should be whatever the opposite of that is, either. I think our product should have a fairly center of the road, middle stance, and then you should be able to push it pretty far. If you’re like, ‘I want you to be super woke,’ it should be super woke. And if you’re like, ‘I want you to be conservative,’ it should reflect you."
ChatGPT has roughly quadrupled its user base in a year and is now reaching over 700 million people each week. “Pretty soon, billions of people a day will be talking to ChatGPT,” Altman said. “We’re the fifth biggest website in the world right now. I think we’re on the clear path to the third.” (That means beating Instagram and Facebook.) “Then it gets harder. For ChatGPT to be bigger than Google, that’s really hard.”
To keep scaling, OpenAI needs a lot more GPUs. This is one of Altman’s top priorities. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” he confidently told the room.
“We have to make these horrible trade-offs right now,” he said. “We have better models, and we just can’t offer them because we don’t have the capacity. We have other kinds of new products and services we’d love to offer.”
He also thinks we’re in an AI bubble. “When bubbles happen, smart people get overexcited about a kernel of truth,” he explained. “If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited. Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes.”
He confirmed recent reports that OpenAI is planning to fund a brain-computer interface startup to rival Elon Musk’s Neuralink. “I think neural interfaces are cool ideas to explore. I would like to be able to think something and have ChatGPT respond to it.”
Does Fidji Simo joining OpenAI to run “applications” imply there will be other standalone apps besides ChatGPT? “Yes, you should expect that from us.” He hinted at his social media ambitions: “I am interested in whether or not it is possible to build a much cooler kind of social experience with AI.” He also said, “If Chrome is really going to sell, we should take a look at it.”
While Altman has a lot of interests, it’s not actually clear that running OpenAI over the long run is one of them. “I’m not a naturally well-suited person to be a public company CEO,” he said at one point. “Can you imagine me on an earnings call?”
I then asked if he would be CEO in a few years. “I mean, maybe an AI is in three years. That’s a long time.”
Here are some other things Altman said:
- Making GPT-5: “We had this big GPU crunch. We could go make another giant model. We could go make that, and a lot of people would want to use it, and we would disappoint them. And so we said, let’s make a really smart, really useful model, but also let’s try to optimize for inference cost. And I think we did a great job with that.”
- OpenAI’s AI device with Jony Ive: “It’s going to take us a while, but I think you will think it is very worth the wait. I think it is incredible. You don’t get a new computing paradigm very often. There have been like only two in the last 50 years. So just let yourself be happy and surprised. It really is worth the wait.”
- The future of the web and publishers: “I do think people will go to fewer websites. I think people will care more about human-crafted content than ever. My directional bet would be that human-created, human-endorsed, human-curated content all goes up in value dramatically.”
- What AGI means: “Maybe the milestone that’s most relevant to us is when most of our research cluster is allocated to the AI researcher instead of the human researchers. But I don’t think that’s going to be so binary, because I think it’ll feel like people get a little more help and a little more help and a little more help.”
- “If we didn’t pay for training, we’d be a very profitable company.”
- “I don’t use Google anymore. I legitimately cannot tell you the last time I did a Google search.”
Interesting career moves this week:
- I suppose I should start asking, “Are you about to quit your job?” in interviews. A week after my Decoder episode with him was published, GitHub CEO Thomas Dohmke announced he was leaving for startup life. The real story here is that GitHub is now less independent from the rest of Microsoft; its absorption into Jay Parikh’s new Core AI team suggests that commercial interests have officially overtaken GitHub’s open-source, Switzerland-for-coding ethos.
- Igor Babuschkin, the co-founder and de facto head of Elon Musk’s xAI, announced that he’s leaving after two years to launch an investing firm focused on AI safety. (I’m sure recent Grok headlines had nothing to do with the timing of this news.)
- Alexandr Wang added more OpenAI researchers to his new AI lab at Meta: Hyung Won Chung, Jason Wei, and Zhiqing Sun.
- Anthropic added Dave Orr, a Google veteran who most recently ran safety for Gemini, as its head of safety. It also hired Jordan Burgess and his team at the startup Humanloop, which has been working on LLM evaluations for businesses.
- Joelle Pineau, Meta’s former head of AI research, joined Cohere as chief AI officer.
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I welcome your feedback. You can respond here or ping me securely on Signal.