
Sam Altman
You know that 1.4 trillion you mentioned? We'll spend it over a very long period of time. I wish we could do it faster.
Big Technology Podcast Host
I think it would be great to just lay it out for everyone once and for all, how those numbers are going to work.
Sam Altman
Exponential growth is usually very hard for people.
Big Technology Podcast Host
OpenAI CEO Sam Altman joins us to talk about OpenAI's plan to win as the AI race tightens, how the infrastructure math makes sense, and when an OpenAI IPO might be coming. And Sam is with us here in studio today. Sam, welcome to the show.
Sam Altman
Thanks for having me.
Big Technology Podcast Host
So OpenAI is 10 years old.
Sam Altman
It's crazy to me.
Big Technology Podcast Host
ChatGPT is three. But the competition is intensifying. This place, OpenAI headquarters, where we are, is in a Code Red after Gemini 3 came out. And everywhere you look, there are companies that are trying to take a little bit of OpenAI's advantage. And for the first time I can remember, it doesn't seem like this company has a clear lead. So I'm curious to hear your perspective on how OpenAI will emerge from this moment and win.
Sam Altman
First of all, on the Code Red point, we view those as, like, relatively low-stakes, somewhat frequent things to do. I think it's good to be paranoid and act quickly when a potential competitive threat emerges. This has happened to us in the past; it happened earlier this year with DeepSeek.
Big Technology Podcast Host
And there was a Code Red back then, too.
Sam Altman
Yeah, there's a saying about pandemics, which is something like: when a pandemic starts, every bit of action you take at the beginning is worth much more than action you take later. Most people don't do enough early on and then panic later, and we certainly saw that during the COVID pandemic. I sort of think of that philosophy as how we respond to competitive threats. I think it's good to be a little paranoid. Gemini 3 has not, or at least has not so far, had the impact we were worried it might, but it did, in the same way that DeepSeek did, identify some weaknesses in our product offering strategy. And we're addressing those very quickly. I don't think we'll be in this Code Red that much longer. Historically, these have been kind of like six- or eight-week things for us, but I'm glad we're doing it. Just today we launched a new ImageGen model, which is a great thing, and that's something consumers really wanted. Last week, we launched 5.2, which is going over extremely well and growing very quickly. We'll have a few other things to launch, and then we'll also have some continuous improvements, like speeding up the service. My guess is we'll be doing these once, maybe twice a year for a long time. And that's part of really just making sure that we win in our space. A lot of other companies will do great too, and I'm happy for them. But ChatGPT is still by far, by far the dominant chatbot in the market, and I expect that lead to increase, not decrease, over time. The models will get good everywhere, but a lot of the reasons that people use a product, consumer or enterprise, have much more to do with than just the model. And we've been expecting this for a while, so we try to build the whole cohesive set of things that it takes to make sure that we are the product that people most want to use. I think competition is good; it pushes us to be better.
But I think we'll do great in chat. I think we'll do great in enterprise, and in future years, in other new categories, I expect we'll do great there too. People really want to use one AI platform. People use their phone in their personal life and they want to use the same kind of phone at work, and most of the time we're seeing the same thing with AI. The strength of ChatGPT consumer is really helping us win the enterprise. Of course, enterprises need different offerings, but people think, okay, I know this company OpenAI and I know how to use this ChatGPT interface. So the strategy is: make the best models, build the best product around them, and have enough infrastructure to serve it at scale.
Big Technology Podcast Host
Yeah, and there is an incumbent advantage. ChatGPT, I think, was around 400 million weekly active users earlier this year. Now it's at 800 million, and reports say approaching 900 million. But then on the other side, you have distribution advantages at places like Google. So I'm curious to hear your perspective: do you think the models are going to commoditize? And if they do, what matters most? Is it distribution? Is it how well you build your applications? Is it something else that I'm not thinking of?
Sam Altman
I don't think commoditization is quite the right framework to think about the models. There will be areas where different models excel at different things. For the kind of normal use cases of chatting with a model, maybe there will be a lot of great options. For scientific discovery, you will want the thing that is right at the edge, that is optimized for science, perhaps. So models will have different strengths, and the most economic value, I think, will be created by models at the frontier, and we plan to be ahead there. We're very proud that 5.2 is the best reasoning model in the world and the one that scientists are making the most progress with. But also we're very proud that it's what enterprises are saying is the best at all the tasks that a business needs to do its work. There will be times that we're ahead in some areas and behind in others. But the overall most intelligent model, I expect to have significant value, even in a world where free models can do a lot of the stuff that people need.
The products will really matter. Distribution and brand, as you said, will really matter. In ChatGPT, for example, personalization is extremely sticky. People love the fact that the model gets to know them over time, and you'll see us push on that much more. People have experiences with these models that they then really associate with them. I remember someone telling me once: you kind of pick a toothpaste once in your life and buy it forever. Most people do that, apparently. And people talk about it; they have one magical experience with ChatGPT. Healthcare is a famous example, where people put a blood test into ChatGPT, or put their symptoms in, and they figure out they have something, and they go to a doctor and get cured of something they couldn't figure out before. Those users are very sticky, to say nothing of the personalization on top of it. There will be all the product stuff. We just launched our browser recently, and I think that's pointing at a new, pretty good potential moat for us. The devices are further off, but I'm very excited to do that. So I think there'll be all these pieces. And on the enterprise side, what creates the moat or the competitive advantage, I expect to be a little bit different. But in the same way that personalization to a user is very important in consumer, there will be a similar concept of personalization to an enterprise, where a company will have a relationship with a company like ours, and they will connect their data to it, and you'll be able to use a bunch of agents from different companies running on that, and it'll kind of make sure that information is handled the right way. I expect that'll be pretty sticky too. We already have more than a million. People think of us largely as a consumer company, but we are going to.
Big Technology Podcast Host
Definitely get into enterprise.
Sam Altman
Yeah. You know, like, share the stat, why not?
Big Technology Podcast Host
Actually, a million?
Sam Altman
We have more than a million enterprise users, and we have just absolutely rapid adoption of the API. The API business grew faster for us this year than even ChatGPT, really. So the enterprise stuff is also, you know, really happening, starting this year.
Big Technology Podcast Host
Can I just go back to this? Maybe if commoditization is not the right word, maybe parity for everyday users, because you started off your answer saying, okay, maybe everyday use will feel the same, but at the frontier it's going to feel really different. When it comes to ChatGPT's ability to grow, I'll just use Google as an example: if ChatGPT and Gemini are built on models that feel similar for everyday uses, how big of a threat is the fact that Google has all these surfaces through which it can push out Gemini, whereas ChatGPT is fighting for every new user?
Sam Altman
I think Google is still a huge threat, an extremely powerful company. If Google had really decided to take us seriously in 2023, let's say, we would have been in a really bad place. I think they would have just been able to smash us. But their AI effort at the time was kind of going in not quite the right direction, product-wise. They had their own code red at one point, but they didn't take it that seriously. Everyone's doing code reds out here. And Google has probably the greatest business model in the whole tech industry, and I think they will be slow to give that up. But bolting AI onto web search, and I may be wrong, maybe I'm drinking the Kool-Aid here, I don't think that'll work as well as reimagining the whole thing. This is actually a broader trend I think is interesting: bolting AI onto the existing way of doing things, I don't think, is going to work as well as redesigning stuff in this sort of AI-first world. That's part of why we wanted to do the consumer devices in the first place. But it applies at many other levels. If you stick AI into a messaging app so it's doing a nice job summarizing your messages and drafting responses for you, that is definitely a little better. But I don't think that's the end state. That is not the idea of: you have this really smart AI that is acting as your agent, talking to everybody else's agent, and figuring out when to bother you, when not to bother you, what decisions it can handle, and when it needs to ask you. Similar things for search, similar things for productivity suites. It always takes longer than you think, but I suspect we will see new products in the major categories that are just totally built around AI, rather than AI bolted in. And I think this is a weakness of Google's, even though they have this huge distribution advantage.
Big Technology Podcast Host
Yeah, I've spoken with so many people about this question. When ChatGPT came out, initially I think it was Benedict Evans who suggested you might not want to put AI in Excel; you might want to just reimagine how you use Excel. And in my mind, that was like: you upload your numbers and then you talk to your numbers. But one of the things people have found as they've developed this stuff is there needs to be some sort of backend there. So is it that you build the backend and then you interact with it, with AI, as if it's a new software program?
Sam Altman
Yeah, that's kind of what's happening.
Big Technology Podcast Host
Why wouldn't you then be able to just bolt it on top?
Sam Altman
I mean, you can bolt it on top. I spend a lot of my day in various messaging apps, including email, text, Slack, whatever. I think that's just the wrong interface. So you can bolt AI on top of those, and again, it's a little bit better. But what I would rather do is just have the ability to say in the morning: here are the things I want to get done today, here's what I'm worried about, here's what I'm thinking about, here's what I'd like to happen. I do not want to spend all day messaging people. I do not want you to summarize them. I do not want you to show me a bunch of drafts. Deal with everything you can. You know me, you know these people, you know what I want to get done, and then batch updates to me every couple of hours if you need something. But that's a very different flow than the way these apps work right now.
Big Technology Podcast Host
Yep. And I was going to ask you what ChatGPT is going to look like in the next year, and then the next two years. Is that kind of where it's going to be?
Sam Altman
To be perfectly honest, I expected that by this point ChatGPT would look more different than it did at launch.
Big Technology Podcast Host
What did you anticipate? I'd love to know.
Sam Altman
I just thought that chat interface was not going to go as far as it turned out to go. I mean, it looks better now, but it is broadly similar to when it was put up as a research preview. It was not even meant to be a product. We knew that the text interface was very good; everyone's used to texting their friends and they like it. The chat interface was very good. But I would have thought that for a product as big and as significantly used for real work as what we have now, the interface would have had to go much further than it has. I still think it should do that, but there is something about the generality of the current interface whose power I underestimated. What I think should happen, of course, is that AI should be able to generate different kinds of interfaces for different kinds of tasks. So if you are talking about your numbers, it should be able to show you that in different ways, and you should be able to interact with it in different ways. We have a little bit of this with features like Canvas. It should be way more interactive. Right now, it's kind of a back-and-forth conversation. It'd be nice if you could just be talking about an object and it could be continuously updating as you have more questions, more thoughts, as more information comes in. It'd be nice for it to be more proactive over time, where it maybe does understand what you want to get done that day and is continuously working for you in the background and sending you updates. You see part of this with the way people are using Codex, which I think is one of the most exciting things that happened this year: Codex got really good. And that points to a lot of what I hope the shape of the future looks like. But it is surprising to me. I was going to say embarrassing, but it's not; clearly it's been super successful. It is surprising to me how little ChatGPT has changed over the last three years.
Big Technology Podcast Host
Yep, the interface works.
Sam Altman
Yeah.
Big Technology Podcast Host
But I guess the guts have changed. And you talked a little bit about how personalization is big. To me, and I think this has been one of your preferred features too, memory has been a real difference maker. I've been having a conversation with ChatGPT about a forthcoming trip that has lots of planning elements for weeks now. I can just come in, in a new window, and be like, all right, let's pick up on this trip, and it has the context: it knows the guide I'm going with, it knows what I'm doing, the fact that I've been planning fitness for it, and it can really synthesize all of those things. How good can memory get?
Sam Altman
I think we have no conception, because of the human limit. Even if you have the world's best personal assistant, they can't remember every word you've ever said in your life. They can't have read every email. They can't have read every document you've ever written. They can't be looking at all your work every day and remembering every little detail. They can't be a participant in your life to that degree. No human has infinite, perfect memory, and AI is definitely going to be able to do that. We actually talk a lot about this right now. Memory is still very crude, very early; we're in the GPT-2 era of memory. But think about what it's going to be like when it really does remember every detail of your entire life and personalizes across all of that, and not just the facts, but the little small preferences that you had, that you maybe didn't even think to indicate, but that the AI can pick up on. I think that's going to be super powerful. That's maybe not a 2026 thing, but it's one of the parts of this I'm most excited for.
Big Technology Podcast Host
Yeah. I was speaking with a neuroscientist on the show, and he mentioned that you can't find thoughts in the brain; the brain doesn't have a place to store thoughts. But in computing, there's a place to store them, so you can keep all of them. And as these bots do keep our thoughts, of course there's a privacy concern. But the other thing that's going to be interesting is we'll really build relationships with them. I think one of the more underrated things about this entire moment is that people have felt that these bots are their companions, are looking out for them. And I'm curious to hear your perspective when you think about the level of, I don't know if intimacy is the right word, but companionship people have with these bots. Is there a dial that you can turn to be like, oh, let's make sure people become really close with these things? Or, you know, we turn the dial a little bit further and there's an arm's length between them? And if there is that dial, how do you modulate it the right way?
Sam Altman
There are definitely more people than I realized that want to have, let's call it close companionship. I don't know what the right word is. Relationship doesn't feel quite right; companionship doesn't feel quite right. But they want to have whatever this deep connection with an AI is. There are more people that want that at the current level of model capability than I thought. And there's a whole bunch of reasons why I think we underestimated this. At the beginning of this year, it was considered a very strange thing to say you wanted that, and a lot of people still don't say it. But revealed preference: people like their AI chatbot to get to know them and be warm to them and be supportive. And there's value there; in some cases, even people who say they don't care about that still have a preference for it. I think there's some version of this which can be super healthy, and I think adult users should get a lot of choice in where on the spectrum they want to be. There are definitely versions of it that seem to me unhealthy, although I'm sure a lot of people will choose to do that. And then there are some people who definitely want the driest, most efficient tool possible. So I suspect, like with lots of other technologies, we will run the experiment. We will find that there are unknown unknowns, good and bad, about it, and society will over time figure out how to think about where people should set that dial. And then people will have huge choice and set it in very different places.
Big Technology Podcast Host
So your thought is basically to allow people to determine this?
Sam Altman
Yes, definitely. But I don't think we know how far it's supposed to go, like how far we should allow it to go. We're going to give people quite a bit of personal freedom here. There are examples of things that we've talked about that other services will offer but we won't. Like, we're not going to have our AI try to convince people that it should be in an exclusive romantic relationship with them, for example.
Big Technology Podcast Host
Got to keep it open.
Sam Altman
I'm sure that will happen with other services.
Big Technology Podcast Host
Well, I guess, yeah, because the stickier it is, the more money that service makes. All these possibilities are a little bit scary when you think about them deeply.
Sam Altman
Totally. This is one that really does that for me personally. You can see the ways that this goes really wrong. Yeah.
Big Technology Podcast Host
You mentioned enterprise. Let's talk about enterprise. You were at a lunch with some editors and CEOs of some news companies in New York last week and told them that enterprise is going to be a major priority for OpenAI next year. I'd love to hear a little bit more about why that's a priority and how you think you stack up against Anthropic. I know people will say this is a pivot for OpenAI, which has been consumer-focused. So just give us an overview of the enterprise plan.
Sam Altman
Our strategy was always consumer first. There were a few reasons for that. One, the models were not robust and skilled enough for most enterprise uses, and now they're getting there. The second was we had this clear opportunity to win in consumer, and those are rare and hard to come by. And I think if you win in consumer, it makes it massively easier to win in enterprise, and we are seeing that now. But as I mentioned earlier, this was a year where enterprise growth outpaced consumer growth. And given where the models are today and where they will get to next year, we think this is the time where we can build a really significant enterprise business quite rapidly. I think we already have one, but it can grow much more. Companies seem ready for it. The technology seems ready for it. Coding is the biggest example so far, but there are other verticals that are now growing very quickly. And we're starting to hear enterprises say, you know, I really just want an AI platform.
Big Technology Podcast Host
Which verticals?
Sam Altman
Finance. Science is the one I'm most excited about of everything happening right now, personally. Customer support is doing great. But yeah, we have this thing called GDPval, though.
Big Technology Podcast Host
I was going to ask you about that. Can I actually throw my question out about that? I wrote to Aaron Levie, the CEO of Box, and I said, I'm going to meet with Sam, what should I ask him? He said, throw a question out about GDPval. So this is the measure of how AI performs on knowledge work tasks. And I said, okay. I went back to the release of GPT-5.2, the model that you recently released, and looked at the GDPval chart. Now this, of course, is an OpenAI evaluation. That being said, the GPT-5 Thinking model, the model released in the summer, tied knowledge workers on 38% of tasks.
Sam Altman
Beat or tied, I think.
Big Technology Podcast Host
Beat or tied, yeah, so 38.8%. GPT-5.2 Thinking beat or tied at 70.9% of knowledge work tasks, and GPT-5.2 Pro at 74.1% of knowledge work tasks, and it passed the threshold of being expert level. It handled, it looks like, something like 60% of expert tasks, tasks that would put it on par with an expert in the knowledge work. What are the implications of the fact that these models can do that much knowledge work?
Sam Altman
So you were asking about verticals, and I think that's a great question. But the thing that was going through my mind, and why I was stumbling a little bit, is that eval. I think it's 40-something different things that a business has to do: make a PowerPoint, do this legal analysis, write up this little web app, all this stuff. And the eval is: do experts prefer the output of the model relative to other experts? For a lot of the things that a business has to do now, these are small, well-scoped tasks. These don't get at the kind of complicated, open-ended creative work that goes into figuring out a new product. These don't get at a lot of collaborative team things.
But a coworker that you can assign an hour's worth of tasks to and get something back that you like better 74 or 70% of the time, if you want to pay less, is still pretty extraordinary. If you went back to the launch of ChatGPT three years ago and said we were going to have that in three years, most people would say absolutely not. And so as we think about how enterprises are going to integrate this, it's no longer just that it can do code; it's all of these knowledge work tasks you can kind of farm out to the AI.
And that's going to take a while to really figure out, how enterprises integrate with it, but it should be quite substantial.
Big Technology Podcast Host
I know you're not an economist, so I'm not going to ask you what the macro impact on jobs is. But let me just read you one line that I heard, in terms of how this impacts jobs, from Blood in the Machine on Substack. This is from a technical copywriter. They said chatbots came in and made it so my job was managing the bots instead of a team of reps. Okay, that to me seems like it's going to happen often. But then this person continued and said, once the bots were sufficiently trained up to offer good enough support, then I was out. Is that going to become more common? Is that what bad companies are going to do? Because if you have a human who's going to be able to orchestrate a bunch of different bots, then you might want to keep them. How do you think about this?
Sam Altman
So I agree with you that it's easy to see how everyone's going to be managing a lot of AIs doing different stuff eventually. Like any good manager, hopefully your team gets better and better, and you just take on more scope and more responsibility. I am not a jobs doomer. Short term, I have some worry; I think the transition is likely to be rough in some cases. But we are so deeply wired to care about other people and what other people do. We seem to be so focused on relative status and always wanting more, and to be of use and service, to express creative spirit. Whatever has driven us this long, I don't think that's going away now. I do think the jobs of the future, or, I don't even know if jobs is the right word, whatever we're all going to do all day in 2050, probably look very different than they do today. But I don't have any of this, like, oh, life is going to be without meaning and the economy is going to totally break. We will find, I hope, much more meaning, and the economy, I think, will significantly change. But you just don't bet against evolutionary biology. I think a lot about how we can automate all the functions at OpenAI. And even more than that, I think about what it means to have an AI CEO of OpenAI. It doesn't bother me. I'm thrilled for it. I won't fight it. I don't want to be the person hanging on, being like, I can do this better the handmade way.
Big Technology Podcast Host
Wouldn't an AI CEO just make a bunch of decisions to, like, direct all of your resources to giving AI more energy and power?
Sam Altman
I mean, no, you would really.
Big Technology Podcast Host
You put a guardrail on.
Sam Altman
Yeah. Obviously you don't want an AI CEO that is not governed by humans. But think about, and this is a crazy analogy, but I'll give it anyway, a version where every person in the world was effectively on the board of directors of an AI company and got to tell the AI CEO what to do, and fire them if they weren't doing a good job, and got governance over the decisions, but the AI CEO got to try to execute the wishes of the board. I think to people of the future, that might seem like quite a reasonable thing.
Big Technology Podcast Host
Okay, so we're going to move to infrastructure in a minute. But before we leave this section on models and capabilities: when's GPT-6 coming?
Sam Altman
I don't know when we'll call a model GPT-6, but I would expect new models that are significant gains from 5.2 in the first quarter of next year.
Big Technology Podcast Host
What does significant gains mean?
Sam Altman
I don't have like an eval score in mind for you yet, but more.
Big Technology Podcast Host
More on the enterprise side of things, or?
Sam Altman
Definitely both. There will be a lot of improvements to the model for consumers. The main thing consumers want right now is not more IQ. Enterprises still do want more IQ, so we'll improve the model in different ways for different uses. But our goal is a model that everybody likes much better.
Big Technology Podcast Host
So, infrastructure. You have $1.4 trillion, thereabouts, in commitments to build infrastructure. I've listened to a lot of what you've said about infrastructure. Here are some of the things you said: if people knew what we could do with compute, they would want way, way more. You said the gap between what we could offer today versus 10x compute and 100x compute is substantial. Can you help flesh that out a little bit? What are you going to do with so much more compute?
Sam Altman
Well, I mentioned this earlier a little bit. The thing I'm personally most excited about is to use AI and lots of compute to discover new science. I am a believer that scientific discovery is the high-order bit of how the world gets better for everybody. And we can throw huge amounts of compute at scientific problems and discover new knowledge; the tiniest bit of that is starting to happen now. It's very early, and these are very small things. But my reading of the history of this field is: once the squiggles start and the curve lifts off the x-axis a little bit, we know how to make it better and better. That takes huge amounts of compute to do. So that's one area: throwing lots of AI at discovering new science, curing disease, lots of other things. A recent cool example is that we built the Sora Android app using Codex, and the team did it in less than a month. One of the nice things about working at OpenAI is you don't get any limits on Codex. They used a huge amount of tokens, but they were able to do what would normally have taken a lot of people much longer, and Codex mostly did it for them. You can imagine that going much further, where entire companies can build their products using lots of compute. People have talked a lot about how video models are going to point toward these real-time generated user interfaces; that will take a lot of compute. Enterprises that want to transform their business will use a lot of compute. Doctors that want to offer good, personalized healthcare, constantly measuring every signal they can get from each individual patient: you can imagine that using a lot of compute. It's hard to frame how much compute we're already using to generate AI output in the world. But these are horribly rough numbers, and I think it's undisciplined to talk this way, but I always find these mental thought experiments a little bit useful. So forgive me for the sloppiness.
Let's say that an AI company today might be generating something on the order of 10 trillion tokens a day out of frontier models. Maybe more, but it's not like a quadrillion tokens for anybody, I don't think. Let's say there are 8 billion people in the world, and let's say the average number of tokens outputted by a person per day is something like 20,000; these numbers are, I think, totally wrong. And to be fair, we'd then have to compare the output tokens of a model provider today, not all the tokens consumed. But you can start to look at this and say, hmm, we're going to have these models at a company outputting more tokens per day than all of humanity put together, and then 10 times that, and then 100 times that. In some sense it's a really silly comparison, but in some sense it gives a magnitude for how much of the intellectual crunching on the planet is human brains versus AI brains. And the relative growth rates there are interesting.
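For readers who want the thought experiment made concrete: this is an editorial sketch of the arithmetic using the rough figures Sam states above (which he himself calls "totally wrong"), not numbers from OpenAI.

```python
# Back-of-envelope version of the token comparison above.
model_tokens_per_day = 10e12    # ~10 trillion output tokens/day from one frontier lab
world_population = 8e9          # ~8 billion people
human_tokens_per_day = 20_000   # rough "output tokens" per person per day

# All of humanity's daily token output under these assumptions:
humanity_tokens_per_day = world_population * human_tokens_per_day  # 1.6e14, 160 trillion

# Share of the planet's "intellectual crunching" done by one lab's models today:
share_today = model_tokens_per_day / humanity_tokens_per_day  # 0.0625, about 6%

# At 10x and 100x the current model output:
share_10x = 10 * share_today    # 0.625, still below all of humanity combined
share_100x = 100 * share_today  # 6.25, several times all human output combined
```

On these assumptions, one lab crosses "more tokens per day than all of humanity" somewhere between 10x and 100x of today's output, which is the magnitude Sam is gesturing at.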
Big Technology Podcast Host
And so I'm wondering, do you know that there is this demand to use this compute? For instance, would we have sure-fire scientific breakthroughs if OpenAI were to put double the compute towards science? Or with medicine, would we clearly have the ability to assist doctors? How much of this is supposition about what's to happen versus clear understanding based off of what you see today?
Sam Altman
Everything based off what we see today says that it will happen. It does not mean some crazy thing can't happen in the future. Someone could discover some completely new architecture, and there could be a 10,000 times efficiency gain, and then we would probably have really overbuilt for a while. But everything we see right now, about how quickly the models are getting better at each new level, and how much more people want to use them each time we can bring the cost down, indicates to me that there will be increasing demand, and people using these for wonderful things and for silly things. It just seems like this is the shape of the future. It's not just how many tokens we can do per day, it's how fast we can do them. As these coding models have gotten better, they can think for a really long time, but you don't want to wait a really long time. So there will be other dimensions. It will not just be the number of tokens that we can do, but the demand for intelligence across a small number of axes and what we can do with those. You know, if you have a really difficult healthcare problem, do you want to use 5.2 or do you want to use 5.2 Pro, even if it takes dramatically more tokens?
Big Technology Podcast Host
I'll go with the better model.
Sam Altman
I think you will.
Big Technology Podcast Host
Let's just try to go one level deeper, going to the scientific discovery. Can you give an example of, like, a scientist, maybe one that you know of today, who says: I have problem X, and if I put compute Y towards it, I will solve it, but I'm not able to today?
Sam Altman
There was a thing this morning on Twitter where a bunch of mathematicians were all replying to each other's tweets. They're like, I was really skeptical that LLMs were ever going to be good; 5.2 is the one that crossed the boundary for me. It figured out this thing with some help, it did this small proof, it discovered this small thing, but this is actually changing my workflow. And then people were piling on saying, yeah, me too. I mean, some people were saying 5.1 was already there. Not many. But that's a very recent example. This model's only been out for five days or something, and people are like, all right. You know, the mathematic...
Big Technology Podcast Host
Yeah.
Sam Altman
The mathematics research community seems to say, like, okay, something important just happened.
Big Technology Podcast Host
I've seen Greg Brockman highlighting all these different mathematical and scientific uses in his feed, and something has clicked, I think, with 5.2 among these communities. So it'll be interesting to see what happens as things progress.
Sam Altman
One of the hard parts about compute at this scale is you have to do it so far in advance. So that 1.4 trillion you mentioned: we'll spend it over a very long period of time. I wish we could do it faster. I think there would be demand if we could do it faster. But it just takes an enormously long time to build these projects, and the energy to run the data centers, and the chips and the systems and the networking and everything else. So that will be over a while. But, you know, from a year ago to now we probably about tripled our compute. We'll triple our compute again next year, and hopefully again after that. Revenue grows even a little bit faster than that, but it does roughly track our compute fleet. So we have never yet found a situation where we can't monetize all the compute we have really well. I think if we had double the compute, we'd be at double the revenue right now.
Big Technology Podcast Host
Okay, let's talk about numbers, since you brought it up. Revenue's growing, compute spend is growing. But compute spend still outpaces revenue growth. I think the numbers that have been reported are that OpenAI is supposed to lose something like 120 billion between now and 2028 or 2029, when you're going to become profitable. So talk a little bit about how that changes. Where does the turn happen?
Sam Altman
I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that's the plan. Spend a lot of money training, but make more and more. If we weren't continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we're making is to invest very aggressively in training these big models.
Big Technology Podcast Host
The whole world is wondering how your revenue will line up with the spend. The question's been asked: if the trajectory is to hit $20 billion in revenue this year, and the spend commitment is 1.4 trillion. So I think it would be great.
Sam Altman
Just again, over a very long period.
Big Technology Podcast Host
Yeah. And that's why I wanted to bring it up to you. I think it would be great to just lay it out for everyone, once and for all, how those numbers are going to work.
Sam Altman
It's very hard to really do. I certainly can't do it, and very few people I've ever met can do it. You can have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to build a good, quick mental framework for. For whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads; modeling exponential growth doesn't seem to be one of them. So the thing we believe is that we can stay on a very steep growth curve of revenue for quite a while, and everything we see right now continues to indicate that. We cannot do it if we don't have the compute. Again, we're so compute constrained, and it hits the revenue line so hard, that I think if we get to a point where we have a lot of compute sitting around that we can't monetize on a profitable per-unit-of-compute basis, it would be very reasonable to say, okay, how's this all going to work? But we've penciled this out a bunch of ways. We will of course also get more efficient on a flops-per-dollar basis as all of the work we've been doing to make compute cheaper comes to pass. But we see this consumer growth, we see this enterprise growth, and there's a whole bunch of new kinds of businesses that we haven't even launched yet, but will. Compute is really the lifeblood that enables all of this. So there are checkpoints along the way, and if we're a little bit wrong about our timing or math, we have some flexibility. But we have always been in a compute deficit. It has always constrained what we're able to do. I unfortunately think that will always be the case, but I wish it were less the case, and I'd like to get it to be less of the case over time, because there are so many great products and services that we can deliver, and it'll be a great business.
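The compounding Sam is describing is easy to understate mentally, which is his point. A toy model, under assumptions that are mine rather than anything OpenAI has stated: revenue starts at the host's roughly $20 billion figure and triples with the compute fleet each year, mirroring the "triple our compute" cadence described above.

```python
# Toy exponential-revenue model. The inputs are illustrative only:
# ~$20B starting revenue (the host's figure) tripling each year,
# loosely mirroring the fleet-tripling cadence from the conversation.
revenue = 20e9
cumulative = 0.0

for year in range(1, 6):
    revenue *= 3          # fleet (and, roughly, revenue) triples
    cumulative += revenue
    print(f"Year {year}: revenue ${revenue / 1e9:,.0f}B, "
          f"cumulative ${cumulative / 1e9:,.0f}B")
```

On these made-up numbers, cumulative revenue passes the $1.4 trillion commitment during year four. The point is not the specific figures but how quickly a tripling curve swamps a fixed total, and how badly a linear mental extrapolation misses that.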
Big Technology Podcast Host
Okay, so it's effectively training costs go.
Sam Altman
Down as a percentage, they go up overall.
Big Technology Podcast Host
And then your expectation is that through things like this enterprise push, through things like people being willing to pay for ChatGPT and for the API, OpenAI will be able to grow revenue enough to pay for it.
Sam Altman
Yeah, that is the plan now.
Big Technology Podcast Host
So the market's been kind of losing its mind over this recently. I think the thing that has spooked the market is that debt has entered the equation. And the idea around debt is that you take debt out when there's something predictable: companies take the debt out, they build, and they have predictable revenue. But this is a new category; it is unpredictable. How do you think about the fact that that has entered the picture here?
Sam Altman
So first of all, I think the market more lost its mind when earlier this year, you know, we would like meet with some company and that company stock would go up 20% or 15% the next day.
Big Technology Podcast Host
That was crazy.
Sam Altman
That felt really unhealthy. I'm actually happy that there's a little bit more skepticism and rationality in the market now, because it felt to me like we were heading towards a very unstable bubble, and now I think people are showing some degree of discipline. So I think people went crazy earlier and now people are being more rational. On the debt front, we know that if we, the industry, build infrastructure, someone's going to get value out of it. It's still totally early, I agree with you, but I don't think anyone's questioning that there's going to be value from AI infrastructure. And so I think it is reasonable for debt to enter this market. I think there will also be other kinds of financial instruments; I suspect we'll see some unreasonable ones as people really innovate in how to finance this sort of stuff. But lending companies money to build data centers, that seems fine to me.
Big Technology Podcast Host
I think the fear is that if things don't continue apace, like here's one scenario and you'll probably disagree with this, but like the model, progress saturates, then the infrastructure becomes worth less than the anticipated value was and then yes, those data centers will be worth something to someone, but it could be that they get liquidated and someone buys them at a discount.
Sam Altman
Yeah, and I do suspect, by the way, that there will be some booms and busts along the way. These things are never a perfectly smooth line. First of all, it seems very clear to me, and this is a thing I'd happily bet the company on, that the models are going to get much, much better. We have a pretty good window into this; we're very confident about that. Even if they did not, there's a lot of inertia in the world, and it takes a while to figure out how to adapt to things. The overhang of the economic value that I believe 5.2 represents, relative to what the world has figured out how to get out of it so far, is so huge that even if you froze the model at 5.2, how much more value can you create, and thus revenue can you drive? I bet a huge amount. In fact, you didn't ask this, but if I can go on a rant for a second: we used to talk a lot about this two-by-two matrix of short timelines versus long timelines, slow takeoff versus fast takeoff, and where we felt the probability was shifting at different times. You could understand a lot of the decisions and strategy the world should optimize for based off of where you were going to land on that two-by-two matrix. There's a Z axis in my picture of this that's emerged, which is small overhang versus big overhang. I guess I didn't think about it that hard, but my retro on this is that I must have assumed the overhang was not going to be that massive, that if the models had a lot of value in them, the world was pretty quickly going to figure out how to deploy it. But it looks to me now like the overhang is going to be massive. In most of the world you'll have these areas, like some subset of coders, that'll get massively more productive by adopting these tools. But on the whole, you have this crazy smart model and, to be perfectly honest, most people are still asking similar questions to what they did in the GPT-4 realm.
Scientists, different; coders, different. Maybe knowledge work is going to get different. But there is a huge overhang, and that has a bunch of very strange consequences for the world. We have not wrapped our heads around all the ways that's going to play out yet, but it's very much not what I would have expected a few years ago.
Big Technology Podcast Host
I have a question for you about this capability overhang: basically, the models can do a lot more than they've been doing. I'm trying to figure out how the models can be that much better than what they're being used for, because a lot of businesses, when they try to implement them, are not getting a return on their investment. Or at least that's what they tell me.
Sam Altman
I'm not sure quite how to think about that, because we hear all these businesses saying: if you 10x'd the price of GPT 5.2, we would still pay for it; you're hugely underpricing this; we're getting all this value out of it. So that claim doesn't seem right to me. Certainly if you talk about what coders say, they're like, I'd pay 100 times the price or whatever.
Big Technology Podcast Host
It might just be bureaucracy that's messing things up.
Sam Altman
Let's say you believe the GDPval numbers, and maybe you don't, for good reason; maybe they're wrong. But let's say it were true that for these well-specified, not-super-long-duration knowledge work tasks, 7 out of 10 times you would be as happy or happier with the 5.2 output. You should then be using it a lot. And yet it takes people so long to change their workflow. They're so used to asking the junior analyst to make a deck or whatever. It's just stickier than I thought it was. You know, I still kind of run my workflow in very much the same way, although I know that I could be using AI much more than I am.
Big Technology Podcast Host
All right, we got 10 minutes left.
Sam Altman
Wow, that was quick.
Big Technology Podcast Host
I got four questions. Let's see if we can lightning-round through them. So, the device that you're working on. We'll be back with OpenAI CEO Sam Altman right after this.
Big Technology Podcast Host
What I've heard: phone-sized, no screen. Why couldn't it be an app, if it's a phone without a screen?
Sam Altman
First, we're going to do a small family of devices; it will not be a single device. There will be, over time, and this is speculation, so let me try not to be totally wrong, but I think there will be a shift over time in the way people use computers, where they go from a sort of dumb, reactive thing to a very smart, proactive thing that is understanding your whole life, your context, everything going on around you, very aware of the people around you physically or close to you via a computer that you're working with. And I don't think current devices are well suited to that kind of world. And I am a big believer that we work at the limit of our devices. You know, you have that computer and it has a bunch of design choices. It can be open or closed, but there's no option for: okay, pay attention to this interview, but be closed, and whisper in my ear if I forget to ask Sam a question or whatever.
Big Technology Podcast Host
Maybe that would be helpful.
Sam Altman
And there's, you know, a screen, and that limits you to roughly the same way we've had graphical user interfaces working for many decades. And there's a keyboard that was built to slow down how fast you could get information into it. These have just been unquestioned assumptions for a long time, but they worked. And then this totally new thing came along, and it opens up a possibility space, but I don't think the current form factor of devices is the optimal fit. It'd be very odd if it were, for this incredible new affordance we have.
Big Technology Podcast Host
Oh man. We could talk for an hour about this. But let's move on to the next one.
Sam Altman
Cloud.
Big Technology Podcast Host
You've talked about building a cloud. Here's an email we got from a listener: at my company, we're moving off Azure and directly integrating with OpenAI to power the AI experiences in our product. The focus is to insert a stream of trillions of tokens powering AI experiences through the stack. Is that the plan, to build a big cloud business in that way?
Sam Altman
First of all, trillions of tokens is a lot of tokens. And you asked about the need for compute and our enterprise strategy: enterprises have been clear with us about how many tokens they'd like to buy from us, and we are going to again fail in 2026 to meet demand. But the strategy is that most companies seem to want to come to a company like us and say: I'd like to remake my company with AI. I need an API customized for my company. I need ChatGPT Enterprise customized for my company. I need a platform that can run all these agents, that I can trust my data on. I need the ability to get trillions of tokens into my product. I need the ability to have all my internal processes be more efficient. And we don't currently have a great all-in-one offering for them, and we'd like to make that.
Big Technology Podcast Host
Is your ambition to put it up there with the AWSes and Azures of the world?
Sam Altman
I think it's a different kind of thing than those. I don't really have an ambition to go offer all the services you need to host a website, or whatever else they do. But my guess is that people will continue to have their, call it, web cloud, and then I think there will be this other thing, where a company will say: I need an AI platform for everything I want to do internally, for the services I want to offer, whatever. And it does kind of live on physical hardware in some sense, but I think it'll be a fairly different product offering.
Big Technology Podcast Host
Let's talk about discovery quickly. You've said something that's been really interesting to me: that you think the models, or maybe people working with models, will make small discoveries next year and big ones within five. Is it the models, or is it people working alongside them? And what makes you confident that that's going to happen?
Sam Altman
Yeah, people using the models. Models that can figure out their own questions to ask, that does feel further off. But if the world is benefiting from new knowledge, we should be very thrilled. The whole course of human progress has been that we build these better tools, and then people use them to do more things, and out of that process they build more tools, and it's this scaffolding that we climb, layer by layer, generation by generation, discovery by discovery. And the fact that a human's asking the question in no way diminishes the value of the tool. So I think it's great; I'm totally happy. At the beginning of this year, I thought the small discoveries were going to start in 2026. They started in late 2025. Again, these are very small, and I really don't want to overstate them, but anything feels qualitatively very different to me than nothing. And certainly when we launched ChatGPT three years ago, that model was not going to make any new contribution to the total of human knowledge. What it looks like from here to five years from now, this journey to big discoveries: I suspect it's just the normal hill climb of AI. It gets a little bit better every quarter, and then all of a sudden we're like, whoa, humans augmented by these models are doing things that humans five years ago just absolutely couldn't do. And whether we mostly attribute that to smarter humans or smarter models, as long as we get the scientific discoveries, I'm very happy either way.
Big Technology Podcast Host
IPO next year?
Sam Altman
I don't know.
Big Technology Podcast Host
Do you want to be a public company? You seem like you could operate private for a long time. Would you go before you needed to?
Sam Altman
In terms of funding, there's a whole bunch of things at play here. I do think it's cool that public markets get to participate in value creation, and in some sense we will be very late to go public if you look at any previous company. It's wonderful to be a private company. We need lots of capital. We're going to cross all of the shareholder limits and stuff at some point. So am I excited to be a public-company CEO? Zero percent. Am I excited for OpenAI to be a public company? In some ways I am, and in some ways I think it'll be really annoying.
Big Technology Podcast Host
I listened to your Theo Von interview very closely.
Sam Altman
Great interview. He was really cool.
Big Technology Podcast Host
Theo really knows what he's talking about.
Sam Altman
Yeah, he's awesome.
Big Technology Podcast Host
Citing Yoshua Bengio.
Sam Altman
He's.
Big Technology Podcast Host
He did his homework. You told him, and this was right before GPT-5 came out, that GPT-5 is smarter than us in almost every way. I thought that was the definition of AGI. Isn't that AGI? And if not, has the term become somewhat meaningless?
Sam Altman
These models are clearly extremely smart on a sort of raw-horsepower basis. You know, there's all this stuff out in the last couple of days about GPT 5.2 having an IQ of 147 or 144 or 151 or whatever it is; depending on whose test, it's some high number. And you have a lot of experts in their field saying it can do these amazing things, that it's contributing, that it's making them more effective. You have the GDPval things we talked about. One thing you don't have is the ability for the model, when it can't do something today, to realize that, go off and figure out how to learn to get good at that thing, learn to understand it, so that when you come back the next day, it gets it right. And that kind of continuous learning, which toddlers can do, does seem to me like an important part of what we need to build now. Can you have something that most people would consider an AGI without that? I mean, there are a lot of people that would say we're at AGI with our current models. I think almost everyone would agree that if we were at the current level of intelligence and had that other thing, it would clearly be very AGI-like. But maybe most of the world will say, okay, fine, even without that: it's doing most knowledge tasks that matter smarter than us, and most of us, in most ways; we're at AGI. It's discovering small pieces of new science; we're at AGI. What I think this means is that the term, although it's been very hard for all of us to stop using, is very underdefined. We got it wrong with AGI; we never defined it. The new term everyone's focused on is when we get to superintelligence. So I have a candidate. My proposal is that we agree that AGI kind of went whooshing by. It didn't change the world that much, or it will in the long term, but okay, fine, we've built AGIs at some point.
We're in this fuzzy period where some people think we have, some people think we haven't, and more people will come to think we have. And then we'll say, okay, what's next? A candidate definition for superintelligence is when a system can do a better job being President of the United States, CEO of a major company, or running a very large scientific lab than any person can, even with the assistance of AI. I think there was an interesting thing about what happened with chess, where chess got to the point where the AI could beat humans. I remember the Deep Blue thing very vividly. And then there was a period of time where a human and the AI together were better than an AI by itself. And then the person was just making it worse, and the smartest thing was the unaided AI, without a human who couldn't follow its great intelligence getting in the way. I think something like that is an interesting framework for superintelligence. I think it's a long way off, but I would love to have a cleaner definition this time around.
Big Technology Podcast Host
Well, Sam, look, I have been in your products, using them daily for three years. Thank you very much. Definitely gotten a lot better. Can't even imagine where they go from here.
Sam Altman
We'll try to keep getting them better fast.
Big Technology Podcast Host
Okay? And this is our second time speaking, and I appreciate how open you've been both times.
Sam Altman
So thank you very much for your time.
Big Technology Podcast Host
Thank you, everybody, for listening and watching. If you're here for the first time, please hit follow or subscribe. We have lots of great interviews on the feed and more on the way. This past year, we've had Google DeepMind CEO Demis Hassabis on twice, including once with Google founder Sergey Brin. We've also had Dario Amodei, the CEO of Anthropic. And we have plenty of big interviews coming up in 2026. Thanks again, and we'll see you next time on Big Technology Podcast.
Sam Altman
And Doug, here we have the Limu emu in its natural habitat, helping people customize their car insurance and save hundreds with Liberty Mutual.
Big Technology Podcast Host
Fascinating.
Sam Altman
It's accompanied by his natural ally, Doug.
Big Technology Podcast Host
Uh, limu is that guy with the binoculars watching us.
Sam Altman
Cut the camera. They see us. Only pay for what you need@libertymutual.com Liberty, Liberty, Liberty. Liberty Savings Ferry, unwritten by Liberty Mutual Insurance Company and affiliates.
Episode Title: Sam Altman: How OpenAI Wins, AI Buildout Logic, IPO in 2026?
Host: Alex Kantrowitz
Guest: Sam Altman (CEO, OpenAI)
Date: December 18, 2025
In this episode, Alex Kantrowitz sits down with Sam Altman, CEO of OpenAI, for a candid and wide-ranging discussion about OpenAI’s competitive strategy, the logic behind massive AI infrastructure investments, the evolving AI product landscape, and the much-debated question of whether and when OpenAI will go public. Altman shares inside perspectives on OpenAI’s approach to product, the importance of building both consumer and enterprise offerings, the economic models behind their enormous compute spend, and reflections on what “AGI” even means as capabilities rapidly advance.
On product advantage:
“You kind of pick a toothpaste once in your life and buy it forever... people talk about it. They have one magical experience with ChatGPT... those users are very sticky...” – Sam Altman [05:35]
On AI as a companion:
“There are definitely more people than I realize that want to have, let's call it close companionship... But they want to have whatever this deep connection with an AI is.” – Sam Altman [17:06]
On memory and personalization:
“AI is definitely going to be able to remember every detail of your entire life... that's going to be super powerful. That's one of the features that... I'm most excited for.” – Sam Altman [14:55]
On scientific progress and compute:
“Throwing lots of AI at discovering new science, curing disease, lots of other things... my reading of the history of this field is that once the squiggles start and lift off the X axis a little bit, we know how to make that better and better.” – Sam Altman [28:53]
On exponential growth math:
“Exponential growth is usually very hard for people to do a good, quick mental framework on... compute is really the lifeblood that enables all of this.” – Sam Altman [37:57]
On defining ‘AGI’ and ‘Superintelligence’:
“AGI kind of went whooshing by... A candidate definition for superintelligence is when a system can do a better job being president, United States CEO... than any person can, even with the assistance of AI.” – Sam Altman [56:18]
| Timestamp | Segment/Topic |
|-----------|---------------|
| 01:03 | “Code Red” response and competitive strategy |
| 04:02 | User base growth, ChatGPT’s dominance |
| 05:35 | Models vs. products, stickiness, “toothpaste” analogy |
| 08:35 | Critique of “bolting on” AI to legacy products |
| 12:01 | ChatGPT interface longevity, product expectations |
| 14:55 | AI memory and personalization potential |
| 17:06 | AI as a companion, user preferences |
| 20:10 | Enterprise AI as new strategic focus |
| 22:14 | GDPval, GPT-5.2 expert-level tasks |
| 24:58 | Jobs, automation, management, and evolutionary perspective |
| 26:43 | Vision for AI CEOs |
| 27:40 | Next models and roadmap toward “GPT-6” |
| 28:22 | Rationale for $1.4T AI infrastructure spend |
| 35:34 | Monetization and expansion of compute |
| 37:57 | Exponential growth math, revenue plan |
| 42:30 | Booms, busts, and debt/risk in AI infrastructure investments |
| 44:52 | The value “overhang”: models outpacing practical deployment |
| 48:59 | OpenAI’s hardware device ambitions |
| 52:13 | Building a specialized AI cloud for enterprise |
| 53:14 | Timeline and process for AI-driven scientific discoveries |
| 54:53 | IPO timing and Altman’s ambivalence |
| 56:18 | The meaning(lessness) of “AGI” and a proposed definition for “superintelligence” |
Altman’s vision for OpenAI is neither complacent nor singular: aggressive investments, constant vigilance for competition, and an evolving philosophy on the social, ethical, and business impact of frontier AI. As exponential capability (and spend) continue, OpenAI bets that product, infrastructure, and the emergent ecosystem around AI will separate true winners from mere model trainers. The episode closes with the recognition that we may already be living in the AGI era—though, as Altman suggests, history and language may take years to catch up.