Brad Gerstner (44:13)
Your thoughts? You know, before we go down the OpenAI rabbit hole, let's just really contextualize, like, what's going on here. You know, I have this additional chart you showed. They added $4 billion of revenue in January, $7 billion in February, and $10 or $11 billion of annualized run rate in March. Just to put that in perspective, that's Databricks plus Palantir combined that they added in a single month. Right. So we started the year with everybody wringing their hands, including, you know, Gurley and others, saying we're in a big bubble, asking whether the AI revenues would show up to justify all of this investment. And bam. You have the largest revenue explosion in the history of technology. So the company's plan was to end the year at about a $30 billion exit run rate. They got there by the end of March. Right. And I suspect it's continuing in April. So you have to ask what's going on, and what's the big so-what? The first thing for me is that model and product capability just hit this threshold we talked about earlier, near-AGI, whatever the hell you want to call it. And everybody, like Altimeter, said, damn, this is so good, I have to have it. This is no longer about my IT budget. This is about labor augmentation and labor replacement. And by the way, Cowork is growing even faster than Claude Code at the same stage of development. So what it showed is we have a near-infinite TAM. It turns out that the TAM for intelligence is radically different from anything we've seen before. And I think the best example of this, right, is millions of self-interested parties, consumers and enterprises, a thousand of them now over a million dollars, right? It's not that there was some great go-to-market at Anthropic, that all of a sudden, you know, they snuck up and blew everybody away. No, it was companies demanding the product. They're getting throttled on the product. Why? Because it's so good.
It makes them better at their business. We are all self-interested actors. And when millions of those people are all making the same decision, that's a huge tell. And the tell here is that the TAM is as big as Dario and Sam and others have been saying. We knew intelligence was going to scale on the exponential. The question was whether revenue would scale on the exponential, and that's what we're seeing. And remember, they're doing this with only one and a half to two gigawatts of compute, right? These guys are massively compute constrained. They're each going to be adding three gigawatts of compute this year, and so that will unlock more. They would be growing even faster but for that. And then, Jason, to your point about the open source models, that we all want them to be a part of this solution: I've talked to a lot of big companies. 65 to 70% of their token consumption is on open source models, right? These are cheap Chinese and other tokens. So these revenue ramps are happening while the world is already using open source. This is not frontier only; this is frontier plus open source. We're going to see massive token optimization over the course of the year. But what happens with this Jevons paradox is that the unit cost of intelligence is plummeting, right, not just the cost of tokens. The unit cost of intelligence is plummeting because the capabilities of these models are so much better. I look at what it does for Altimeter day in and day out. I talked to a major company yesterday. They're on a run rate to do a hundred million dollars of token consumption this year on about $5 billion in opex. They think that we're now nearing peak employment in their company, but that their token, their intelligence consumption, okay, let's not call it token consumption, right, because tokens may go up a lot, but their intelligence consumption is going to go up a lot. So I would leave you with this. We're early, to Chamath's point. We have low penetration of the Global 2000. We have low penetration of the use cases.
We have low penetration within the use cases that they're already using. And the models are only getting better. So I think when you look out toward the end of the year, I would not be shocked if you see Anthropic exiting this year at $80 to $100 billion in revenue. And by the way, doing it at the same time that OpenAI, who is also on the wave, is releasing an incredible model imminently. They're going to be on that wave, and you're going to see an inflection in their revenues as well.