
In this episode, I reflect on the third anniversary of ChatGPT's launch as a marker of where we are in the exponential age. As a product, ChatGPT captures the speed of technological progress, the new behaviours emerging around it and the widening gap between innovation and institutional change – all symptomatic of the era I called the exponential age in my 2021 book.
So ChatGPT has its third anniversary this week. Let's look at ChatGPT through the lens of my exponential age framework. The first point is that ChatGPT, and the large language models it's built on, are just one segment of a larger exponential transition, even if we just think about computing. But ChatGPT tends to drown things out because it's becoming the verb for AI, and with some justification. According to a recent count (a post on X from one of OpenAI's investors), it has nearly 900 million users, which means people seem to like it. Data from Similarweb, which monitors web and app usage, shows that it's a really sticky app: about a third of the people who use ChatGPT in a month use it every day. That's not as much as Instagram, for the sake of argument, but it's about the same level as YouTube and higher than Snapchat, both really sticky, well-loved apps. So there's scale there that is creating noise and occupying headspace. But of course you can't say scale and think about ChatGPT without thinking about the grammar of scale involved: those large language models, those scaling laws, the increasingly voracious demand for compute and for chips, the bigger and bigger data centers. The numbers are so big, hundreds of billions of dollars, that they seem to tower over the debate like a skyscraper, but in a way that hides what else is going on. As some of you may remember, OpenAI launched GPT-4.5, a new foundation model replacing the GPT-4 they had put out soon after ChatGPT. And 4.5 was something of a flop. It was an attempt to do a bigger model; we didn't really like it, and it fell a bit flat. But what OpenAI's researchers did was find a new approach: reasoning, that is, thinking at inference time, the point at which you or I might put a query into the chatbot. And those reasoning models, o1 and o3, performed really, really well.
I think it was a real milestone in how an emerging technology improves. Now, if you've used Gemini Pro, which Google released an update to a few days ago, it really feels like there's something going on beyond either the reasoning-model approach or the large language model. Of course, Gemini Pro uses both of those techniques, but it feels like there's a new technology sitting behind it, because Gemini 3 is really well grounded in the complexities of the real world. Perhaps that's a hint of the kind of world model that Demis Hassabis has alluded to. So what we often see with these exponential technologies is that from a distance they look like one single smooth curve, but in fact there is a series of overlapping curves of different technologies and different approaches that ultimately give you that exponential. That's not to say the foundation model companies are not pursuing scale, and what Google proved with Gemini was that scale still works at each stage of building these models. It's just not the only thing going on. So the overlapping S-curves, which I explain in the book, are starting to be present and felt within these chatbots that many of us use every day. But that's not the end of the story. It's visible elsewhere. Think about the chips. We all know about Nvidia's GPUs that are powering these systems. Across roughly the past decade, Nvidia's GPUs have delivered about double the usable AI throughput that Moore's Law would predict. Moore's Law essentially said that every couple of years there would be a doubling of performance, and that has been the clock speed of the technology industry for more than 50 years. It was, in a way, a social agreement across the ecosystem of semiconductor companies that they would deliver on that. Well, the GPUs have delivered far faster, over quite a long period of time, over a decade.
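To make the compounding concrete, here is a minimal sketch of how a doubling cadence translates into a total improvement over a decade. The specific cadences are illustrative assumptions, not figures from the episode: the classic Moore's Law pace of roughly 2x every two years, contrasted with a hypothetical cadence twice as fast.

```python
# Back-of-envelope compounding of a doubling cadence.
# The cadences below are illustrative assumptions, not measured figures.

def fold_improvement(years: float, doubling_period_years: float) -> float:
    """Total improvement multiple after `years` of doubling every
    `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

# Moore's Law pace: ~2x every 2 years over a decade -> 2^5 = 32x
moore = fold_improvement(10, 2)

# A hypothetical cadence twice as fast: ~2x every year -> 2^10 = 1024x
fast = fold_improvement(10, 1)

print(f"Moore's Law pace over 10 years: {moore:.0f}x")
print(f"Doubling every year instead:    {fast:.0f}x")
```

The point of the sketch is simply that small changes in the doubling period compound into enormous differences over a decade, which is why a faster-than-Moore trajectory matters so much.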
But it doesn't stop there, because these chips are really complex. They require really cutting-edge technologies: the fabs, the photolithography, the chemical washing, the bonding, the packaging. And yet Nvidia has pushed up its own clock speed, moving from a two-year cycle for new chips to a one-year cycle. Another example of time compression. And LLMs have, as a result of ChatGPT (and that famous stochastic parrots paper, which now feels so outdated), become the byword for AI. They've come to stand in for the transformer architecture that underpins large language models. But there are other things, like diffusion models and state space models. The AI Index tracks a set of notable AI models. I'm not quite sure what makes a model notable or not, but it tracks about 61 of them, and at the end of 2024 only 40% were true LLMs. The others used different architectures, which are needed across domains like time series, electrical systems, proteins, multi-omics and other medical applications. So the point is, there are more exponentials doing their thing in this AI wave than just ChatGPT and the large language model. These new technologies really make new things possible. We start to do things that we didn't do before, maybe because they weren't possible or just because they were too expensive. Because what a general-purpose technology has to do is in some way reduce the cost of something we're doing in the market, in the economy, and an exponential general-purpose technology will reduce that cost even more. So think about how you might be using this today. You might get a contract for something you're signing up to, and if it has any degree of materiality, you'll probably get it read by Claude or ChatGPT. We wouldn't have gone to a lawyer for a contract like that five years ago, or even two years ago. These are new behaviours that we now routinely adopt. Well, what does this mean?
Is this a reasonable non-disclosure agreement to sign? That's something I have to do many, many times a week, and they're not all reasonable. So there's this new behaviour that has emerged, enabled by this exponential technology. Another of my favourites is that I use it to prioritize my to-do lists against the plans that I have. Those who work with me know that I'm always pushing, putting the pedal to the metal. I've got way too much on, and so prioritization becomes quite important. The amount of time I spend on it has declined significantly, but the quality of that prioritization has improved dramatically. But perhaps my favourite new use is building software that works. I've been writing programs for a long time, since the early 80s; over my shoulder you can see my first computer, from 1981. But I didn't keep my skills up to date, and I remember about 15 years ago my CTO at my last company saying, please stop trying to write software, you've got a development team working for you and they can do it. He was, I think, pleading with me because the quality wasn't particularly good. But what I can now do is build software that works. I might not be able to sell it, I might not be able to get it into the App Store, but it works reliably. And I have made, I kid you not, dozens of software tools, many of which I use daily or weekly or run constantly in the background. These might be complex workflows, or they might be things I've built through a combination of Replit and Cursor. They include a content management system that helps me make sense of the hundreds of things that the research team reads every month. This is something designed for the way I want to work. I'm not trying to bootstrap my way into Notion or into Obsidian; I'm saying this is the particular set of workflows and analyses that I find useful, and I've defined it.
I've got another bit of software that keeps me up to date with the 40 or so startup investments I've made. If you're one of my founders, one of the reasons I have a better sense of what you're doing and how to help you is that I've built an application that processes your updates and helps me understand the key points: what the hotspots are, what you're struggling with, where you need help, or where you're going so fast you just need more introductions. And, really trivially, because I'm not someone who loves admin, I've built an app that automates my expense tracking. It goes into my email, pulls out my expenses, parses them, decides whether they are company or personal expenses, puts them in a spreadsheet, takes a copy of the invoice (or makes one as a PDF) and drops it into Dropbox. It's better than Expensify, the product I was using before, because again it's tailored to the way I want to work. And so what we're starting to see from personal stories, and I think every single one of you (share them in the chat) will have different examples of how you're using these, is that these tools are expanding and it's quite difficult to keep track of them. Some of the foundation model companies have gone off and done surveys, but the numbers that come back are meaningless. What does it mean to say you use AI at work? What does it mean to say you're using it for educational tasks? It's really hard to break that apart and get any sense of it. But if you talk to your friends, or you look at your own logs and histories, you will get a sense of just how diverse those tasks are, and how many of them are, in a way, brand new to your behaviour. These are exactly the sorts of things you'd expect to see from a general-purpose technology.
And with an exponential technology that is bringing costs down so rapidly and capabilities up so fast, it's no surprise that we are leaning into it and inventing all sorts of new behaviours. A quick note: if you want to support us in bringing more of these conversations to the world, please consider subscribing to the show. The exponential age is quite a tricky one to navigate, because you've got these general-purpose technologies, but they're rapidly improving. At the best of times, general-purpose technologies have broad and diffuse effects across our economies, and the speed that exponential ones bring makes it much harder to understand what the second- and third-order effects might be. AI is just one of those technologies, and it's certainly moving faster, in terms of progress, diffusion and uptake, than anything I had studied previously. But there are some common patterns. A couple of them: number one, it will open and expand markets; and number two, we don't necessarily know where the profits are going to lie, which makes it hard for those who have to deploy capital. On the point about expanding markets: I read an investment bank report a couple of weeks ago saying that for OpenAI to hit its revenue numbers, which are extremely ambitious, maybe a bit foolhardy, every iPhone user in the world would need to spend, I can't remember the number, 50, 60, 70 a year on these services. It's a shockingly stark calculation. But it also doesn't make any sense when you think about how new technologies come into a market. You don't even have to look at general-purpose technologies to see that, because innovative new technologies come in to do something new, by definition. So it's almost impossible to think about how big the market could become.
Now, the way investors think about this is through the idea of the total addressable market, the TAM: the maximum extent of a particular market. Think about Google when it went for its IPO back in 2004; it was five or six years old. In the IPO prospectus they described the TAM they were serving, and that TAM was $6 billion. Well, Google's revenues today are a few hundred billion dollars, and the market they serve is now $600 billion. If you had gone back to 2004 and said Google is going to have $300 billion of revenue, that would mean that for every dollar advertisers were then spending on digital advertising, they would need to spend 50. Impossible, you'd have said. Well, that's the iPhone-and-ChatGPT calculation the investment bank did a couple of weeks ago. Markets expand as we bring in new technologies that do things differently, that introduce efficiencies, new sources of value creation, new sources of exchange. Another example is Uber, or the company that was known at the time as UberCab. When they launched, they talked about going after a particular high-end market and said: our TAM, our total addressable market, is about a billion dollars. So if we get 100% of the market we're going after (and of course no one gets 100%), our revenues will be a billion dollars. Now, that's obviously no longer true. That market today is thought to be in the hundreds of billions of dollars, and Uber's revenues are much, much higher than a billion dollars a year. Because new technologies, new innovations, new customer propositions can expand markets. But just because markets are expanding, it doesn't necessarily mean we know who's going to make money. We don't know where the profits are going to lie.
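The Google arithmetic above can be written out as a back-of-envelope calculation. The figures come from the episode (a $6 billion stated TAM in 2004, roughly $300 billion of revenue today); the point is how absurd the multiple would have looked from inside the 2004 market definition.

```python
# Back-of-envelope on the Google TAM example from the episode.
# Figures are the episode's round numbers, not audited financials.

tam_2004 = 6e9          # Google's stated TAM at its 2004 IPO
revenue_today = 300e9   # rough order of Google's revenue today

# How many times over would the 2004 market have to grow just to
# equal today's revenue for this one company?
multiple = revenue_today / tam_2004

print(f"Revenue today is {multiple:.0f}x the entire 2004 TAM")
```

Framed in 2004 terms, every dollar of digital ad spend would have had to become fifty, which is exactly the "impossible" shape of the iPhone-and-ChatGPT calculation.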
And that's particularly true as we move down a layer from individual innovations like Uber to general-purpose technologies, which are diffuse; they spread across the economy in interesting ways. It's not clear whether the inventor of the technology, the people who build it out, or someone else entirely will capture the profits. Think about the Internet: the value accrued to downstream companies making the applications, and to users like us, who get an unpaid-for consumer surplus from all the free services we've enjoyed for the last 20 or 30 years. The value didn't really accrue to the people who laid the first fiber lines or built the first modems. But that pattern isn't universal. If you look at the internal combustion engine, it took a completely different shape. The car companies captured most of the profits over the decades, but the oil companies upstream and the finance companies enabling the car purchases were not far behind in their share of the profit pool, and both of those sectors enjoyed far better margins than the automakers. With general-purpose technologies, we don't necessarily know where the profit pools will lie or who will make the money. AI makes it even harder, because AI is not just a simple general-purpose technology. It's also the invention of a new method of invention. In a way, it looks in on itself and can help us come up with new approaches to doing new things, both in AI and in other domains. What that means is that it will change the assumptions about how you structure firms, how industries take shape, and ultimately where profits can be made. It's quite hard to imagine this, because we're asking for so much to change, and there are baked-in assumptions we don't really think about every day. But it will, in some ways, have that kind of impact. Let's take a look at the software market.
Some $155 billion to $160 billion a year is spent on developers, in a software market that's worth probably a couple of trillion dollars globally. Well, AI is now enabling people like me to write software for ourselves. And the coding tools market, which is only two years old, has already become a $3 billion revenue market if you look at Cursor and Replit and other chat-and-build tools. So how big will that market get? Is the TAM the $155 billion we spend on the people who construct the code, or will it get much, much larger? Is my spending on these tools net new spend in an expanding market, rather than competition for the work a developer might be doing somewhere else? What will we even call that market when it evolves? It took a while for us to come up with a name for the market that Uber competes in, or the market that Airbnb competes in. So we don't really know, but we can see that the shape might change. Or what might it do to the white-collar services market? On the one hand, it reduces their footprint, because so much of the generation and analysis work can be done by AI, and we're hearing stories of paralegals coming under pressure because tools like Harvey can do lots of the tasks they used to do. But that's a moment in time. If you go back to where the analysts were seven or eight years ago, people were saying it would be routinized work that would first fall to AI, not the non-routine, open-ended research that a consulting analyst might do in their first couple of years at work. I don't think we know how this will play out, because these firms are adaptable themselves. Who's to say the market won't expand? Hear me out; here's why it might. When you go and talk to companies trying to implement this new technology, they say it's really difficult.
And it's difficult because it's ultimately a change and transformation project. What does that mean? It's a project that is about people. It's about getting people onside, getting tacit knowledge out of their heads, sitting down and figuring out experiments, deciding how to change, sorting out roles and responsibilities. All of those are people questions. So the AI might be able to provide guidance, but actually getting people to change will take people persuading each other. At the same time, these tools are going to behave in all sorts of new ways. There will be new contractual relationships, new complexities; there's no new technology or new complexity that a lawyer doesn't love, because you need to construct new contracts and there's new case law. So even as these tools get better, the world may be getting more complicated, and certainly for the coming years it might create more opportunities, not close them down. The truth is we can't be sure. I just say that at these moments in the exponential age, which is so disorienting, so surprising, you have to be a bit open-minded and imagine counterfactuals against all of this market change. Traditional capital markets really struggle to fund this. That's why many of these firms are backed by venture capital in the early days. But the appetite for change now is so great that a lot of this funding is happening in public markets. And so we get to this odd world where a company like Nvidia is trading at a forward price-to-earnings ratio of about 27. That's a bit spicy, but it's not dot-com levels; that was Yahoo at 1,200. More interestingly, Nvidia has a PEG ratio, price/earnings-to-growth, of about 0.7, and the rule of thumb is that under 1 you've got a company that's undervalued. And Nvidia has a $500 billion order backlog.
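The PEG figure is worth unpacking: it is the forward price-to-earnings ratio divided by the expected earnings growth rate expressed in percent. A minimal sketch, using the episode's round numbers (forward P/E of 27, PEG of 0.7); the implied growth rate is derived from those two figures, not a number anyone has reported.

```python
# PEG ratio: forward P/E divided by expected earnings growth (in %).
# Rule of thumb: PEG under 1.0 suggests the stock is cheap relative
# to its expected growth. Inputs are the episode's round figures.

def peg_ratio(forward_pe: float, growth_rate_pct: float) -> float:
    """PEG = forward P/E / expected annual earnings growth in percent."""
    return forward_pe / growth_rate_pct

forward_pe = 27.0
peg = 0.7

# Working backwards: what growth rate does a PEG of 0.7 imply?
implied_growth = forward_pe / peg   # roughly 39% earnings growth a year

print(f"Implied expected earnings growth: {implied_growth:.0f}% per year")
print(f"PEG check: {peg_ratio(forward_pe, implied_growth):.2f}")
```

So a PEG of 0.7 at a P/E of 27 is effectively the market pricing in earnings growth of nearly 40% a year and still coming out looking undervalued by the rule of thumb.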
So this whole picture suggests a little bit of confusion: maybe not a bubble, more a utility that can't build power plants fast enough. And so we see this extreme dispersion in the markets, that persistent bubble-bust anxiety. I think what it comes down to, fundamentally, is that traditional capital markets are not well suited to dealing with this degree of change, uncertainty and speed. The return profile of AI, as a general-purpose technology, as the invention of a method of invention, as an exponential technology, is orthogonal to the returns profile that capital markets look for. What they want is a focused, understandable return on capital over a three-to-ten-year period. Yet with general-purpose technologies, as it will be with AI, the gains are diffuse. They may be spread out across the economy and across industries; they may accrue to the public rather than to private companies; the pathways may be indirect; and there may be long lags before profits emerge. The existing capital market apparatus doesn't price this kind of system-level change. It prices firm-level cash flows: this specific GPU cluster produces this specific revenue stream over the following years. But my sense is that AI's biggest gains will be a system-wide improvement, a system-wide option value. It is an exponential gap: the institutions of finance are not particularly suited to the dynamics of this particular technology. And it's really not because the participants are confused or aren't brilliant, smart and diligent. It's because of the nature of those exponential processes, and how hard they are to rationalize in the linear spreadsheets around which decisions are made. So this feels exceptional, but the pattern is familiar. For a decade I've been arguing that exponential technologies behave in this way, and it's playing out precisely in front of our eyes.
They accelerate quickly and they create an exponential gap: because those technologies move quickly, we adopt them quickly, while our institutions, norms and other systems, for many good reasons, move slowly. These technologies create new behaviours and they can expand markets, and you can see that in the way all of us use these tools in ways that were unimaginable two years ago, perhaps even just a year ago. This is why a historical lens matters, and why my framework for the exponential age helps make sense of this moment of ChatGPT turning three. The speed and the scale might feel disorienting, but those common features are there in the underlying dynamic. Thanks for listening all the way to the end. If you want to know when the next conversation is released, just hit subscribe wherever you're listening. That's all for now, and I'll catch you next time.
Episode: The method of invention, AI’s new clock speed and why capital markets are confused
Host: Azeem Azhar
Date: December 5, 2025
Azeem Azhar examines the rapid evolution of AI, particularly focusing on ChatGPT's third anniversary, and places its rise within his "Exponential Age" framework. He explores the shifting technological landscape—where AI accelerates at breakneck speeds, reshaping business, productivity, and society—and dissects why capital markets and economic actors struggle to adapt. The episode is rich with historical analogies, practical examples, and deep dives into both the opportunities and confusions of exponential technologies.
On ChatGPT’s Pervasiveness:
“ChatGPT tends to drown things out because it's becoming the verb for AI, and with some justification.” (00:34)
On Rapid Change:
“GPUs have delivered far faster [progress] over quite a long period of time, over a decade… Another example of time compression.” (09:20)
On New Uses for AI:
“What I can now do is build software that works… I have made, I kid you not, dozens of software tools, many of which I use daily...” (22:30)
On Markets’ Struggle:
“The traditional capital markets are not well suited for dealing with this degree of change, uncertainty and speed.” (59:35)
On Exponential Age Dynamics:
“They accelerate quickly, they create an exponential gap. Because those technologies move quickly, we adopt them quickly, and our institutions, norms and other systems... move slowly.” (1:03:10)
Azeem Azhar offers a lucid analysis of the exponential age, using ChatGPT’s anniversary to reflect on how rapidly evolving AI is outpacing traditional ways of thinking—about markets, productivity, and even the logic of invention. He encourages listeners to remain open-minded and recognize the familiar underlying patterns amid the chaos of swift change.
For more on this theme, revisit Azeem’s previous explorations of exponential technologies and how markets historically respond to disruption.