
What made 2025 special? 🎊 We recorded this to publish after Christmas, but demand for year‑end reflections prompted an early release - so if you hear me say Christmas has passed, that’s why. 🎊 In this episode, I reflect on the past year and what it revealed: a K-shaped divide. On one track, AI models are now doing hours of high quality work, improving at exponential pace, and shifting how we work from doing to judging. On the other, organisations and the broader economy are struggling to keep up. Stay to the end for my seasonal film recommendation.
As we approach the end of the year, I want to talk to you about what I saw in 2025. What made it a special year? What are we learning about AI? What surprised me? And if you're lucky and stay to the end, you'll even get my seasonal film recommendation. So what happened in AI this year that makes it different to previous years? Well, my favorite is surely going to be Nano Banana. That's Google's image generation service. First of all, it's got a fantastic name. It's one of the few things, as a Brit, I say less well than an American. So if you're an American, say Nano Banana, and if you're not, find an American and get them to say it to you, because they say it so much better than we do. It's kind of amazing when you pair Nano Banana with Gemini 3 Pro and you present some complex idea. You can say to Gemini, listen, please use Nano Banana to turn this into an explanatory diagram, and you get pretty phenomenal explanatory diagrams, little cheat sheets if you will. They could either be infographics or they could look like whiteboards. I have used it so frequently to communicate with people I'm doing things with. The other thing that happened in 2025 is that many of the models got good enough to do really long stretches of work. Claude 4.5 from Anthropic is fantastic if you're working with long documents, and it's fantastic if you are coding. GPT-5 from OpenAI is increasingly good when you think about deep research; it's really good at certain classes of problem solving. And in the latest version, 5.2, I've been using it to support some financial modeling. And I can't leave out Google's Gemini 3 and 3 Pro. It's markedly better than the previous generation of models, and it doesn't stop there. I've been using Manus, the research product, more and more often. All of these models are now capable of doing what I would say is a few hours' worth of high-quality work. And what it's meant is that I've been able to build my own useful apps.
What was derisively called vibe coding a year or so ago is turning into letting people like me, who've not been allowed near software code for more than a decade, develop things that are actually going to be useful. So I've built apps that allow me to evaluate the different LLMs as they get released. I've built apps that do analysis of documents, and I've built apps that help me build forecast scenarios as I try to think about what might happen over the next 10 or 20 years. One of the things that I've noticed is there's a cognitive shift. It's a shift from the effort of actually doing the work to the effort of judging the work, judging the outputs of these systems: specifying the problem well, turning that problem over to a model, and then looking at the output and needing to maintain a degree of mental acuity as I look at this thing and say, well, are the assumptions it's making reasonable ones, and are there any obvious errors? It's a completely different way of working. Of course, those of us who have managed people and led teams before will recognize that it's a little bit like managing and leading a team. Exponential is a really easy word to say. It gets bandied about a lot. I mean, I should know, I've written a book called Exponential. This is called Exponential View. And it's really hard, as I've argued in the past, for people to understand what that will actually mean. But with the progress of AI models we are actually seeing something that is exponential. They are improving in non-linear ways. And that's becoming more and more obvious as you use these models yourself. And it's quite important, because we don't come across exponential processes that we can experience ourselves all that often. But that exponential progress is not going in a single direction. As you know, when you use these AI systems they all have slightly different capabilities.
And I think what we've seen at the end of the year with GPT-5.2 and Claude Opus 4.5 and others is that exponential progress is not across a single dimension. It is actually showing us that there will be a variety of models even as these things get more and more capable. So Claude is really good at coding. It's the model that I use; it's the model my team uses when they're coding. It's also really good at working with long documents, so if I am working with long documents, it tends to be the one that I turn to. GPT-5.2 is really good when it's in pro mode, when you need to spend 20 or 30 minutes trying to tackle or crack a problem. When we're running fact-checking systems we tend to use GPT-5.2. We have a pretty sophisticated way of breaking out a piece of writing into its constituent facts and then running a number of sub-processes to check those facts. I've always found that OpenAI's models are the best at doing that. And then the question is, when would I use Gemini 3? It's funny, there are just moments where I feel, well, Gemini 3 is the right thing to use when I come to thinking through this particular problem. But therein lies an additional issue, which is, of course, that the cognitive load of using these different systems has now landed on me, because I'm sitting there trying to select which model I should use for a particular task. It's really easy with Claude: writing code, use Claude. But then there's that nebulous set of other types of problems where, well, should I use GPT-5.2 or should I use Gemini 3? It's never that clear, although it is clear after the fact. I think the other thing that we're seeing is how slow and painful organizational rewiring is compared to the progress we're making with the models. Let me give you a story from Exponential View. We've obviously been thinking about these large language models for a really long time.
From the first Transformer paper, I'd been experimenting with GPT-3 for a few years, and in the summer of 2022, when image generation models like Midjourney and Stable Diffusion were getting popular, I remember telling the team that we needed to get good at this, to take gen AI image generation quite seriously. And on November 15th I really, really doubled down and sent an email to the team reinforcing how important gen AI was going to be. Of course, two weeks later ChatGPT was released, and I could take credit for that, but I think we all know it was just luck. But it took us until January 2024 before we really fundamentally changed the way this team of five or six of us started to work. So there was a year where we had started to embed the use of AI in people's everyday work. For a number of months we ran daily standups where people shared their experiences of using these tools, which frankly, before GPT-4, were a little bit shonky. But it's really January 2024 where you really start to see us change how we organize ourselves and operate. So the team at Exponential View has a discretionary budget to spend on AI tools. They need the state-of-the-art tools. Some of them get, you know, ChatGPT Pro, some get Claude Pro. The budget is fairly generous; it's normally a few hundred dollars per person per month, so they have room to experiment, they have room to figure out what tools work best for them. If they want to build an agentic workflow, of course they can get hold of a Zapier or a Lindy or something similar to do that. And if they want to build their own code, they've got access to Cursor, Replit and so on. And since that period and through 2025, we've started to introduce AI build sprints, or AI experimentation sessions, or AI upskilling moments, where people go off and build their own tools. We started back in the spring of '25 establishing the rule of five.
It's not been adhered to brilliantly, not even by me, but the idea was that if you were doing something five times in a month or in a week, you should automate it. And that rule, I think, is quite helpful. We kind of push on it. But now we've gone even further, where we encourage team members, myself included, to build the tools we need for the job we might be doing or the project we're working on. So I'll give you an example of something that I'm working on. We're, you know, building something new, and essentially I'm prototyping that myself using these tools. Other team members may have built their own workflows to support their research or their data gathering. And again, what we've done is we've pushed that capability down to everyone individually, and, hopefully they'll tell you themselves, that's supported them in being able to build up their own capability and capacity to go after the things they need to achieve. That takes time. The idea of strategically, deliberately developing that AI capability took us as an organization more than a year, and we're only six people. So what does that mean when you are a bigger team, a big company that's a little bit more hidebound? I think the other thing that's been quite surprising for many people in 2025 is how fast revenue has grown. Now we're obviously far away from the revenue levels needed to pay for all of the investment in the big data centers. But revenue is growing incredibly quickly. I mean, we have a very conservative forecast because we deduplicate revenues across the AI stack. But our mid-case number is around $60 billion of gen AI revenues in 2025. So 230% annualized growth over the year, and there are lots of things that you could reasonably count in that that we don't count. So for example, you know, any of the integration and development fees from systems integrators are not part of that $60 billion. The revenue uplift for online advertising that Meta or Google has is not part of that either.
That's a really rapid commercialization. Two years to get to $60 billion in revenue. The PC took nine years; the Internet took 13. Most strikingly, that growth rate is not slowing significantly. Of course you would expect it to slow, because it started from such a high level. But according to our estimates, we see the exit growth rate, if you look at the last quarter of 2025 and ask how fast revenues are growing, at around 214% per annum, which is pretty close to the overall growth rate for the year. And three years in, I still see a really common error, maybe it's a misconception, when I watch smart people using AI. It's a really common mistake, one that has probably been the biggest misconception, consistently number one, for the last couple of years, which is they're not experimenting enough. They're not playing around enough, pushing the systems to their limit, pushing beyond their own limits, having things break, having things fail. The reason you need to do this is because these tools, like much of the software before them, can do a lot of different things in a lot of different ways. And if you don't try, you'll never figure out what they're capable of. And part of that trying is overcoming the obvious mistakes that you will make, or that they will make. So if you have a static way of working with your AI system, I think you're really, really missing out, because the models are getting more capable. More importantly, the application layer around the model, the tools they can use, the use of memory, the use of projects, is making these things more and more useful. My experience, and don't quote me exactly on this number, is that about three quarters of what I'm doing today, I wasn't doing three or four months ago, because models have become so much more capable.
So if you treat AI like a really simple operating system upgrade, rather than something whose capacity grows dramatically, which requires changing how you work regularly, you are absolutely going to miss out on a lot of the benefits that emerge. I think one of the surprising things for people who aren't developers, or who've never worked with developers, is that in a way, we all start to behave a little bit like developers. If you work with developers building software, you'll notice that they'll often think about their tooling, they will think about their workflows, they will start to think about what are the things in my weekly cadence that I can automate away, whether it is deployment or testing or documentation. And they'll sit and they'll do that. And the amount of time they're actually engaged in typing in code is not going to be a hundred percent. And in the same way, I think as we start to use AI effectively, we are communing and coding with a smart piece of software. That ratio between how much time am I actually putting bricks down in the wall of bricks, versus figuring out how to best organize the bricks or how to specify this, starts to shift. A quick note: if you want to support us in bringing more of these conversations to the world, please, please consider subscribing to the show. So one of the things that played out this year, that I had picked up on actually this time last year, was the clash between the requirement for AI and the requirement for physical infrastructure, specifically data centers and the electricity that powers them. I had an op-ed in the New York Times in December 2024 where I made this exact point. And of course AI is intangible. Like so much software, it's intangible to us, but it is backed by an enormous physical infrastructure. And that is really substantive.
Yes, at one level it's delivering electrons to chips, but in order to do that, you need substations and gas turbines and solar panels and batteries and transformers and connections to the grid. And then you need the human workforce to build all that out. I mean, you need lots of electricians to build a Stargate data center. And doing that has, in the pre-AI world, often been measured in decades rather than years, let alone months. And so you've got this challenge that you can build a data center traditionally in 18 to 24 months, but you can't necessarily connect it to the grid. In fact, with the demand for data centers going up so quickly, in many cases it's taking longer to build than the 18 to 24 months, because of backlogs of the core components on the power side, or the staff that you need to actually wire it all up. And I think that tension is going to continue over at least the next year and probably into 2026, as demand continues to outstrip supply. And some of this supply is simply not as liquid as other things are on the Internet. The most common question that people have asked me in 2025 is: are we in an AI bubble? Now, we have written so much about that, and we've of course been tracking it live since September. And you can go and visit Boom or Bubble AI, which is our live, daily updated tracker of the key stressors on the market, to get a sense of where in the flight path of this investment build-out we are. But the question that I get asked that really matters to people is: what's going to happen to jobs? And if they're my age, they're asking about their kids, and if they're younger, they might be asking about their own prospects. You know, what should my son or daughter do? Or will I be able to get a job? I've got a degree in X or Y. Well, we've learnt a little about this this year, although there's still a lot that's unanswered. There's a lot of uncertainty in the market, and that uncertainty is coming from politics.
It's coming from the risk of inflation, the risk of rate rises or rate cuts. It's coming from trade questions. And the technology uncertainty is giving bosses another reason to delay hiring, and in particular a reason to delay hiring new people and younger people, because they are the riskiest. You know, hiring somebody is a long-term commitment, and a new person is an unproven asset to the firm. I think that is one of the challenging realities of the backdrop that we're operating in. I wouldn't say that it is primarily about AI. It is primarily about an environment of really deep uncertainty to which AI is contributing. And I think the expectations that have been set about what AI can do are not helping. The other thing I think we've learned in 2025, that probably wasn't true in 2024 after some of the early academic research, is that AI is an equalizing technology only in some cases. So back in 2024 we had a lot of great research from US universities showing that if you gave teams AI, those in the bottom two quartiles would have their performance pushed up to, say, second-quartile level. And it's a great equalization story. But it's becoming increasingly clear that, just like all other technology, AI will also supercharge the very, very best. And that's mostly through returns to expertise. So there was a really fascinating paper this year, from Sarka and colleagues, which was called AI Agents, Productivity and Higher-Order Thinking: Early Evidence from Software Development. That is a good academic title; it wouldn't be such a great Christmas song title. And what they showed was that senior developers gained the most from using software agents to support their development, because they know how to direct and evaluate output. And so they are able to direct the AIs and evaluate those outputs at a much higher caliber than younger developers.
And that's really part of a story of how a moment of technology change like this doesn't necessarily help younger people. When the Internet rolled around, I was a young person back then and it helped me a great deal, because the senior bosses knew they needed to do something about it. And so they would turn to people like me who frankly didn't know anything else, but we did know a little bit about the Internet, and it was quite helpful. I think now the default position would be that a lot of what AI enables are those more executive skills that you get through greater and greater experience. And I think that creates a signaling problem for younger people. How do you signal in a labor market which values judgment over credentials? In one of the fantastic conversations I had this year with one of my podcast guests, Ben from Revelio, we talked about this, and the two things that emerged which are within reach of any young person are: number one, in your own time, through your internships, through your college degree, ship some end-to-end projects, not point projects, but end-to-end projects which have all the messy reality of real-world work. And the second is use your ability to network, to build a relevant network and some domain fluency in the area in which you want to work. If I had a letter for the year 2025, that letter would be K. The K-shaped economy. Everything is turning into a two-track: one track up here and one track down there. There's a world of AI which increasingly looks like magic turned into engineering. Just look at what's happened with stock market valuations or the revenue growth of the big tech companies. Look at how quickly gen AI startups are growing, how they are scaling and getting to twenty, fifty, a hundred million, a billion dollars of revenue faster than any companies before them. And then there's the rest of the economy, you know, the stalwarts of the Dow index, that isn't quite as exciting. So the letter K encapsulates that.
And I often find myself having these two different conversations, and not much in the middle. One is anchored and rooted in where we're coming from: the reality of legacy, the reality of technical debt, of inertia, of things that just take time to change. Hidden below that, the question of whether it's not about change but a total reinvention. And the other, which is really heading towards the future, is not tethered to the ground. Perhaps it needs a little bit of a tether, so exuberance doesn't get too irrational. But the world of AI, and the practicality of the impractical that we're able to achieve with it, feels completely different. Now, at the end of 2025, it feels to me that the resolution of that K isn't any closer. In some sense, it actually feels further away than it was a year ago. And you can see that K shape existing in other places. You know, the US is pegging its growth on data centers. Data centers were more than half of US GDP growth in 2025. And yet there's growing resistance to data centers. And you've got politicians like Bernie Sanders on the left, but also politicians on the right, pushing for and celebrating when data center buildouts in the US have been halted by local political action. There's that K again. But the other example is just in the shape of the ordinary American and the way they look at this. So, you know, ChatGPT and its ilk are the fastest growing, most popular products in history. You can look at the data of how people use ChatGPT: the more they use it, the more they use it. That is a revealed preference around how much people like this product. And on the other side, the K shape that lives in people's imaginaries is that 75 to 80% of Americans are not optimistic about what AI will mean for them. They're not optimistic about what it might mean for their lives, their families, their jobs, their communities, society at large. So there is the K shape writ large.
Again, it would be a little bit boring if 2026's letter of the year was also K. But there is a small chance that it might be. Now, we are coming to the end of the year, and we're coming to the end of this show. And in this festive spirit, even though Christmas is now behind us, I'm going to recommend one seasonal movie. It's only 53 minutes long. It's kind of silly. I watch it with my almost grown-up kids on Christmas Eve, and I would recommend it to you. It's available on Netflix. It's a good, feel-good comedy called Click and Collect. And it brings together, I think, so many of the issues of the modern world: the ability to access anything at any time, on demand, as it meets the physical realities of the real world. I love it. I'm sure you will. It's perfect viewing even as we run into the new year. And with that, happy New Year to you. Thanks for listening all the way to the end. If you want to know when the next conversation is released, just hit subscribe wherever you're listening. That's all for now, and I'll catch you next time.
Podcast: Azeem Azhar's Exponential View
Host: Azeem Azhar
Release Date: December 19, 2025
In this solo reflection, Azeem Azhar reviews the standout developments in 2025 concerning artificial intelligence (AI), the evolving nature of work, challenges in organizational adaptation, the physical infrastructure bottlenecks powering exponential technologies, and the emergent "K-shaped" economy. The episode discusses how advanced AI tools are transforming productivity, the nuanced impact on labor markets, and societal tensions emerging as technology accelerates unevenly across sectors.
[48:00]
Azeem closes with a light-hearted seasonal note—his film recommendation for the holidays:
“A good, feel-good comedy called Click and Collect…It brings together, I think, so many of the issues of the modern world: the ability to access anything at any time on demand as it meets the physical realities of the real world.” (48:20)
Azeem’s narrative is warm, frank, and laced with humor—especially around the "Nano Banana" pronunciation and seasonal references. The episode is optimistic on technological capability but clear-eyed about societal division, infrastructure limits, and the uneven distribution of AI’s benefits. The tone encourages adaptability, experimentation, and continual learning as both organizations and individuals navigate a rapidly splitting landscape.