Transcript
A (0:00)
Coming into 2026, things feel a little bit different. The moment is somewhat unusual. It's charged by, I think, a maturation of what we've seen from AI tools, the promise of the last seven or eight years. And so I wanted to start this year by grounding us in something different and something simple: the way in which AI is already showing up in my day-to-day, how that's changed in the last two or three months, and what that means for the year ahead. So let's get to this sensation that I have felt and experienced over the last few weeks. It really feels that some of the AI tools have crossed some part of the uncanny valley. And what it feels like right now for me is that I have maybe fifty, maybe a hundred people working for me in addition to my brilliant team. Not metaphorically, but in terms of actual velocity. And what has really driven that has been the ability for these systems to write really good code and become more and more reliable in the analysis they do. Things that have sat at the bottom of my to-do list for months are now done. They're done in an hour, for a few dollars. It's been such a moment, maybe a realization or an epiphany about what that work style looks like just in the last few weeks, that I want to suggest that 2026 could be the year for everyone where the way in which you use AI changes. It's going to be less about using tools and more about orchestrating your team of virtual workers alongside your real human colleagues. And one way of thinking about this is as a move from the to-do list to the done list. Now, I want to take you through six shifts, anchored around three particular principles. The first anchor is how we make things, and why the act of building has fundamentally changed. The second anchor is what meaning looks like, because when making gets much cheaper, we need to ask what is still valuable. 
And the third is what are the foundations that all of this sits on: the energy and the money, and whether this entire edifice will hold together. So that's where we'll end the conversation, right? Part of that third anchor is the question that I was asked the most in 2025: when will the AI bubble burst? Or indeed, is it a bubble at all? We've obviously done all of our evidence-based research and forecasting, and we've got a solid understanding of the cycle. You can check the Boom or Bubble AI website, which is updated, I think, every single day, or most days, perhaps not on the weekends. So with that, let's get to the first anchor, which is what's happening to making. The act of building has been transformed not incrementally, but really categorically. It is really remarkable what's happened in the last six months, the last three months in particular, and I'm going to get into some of those details. The venture capitalist Tomasz Tunguz came up with this really great phrase, and I keep coming back to it. He called it the done list. He says we're out of the to-do list era and into the done list era, and that's the capability that things like Claude Code are providing to all of us. And here's what it feels like to live within it. I'll just give you one example. I've got about 4,000 music tracks on my computer and my SSDs. They're a real mess. They're the tracks that I use when I'm DJing and doing sets of different types. And needless to say, over the years, they've just got really messy and confused in all sorts of different places. I download them, I buy them, and I can never find them again. They also all need to go through two post-processing steps so that the levels are good and so that they get marked up with harmonics and so on. And then I can load them into the DJ software, put them on my decks and mix with them. 
Now for months, actually much longer, I have planned to organize those 4,000 files and make sure they are all processed by those two steps and neatly cataloged and therefore accessible to me. Guess what? It has been at the bottom of my to-do list, because there are a million other things that I need to do. Now, last week, or maybe just before New Year's, in about an hour, I built three apps on my Mac. One scoops all the files together, finds duplicates, gets rid of them, figures out which files have been processed correctly, and sends the rest to the right queue for processing. A second app, which I designed and built, addressed a specific need I had: I felt that the metadata around the tracks was not rich enough to help me navigate when I'm putting a set together. So I built an app, it's called Psychic Octopus, you can see my state of mind at the time, which goes through all of those files and adds additional metadata markers around the degree of percussion, where the drops are, how much vocal there is, beyond what the mainstream music systems provide. And the third app was a playlist generator, so I could load up my tracks and say: this is where I want to start, with this track; this is where I want to end, in this genre, over this length of time; and I want the mood to feel like this during the set. And it would go out and discover some paths that would work and present them back to me. It all works. And it all took me about an hour, maybe an hour and a half. To be honest, the taste of that playlist maker is pretty terrible. It does not have great taste. But the point is it's done and it works. And this is one of, I would say, around 30 or 40 apps that I have built over the last couple of weeks that I'm using, some of which we may make available to members in coming months, which speak specifically to needs that I have. 
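The file-scooping app described above isn't shown in the episode, but the duplicate-finding step it mentions can be sketched in a few lines. This is a purely hypothetical illustration, not the actual app, and it assumes "duplicate" means byte-identical files:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by SHA-256 content hash; any group
    with more than one path is a set of byte-identical duplicates."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: ps for h, ps in groups.items() if len(ps) > 1}
```

A real music library would probably want a fuzzier notion of "duplicate" (same track, different encodings), which needs audio fingerprinting rather than content hashing.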
And this is really radical, and I'm far from the only one experiencing it. I've been reading a lot of testimony over the last two or three weeks. There was one particular engineer who said that his two-person team now supports thousands of users, and he described what he now does as moving from playing every instrument to conducting an orchestra. This is what, a year ago, Andrej Karpathy called vibe coding: the idea that you have a vibe, you can talk to your computer, and it will start to build whatever you need. And I wrote a lot about vibe working, and I'm sure many of us are doing that, where we vibe our way through a speech we need, or through a marketing plan, or through analyzing customer data. You don't really maintain those outputs. What you maintain is a list of intents that become a set of done lists. And for teams like mine, which are small, everyone is highly motivated and super capable; they can lean into this and do it really quickly. But I think for large companies there is going to be molasses, for want of a better term. Their processes are designed for a linear, approval-based, sequential system. Especially if they're public companies, they have to worry a great deal about risk and go through all of those layers of approvals. That's the to-do list era. Those of us who are moving into the done list era work in parallel, outcome-based and with immediate response. I will give you one small example of this. I'm sure many of you have also been through this, where you're looking for just the right note-taker on your computer, and you've moved from Notes to Notepad, to Evernote, to Notion, to Roam, to the next thing. Well, I knew exactly what I wanted, and it's very simple. 
It works in my browser, it syncs across to my phone, and it took me 30 to 40 minutes to build, quicker than it takes me to navigate around a complex Notion setup. And it's exactly the thing that I want. So this is a really remarkable moment, this revolution in building that we're starting to see through the capabilities of these agents. And that's the second part of this tentpole, which is the agentic coding revolution. We have crossed a line. If 2024 and 2025 were largely about productivity boosts, in particular for software engineers, then in 2026 one of the things a software engineer did, translating real-world needs into code that computers could turn into apps that met that real-world need, feels to me like it is becoming obsolete: the translator attribute. Product managers, who sit one level up, closer to the end user, would do some of that translation as well, and I think they still have an important role. But you can start to see that the gap between the user and the code that gets the user what they want, whether that's something done or an analysis, is squeezing. I used the word "smushing" when I was writing my notes for this. It's smushing, and smushing fast, if indeed that is a word. There was a really interesting story a few days ago that BCG, the strategy consultancy, had operationalized what they called the consultant-as-creator model. Their consultants have built over 36,000 custom tools using AI. So they're not buying the software; they're probably externalizing certain capabilities in these tools for themselves and for their customers. And one of the things I found really remarkable was that the lead developer of Claude Code, which is Anthropic's coding tool, revealed that in December, 100% of his code contributions were written by the AI itself. And he wrote 40,000 lines of code. 
And he describes it as editing and directing rather than typing syntax. It's the ultimate in the kind of leadership role that you might have. So one way you might frame the skill that's required in this world of building and making is the importance of being able to frame the right question, to direct the right intent. The economist and friend of Exponential View, Erik Brynjolfsson, has called this the Chief Question Officer. If large parts of the execution phase of work, whether coding, calculation or certain classes of the doing, are increasingly commoditized and happen quickly, value shifts to the complements. That's classic economist framing, right? The value shifts to the complement, and in this case that would be the problem formulation, the question asking, the intention setting and the evaluation: did I get what I needed? And so the bottleneck shifts from writing the code to: do you have a good enough backlog that you can articulate, can you formulate that problem, and can you evaluate it? I've gone through a year's backlog of stuff that I would just never get round to in just a day. And so the work moves to knowing what's possible, what you need, what to get right. It's no longer just an engineering mindset, it is really a creative mindset, an artist's mindset. Now a lot of this is classic Jevons paradox. We're bringing down the cost of something, so we're going to do lots of it. And what's happening is that we're building things that otherwise we simply wouldn't build. No one was going to build that DJ music workflow for me. I'd seen all the tools, and I wasted more time searching the web for tools that could do it than it took me to build it. And I certainly wasn't going to pay 10 to 15,000 dollars to a dev shop to build it for me. I was just never going to get it built. And now it is built, it is being used, and it is in existence. And this is happening. 
You can see this rippling through the engineering community if you go onto X. But you can also look at Stack Overflow, which had been the place where a lot of engineering knowledge was stored, and Stack Overflow has really collapsed. People are not asking questions anymore, because the AIs are developing what they need to develop. I think for software companies this could be quite challenging, because right now I can build a tool faster than it takes to communicate the spec. So why would I accept the shortcomings of someone else's design, unless they have some kind of deep lock-in, which might be that they are the so-called system of record within my enterprise? And of course there are all sorts of things we have to consider. Are these systems going to be reliable? Are they going to be safe? How are they going to interact with each other? Are we going to be able to keep our data, or will they fail and the data be gone for good? These feel like problems that will get solved over the next year or two. Erik talks about that value going to the question, the questioner and the insight. And if I look at this from a business perspective, where does that value start to anchor? I think there are probably three areas. One area is the data. It doesn't matter how good an app is: if it doesn't have the data that is relevant to make it good, it's going to be irrelevant. So I think data becomes important. The second thing that starts to become important is distribution. It's going to be harder and harder to stand out. You have to be hyper, hyper viral in order to do that. So existing distribution, I think, may start to benefit people. 
And I think the third thing is the insight that you have, the really specific take that speaks to something about the world that doesn't currently exist, which is what building anything is, right? You don't build something that's already there. And the interesting thing is that many of those things correlate to one thing, which is customers. Because customers can be a source of data, they can be a source of insight, they express what their problems are, and they are also your distribution angle. So one idea I'm noodling with is the importance of that customer centricity. And hopefully, for premium members of Exponential View, over the year you'll start to see the fruits of some of that. So when making is trivial, the question is: what is still hard? What is still valuable in that context? You can think back to the economist's framework, which is that you look for the complement. And one of the ways that I now think about this, within this anchor, is the idea of orchestration. If you are not building something directly yourself, and it really feels to me that for the class of code we might have written, and certainly within Anthropic, right, these incredible developers, the class of code they are writing, well, you don't have to do that anymore, then what are you doing? And the answer is that we're orchestrating. And as any conductor, and I'm just going to say this, I realize I knew nothing about conducting except for watching that film which had, I think, Cate Blanchett in it. But when you're conducting an orchestra, you need to know the capabilities of the people you are conducting. And one of the things I've noticed, as I've started to move from single-agent coding to multi-agent systems, working in parallel with six, eight, ten of these things at a time, is that you do need to understand those systems. 
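That idea of knowing each agent's capabilities and routing work accordingly can be sketched in code. Everything here is invented for illustration, the agent names, the capability map and the stand-in `run` function; nothing in this sketch comes from the episode:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical capability map: which (made-up) agent handles which kind of task.
AGENT_FOR = {
    "code": "coding-agent",
    "critique": "critic-agent",   # the "tougher, more cynical" reviewer role
    "research": "research-agent",
}

def run(agent: str, task: str) -> str:
    """Stand-in for a real API call to the chosen agent."""
    return f"[{agent}] done: {task}"

def orchestrate(tasks: list[tuple[str, str]]) -> list[str]:
    """Dispatch (kind, description) tasks to the right agent, in parallel,
    returning results in the order the tasks were submitted."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run, AGENT_FOR[kind], desc) for kind, desc in tasks]
        return [f.result() for f in futures]
```

The point of the sketch is only the shape of the work: the human writes the capability map and the task list; the agents do the execution in parallel.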
And I've even discovered, and this is kind of curious, that the agents are starting to understand their own limitations. I was building a kind of internal research tool a few days ago that involves a lot of different agents. And at one point Claude said to me explicitly: don't use me for this, go and use ChatGPT, because it's tougher and more cynical than I am. Which I thought was a really interesting bit of self-reflection. It's quite a strange moment. And that is evidence, I think, that we're not going to have that singleton, all-powerful AI. It is the society of AI that I wrote about a couple of years ago, or, if you saw the recent essay, the world of spiky minds. That's also reflective of this: lots of different models, lots of different capabilities and tones, and the need for us to be able to orchestrate and understand across them if we are actually to be more than individual contributors. So there's a reality check here, which is the usage gap. At the end of 2025, ChatGPT, alongside everyone else, by the way, everyone's aping Spotify, launched their year-in-review for 2025. If you got one of those, why don't you share some of your highlights in the chat? That would be quite interesting. When you look at that data, it was quite interesting to see what it took to get into the top 1% of all ChatGPT users. I found a post on Reddit that showed a user who had had only 283 chats in the whole year, and they got into the top 1%. Now, one query a day, in my mind, is not really using AI. That is not diffusion, that is experimentation. It's a bit like my home: I can tell you we have a fondue set, but it comes out every three or four years. 
So I know that in some technical way we own a fondue set, but we don't really do fondue. And the gap between fondue-set-owning AI users and users who've figured out orchestration is now a chasm, and it's growing really fast. I really think it's going to be hard to close. We've started to see how, within the mainstream, Gemini from Google is winning a lot of market share against ChatGPT, because that's where the kind of casual person, the one who's going to dip their bread into melted cheese, you know, use AI once or twice to improve some simple processes, is going to end up going. And that casual usage is volatile, whereas in the orchestration space, deep integration matters. So this observation is not about who's going to win the broad consumer; that's a commercial question, Google versus OpenAI, and I'm sure they will both have successful businesses here. I'm really interested in who is embedding this technology in their ways of working in order to really maximize and expand their capacity. If 99% of ChatGPT users did fewer than 283 queries in 2025, based on one Reddit post, and maybe there's better data out there, then the number of people who are going into orchestration is very, very small. And so the human job still exists, and it's really fun. I'm super enjoying myself. It's about architecture; it's about establishing intent, asking: what am I trying to do? It's verifying: have I got where I wanted to get to? What have I learned in that process about things that I can extend? And what is the expertise that I now need, knowing which agent to use for which task? I've gone from being an awful developer, and many of you know I was such a bad developer that my development team begged me to stop writing code, to being able to write quite good code and apply my judgment to it. That takes us to the second half of this second anchor, which is authenticity. 
So authenticity is going to become really important. I read on X something from an American venture capitalist, first name's Lulu, I forget the rest of her name. And she made a really important argument, where she said: listen, pseudo-authenticity is going to become really easy and accessible, because the tools are getting better and better. And it took me back to a speech I gave about 15 years ago, before AlphaGo and all these other things, before the transformer architecture, where I said: conceptually, making stuff will become perfect, because the AI systems will make things to machine calibration. And in a world where everything becomes perfect, we will start to value the complement of perfection, which is the imperfect, the thing that has some meaning. And I described it, slightly tongue-in-cheek, as the future being artisanal cheese. We can all go out and buy lots of cheese that's been made in a factory, but there is still some pleasure in artisanal cheese, or micro-brewing. And that's what I meant: the machines can produce perfection, so the thing that's hard is the thing that's human, the thing with texture, with idiosyncrasy, with interiority. The idea is that authenticity is proof of work. And I've discovered this because I'm writing my new book at the moment, and of course I'm using the LLM tools to help me with research and to be red-teamers on the concepts. But if you try to get the LLMs to write anything, even a network of LLMs, what you get is incredibly mid copy. It doesn't have that interiority, it doesn't have that idiosyncrasy, it doesn't have the details that come from being a human who's lived through something, has experienced things, and knows when to break the rules and when not to. 
And there's some empirical evidence around this. The ARC-AGI-2 benchmark showed that pure LLMs score 0% on tasks requiring genuine fluid intelligence. They're good at working their way through certain classes of crystallized problems, but I think that means that right now they're not very good at doing the thing that makes the writing good. I say right now because the LLMs, of course, aren't just LLMs anymore; there's lots of other things going on in these systems to make them perform and be much more reliable. So we don't know that that frontier won't get crossed. What I have learned, of course, is that they can identify golden threads and analyze writing styles, but they can't write good-quality paragraphs. And one of the challenges, of course, if you are someone who is delivering what I'm doing now, which is an expensive, authentic experience, I'm spending an hour of my time, you are all graciously giving me an hour of yours, is that we have to work with LLMs, with the best tools available, otherwise we just can't cope with the demands of the work. But it's that hard human work, the work with your pen. Where's my pen gone? Here it is. You all know my lovely bronze fountain pen that doesn't have the ink reservoir in it. And the work without the pen: the work of observing the world around you, of reading different things, the effort that you put in. That becomes the differentiator. And the funny parallel is that that's also what made bitcoin valuable, right? It was the proof of work. It's proof of work that makes authenticity valuable, because we did the thing that John F. Kennedy called the hard thing. And what I'm doing is putting my attention and my taste and my experience into the output. So where do I think value goes in that world? 
And I think this creates an opportunity for community builders, for people who develop a sense of taste, a sense of perspective, experience, who can bring together things that are difficult, because difficulty plus authenticity will turn into scarcity and value. Ben Thompson at Stratechery has written a piece just in the last few days that reflects on this as well: the idea that imperfection becomes the premium, the rambling podcast, the slightly rambling Substack lives, the things that could only come from that person. A quick note: if you want to support us in bringing more of these conversations to the world, please consider subscribing to the show. So let's get to part three, the foundations. What's underpinning all of this? Energy, and whether the capital, the economics, can hold. The first part of this anchor is about what's happening with solar power. This is super important, because all of these things I've talked about require energy. And there is a crunch in the US, which is why we're seeing a lot of gas turbines being reused and moved from the aeronautical industry to power data centers. This will be a short-term thing. The reason is that a gigawatt data center is probably worth tens of billions of dollars of revenue over its lifetime, per SemiAnalysis. So you don't really want to sit around waiting for a grid connection, and you're happy to pay for natural gas, which is now more expensive than solar in most parts of the world. But it's a blip against the overall paradigm, which is that electricity is turning into a manufactured technology because of the learning curve attached to solar. That's why we are seeing this dramatic growth in solar deployment, not just in the rich parts of the world but also in places like India. India's coal consumption in 2025 was lower than in 2024, in large part because of the growth of solar. 
And we obviously know the story in Pakistan, and we see what's happening in sub-Saharan Africa. What will continue to happen, year in, year out, is the doubling of cumulative production, which will continue to apply that learning rate to solar. And production is growing rapidly. Pakistan imported 17 gigawatts of solar modules in 2024; that was more than all but four countries. Sub-Saharan Africa imported 2 gigawatts of panels in one month last year. This is people-led, market-led and policy-led. But all of those panels increase production volumes, and the learning rate applies. Historical learning rates have been 20% per doubling of cumulative production, which means that the panels get 20% cheaper. New analysis suggests we might be in a regime change and that the learning rate might be as high as 40%, maybe even higher. I'm a bit skeptical, frankly, about it being 40%, but what it means is that the reduction in cost will get even faster. Now of course there are other questions, like balance-of-system costs, and how you build out utility-scale solar cheaply. Those things are also being addressed through robotics; I'm an investor in a robotics company that has tracked robots that do solar-field installations. And a lot of this doesn't actually run into balance-of-system costs, because it's decentralized and sort of absorbed by a small business. That will fundamentally transform how we produce our electricity, and 2026 is going to be another important year for that. And we will be surprised. I will wager at the end of the year, my only real forecast, that 2026 solar deployment will be higher than ever before. And look at something else: six of the major oil-producing nations are in conflict at the moment, and oil is trading at $56 a barrel. That tells us something. The strategic leverage of petro-states is really diminishing. 
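As a quick back-of-the-envelope illustration of what those learning rates imply (my own worked example, not figures from the episode): under Wright's law, cost falls by the learning rate with each doubling of cumulative production.

```python
def wrights_law_cost(initial_cost: float, doublings: float, learning_rate: float) -> float:
    """Cost after `doublings` of cumulative production under Wright's law,
    where each doubling cuts cost by `learning_rate` (0.20 for 20%)."""
    return initial_cost * (1 - learning_rate) ** doublings

# At a 20% learning rate, three doublings roughly halve the cost:
# 0.8 ** 3 ~= 0.51. At a 40% rate, two doublings cut it to about a third:
# 0.6 ** 2 = 0.36. That's why a regime change in the rate matters so much.
```

This is why the claim that production doublings keep coming is the load-bearing part of the solar argument: the cost decline compounds per doubling, not per year.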
And the marginal value is going to the electron. So finally, two minutes on the bubble. We held our nerve at the end of 2025. The mood turned really sour by late October, early November, and it was not showing up in our data. We're tracking thousands of different things every day, and we could see the narrative changing, but we couldn't see the fundamentals changing. And of course, as you saw at the end of the year and the beginning of this year, the analysts started to say that we're still in an investment boom, not in bubble-cycle territory just yet, maybe towards the end of the year, maybe into 2027. You should check the framework out at Boom or Bubble AI, and also tell us what else you'd like to see there. When we run this forward, when do we think there could be some real pain? We struggle to see real bubble-top risk before the third quarter of this year, maybe the fourth quarter. But we update our forecasts regularly, and we might review them, bring them forward or do something else. My midpoint estimate is actually much later than that. The bear case requires lots of things to align: enterprise adoption failing to show up by the middle of this year, Microsoft's momentum turning around, OpenAI really struggling, some kind of deep debt-default shock, and a visible data-center construction overhang emerging. I think if all of those things happen together, and I mean together, there might be difficulty in the second half of this year. But you just have to go back and look at the demand signals. One of my favorites, I read this on X, was a single developer who consumed 100 billion tokens in 39 days. That is quite remarkable. Or look at what's happening in memory: HBM, the high-bandwidth memory that's needed for these AI factories, is sold out for the whole of 2026 at SK Hynix and Micron. 
Prices won't drop for a couple of years, and the spot prices on GPUs are rising, not falling. This is not oversupply. And what I think we're going to start to see this year is early frontier companies, big companies, deploying agentic systems, and once they get it working, and it might take a few months, they'll put it in front of tens of thousands of employees, and that's an enormous step-change increase in demand. To give you a sense of what that increase looks like: I'll regularly run processes through some of my multi-agent systems, and that'll be half a million tokens for something that I'm trying to push forward on the frontier. So the cataclysmic mood, I think, has receded a little bit. But always stay level-headed. Read Boom or Bubble AI and read the newsletter, because that's our job. So this is where we are. Let's reprise my three anchors for the year. The act of making has been transformed: you can build faster than you can spec, and that done list is real. The question of meaning has sharpened, and that meaning is going to come from us, with authenticity being the new scarcity. And the foundations, energy and capital, are for now holding, and the boom continues. The gap between those who have crossed over and those who haven't is, I think, going to widen, and I'm not sure what it takes for that to close. Thanks for listening all the way to the end. If you want to know when the next conversation is released, just hit subscribe wherever you're listening. That's all for now, and I'll catch you next time.
