Today on the AI Daily Brief: the week the global AI conversation hit a whole new level. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors: AssemblyAI, Robots and Pencils, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors@aidailybrief.ai. While you're on aidailybrief.ai, you can subscribe to our newsletter, which is newly restarted and which is going to have all of the links to all of the articles and posts that I reference in the show. And you can also learn about all our various other ecosystem initiatives, like Claw Camp or Enterprise Claw, registration for which is open until the end of next week. The last couple of months have seen a steadily growing acknowledgment of just how significant the disruption of AI is. This shift came first to those who are actually in the industry. Just last week, OpenAI founding member Andrej Karpathy wrote: It's hard to communicate how much programming has changed due to AI in the last two months. Not gradually and over time in the progress-as-usual way, but specifically this last December. There are a number of asterisks, but in my opinion, coding agents basically didn't work before December and basically work since. The models have significantly higher quality, long-term coherence, and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow. As a result, programming is becoming unrecognizable. You're not typing computer code into an editor the way things were since computers were invented. That era is over. You're spinning up AI agents, giving them tasks in English, and managing and reviewing their work in parallel.
The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator claws with all of the right tools, memory, and instructions that productively manage multiple parallel code instances for you. The leverage achievable via top-tier agentic engineering feels very high right now, in my opinion. This is nowhere near business-as-usual time in software. Now of course, this is not limited to software. On the earnings call where he explained the 40% reduction in the Block team, Jack Dorsey specifically noted the leap that AI had made around the same December timeline. And indeed, Wall Street is one of the main places where the recognition of the phase shift in AI is fully coming home to roost. Michael Gayed of The Lead-Lag Report wrote this: I once tweeted that AI is BS. Have been playing around with Perplexity Comet to automate workflows. It's not BS. It's going to fundamentally alter the world. I believe it now. By the end of the year, I believe we will see huge layoffs. Block is a sign of what's to come. This humble shift, a throwing up of the hands and saying I was wrong to doubt this, maybe got its best expression this week in a memo from legendary Oaktree investor Howard Marks. The memo is called AI Hurdles Ahead. In it, Marks writes: my main reason for writing this addendum is to address significant changes that have taken place in AI over the three months since I published Is It a Bubble? First, he said, there's the pace at which developments in AI are occurring. That speed is unlike anything we've seen before now, and this has implications that have never existed. AI is growing at speeds that greatly outpace the technological innovations of the past. Nothing has ever taken hold at the pace AI has. It's able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend.
The second important thing that's happened has been an incredible leap ahead in AI's capabilities. Level one is chat AI. Level two is tool-using AI. Level three is autonomous agent. At this level, the user doesn't tell AI what to do. The user gives it a goal as well as the parameters of the desired output. The agent does the work, checks it, and submits a finished product. This is labor replacement at the task level, not assistance. Per Marks, the most significant thing that distinguishes AI is something we've never dealt with in connection with prior technological developments: AI's ability to act autonomously. The bottom line, Marks concludes, is that AI is very real, capable of doing a lot of work that heretofore has been done by knowledge workers, and growing extremely rapidly in terms of applications. What we see today is only the beginning, as I mentioned above. If I had to guess, I'd say its potential is more likely underestimated today than overestimated. He does point out that it's not clear that the market is pricing that disruption the right way, but that the change is undeniable. And yet still, when we look back at the history of this particular week in time, this will be the Citrini Report week. The piece by Citrini Research is called The 2028 Global Intelligence Crisis, and walks through a doomsday scenario where effectively AI is so good that it's actually bearish, creating a doom spiral where AI does everything, allowing companies to cut human workers, which reduces spending, which reduces available capital from consumers, which forces companies to lay off more, and so on and so forth. The note, while admittedly an artifact of speculative exploration, hit with the force of a neutron bomb in a Wall Street environment that finds itself extremely destabilized and unclear what to make of the AI change. Is it an infrastructure bubble? Is it the SaaSpocalypse where AI does everything? Can it be both at the same time?
Despite, as Deutsche Bank strategist Jim Reid put it, the report having a high vibes-to-substance ratio, it was extremely resonant. So much so that much of the rest of the week has been responses and rejoinders. Economics opinion writer Noah Smith wrote a response called The Citrini Post Is Just a Scary Bedtime Story. He summed it up: AI might take your job, but it probably won't crash the economy. And if it does, we know how to deal with it. Noah writes: if you don't like posts about AI, I have some bad news. For the next few years, there are probably going to be a lot of them. It's not often one gets to live through an industrial revolution in real time, especially one that moves so quickly. There will be very few pieces of the economy, if any, that this revolution doesn't touch, and it will have major implications for other things I write about, like geopolitics, society, etc. AI is not going to be a special compartmentalized topic for a long time. It's going to be central to a lot of what's going on. If you find that boring, well, all I can say is we don't get to choose the times we live in. Every couple of weeks, someone comes out with a big post about how AI is changing everything, and that post goes viral and everyone talks about it for a few days. A couple weeks ago it was Matt Shumer's Something Big Is Happening. This week it's Citrini Research's THE 2028 GLOBAL INTELLIGENCE CRISIS. And yes, the title is in all caps. The post paints a picture of a future in which AI disrupts lots of different kinds of white collar work and service industry business models in industries like software, finance, business services, and so on, and in which this disruption causes an economic crisis. Noah continues that this is really two theses in one: a microeconomic thesis about which industries and jobs AI will disrupt, and a macroeconomic thesis about what this will do to the economy overall.
Now, I'll pause the reading there, but Noah goes on to basically make the point that, among other things, the Citrini post operates from the implicit idea that there will be no policy response, a fairly confusing view given the magnitude of the disruption they're articulating. The Kobeissi Letter also took on the Citrini post. Their response essay was called What If AI Doesn't Actually End the World? In it, they acknowledge what's obviously true: AI is not another software feature or efficiency gain. It's a general-purpose capability shock that touches every white collar workflow simultaneously. Unlike any revolution in history, AI is getting better at everything simultaneously. But what if the doomsday scenario is false? It assumes demand is fixed, that productivity gains don't expand markets, and that the system cannot adapt faster than the disruption. We believe, they continue, there is a second path that is being dramatically underpriced. The same anecdotal takedowns that look like early signs of systemic collapse may ultimately be the start of the largest productivity expansion ever. While our analysis is not a certain outcome, it is important to remember that humanity has always prevailed and the free market always works itself out. A couple of the key pieces of the argument from the Kobeissi Letter: one is something that I've talked about frequently on this show, that the doom loop, or any long-term job loss scenario, assumes that demand is fixed. The bearish loop, they write, creates a simplified linear cycle: AI gets better, businesses reduce headcount and wages, buying drops, businesses invest in AI again to defend their margins, and the downward cycle repeats. This assumes, they write, a completely stagnant economy. History suggests otherwise. When the cost of producing something collapses, demand rarely stays flat, it expands. When compute costs fell, we did not consume the same amount of compute more cheaply.
We consumed orders of magnitude more of it and built entirely new industries on top. AI decreases costs in every sector, and when service costs go down, purchasing power increases with or without wage growth. The doom loop becomes dominant only if AI replaces labor without materially expanding demand. The optimistic scenario emerges if cheaper compute and productivity yield entirely new categories of consumption and economic activity. The way that I've put this in the past is that if the cost to produce code is 1/100 of what it used to be, we don't get 1/100 of the coders, we get 100 times more code. The Kobeissi Letter also argues that labor markets don't vanish, but restructure. They write: A key concern is that AI disproportionately affects white collar employment, which drives discretionary consumption and housing demand. This is true and a legitimate concern, particularly as the wealth divide is already so massive. However, AI struggles with physical world dexterity and human identity. Skilled trades, hands-on healthcare, advanced manufacturing, and experience-driven industries retain structural demand. In many cases, AI complements these roles rather than replaces them. More importantly, AI lowers the barrier to entrepreneurship. When one individual can automate accounting, marketing support, and coding tasks, small-scale business formation becomes easier. We are bullish on small businesses. In fact, the removal of barriers to entry through AI may be the solution to flatten the wealth divide that we currently face. The Internet killed certain job categories, but created entirely new ones. AI may follow a similar story, compressing some white collar functions while expanding self-directed economic participation elsewhere. In their conclusion they write: AI amplifies outcomes. It can amplify fragility if institutions fail to adapt, and it can also amplify prosperity if productivity outpaces disruption.
The anecdotal takedowns are signals that workflows are being repriced and cognitive labor is becoming cheaper. A clear transition, but transition is not the same as collapse, as every other major technological revolution has looked destabilizing at the start. The most underpriced possibility today is not dystopia, it's abundance. AI may compress rents, reduce friction, and restructure labor markets, but it may also deliver the largest real productivity expansion in modern history. And it wasn't just Internet newsletters that were publishing rebuttals. No less than Citadel Securities got in on the game. Their piece, which they called The 2026 Global Intelligence Crisis, pointed out that much of the evidence just points in a different direction. Easily the most referenced part of the Citadel rejoinder is the chart of Indeed job postings for software engineers that shows them going up dramatically over the last few months. They also point out that maybe the biggest X factor in all of this is AI diffusion speed. Not how much of the white collar work AI could do right now theoretically, but at what speed will enterprises actually allow it to do that work. Citadel writes: the first-order presentation of AI adoption is generally a binary question. Do you use AI? The more important question, insofar as it relates to the AI displacement narrative, is how intensely is AI being used for work. Looking at St. Louis Fed data, they say the data presents little evidence of any imminent displacement risk. Recursive technology, they point out, is not recursive adoption, and the risk of displacement declines with a slower pace of adoption. Finally, calling upon the example of history, they write: in 1930, John Maynard Keynes wrote Economic Possibilities for Our Grandchildren, predicting that productivity growth would be so powerful that by the early 21st century, the workweek would fall to 15 hours.
He was directionally correct about productivity growth but profoundly wrong about labor market implications. Rather than working dramatically less, societies consumed dramatically more. Why? Because rising productivity lowered costs and expanded the consumption frontier. Preferences shifted towards higher quality goods, new services, and previously unimaginable forms of expenditure. Leisure increased modestly, but material aspiration expanded far more. History suggests productivity gains do not automatically translate into labor withdrawal or demand collapse, as they alter the composition of demand, expand real incomes, and generate new industries. Keynes underestimated the elasticity of human wants. You've heard me talk about AssemblyAI and their insanely accurate voice AI models, but they just shipped something big. Universal-3 Pro is a first-of-its-kind class of speech language model that lets you prompt speech recognition with your own domain context and vocabulary instead of fixing transcripts in post-processing. It's more flexible than traditional ASR and more deterministic than LLMs, so you get accurate output at the source and can capture the emotion behind human speech that transcripts often miss, all without custom models or post-processing hacks. And to celebrate the launch, they're making it free to try for all of February. If you're building anything with voice, this one's worth a look. Head to assemblyai.com/freeoffer to check it out. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months.
They don't build big teams, they build high-impact, nimble ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers. There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to aiuc.com to learn about the world's first standard for AI agents. That's aiuc.com. Weekends are for vibe coding. It has never been easier to bring a passion project to life, so go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it, you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party.
That's why you need Blitzy, the first autonomous software development platform designed for enterprise-scale code bases. Deploy it at the beginning of every sprint and tackle your roadmap 500% faster. Blitzy's agents ingest your entire code base, plan the work, and deliver over 80% autonomously validated, end-to-end tested, premium quality code at the speed of compute. Months of engineering compressed into days. Vibe code your passion projects on the weekend. Bring Blitzy to work on Monday. See why Fortune 500s trust Blitzy for the code that matters at blitzy.com. That's blitzy.com. And this idea of human wants, both in terms of their ability to expand but also just in terms of their manifestation in reality and theory, was the subject of my ponderance, written in the midst of a 20-hour forced layover in the Amazonian rainforest, that I called We're All Missing the Most Important Market Force That Will Shape AI, or, My Plane Made an Emergency Landing in the Amazon and All I Got Was This Lesson About the Future of the World. The piece reads: It's a weird week, man. A bomb cyclone blizzard with the force of a Category 2, nearly 3, hurricane shut down New York and the rest of the East Coast. This was problematic for lots of reasons, not least of which was that it completely torpedoed our family's return from Uruguay to the Hudson Valley. Meanwhile, back home, the latest AI doomer sci-fi (I say that with a lot less derision than it probably sounds) struck a nerve deep enough to rip the throats out of IBM, Visa, and many others just because of what AI might do. As I sit here in Manaus, Brazil, I find myself contemplating how my family's experience over the last 24 hours or so demonstrates just how wrong I think we are about how AI ends up playing out in the economy. I'll give you a moment to finish ralphing at the utter LinkedIn-ness of that statement, and then let me explain why we're all missing the most important market force that will shape AI.
When we got the notification that we were making an emergency landing in Manaus, the trip had already been an utter calamity. A few days earlier, in the middle of the night, we got the text notification from Delta that almost assuredly our upcoming trip from Montevideo to JFK was going to get 86'd by the impending snowstorm. The options for rescheduling weren't great. It was basically stick around Uruguay till Friday, when we were supposed to have gotten home Monday morning, or scurry on Monday to do a new multi-leg trip through Sao Paulo and Atlanta. We figured that even if things were still gnarly in New York on the back end, solving that from Georgia was easier than solving that from Sao Paulo. Dutifully, we drove the two hours from Jose Ignacio to the Montevideo airport, returned our tiny VW rental, and let the kids scarf some Mickey D's before the 27 hours of upcoming travel. In retrospect, it was the last peaceful moments of optimism we'd have for some time. We got to the check-in line and instantly it was clear that something was wrong. I can't check you in, said the attendant. Wait, what? Why? We can't check in anyone whose final destination is New York. But the storm is over. We're not even getting there until tomorrow, when it will be even more over. And we've got stops in Sao Paulo and Atlanta. Let us get stuck there. I can't. It's our policy. And after a call to her supervisor, it remained their policy. We didn't have a lot of great options: turn around and hang out for another five days, or call Delta and have them delete the final Atlanta-to-JFK leg so we could at least get to the US. Atlanta it was, and after some frantic searching, we booked what seemed like the last rental car in America to do the 15-hour drive home from Hartsfield-Jackson. Fast forward about 10 hours.
We've made it through the first leg of the flight, a couple hours in the actually kinda excellent GRU airport in Brazil, and all of us, including 4-year-old Gus and 7-year-old Alden, are passed out dreaming of a next day full of Kia Souls and a dozen Wawa and Red Bull stops. That is, until at 4am the captain gets on the loudspeaker and says that, sorry, a generator has stopped working and we have to make an emergency diversion into Manaus. That's the capital of the Amazon. For those keeping track at home, we hadn't even made it out of Brazil. So much for at least if we get stuck, it will be in the US. Airports are stressful at the best of times. Add 300 people dropped out of the sky into a place many of them had never heard of, at the mercy of the gods of airplane mechanics, hotel availability, and Brazilian customs authorities, and you've got something else entirely. But this is supposed to be the setup to a story about AI, right? We're now sitting here at the lovely Hotel Villa Amazonia in the old part of Manaus, waiting for a room to be ready so we can catch a few winks before trudging back to catch another plane (hopefully a new one, to be honest) that will somehow, someway get us back to the US of A. It is absolutely undeniable how much AI has made this experience better. I've used LLMs to translate back and forth in a language I barely know how to say thank you in, and to research the safety profile of different areas. It's not a war zone, but it's not low risk either. Gee, thanks ChatGPT. Real reassuring. And I've also used LLMs to hunt for rental cars, plan driving routes, and of course, reassure myself that Airbus 330s really can fly with just one generator in those tense 45 minutes between when we got the announcement and when we touched down. And yet, as awesome as AI has been, every part of the story has really been about human interaction and human discretion, whether that went for us or against us.
The attendant at MVD and her supervisor who didn't buck a clearly stupid policy that might have made sense 24 hours earlier, but certainly didn't anymore. The customer service reps of the Delta Diamond Medallion status line, who ranged from wildly unhelpful on the one end of the spectrum to hustling to find us a flight to Philly on the other. The hotel staffers who overlooked that we hadn't technically booked our kids on the reservation, and who hustled to get us in a room before the 3pm check-in time. AI has been extremely helpful during this trip, but at no point would I have rather interacted with AI than with these humans. That's not just because I prefer human interaction out of some historic sense of legacy, of the way things have always been done. In fact, I'd venture to say that I'm exactly the type of person who in many situations would wildly rather interact with an anonymous robot. The reason that I preferred human interaction is the possibility of exception. Human systems are built with an implicit assumption of discretionary non-compliance. Rules tend to be written much tighter than anyone expects them to be followed. Everyone knows this. The rule writers know it, the managers know it, and so do the humans interacting with the system as customers. Human judgment is the shock absorber between the world the policy was designed for and the messy reality. To be clear, this is a feature, not a bug. The whole system would be significantly more brittle if everyone just followed the rules perfectly. You can probably see where I'm going with this. A world where AI agents perfectly follow the policy all the time would be, in many, many real-world contexts, much worse than the one where humans follow it only imperfectly. Call it the paradox of perfect compliance. But couldn't AI have grace and flexibility programmed in as well?
Sure, and as we design agent-led systems, it will probably be important to remember that in people's real lived experience, exceptions are as important as rules. But kindness as governance, an unspoken and yet nearly universal aspect of well-functioning human systems, is hard to program. Small acts of bureaucratic rebellion tend not to be the byproduct of clear, rational calculations. Instead, they are felt decisions. They are a split-second judgment call that comes on the heels of the utterly relatable exhale of an exhausted parent at their wit's end just trying to keep it together for their even more exhausted kids. There's something in the pleas of the person being helped that suggests that as bad as this situation is, there's something else they're going through that's even harder. Which brings us to this weekend's market freakout cause du jour. The latest AI doomer fanfic/thought exercise is a fictional dispatch from 2028 describing an AI-driven economic crisis. This one isn't about the fallout of an AI bubble popping because of a performance plateau. Instead, it's a meditation on what happens if AI actually gets as good as we think it will. Basically: so bullish, it's bearish. The piece is from well-respected market research firm Citrini and is well constructed and worth reading. And boy did people read it. 9 million views on the X post alone. Bloomberg, the Wall Street Journal, and many more wrote articles about the piece as the latest leg of the SaaSpocalypse cleaved billions off of tech and finance stocks. In other words, markets actually moved on a literal work of fiction. There is a ton of great debate to be had around the piece, which is of course happening right now, and which makes the outcome of them having shared it likely better in the medium and long run than if they hadn't shared it, even if DoorDash stockholders don't really agree right now. I'm not really interested in a point-by-point rebuttal.
What I do want to point out is that, like most analysis on both the bear and bull side, it rests on an assumption so deeply embedded that almost no one questions it: that because markets reward efficiency, efficiency is inevitable. This efficiency gospel isn't exactly wrong, but it mistakes means for ends. And here is my main point. Markets don't exist to be efficient. Markets exist to serve human preferences. Outside of the efficiency gospel, the value of efficiency is primarily in how it improves a company's ability to serve human wants and needs, not an end in and of itself. Confusing the two is like saying the point of a restaurant is great ingredients and a clean kitchen. Too much of the AI discourse on both bear and bull sides makes exactly this mistake. We've thought a lot about how much more efficient AI will make things, but too little about what we and other humans of the future are going to want. AI might make every part of a company's operations more efficient, but will that company's customers actually want to interact with the new, more efficient version on the other side? What are the chances that they actually reject it in favor of a more human version? Will they actually be willing to pay a premium for a less ruthlessly efficient experience because they like that version of the experience better? Markets make this confusing. Investors are the high priests of the efficiency gospel, and the day-to-day excitement of market moves tends to focus more media attention on the stock story than on the value created for the end consumer. Indeed, for the market priests and priestesses, the value to the end consumer is actually secondary to the value to the shareholder, but that only lasts for so long. A company can live for a long time because the markets like it, but not forever. Ultimately, the buck stops with the customer. And when it comes to the customer, human institutions are not outcome-generating machines, at least not exclusively.
In many cases, they're also, or even primarily, agency-validating systems. There's plenty of evidence to suggest that people are willing to pay for the possibility of being an exception. The chance that someone will look at your situation and deviate from the script. The knowledge that the person across the counter could break the rule for you, even if they don't. Friction isn't always waste. Think about all the ways capitalism has invented for us to transform the possibility of exception into the exception is the norm. Loyalty programs, status tiers, premium service. The entire premium loyalty economy is a multi-billion dollar bet that people will pay for guaranteed access to generally favorable human discretion. A couple of years ago, I decided I was being stupid not to concentrate airline loyalty on a single airline, and so picked Delta. We spent a fair bit of scratch on a Delta SkyMiles card, so I got to Diamond last year. Turns out there's a special 24-hour line just for Diamond members. And man, have I put that thing to the test the last few days. Even as Reddit rages at 2, 3, 4, even 5-hour wait times with Delta in the wake of the blizzard, I've been able to get a real live human being on the phone in under a minute a half dozen times. The point is, Delta isn't trying to automate the Diamond line. The Diamond line is the product. Automate that and you've eliminated the thing people are paying for. I'm not trying to be Pollyannish about the magnitude of AI disruption. Anyone who listens to the podcast knows how enormous a change I think we're living through and how profoundly challenging this next middle part could be, even though I'm optimistic for the long term. But a big strand of the most urgent concerns is predicated not just on the scale of disruption, but the speed. This type of doomerism rejects comparisons to the past because those paradigm shifts were more gradual.
This one, by contrast, is happening everywhere all at once. The core question these arguments tend not to grapple with is: just because AI could do something, will it always be called to do so? If you live in the efficiency gospel, the answer is of course yes. If a non-human intelligence can perform the same task more efficiently, it will inevitably be tapped to do that task at the expense of the human who used to do it. But efficiency is not destiny. Indeed, efficiency is only one type of market force. Humans have agency. Humans have purchasing power. Even in the Citrini Report, the white collar labor folks aren't out of consumer power yet. If human desire runs counter to efficiency, as it often does, there's every reason to think that the old maxim that the customer is always right will provide a serious counterweight to the unstoppable market advance of the machines. Safetyists have long advocated some type of pause to allow us more time to adapt. I think we might be underestimating the extent to which human consumer preferences will do that all on their own. It's entirely possible I'm wrong and the forces of the efficiency gospel are too strong to resist. But I'm on hour 30 or 40 or 50 or who knows by the time you're reading this, stranded in who-knows-where, Brazil, with two small kids. AI as information guide has been amazing, but exactly zero times have I wished I could have a more efficient AI to interact with. What I've wanted was a human being who looked at our situation and decided to break the rules just a little to help us get home. Efficiency is not destiny. And ultimately (and now I'm done reading myself and back to just talking as myself), the thing to note here is just that, as compelling as all of these arguments sound, as many holes as there are in one, as many better points in another, the reality is that we are all just grasping and guessing at a future that we cannot know.
Abundance author Derek Thompson writes: the level of uncertainty is so high, and the quality and supply of real-world, real-time information about AI's macroeconomic effects so paltry, that very serious conversations about AI are often more literary than genuinely analytical. I feel lucky to have been able to have conversations about the frontier of AI with executives and builders at frontier labs, economists at AI conferences, investors in AI, and other AI folks at off-the-record dinners where important truths can theoretically be shared without risk. I can't emphasize enough that "nobody knows anything" is about as close to the reality here as three words are going to get you. Nobody knows what's going to happen this year, or next year, or the year after that. There is no secret cigar-filled room of people who have unique access to some authentic postcard from the future. When you drill down underneath the bluster, the boomerism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding. The frontier labs don't really know what they're building, exactly. The economists don't really know how to model the thing they're claiming is being built. I wish more people talked about and thought about this subject through that sort of lens. We're trying to model the economy-wide effects of a technology whose properties the frontier labs can't even really describe yet. Whatever you think about AI today, be prepared to change your mind soon. In an extension of that post on his Substack, Derek writes that artificial intelligence offers its obsessives a kind of Schrödinger's apocalypse, which exists in a superposition between "the economy is about to change forever" and "from a macroeconomic standpoint, everything still looks eerily normal." My final reminder for this episode is that in the case of this Schrödinger's apocalypse, it's not just a question of acknowledging that multiple possibilities exist in the box.
I think we need to recognize, at a much more fundamental level, that we have a lot more agency than we give ourselves credit for to decide and shape which versions of this future come to pass. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.
Host: Nathaniel Whittemore (NLW)
This episode unpacks an intense week in the global conversation about artificial intelligence, focusing on an emerging economic anxiety and philosophical debate: is AI creating a doomsday scenario for jobs and markets, or is it launching us into a new era of abundance? NLW walks listeners through viral research notes, rebuttals from leading economists and market actors, and his own reflections from being stranded in the Amazon—a setup for examining why human preference and agency may matter more than apocalyptic efficiency narratives.
AI’s Disruption Becomes Mainstream:
Widespread Industry Recognition:
A Viral Doomsday Scenario:
Immediate, Widespread Impact:
Noah Smith Pushes Back:
The Kobeissi Letter: Market Adaptation & AI’s Dual Pathways
Citadel Securities: Adoption, Not Just Capability, is Critical
NLW’s Personal Reflection from Stranded Travel (Manaus, Brazil):
Markets Serve Human Wants, Not Just Efficiency:
Unprecedented Uncertainty:
Schrödinger’s Apocalypse:
Karpathy’s Description of the Programming Paradigm Shift [03:17]:
“Programming is becoming unrecognizable...That era is over...The leverage achievable via top tier agentic engineering feels very high right now.”
Michael Gad’s 180 on AI [05:04]:
“I once tweeted, AI is BS…It’s going to fundamentally alter the world. I believe it now. By the end of the year, I believe we will see huge layoffs.”
Howard Marks on AI’s Pace and Unparalleled Disruption [06:08]:
“Nothing has ever taken hold at the pace AI has. It’s able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend.”
Kobeissi Letter on Transition vs. Collapse [19:41]:
“A clear transition — but transition is not the same as collapse…The most underpriced possibility today is not dystopia, it’s abundance.”
NLW’s Paradox of Perfect Compliance [37:02]:
“A world where AI agents perfectly follow the policy all the time would be, in many, many real-world contexts, much worse than the one where humans follow it only imperfectly. Call it the paradox of perfect compliance…”
Markets and the End User [44:29]:
“Markets don’t exist to be efficient. Markets exist to serve human preferences. Outside of the efficiency gospel, the value of efficiency is primarily in how it improves a company’s ability to serve human wants and needs, not as an end in and of itself.”
On Agency Over the AI Future [01:02:34]:
“…We have a lot more agency than we give ourselves credit for to decide and shape which versions of this future come to pass.”
This summary aims to capture the urgency, complexity, and narrative richness of NLW’s episode, including both high-level market debate and the speaker’s personal tone and insights.