Transcript
A (0:00)
Today on the AI Daily Brief, something big is happening. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors, Rackspace Technology, Robots and Pencils, Blitzy and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, or anything else about the show really, go to aidailybrief.ai or send us a note at sponsors@aidailybrief.ai. Now, as you guys know, I'm traveling, and I had actually pre-planned a long read slash big think episode for this Sunday. But then this conversation, prompted by an article by Matt Schumer, absolutely took over our corner of the Internet and frankly expanded quite a bit beyond it, in a way that it felt important to add our part to the conversation and make sure that if you hadn't yet, you get access to part of this as well. So we're going to read some excerpts from Something Big Is Happening, a post that appeared on X about a week ago and has 80 million views. Even more than the views, it has sparked an enormous number of response articles and conversations, a couple of which we will also be excerpting. And the reason it's so important is that Matt has encapsulated and crystallized this sentiment, which you've been hearing and feeling through this show all year, which is that a shift has happened with big implications. Matt starts: Think back to February 2020. If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas, but most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they'd been spending too much time on a weird corner of the Internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you described it to yourself a month earlier. I think we're in the "this seems overblown" phase of something much, much bigger than Covid. I've spent six years building an AI startup and investing in the space. I live in this world, and I'm writing this for the people in my life who don't. My family, my friends, the people I care about, who keep asking me, "So what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version, the cocktail party version, because the honest version sounds like I've lost my mind. And for a while, I told myself that that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy. Here's the thing nobody outside of tech quite understands yet. The reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs and warning you that you're next. For years, AI has been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. 
Then, in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last, it was better by a wide margin. And the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise. Then, on February 5, two major AI labs released new models on the same day: GPT-5.3-Codex from OpenAI and Opus 4.5 from Anthropic. And something clicked. Not like a light switch, more like the moment you realize the water has been rising around you and is now at your chest. I am no longer needed for the actual technical work of my job. I describe what I want built in plain English, and it just appears. Not a rough draft I need to fix, the finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done, done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave. Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: I want to build this app. Here's what it should do. Here's roughly what it should look like. Figure out the user flow, the design, all of it. And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it on its own. It iterates like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say it's ready for you to test. And when I test it, it's usually perfect. I'm not exaggerating. This is what my Monday looked like this week. And here's why this matters to you, even if you don't work in tech: the AI labs made a deliberate choice. They focused on making AI great at writing code first, because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version which writes better code, which builds an even smarter version. Making AI great at coding was a strategy that unlocks everything else. That's why they did it first. My job started changing before yours, not because they were targeting software engineers. It was just a side effect of where they chose to aim first. They've now done it, and they're moving on to everything else. The experience that tech workers have had over the past year, of watching AI go from helpful tool to "does my job better than I do," is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in 10 years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think less is more likely. Now, the next section is Matt debunking the idea where people say, "But I tried AI and it wasn't that good." Matt says, I hear this constantly and I understand it, because it used to be true. However, he points out, the time when that was true is ancient history. 
And what's more, there's the gap, one that Ethan Mollick has talked about quite a bit: the default free version that most people have access to is significantly behind the top-tier paid versions. Matt makes the analogy: judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. Now, I'm skipping a bunch of parts of this because, frankly, you as an audience weren't even necessarily exactly Matt's target. It's more your friends and family and peers who aren't listening to the AI Daily Brief every day. Matt tries to put some context around how fast things are moving. Referencing the ongoing METR autonomy study, he talks about the fact that AI is now building the next AI, quoting the GPT-5.3-Codex release, where they wrote, "GPT-5.3-Codex is our first model that was instrumental in creating itself." He then goes through a number of different professions, including legal, financial analysis, writing and content, software engineering, medical analysis, and customer service, to share what he thinks the impact on those jobs might be. As he rounds the corner, Matt has a section called What You Should Actually Do. Matt writes, I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it, early to use it, early to adopt. His advice then is, one, start using AI seriously and not just as a search engine. Basically, get the paid version, use the best model available, and use it for hard things. Second piece of advice, he says: this might be the most important year of your career, so work accordingly. Matt writes, the person who walks into a meeting and says, "I used AI to do this analysis in an hour instead of three days," is going to be the most valuable person in the room. Not eventually, right now. Once everyone figures it out, the advantage disappears. Next, he says, have no ego about it. The people who will struggle the most are the ones who refuse to engage, the ones who dismiss it as a fad, who feel that AI diminishes their expertise, who assume their field is special and immune. He has a bunch more, but then his final piece of advice is: build the habit of adapting. He says this is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now. Matt concludes, I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it. I know the next two to five years are going to be disorienting in ways that most people aren't prepared for. This is already happening in my world. It's coming to yours. I know the people who will come out the best are the ones who start engaging now, not with fear, but with curiosity and a sense of urgency. And I know you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it. 
We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet. It's about to. So that is the piece. And if you're not sure why it got so much traction, I think it crystallizes the sensibility that people have been trying, in fits and spurts, to articulate for a couple months now, since the Opus 4.5 and GPT-5.2 models came out, but especially over the course of the early part of 2026, as we've really seen just how big a difference these models, used well, represent. Now, of course, 80 million people can't look at a thing without it getting some serious critiques. There is a healthy dose of personal invective aimed at Matt. There are dismissals via accusations of slop, basically saying that the ideas aren't legitimate because they believe Matt used AI to write this 5,000-word tome. Some people, I think reasonably, don't love the COVID comparison, either because it feels too abstract, or it feels too aggressively doom and gloom, or because structurally a virus that passes is different than a change that doesn't change back. One of the most valuable critiques, I think, is the one that basically says other knowledge work problems outside of coding aren't as instantly and easily addressable by AI as coding is. Isaac Saul writes: One thing I've noticed is that computer code is a really structured language, and software is a defined problem space with a lot of defined patterns. So software people tend to think everything is a pattern, and AI being really good at their job makes them overestimate how well it can do everything else. The truth is there is a lot more disorder, unpredictability, and humanness in so much of our lives and our work that I don't think AI applications will always, or even often, be able to account for. Matt, for instance, lists journalism as a job in trouble thanks to AI. Not that our industry needs more trouble. And it's true that AI can read documents fast and do incredible research and even write clean copy and edit. It will probably eliminate or reduce the need for some jobs. But you know what it can't do? It can't work a source over for years on end. It can't, doesn't, and won't bear witness to live events. It reminds me of the famous Good Will Hunting scene where Robin Williams is chastising Matt Damon about being such a smartass but not being able to describe what the Sistine Chapel smells like. Damon is the AI. Isaac concludes: People think humans are finite numbers of neurons and processes and thoughts and learning, but I think that is wrong. We are all constantly changing every day, every second, thanks to new inputs and new experiences. So yes, I buy that AI will be able to read documents better than your typical lawyer, but can it build a relationship with a client, or look at a jury and guess what argument might move them to guilty? Or know when to cross the line with a judge, or when to step back? I don't really think so, and those limits to me are so underdiscussed in this dialogue that it kind of discredits everything else. Now, I do not agree with the idea that it discredits everything else, but I do think that there is a lot in this critique that is worthy of consideration. The particular type of criticism that I have no patience for is the "well, actually, AI isn't all that good" variety. Call this the Gary Marcus strand of criticism, the folks who just simply cannot be convinced that AI is as powerful as people say it is. 
Now, one very highfalutin strand of this critique came from Will Mendis. He wrote another very widely viewed post called Tool Shaped Objects. And for all the people ranting and raving about how good this one is, I think it basically uses good writing to trick you into thinking it's made a point more profound than it actually has. I think it actually secretly reveals something about the current state of work outside of AI entirely, which has some big implications as well. However, it was read enough that I think it's worth excerpting as well. All right, friends, quick break to talk about a question I hear constantly: how do you actually move from AI experimentation to production without getting buried in infrastructure decisions? That's where Rackspace AI Launchpad comes in. It's a fully managed service designed to help enterprises build, test, and scale AI workloads through a guided, phased approach. With AI Launchpad, Rackspace manages the infrastructure, GPUs, and core tooling so teams can focus on validating use cases instead of building environments from scratch. You start with a proof of concept, move into a real pilot, and then scale into production on managed, enterprise-grade GPU infrastructure. Whether you're testing inference at the edge, fine-tuning foundation models, or standing up a production pipeline, the goal is the same: faster progress with less operational friction. If you're ready to move beyond demos and actually put AI to work, take a look at Rackspace AI Launchpad and see how a managed path to production can accelerate results. Visit rackspace.com/ailaunchpad to learn more. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams, they build high-impact, nimble ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers. Weekends are for vibe coding. It has never been easier to bring a passion project to life, so go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party. That's why you need Blitzy, the first autonomous software development platform designed for enterprise-scale codebases. Deploy at the beginning of every sprint and tackle your roadmap 500% faster. Blitzy's agents ingest your entire codebase, plan the work, and deliver over 80% autonomously validated, end-to-end tested, premium-quality code at the speed of compute. Months of engineering compressed into days. Vibe code your passion projects on the weekend. Bring Blitzy to work on Monday. 
See why Fortune 500s trust Blitzy for the code that matters at blitzy.com. That's blitzy.com. Today's episode is brought to you by my company, Superintelligent. In 2026, one of the key themes in enterprise AI, if not the key theme, is going to be how good the infrastructure is into which you are putting AI and agents. Superintelligent's agent readiness audits are specifically designed to help you figure out, one, where and how AI and agents can maximize business impact for you, and two, what you need to do to set up your organization to be best able to leverage those new gains. If you want to truly take advantage of how AI and agents can not only enhance productivity but actually fundamentally change outcomes in measurable ways in your business this year, go to besuper.ai. The piece, as I said, is called Tool Shaped Objects. Will writes: In 1711, a toolmaker in Kyoto began forging kanna blades for carpenters building the temples at Higashi Honganji. The blades were forged from laminated steel, the highest-quality white hagane forge-welded to soft iron, and were extraordinary. Three hundred years later, his descendants still forge them. A Chiyozuru costs somewhere between $300 and $3,000. It takes days to set up. The dai must be hand-fitted, the blade back flattened on a series of progressively finer stones, the chipbreaker mated until light cannot pass between it and the edge. Only then can you take a shaving. The shaving curls are transcendent. It is beautiful. It is also, in the economic sense, worthless. The power planer does the same work in a fraction of the time. The kanna exists so that the setup can exist. I want to talk about a category of object that is shaped like a tool but distinctly isn't one. You can hold it, you can use it. It fits in the hand the way a tool should. It produces the feeling of work, the friction, the labor, the sense of forward motion. But it doesn't produce work. The object is not broken; it is performing its function. Its function is to feel like a tool. Now, from there, Will does an, again, highbrow version of the slop critique. Talking about Matt's piece, he said it was written, or perhaps more precisely generated, by Matt Schumer, the CEO of an LLM startup that I couldn't immediately parse the function of from its various landing pages. What is interesting is not that the essay is sloppy. What is interesting is that people consumed it, they shared it, they engaged with it. Now, for those keeping track at home, that is a critique of generation, a critique that it's slop, and an implication that because Will can't understand Matt's startup, his opinion isn't really interesting. Will argues they performed the act of reading and distributing an essay about artificial intelligence that was itself produced by artificial intelligence, and at no point in this loop did the output matter. The consumption was the product, the sharing was the output. The essay, much like the AI it discusses, was a tool-shaped object, and it worked exactly as designed. Will argues ultimately that AI is everywhere in consumption and almost nowhere in output. We are spending unprecedented sums to acquire, configure, deploy, and operate these systems, and the primary product of that spending is the experience of spending it. Will argues that the current generation of LLM-driven insanity, the billion-dollar frameworks, the orchestration layers, the agentic workflows, is the most sophisticated tool-shaped object ever created. 
You can build an agent that reads your emails, summarizes the contents, drafts a response, checks the response against the style guide, routes the response through an approval chain, logs the interaction, and reports the result to a dashboard. You can watch this happen. You can watch the token stream, you can see the chain of thought, you can monitor the system prompt, you can adjust the temperature, you can swap the model. You can add a tool. You can add six tools. You can add a tool that calls another agent, that calls a third agent, that searches the web and synthesizes the results into a memo that no one will read. Now, Will caveats all of this and says that maybe at some point in the future LLMs will be worth something, but it'll take a long time for them to diffuse into the real economy. I think that this essay might be one of the most condescending things I've ever read in my entire life, and I think a lot of the people who are liking and sharing it are attracted to its clever condescension. There are really two substantive arguments here. I think the most revealing paragraph is the one that I just read, about the agent that reads your email. The argument that Will is trying to make is that all of this work, all of this computation, adds up to nothing, because who cares about that memo? The critique embedded in that, however, as much as Will thinks it's about the AI, is about the nature of knowledge work in general. In my quote share of the piece, I said tool-shaped objects is less of a rebuke when you realize most work isn't about producing value but instead producing work-shaped objects. And the point that I was making is that it is true that a huge amount of the so-called work that is done today is not valuable in any real sense. One only needs to go back and look at the TPS reports from Office Space to see that this critique has been around for quite some time. I do not think that it follows that LLMs are fake tools because they are used in the service of what is ultimately not all that valuable work. The second critique comes in the paragraph: but my narrow suggestion is that this diffusion into the real economy will take much, much longer and look much different than the current run on South Bay Best Buys for Mac Minis would have you believe. My response to that is, well, yeah, no S, Sherlock. Jacob Franic responded and got it exactly right, I think, when he said that was a lot of words to say AI adoption won't happen as fast as some would have you believe. That in and of itself is slop. It's a nothing statement. Jacob actually kind of went off a little bit in a separate post. He writes: Will's essay is tangential slop about how some teams are burning tokens simply building productivity systems rather than doing work itself. Meanwhile, Anthropic execs are telling you that 100% of their code is written by LLMs. That's actual effing work being done. They're shipping dozens of features a month, all of which would be impossible without Claude. As the original essay points out, software engineering is no longer a profession of writing code and is instead one of orchestrating agents, and writing code is just the start. AI and robots will be coming for numerous other jobs soon, so the author suggests that everyone start preparing for what the world will look like when intelligence is commoditized. That's it. That's all the original essay says, and Will literally agrees with it. He just suggests it will take longer than the original essay predicted. 
Without referencing this conversation, Ethan Mollick actually summed up this back and forth. He wrote: It's a weird time to post about AI, because a lot of people are vastly underestimating what AI can do and how many large-scale impacts on work are inevitable with today's models, while a lot of other people underestimate the real-world problems involved in getting value from AI. I think Ethan is right. But let's look at the implications of being wrong in each of the ways that Ethan suggests people are wrong. The implication of being wrong about the speed at which AI diffuses across the workplace and society is perhaps overinvestment. It's some extra time preparing when you could have used that time for other things. But ultimately, you weren't wrong about the thing; you were wrong about the timescale. Now let's talk about the implications of being wrong about fundamentally underestimating what AI can do, and not preparing. It could literally mean, on an individual or an organizational level, professional extinction. Not that it will always be so. And I don't think anyone can purport to know how exactly the lines between the AI haves and have-nots will shake out. It could be, and I hope it is the case, that there is plenty of time for everyone to catch up and adapt; that the skeptics of today, if indeed they are wrong, will have had time to be wrong and still adapt whatever it is that they do for work to the new reality, without someone who wasn't skeptical and embraced AI outcompeting them. But I'm not sure that that's going to be the case. The point, of course, is that the cost of underestimating AI is a hell of a lot higher than the cost of overestimating it, and so many people are just unwilling to change their priors. Now, one thing that gives me hope is that there are a lot of folks who are not AI people who are becoming more palatable messengers. I had a political campaign recently tell me that their biggest issue with AI is that the people who were building it were such a-holes, and relative to their constituencies, I am sympathetic. But it turns out that the way that a technology impacts your life has basically nothing to do with the personality traits of the person who built it. In any case, as I was saying, there are increasingly groups of people who have credibility with different audiences and constituencies, who are not trying to sell AI products, and who are trying to convince people to look at it differently. One very notable voice here is Derek Thompson, formerly of the Atlantic and co-author of Abundance, who has been pounding the pavement on this. Over on Twitter recently, for example, he wrote: There are still a lot of journalists and commentators that I follow who think AI is nothing of much significance, still just a mildly fancy autocomplete machine that hallucinates half the time and can't even think. If you're in that category, what is something I could write or show with my reporting and work that might make you change your mind? I find that attitude, and just the time that Derek puts into it, extremely encouraging. Sequoia partner Pat Grady also does a good job of summing up my overall feeling about the Something Big Is Happening essay. He writes: Something Big Is Happening is in fact a marvelously useful tool. It has served a real purpose. It has been a wake-up call for some 70 million people. Those people are now more aware of what is coming and more likely to make the right choice. Will you let AI wash over you, or will you put it to work? 
The best time to make that decision is right now. And lo and behold, the longer that people have been talking about this piece, the better the conversation has gotten. Connor Boyack wrote a follow-up about the seen and the unseen. It's called AI Isn't Coming for Your Future. Fear Is. He writes: I'm not going to argue that those articles are wrong about everything. AI is powerful. It is moving fast. The disruption is real, and I take the concern seriously. But I'm going to tell you that the fear you're feeling right now, that sinking sense that the rug is being pulled out from under you, is one of the oldest and most consistently wrong reactions in human history. It has a name, it has a pattern, and it has a track record of being spectacularly, almost comically incorrect. Not once or twice, but every single time. Connor writes that the single idea, written over 175 years ago, that is the master key to understanding every AI doomer headline you've ever read is this. It's from Frédéric Bastiat, from 1850, when he wrote: There is only one difference between a bad economist and a good one. The bad economist confines himself to the visible effect. The good economist takes into account both the effect that can be seen and those effects that must be foreseen. Connor simplifies this to the seen and the unseen. He writes: When a new technology arrives, certain effects are immediately visible. You can see the assembly line worker whose job has been automated. You can see the copywriter watching Grok produce in seconds what used to take her hours. You can see the customer service team being replaced by a chatbot built with Claude Code in a matter of minutes. This is the seen. It's tangible, it's emotional, it has a human face, and it makes for incredible content, because fear and loss are among the most powerful drivers of engagement. But there is a second category of effects, the ones Bastiat said emerge only subsequently. These are the unseen. The new industries that don't exist yet. The businesses that become possible only because costs have dropped. The creative work that gets unlocked when drudgery disappears. The entrepreneur who can now build alone what used to require a team of 20. The consumer who now has access to something that was previously unaffordable. The unseen is by definition harder to see. That's the whole point. And it's why the bad economist, or the bad forecaster, or the panic-scrolling reader always gets it wrong. They stare at the seen, extrapolate doom, and completely miss the explosion of new opportunity forming just outside their field of vision. Now, Connor goes on and gives lots and lots of evidence of this throughout history. He connects it back to AI, talking about how the seen effects are AI doing many of the tasks he used to spend hours on, while the unseen is being freed up to do higher-order work that he never had time for before. The real risk, he argues, is not AI; it's mindset. Connor says: The people who will be harmed by AI aren't the ones whose current jobs get disrupted. Disruption is temporary. People retool, pivot, and find new opportunities, as they always have. The people who will be genuinely harmed are the ones who adopt the fixed-pie mindset, the ones who see only the seen. The good thinker takes into account both the effects that can be seen and those that must be foreseen. He sees the jobs disappearing and asks: What new thing is this making possible? Where is the unseen opportunity forming? What can I do now that I couldn't do before? 
That question, what is this making possible, is the most valuable question you can ask right now, about AI, about your career, about your life. The knitting machine didn't ruin England. It made it the wealthiest nation on Earth. The power loom didn't destroy the textile industry. It expanded it beyond anyone's imagination. The computer didn't end employment. It created the modern economy. AI won't shrink your future if you refuse to let fear shrink your vision. Now, this is very close to what I have always felt, and as I said recently, one of the assumptions that AI nervousness rests upon is that there is a fixed amount of work in the world to be done, and that if AI does a lot of it, humans won't be able to. My argument is that we will always expand, that there is always more work to be done and more to be created based on that work. I think the best concern to have with that view is that an optimism about where this all resolves does not negate or change the fact that there can be utter carnage in the liminal period of transition. And that is something we need to think about and account for. Ultimately, though, as we wrap up for today, I agree with Pat that whether you think Matt's wrong, or whether you think he's just trying to hype things or sell you something, the piece has provoked a conversation that almost everyone who has participated in it is richer for. They are thinking about and engaging with the issues. They see that folks in the AI industry feel like something has shifted that they need to pay attention to. Many dyed-in-the-wool skeptics will give it no heed. But for the majority of people who don't know exactly how to feel, maybe it creates context to go try something new that they wouldn't have before. Maybe they go try to vibe code something. Who knows? All I know is that it's better to have the conversation than not. And so I think, overall, for the world, this was a very good week. That's gonna do it for today's AI Daily Brief. Appreciate you listening or watching, as always. Until next time, peace.
