Transcript
A (0:00)
Today on the AI Daily Brief: why agents make every job a startup. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Section, Granola, and Assembly. To get an ad-free version of the show, go to patreon.com, or you can subscribe on Apple Podcasts. Ad-free starts at just $3 a month. If you want to learn more about sponsoring the show, or really anything about the show at all, you can go to aidailybrief.ai. Now, today, for this long read slash big think style episode, we're exploring a phenomenon that I've been observing for some time, that I've been feeling within myself, that I've seen many other people feeling, and that I think there's increasing discourse around. I think unpacking it is important to understand, in practice and not in theory, how AI and agents are actually going to impact the economy. The phenomenon is this. For the first couple years of GenAI, the prevailing narrative was that it was going to be this time-saving thing. Even as recently as the end of last year, when we did our big ROI survey, time saving was the biggest reported ROI that people shared. And yet, especially now that we're in this agentic era, we're all finding ourselves not clocking out at 3pm on Thursday, but forcing ourselves to go to bed and turn off at 3am. And you see this all over AI Twitter. Aaron Levie from Box writes: "Sorry to anyone who thought AI would mean we'd work less, at least for now. AI makes it easy to explore more than you did before, and so you start doing far more as a result." Shanu Mathew reposted that, saying he has "consistently logged 6am to 10pm days the last few weeks with only breaks for dinner and workout, and let it disrupt sleep too."
"Unfortunately, thinking about the technical issues, it's really hard to not get fixated on solving some of these problems when you make so much progress in each new session, but run into a whole new set of issues to resolve. Also, it's difficult to step away when you think you just need to point the agent to a detailed spec and let it do the work, but end up coming up with the next three to five things to work on." Along the same lines, discussing a recently released open source tool that advertises itself as the orchestration layer for zero-human companies, Abdul Kadir captured the excitement around all of this: "12 hours ago, around 1am, I found out about Paperclip. 12 hours later, I've skipped sleep because I couldn't go to bed with all the things it's unlocking for me and my business." Bryan Johnson, the king of healthy habits, discussing Claude, writes: "I got hooked, suffered sleep consequences. I broke my screens-off rule, turned down socializing, fell behind on work. AI is preposterous, as close to magic as I've experienced, except a seed becoming a tree and a zygote becoming a baby." Now, the post is really long, but the key part is this. It's the contrast between all of those things, all of the breaking of his healthy habits, on the one hand, and on the other hand, how he puts it: "The exhilaration I felt in the past two weeks is hard to explain." Sam Altman really summed this up at the beginning of last week with a tweet sharing two contrasting quotes. The first quote: "Post-AGI, no one is going to work and the economy is going to collapse." The second quote: "I'm switching to polyphasic sleep because GPT-5.5 in Codex is so good that I can't afford to be sleeping for such long stretches and miss out on working." Cheyenne Zhao writes, polyphasic sleep to maximize Codex usage is the most honest thing Sam has ever tweeted. Forget the AGI philosophy for a second.
The revealed preference is that the CEO of the company building these tools literally cannot stop using them, because the output per hour is too valuable to waste on sleeping. That's not a sign of an economy about to collapse. That's a sign of a tool so productive that time itself becomes the bottleneck, which is exactly what we've been seeing in inference. The constraint isn't model quality anymore, it's how many hours per day you can feed it work. The point of all this is that right now many people are experiencing this weird hybrid of exhilaration and anxiety, where on the one hand they feel like wizards because of what they can do, but on the other they feel like they're leaving more on the table than ever before. So what's going on? What's happening, and what does it tell us about how AI is going to play out? A concept I want to introduce is the infinite backlog. You might have heard of something called the lump of labor fallacy. It comes up basically every time there's some new productivity-enhancing technology. In short, it's the wrong idea that there is some finite amount of work, in other words a lump of labor, and that if something new comes along to take a chunk of that work, it means someone else doesn't get to do that work and thus loses their job. The reason that lump of labor is a fallacy is what I call the infinite backlog. In an expansionary capitalist system, and within the companies that are doing that expansion, there is always more to do. There's always some next thing. There are always the things that you would do if you had the time and resources to do them. In many ways, the job of leaders is to select from and prioritize the infinite backlog and turn some tiny piece of it into a roadmap. It's then the job of individual contributors to take on and do their part of that infinite backlog that has been translated into a roadmap. Of course, at that point, it's not framed as backlog; it's framed as current work streams.
But lurking behind that has always been the infinite backlog of all the things you could do if you had the resources. And even more than resources, time was the great constraint in modern work. There was a reasonable sense of what different people in different types of roles could be expected to achieve. Now, there might be significant gradations between not-so-good performers, average performers, good performers, and great performers. But you're talking about the difference between a 1x and maybe a 4x. Maybe it's even higher than 4x. But the point is, it's the same order of magnitude. Now AI comes along, and all of a sudden everyone's getting these multiples on their time. People are doing twice as much, or sometimes three times as much. Some of the best workers are getting even more out of it than that. But when AI was an assistant, the final boss of time still had not been defeated. Even AI-assisted great performers were still constrained by time. There was still an end of the week, a Friday afternoon where you could look and see what you accomplished and be reasonably okay with it being enough, even if you wished you had time for more. Agents change this entirely. Whereas AI assistants compressed time, allowing you to get higher leverage in the same unit of time than you've ever had before, agents seemingly break the rules of time. Because agents aren't you being more productive; they're you replicating yourself infinitely. Not only can agents work 24/7, even as we sleep, multiple instances of those agents can be working in parallel. Everything that we do, everything that we might do, could be happening right now, all at the same time. And all of a sudden, the infinite backlog becomes not this theoretical future thing, but something to actually be distributed across all of these multiple intelligences.
The risk is that the infinite backlog, instead of being a thing that shapes your future, becomes a contemporary failure, a never-ending slate of immediate-term unmet opportunities. The feeling that people are having as they unlock the power of agents is the awareness of the infinite backlog of everything they could be doing and everything that they're not doing becoming immediate and close. What's interesting is when you take a step back and try to find analogies for this. The best, and frankly only, analogy that sort of works is entrepreneurship. Entrepreneurship is basically the act of assembling limited resources to take on some infinite backlog that others, for whatever reason, haven't taken on yet, whether it's pursuing some new opportunity or developing some new market. As a founder or an early entrepreneurial team, you are building without blueprints, sailing without a rudder; pick your metaphor. The point is you have to figure it out as you go along the path. There are efforts that you realize only in retrospect were wasted efforts, dead-end pathways, redundancies, and opportunities that you missed. The job of the startup is to figure out what to do when the options are infinite but the resources with which to do it are extremely finite. Now, startups are extremely exhilarating. They are some of the highest highs you can have in business. You are creating something from nothing, birthing purpose and meaning and value into the world. And yet, of course, at the same time, there is an almost Kierkegaardian dizziness of freedom. When you can do anything, when you can work on anything, what are you supposed to do? Ultimately, this is why most people don't choose to be founders or even early startup-stage employees. It's too stressful, it's too hard. The highs are too infrequent and too hard to achieve relative to the lows and the stress and the anxiety, not to mention the frequent financial sacrifice.
Now, with that analogy in mind, let's come back to the knowledge worker who's sitting there with their OpenClaw and their Mac Minis surrounding them, discovering all the new things they can do. The first part of taking on their infinite backlog is in many cases extremely exciting, because it's the "do big things that we've always wanted to do" part of the infinite backlog. It's the known part of the infinite backlog. If you're a marketer, it's the radical increase in and automation of content output, the automation of analytics and monitoring that feeds back into a system that does what you've always done, but more and better than ever before. This is the part of the infinite backlog that, while, yes, very far out on the knowable roadmap, was still on the knowable roadmap. What gets challenging is the uncharted waters beyond that. And this is where you start to feel that dizziness of freedom that startups feel. It's the feeling of doing things, but not knowing if those are the things that are most effective to be doing. The awareness of all the things that you're choosing not to do by doing that thing instead. In fact, once it becomes plausible to go take on the unknown section of the infinite backlog, there is a very common tendency to feel like you're not doing enough, or like you're doing the wrong thing, like you should be doing something else. That feeling, combined with the very real constraints that still do exist even in a world of infinitely replicable agents, is creating a new type of challenge. Tang Yan recently tweeted: Something I've noticed is that AI agents create a weird new kind of burnout, especially for young people. A lot of ambitious 22-year-olds are going to think the answer is simple: spin up more agents, ship more code, sleep less, outwork everyone. And for a while it will feel incredible. You can keep multiple agents running, feed them tasks, review outputs, fix mistakes, make decisions, and keep the whole loop moving.
The problem is that the work no longer drains you through typing; it drains you through judgment. More attention, more context switching, more verification, more decisions per hour. So instead of 8 to 10 normal productive hours, you might get 4 or 5 extremely intense hours before your brain is fully cooked and you feel numb, until you sleep properly and reset. Some of my friends are already burnt out. They don't say it out loud, but I can tell. The agent can keep working 24/7; the human still has a hard limit. It turns out, of course, that constraints, even in the world of theoretically infinitely replicable intelligence, do exist. They might not be the same constraints, in fact they are not, but constraints they are nonetheless. They are constraints like judgment: figuring out what matters to work on. Planning: how to sequence those things, what to start when, and how to do it. Coordination: when you have all of these intelligences running in parallel, how do you make sure they're working towards a larger end rather than running off in a bunch of different directions? Evaluation: are you simply trusting that all of your new agent workers are always going to get it right and produce good work? Or is there some point where you, or someone else of the human persuasion, has to step in and evaluate whether their work is actually good or not? And presumably, to the extent that it's not good, go back to that planning constraint and figure out how to make it better. And then of course there are the technical challenges around all these agents. There are issues of context, memory, basically all the things that all the startups are moving as fast as they can to try to solve. There is, of course, the constraint of cost, the great constraint which I believe will shape the next 18 to 24 months more than anything else.
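To make those constraints concrete, here's a minimal, hypothetical sketch in Python of what an agent orchestration loop looks like: judgment picks the slice of backlog, agents run in parallel, and evaluation gates what actually ships. The `run_agent` and `evaluate` functions are placeholder stand-ins, not any real product's API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    """Placeholder agent: in a real system this would call an LLM agent."""
    return f"draft for {task}"

def evaluate(result: str) -> bool:
    """Placeholder quality gate: a human or automated check on the output."""
    return "draft" in result  # illustrative acceptance criterion

def orchestrate(backlog: list[str], max_parallel: int = 3) -> dict[str, str]:
    """Run agents in parallel, gating every output through evaluation.

    The human bottleneck lives in this function: judgment chose the
    backlog slice passed in, and evaluation decides what is accepted.
    """
    accepted: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # pool.map preserves input order, so tasks and results line up
        for task, result in zip(backlog, pool.map(run_agent, backlog)):
            if evaluate(result):
                accepted[task] = result
            # rejected work would go back to planning, not silently ship
    return accepted
```

The point of the sketch is where the code doesn't scale: the agents parallelize across the thread pool, but `evaluate` is a serial chokepoint, which is exactly the judgment bottleneck described above.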
My whole episode on Friday was about how we are running headlong into the business model implications of effectively unlimited demand for tokens, and a supply of tokens that, because of our supply of compute and, upstream from that, even our supply of energy, cannot possibly meet it. It turns out that those intelligences probably won't be infinitely self-replicable. Even if you can do a lot, you will still have to choose, which brings us right on back to judgment as a constraint. Finally, there is absorption. Whoever the intended recipient of all of the output of this work is, whether it's a market that wants to buy something, or be advertised to, or be solicited on behalf of the sales department, or whatever, how much more stuff can they actually absorb? How much consumption elasticity do they have? So what are we supposed to do about this? The feeling of exhilaration is real. The unlock of parallelism and replicated intelligences, even if it is not infinite, is still unbelievably powerful. It might not have beaten time, even though it feels like that at first, but it has certainly changed our relationship with time in a way that is no longer a difference in scale, but a difference in kind. And yet we're left with this dizziness of freedom, with the challenge of actually using these new powers, and doing so without burning ourselves out. One of the most important AI questions right now isn't who's using AI; it's who's using it well. KPMG and the University of Texas at Austin just analyzed 1.4 million real workplace AI interactions and found something surprising: the highest-impact users aren't better prompt engineers. They treat AI like a reasoning partner. They frame problems, guide thinking, iterate, and push for better answers. And the good news? These behaviors are teachable at scale. If you're trying to move from AI access to real capability, KPMG's research on sophisticated AI collaboration is worth your time.
Learn more at kpmg.com/us/sophisticated. That's kpmg.com/us/sophisticated. Here's a harsh truth: your company is probably spending thousands or millions of dollars on AI tools that are being massively underutilized. Half of companies have AI tools, but only 12% use them for business value. Most employees are still using AI to summarize meeting notes. If you're the one responsible for AI adoption at your company, you need Section. Section is a platform that helps you manage AI transformation across your entire organization. It coaches employees on real use cases, tracks who's using AI for business impact, and shows you exactly where AI is and isn't creating value. The result? You go from rolling out tools to driving measurable AI value. Your employees move from meeting summaries to solving actual business problems, and you can prove the ROI. Stop guessing if your AI investment is working. Check out Section at sectionai.com. That's S-E-C-T-I-O-N-A-I dot com. Today's episode is brought to you by Granola. Granola is the AI notepad for people in back-to-back meetings. You've probably heard people raving about Granola. It's just one of those products that people love to talk about. I myself have been using Granola for well over a year now, and honestly, it's one of the tools that changed the way I work. Granola takes meeting notes for you without any intrusive bots joining your calls. During or after the call, you can chat with your notes, ask Granola to pull out action items, help you negotiate, write a follow-up email, or even coach you using recipes, which are pre-made prompts. Once you try it on a first meeting, it's hard to go without. Head to granola.ai/aidaily and use code AIDAILY. New users get 100% off for the first three months. Again, that's granola.ai/aidaily. One of the trends that I follow most closely when it comes to AI is voice. Today's episode is brought to you by Assembly AI.
Assembly AI is the best way to build voice AI apps. The company has been moving with extreme velocity lately, shipping major improvements to their speech-to-text models that go way beyond just better transcription. Specifically, they are getting to an accuracy level that can reliably capture the type of things that used to break every other speech-to-text model. Think credit card numbers read aloud, email addresses spelled out, complex medical terminology, financial figures. All of the things, in other words, that it really matters to get right. For anyone who's building in fintech, healthcare, sales intelligence, or customer support, getting those things wrong isn't just annoying, it's a liability. Their speech understanding models are also really good at things like identifying speakers, surfacing key moments, and uncovering insights from voice data. And all of that happens in a single API call. The proof is in the pudding, and Assembly powers some of the top voice AI products in the market today, like Granola, Dovetail, and Ashby. Getting started is free. Head to assemblyai.com/brief to test it live and get $50 in free credits. No contract, no upfront commitments. That's assemblyai.com/brief. What becomes pretty clear pretty fast is that an entirely new structure of support needs to be built around this. To put it differently, if we want agents to unlock the infinite backlog, we have to design entirely new architectures of support around them. Now, some of those are build inputs for the agents themselves. They need model access, token allocations, sandboxes, evaluation tools, permissioned context. This is the stuff that we talk about most often, because each of these, when you squint at it, becomes an opportunity for some startup to build something new. What we talk about a lot less, but which, if this new category of burnout is any indication, is going to be a big problem, is the human support that has to go around agent architectures.
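As a sketch of what those build inputs might look like in practice, here's a hypothetical Python provisioning record covering the inputs named above: model access, a token allocation, a sandbox, and permissioned context. The class name and fields are illustrative assumptions, not any real platform's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentProvision:
    """Hypothetical record of the resources one agent is granted."""
    model: str                    # which model the agent may call
    token_budget: int             # remaining token allocation
    sandbox: str                  # isolated environment the agent runs in
    allowed_context: set[str] = field(default_factory=set)  # permissioned data sources

    def can_read(self, source: str) -> bool:
        """Permissioned context: the agent only sees what it was granted."""
        return source in self.allowed_context

    def spend(self, tokens: int) -> bool:
        """Debit the allocation; refuse work once the budget is exhausted."""
        if tokens > self.token_budget:
            return False
        self.token_budget -= tokens
        return True
```

The design choice worth noticing is that the budget and permission checks live with the agent's provision, not with the agent: the point of this support layer is that every agent in the fleet gets its resources governed the same way.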
New types of support for prioritization, helping people figure out what matters. New types of support to help people, in a dynamic and evolving way, uncover what sustainable rhythms look like. If it is the case that our relationship with time has changed, but that time has not been defeated, there is some new pace that is sustainable that we're going to have to normalize and design around. That, of course, will be made even more challenging by the fact that it's likely to keep evolving. There's also other, more practical stuff, like new types of embedded tech support, given that a lot of the people managing these fleets of agents won't have traditionally been developers. Which gets us to a third category of support, which is organizational support. We are going to need massively new types of coordination. You'll have people in different departments all around the organization unlocking big chunks of this infinite backlog, which we never assumed we would possibly get into. In the same way that individuals are going to have to design new coordination systems around their fleets of agents, the organization as a whole is going to have to design new types of coordination systems around the different individuals who are deploying each of their own fleets of agents. This will 100% change management. Management will have to become much more dynamic, much more responsive, and adept at dealing with the emergent opportunity that comes from unlocking parts of the infinite backlog. Organizations will also need to get better at transmitting value and unlocks from one part of the organization to another. And what's really cool is that when you walk down this path, you start to see the shape of some new types of roles that will emerge. One of the things that grates on me endlessly about the jobs conversation around AI is that it tends to be either (a) AI will take all our jobs, or (b) no, new technologies always create new types of jobs.
Now, clearly I am in the B camp, but what frustrates me about the B argument is that effectively no one takes the time to actually try to game out what those new jobs might look like. Now, there is a certain honesty to that, in admitting that we very rarely know what the future is shaped like and what type of jobs will be needed to support it. But that doesn't mean that we can't try. And what's cool about what's happening in 2026, with agents unlocking this infinite backlog, is that we don't just have to imagine; we're starting to see some glimpses. Turning it back to Aaron Levie again from Box. On Thursday, he tweeted: Starting to hire and retrain for new agent engineering roles for internal functions, to help get more powerful agents working on critical business processes. I expect this type of role to be a very big deal over time. At Box and other companies, it looks something like an internal FTE whose job it is to wire up internal systems and get agents working with them effectively. The person will be extremely technical and capable of building secure, governed agents for internal workflows that connect to business systems like Box, Salesforce, Workday, etc., and codify workflows and skills. In some cases this person may understand the business process well enough to do it fully, but in many cases I expect them to work with the businesses directly in an embedded fashion. Ironically, that may introduce another new role on the business side that is more akin to agent product management for internal processes. The key is that you need technical-plus-process people who can span multiple teams or functions in an organization. It's not about bringing automation to a job, but about bringing automation to a process. This is going to be a very big trend in most companies going forward. Fun to watch the early innings of what this will look like.
This is something similar to what I talked about in my 2026 predictions when I talked about internally deployed vibe coders, people who could help non-technical people inside the organization figure out how to use these tools for more general business or knowledge work. And while even Aaron, who is literally designing these roles right now, doesn't know exactly what they're going to mean, it's not hard to imagine some of the things that might be created. You basically take the opportunity created by fleets of agents unlocking the infinite backlog, divide it by the new support structures needed to make that work, and what comes out is not just changes to what existing people do, but a whole lot of new roles. You've got technical and infrastructural roles, which is sort of more what Aaron was talking about. You've got agent ops engineers who keep the fleets running. Context librarians whose job it is to curate what agents know, make sure it's up to date, and/or manage complex permissioning, which is going to be a huge part of every organization. You've got eval engineers who create quality gates at scale, rather than assuming that every person who uses agents is going to be equally good at that. And then, moving on from the technical, we also have all of these new coordination and alignment roles. Coordination architects who design how everything stays legible. Information pipeline owners who route signals to the right places. Intrapreneur orchestration leads who broker conversations across overlapping work. Finally, there are the strategic and managerial roles: think experiment portfolio managers who fund, scale, merge, and kill agentic unlocks of the infinite backlog. Intrapreneur coaches who take this new generation of agentic intrapreneurs and support them with judgment, pacing, founder-condition support, and more. Now, to be clear, I have no idea if these are the exact roles that are going to be created.
It is almost certainly the case that the way it shakes out will be different from how I'm sitting here imagining it. But you better believe that some version of all these things is going to become a big part of what we do, whether it's old roles that are just updated to have these new responsibilities or new roles entirely. Now, none of this is going to be easy. My thesis for this show is that agents make every job a startup. Not everyone wants to work for a startup, however, and that's going to cause stress and challenge. But the opportunity is so immense that it's gonna pay off to figure out how to make it work. Now, the vast majority of you guys listening, even though you are in the very, very top tier and vanguard of where individuals and organizations are when it comes to AI, still probably haven't gotten all that far along in your personal or organizational infinite backlog. I certainly have some very clear, known agentic work that I need to be doing, like building better systems for distributing and repurposing content, which I have exactly zero of right now. The point being, we are very early in this conversation. That doesn't mean we can't start having the conversation. I think there are a few types of conversations you could be having. One is on the backlog itself. What is actually in your organization's infinite backlog? In other words, what have you always meant to do but couldn't? What parts seem reachable now that weren't six months ago? And who in the company is best positioned to go after which parts? Another important conversation is how you support the people doing the work. Do your people have the power, the model access, the tools, the budgets to actually unlock the true capability set of the agentic era? Do they have the right types of playgrounds to experiment in? Can they get the cross-functional context their agents need without negotiating it from scratch every time? Are you teaching things like judgment and prioritization, not just prompting?
Are you building pacing infrastructure, or just rewarding whoever stays up latest? A third type of conversation is around organizational coherence. Does the org actually have that ambient awareness of what's being built across teams? Are managers equipped to make portfolio-level decisions about emerging products? If something works in one corner of the company, are there mechanisms for it to spread to other parts of the company? And finally, of course, what are the implications for leadership itself? What does management mean on a fundamental level when its job shifts from assigning tasks to harnessing emergent infinite backlog unlocks? From a cultivating-leadership perspective, which of your people currently thrive in the founder condition, and which need a different shape of work? And of course, there's that really fun one, which is going to come up so much faster than people think: what are the new roles that we're going to need that don't exist in the org chart yet? The great thing about starting to have these conversations now is that most of the answers are going to involve a lot of work. That's going to take time and effort. The earlier you start, the better positioned you'll be. So that, my friends, is my argument for why agents make every job a startup. This will be challenging for many, incredibly exhilarating for others. But if nothing else, we're all in it together and figuring it out as we go. Thanks as always for listening or watching. And until next time, peace.
