
Today on the AI Daily Brief, a power ranking of some of 2026's big ideas in AI. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Blitzy, Superintelligent, and Robots and Pencils. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. And to find out anything you need to about the show, including sponsorship opportunities or our latest research, go to aidailybrief.ai. Now, today is Sunday. We are getting very, very close to the Christmas and New Year holiday. The plan as of right now is to have Monday and Tuesday be our last regular format episodes of the year, and then starting from Wednesday the 24th, we will be into our end-of-year pre-records. We'll have a couple of interesting interviews and a lot of look back and look forward type of content. But for this Sunday long read slash Big Think episode, I've been following along as A16Z drops their big ideas for 2026. Basically what they did is they went to their partners and asked what they thought people would be building in the new year. Now, not all of the ideas that they shared are about AI, but as you might imagine, a good chunk of them are. And so what we're going to do today is go through and review some of the ones I find most interesting and give them a power ranking. We're going to score them 1 to 5 on how likely I think it is to come to pass, 1 to 5 on how valuable I think it would be if it did come to pass, and then 1 to 5 as an X factor, which can be anything that I think is interesting for basically any reason. As you can tell, this methodology is highly scientific and of course completely subjective to myself. Mostly it's just a fun way to share these ideas, but also give us all something to have joyful holiday arguments about. Kicking off, we have an idea from partner Jennifer Li.
Startups tame the chaos of multimodal data. Jennifer writes, unstructured multimodal data has been enterprises' biggest bottleneck and their biggest untapped treasure. The limiting factor for AI companies is now data entropy, the steady decay of freshness, structure, and truth inside the unstructured universe where 80% of corporate knowledge now lives. That's why, she writes, untangling unstructured data becomes a generational opportunity. Enterprises need a continuous way to clean, structure, validate, and govern their multimodal data so downstream AI workloads actually work. All right, so we're kicking off with a bang. Let's start with how likely I think this is. This gets at least a 4 on the likely scale, maybe even a 5. This is such a huge problem that is so universally recognized that I absolutely think there's going to be a set of startups that go after this incredible opportunity. The only reason that I'm not 100% sure it's a 5 is that it's such a difficult and complex problem that I wouldn't be surprised if we don't see just one overarching effort to try to get everything, but more surgical efforts around particular categories of data to start. In terms of value, I also have this as a four. I think that for the companies who all of a sudden can unleash all of their organized and structured data on the world of AI and agents, it's going to be unfathomably valuable. But the only reason it's not a five is that how much the world cares about and needs structured corporate data is perhaps at least a little bit more of a discussion. For an X factor, I gave this just a one. It's not a glamorous build, it's just something that needs to happen. Next up, a prediction from Malika: agent-native infrastructure becomes table stakes. She writes, in 2026, the biggest infrastructure shock won't come from outside companies, but from within. We're shifting from human-speed traffic that's predictable and low concurrency to agent-speed workloads that are recursive, bursty, and massive.
The enterprise backend of today was built for a one-to-one ratio of human action to system response. It's not architected for a single agentic goal to trigger a recursive fan-out of 5,000 subtasks, database queries, and internal API calls in under milliseconds when an agent attempts to refactor a code base or remediate a security log. It doesn't look like a user to a legacy database or rate limiter. It looks like a DDoS attack. Building for agents in 2026 means rearchitecting the control plane. We'll see the rise of agent-native infrastructure. The next generation must treat thundering herd patterns as the default state. All right, so this one is interesting to me. On the one hand, for likely, I'm giving it a 2. Not because I don't think that she's right, but just because she's defining an incredibly broad pattern instead of something precise. Now, that's not a knock, it's just the nature of this particular type of prediction. However, I think that she's right that there is going to be a very different set of expectations around how we get things done once we fully unleash the power of agents. I think the value if something like this comes to pass is at least a three, and maybe even getting into a four or five, just because of what new capabilities become unlocked when you can, in her words, trigger a recursive fan-out of 5,000 subtasks. Speaking of which, for an X factor, I'm giving this one a 4, because you may recognize echoes of the Dr. Strange theory here in this prediction. The Dr. Strange theory is of course the way that I've explained in the past that I think that while right now we're kind of viewing agents as one-to-one replacements for human labor, in the future it will be very different, and we will have legions of agents doing both different sets of tasks and then recombining, as well as doing the same task over and over and over in order to see which one does best.
Building the infrastructure for that is going to be a big change, and I would love to see that be something that a lot of entrepreneurs dive into in 2026. Next up, we have a prediction from prolific A16Z content creator and partner Justine Moore, who predicts creative tools will go multimodal. She writes, we now have the building blocks to tell stories with AI: generative voices, music, images, and video. But for anything beyond a one-off clip, it's often time consuming and frustrating, if not impossible, to get the outputs you want, especially if you want anywhere near the level of control that a traditional director would have. Why can't we feed a model a 30 second video and ask it to continue the scene with a new character created from a reference image and voice, or reshoot a clip so that we can see a scene from a different angle, or make the motion match a reference video? 2026 is the year that AI goes multimodal. Give a model whatever form of reference content you have to work with to make something new or edit an existing scene. We've already started to see some early products here, like Kling and Runway Aleph. But there's a lot more to be done, and we need innovation at both the model and the application layers. So I scored this one a little bit lower. I put the likelihood at a 2 and the value at a 2. Although I might be misinterpreting exactly what Justine is saying, it seems to me like she's suggesting a user experience pattern that's about getting a lot with very little: about giving an AI tool just a very small input and letting it run wild and doing a ton. Now, if that's not what she's saying, I apologize for rating it too lowly. But my sense of how this plays out, at least in terms of the next sequence of things that are going to be built, is I think we're going to get much more prosumer type tools before we get to the next big iteration of general consumer versions of this.
We already have a lot of these creative multimodal tools that basically come down to give a little bit of text or a reference image and kind of let the AI do whatever it wants. I think where things are lacking, and where we're starting to see the trend, is around getting more fine-grained controls for professional or prosumer type users. I think we're going to get an AI-native CapCut, which, by the way, might just be CapCut, before we get a wild additional array of creative tools inside something like Sora. Now, the X factor here is a three or four, because I think that this sort of creation is really cool. I just think that the phase that we're going into next is a prosumer and professional phase for creative tools rather than a general consumer phase. That said, I could easily see being wrong on this one. We have not yet figured out what, if any, native social network will arise around AI generated content. And that is such a big prize that I would not at all be surprised to see people spending a lot of cycles and entrepreneurial energy around it. Next up, Yoko Li coming in with the philosophical, saying that 2026 is the year we step inside video. Yoko writes, in 2026, video stops behaving like something we passively watch and starts feeling like a place we can actually step into. Video models can finally understand time, remember what they've already shown us, react when we do something, and hold together with the kind of quiet consistency we expect from the physical world. Instead of producing a few seconds of disconnected imagery, these systems sustain characters, objects, and physics long enough for actions to matter and consequences to unfold. This shift turns video into a medium we can build on, a space where robots can practice, games can evolve, designers can prototype, and agents can learn by doing. What emerges is less like a clip and more like a living environment, one that starts to close the gap between perception and action.
For the first time, it feels like we can inhabit the videos we generate. All right, so very neat, but I'm going to be a wet blanket here and give this a sad, solitary one on the likelihood. This is not about the trajectory of this prediction, it's about the timeline. I just simply do not believe that 2026 is going to be the year where we're going to see these types of capabilities. We are still just nudging into what we can do with AI video, and I don't think that we're going to be anywhere near to where Yoko is predicting here in 2026. I don't even know if we're going to be here in 2027. This feels like a 2028, 2029 type of thing to me. I also think that because of that, any version that we get next year is going to be around a one on value. Which is not to say that it won't be a five eventually and totally transform how we interact with the world and entertainment. But in the short term, I'm putting both likely and valuable at 1. However, Yoko recovers some points on the X factor, which I'm giving a five, because this is hella cool and almost the polar opposite of the multimodal data one, which is so obvious, so present, and so important, but does little to stir our souls as compared to something like this, where we can really imagine a very different future to the world we inhabit today. Next up, a prediction from Alex Immerman: vertical AI evolves from information retrieval and reasoning to multiplayer. Alex writes, AI has driven vertical software to unprecedented growth. Healthcare, legal, and housing companies reached $100 million ARR within a few years. Finance and accounting are close behind. The evolution was first information retrieval: finding, extracting, and summarizing the right information. 2025 brought reasoning. 2026 unlocks multiplayer mode. Vertical software benefits from domain specific interfaces, data, and integrations. But vertical work is inherently multi-party. If agents are going to represent labor, they need to collaborate.
From buyers and sellers to tenants, advisors, and vendors, each party has distinct permissions, workflows, and compliance requirements that only vertical software understands. Today, each party uses AI in isolation, which creates handoffs without authority. The AI analyzing purchase agreements doesn't talk to the CFO for its model adjustments. The maintenance AI does not know what the on-site staff promised the tenant. Multiplayer changes this by coordinating across stakeholders, routing to functional specialists, maintaining context, and syncing changes. Tasks performed by AI will be completed with higher success rates. And when value increases from multi-human and multi-agent collaboration, switching costs rise. Here we'll see the network effects that have eluded AI applications. The collaboration layer becomes the moat. So I'm putting the likelihood of this as a three, and that reflects mostly my sense that this is going to be extremely jagged and wildly unevenly distributed across different industries and horizontal applications. I think that in terms of general trajectory, it would make a lot of sense for this sort of multi-agent collaboration to start to become the norm. However, I don't think that every industry in every space is going to be at the point in 2026 where they're going to be at the stage to actually take advantage of that. However, it is likely that some leading edge spaces will be, and that's why I'm giving it a 3 for likelihood. The value seems high, like a 3 or a 4. I think Alex is right that if these multi-agent systems work and can do these handoffs well, it will significantly increase the value of the tasks that can be performed by AI. It loses a couple points, though, just for how difficult this is to do in practice. And as an X factor, once again I'm giving this a solid 3. The dream of multi-agent systems remains, even if that's very much not where we were in 2025, and there's still a lot of progress to go before we get there, even in 2026.
Next up, a prediction from Stephanie Zhang: creating for agents, not humans. In 2026, people will start interfacing with the web through their agents, and what mattered for human consumption won't matter the same way for agent consumption. For years we've optimized for predictable human behavior. Rank high on Google, appear among the first few items on Amazon, lead with a TLDR. When I took a journalism class in high school, we were taught the five W's and H for news, and to start with a hook for features. Maybe a human would miss the deeply relevant, insightful statement buried on page five, but the agent won't. This shift is about software too. Apps were designed for human eyes and clicks, and optimization meant good UI and intuitive flows. As agents take over retrieval and interpretation, visual design becomes less central to comprehension. We're no longer designing for humans, but for agents. The new optimization isn't for visual hierarchy, but for machine legibility. And that will change the way we create and the tools we use to do it. So on this, I'm giving it a five for likely. I think 100% this is happening. You're already starting to see this in areas like e-commerce, and I think that that's going to propagate across the web. In terms of value, I'm not sure. I don't really feel like I have a sense yet of which of the use cases that we distribute to agents, of stuff we used to do on the web, is going to be most valuable. And so it's hard for me to gauge how these new interfaces that enable agents better change our lived experience and the value we get. The X factor is similarly weird. This could be a 3. It could unlock new possibilities, but it could also be a negative 3. It could be terrible. It could start to tear down things about the web that we know and trust and atrophy other parts of the web that we interact with all the time.
Kudos to Stephanie for hitting on one that I think is one of the most wildly hard to wrap our heads around before it actually happens. Next up, moving a little bit more practical, Santiago Rodriguez talks about the end of the screen time KPI in AI applications. Santiago writes, for the last 15 years, screen time has been the best indicator of value delivery in both consumer and business applications. We've been living in a paradigm focused on hours of Netflix streaming, mouse clicks in a healthcare EHR UX, or even time spent on ChatGPT as the key performance indicator. As we move to a future based on outcome-based pricing that perfectly aligns incentives between vendors and users, we'll first move away from screen time reporting. Now, Santiago points out that this also creates new challenges. How much an application can charge per user requires a more complex method of measuring ROI. And indeed, while I think in some ways this seems pretty obvious, those challenges are not to be underestimated, and so I'm giving the likelihood of this a 3. I think that he's probably right that we start to see nudges away from this, but I tend to think that our belief in how fast we're going to shift to totally new methods of measuring value and then pricing around it has been pretty overstated. I think that we're going to continue to see lots of experiments in outcome-based pricing, but I'm not sure that it's going to become ubiquitous at any time in the next couple of years. I certainly think, though, that there is a lot of value in moving away from the screen time KPI as a metric. Too many things suck and consume our attention, and so even if these are business applications, not entertainment applications, I would still certainly like to see us more focused on measures of outputs than on measures of inputs. As an X factor, I'm giving it a one. Again, nothing wrong with it, it's just not something that's going to get my particular juices flowing. Hello friends.
If you've been enjoying what we've been discussing on the show, you'll want to check out another podcast that I have had the privilege to host, which is called You Can with AI from KPMG. Season one was designed to be a set of real stories from real leaders making AI work in their organizations, and now season two is coming, and we're back with even bigger conversations. This show is entirely focused on what it's like to actually drive AI change inside your enterprise, and has case studies, expert panels, and a lot more practical goodness that I hope will be extremely valuable for you as the listener. Search You Can with AI on Apple, Spotify, or YouTube and subscribe today. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% plus of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding pilot of choice. To bring an AI native SDLC into their org, visit blitzy.com and press get a demo to learn how Blitzy transforms your SDLC from AI assisted to AI native. Today's episode is brought to you by my company, Superintelligent. Superintelligent is an AI planning platform, and right now, as we head into 2026, the big theme that we're seeing among the enterprises that we work with is a real determination to make 2026 a year of scaled AI deployments, not just more pilots and experiments.
However, many of our partners are stuck on some AI plateau. It might be issues of governance, it might be issues of data readiness, it might be issues of process mapping. Whatever the case, we're launching a new type of assessment called Plateau Breaker that, as you probably guessed from that name, is about breaking through AI plateaus. We'll deploy voice agents to collect information and diagnose what the real bottlenecks are that are keeping you on that plateau. From there, we put together a blueprint and an action plan that helps you move right through that plateau into full scale deployment and real ROI. If you're interested in learning more about Plateau Breaker, shoot us a note at contact@besuper.ai with Plateau in the subject line. Small, nimble teams beat bloated consulting every time. Robots and Pencils partners with organizations on intelligent cloud native systems powered by AI. They cover human needs, design AI solutions, and cut through complexity to deliver meaningful impact without the layers of bureaucracy. As an AWS certified partner, Robots and Pencils combines the reach of a large firm with the focus of a trusted partner. With teams across the US, Canada, Europe, and Latin America, clients gain local expertise and global scale. As AI evolves, they ensure you keep pace with change, and that means faster results, measurable outcomes, and a partnership built to last. The right partner makes progress inevitable. Partner with Robots and Pencils at robotsandpencils.com/aidailybrief. Next up, another big zoom out from Jonathan Lai: world models take the spotlight in storytelling. In 2026, he says, AI-powered world models will revolutionize storytelling through interactive virtual worlds and digital economies. Technologies like Marble from World Labs and Genie 3 from DeepMind already generate full 3D environments from text prompts, allowing users to explore them as if in a game.
As creators adopt these tools, entirely new storytelling formats will emerge, potentially culminating in a generative Minecraft where players co-create vast evolving universes. These worlds could blend game mechanics with natural language programming, such as commanding, create a paintbrush that changes the color of anything I touch to pink. The rise of world models signals not just a new genre of play but an entirely new creative medium and economic frontier. This is very similar to the year we step inside video for me, in that I think that the X factor is a four or five. I think this is super fascinating. I think it is a whole new area of exploration of human experience, but I do not believe that it is a 2026 thing. I have the likelihood and the value both at a 2, and honestly, maybe to remain consistent with step inside video, I should have it as a one. The early demos and experiments we've seen from World Labs and DeepMind are incredibly impressive. They show that there is going to be something phenomenal here. But at this point we don't even really have, like, the GPT-1 version of this stuff. We certainly don't have anything close to the GPT-3.5 moment for these things. I think that we are going to make a lot of progress in this area next year. I think we're likely to see more startups working around it, and we may even see some of the first really niche experiences that capture people's attention. But when it comes to this actual revolution in storytelling, I think you're looking way farther out, towards the end of the decade. Next up, a prediction from Emily Bennett that is something that I've certainly spent a lot of time on based on my background. She predicts the first AI native university, or as she puts it, an institution built from the ground up around intelligent systems. Over the past several years, she writes, universities have dabbled in AI enabled grading, tutoring, and scheduling.
But what's emerging now is deeper: an adaptive academic organism that learns and optimizes itself in real time. Picture an institution where courses, advising, research, collaboration, and even building operations continuously adapt based on data feedback loops. Schedules optimize themselves. Reading lists evolve nightly and rewrite themselves as new research appears. Learning paths shift in real time to meet each student's pace and context. We're already seeing precursors. ASU's campus wide partnership with OpenAI produced hundreds of AI driven projects across teaching and administration. In the AI native university, professors become architects of learning, curating data, tuning models, and teaching students how to interrogate machine reasoning. Assessment shifts too. Detection tools and plagiarism bans give way to AI aware evaluation, grading students on how they use AI, not whether they used it. The AI native university will become the talent engine for a new economy. Now, once again, I have a fairly high X factor here, a three or four. I think this is a super interesting and important topic area. In addition to everything else, I'm a venture partner at Learn Capital and first worked with those guys back in 2011, 2012, so I've been thinking about these issues for some time. I built a program at Northwestern right out of school, and I've got kids who are 4 and 7 who are just at the beginning of their educational journey. So why am I not ranking this one a five across the board? I actually have the likelihood down closer to a two. I have the value also kind of low, and the reason for that might seem weird. It's not that I disagree with the value of anything that Emily is predicting here. If I could snap my fingers and have it all exist right now, I would. What I wonder about is the longevity of this and how intermediary this step is.
The issue for me when it comes to predictions of how education is going to evolve is that we don't yet really have a good picture of what the skills needs will be on the other side of AI development, to the extent that you can even think about it as the other side of AI development. A lot of what Emily is proposing is just a better, AI-ified version of the same education we have now. But I tend to think that what we need to train people for is going to look so wildly different than it did in the past that this might end up feeling like a very intermediary step. And I wonder if there will be enough momentum, given how intermediary we are, to make a lot of these big changes knowing they might just change again a couple years down the line. I don't know how good this analogy is, but I see it in somewhat of a similar light to current AI process mapping tools, where an AI sits and watches how a human does something so that an agent can do that same thing. On the one hand, this is intuitive and makes sense. On the other hand, in what universe do we think that an agent is going to just do things the same way that a human did, just a little bit faster? There are going to be totally new agent native processes that don't, frankly, make sense to humans, but still get the outputs that we're looking for. I kind of think that this AI native university feels like a similar intermediary step. But then again, maybe this is an intermediary journey that's 10 to 20 years, and we just need to make a bunch of these changes now, even before we can know what the full future holds. Next up, let's talk about ChatGPT becoming the AI App Store, a prediction from Anish Acharya. Anish writes, consumer product cycles require three things to work: new technology, new consumer behavior, and a new distribution channel. Until recently, the AI wave had fulfilled the first two conditions but had no new native distribution channel.
Most products grew off the back of existing networks like X or by word of mouth. With the recent release of OpenAI's Apps SDK, Apple's support for mini apps, and ChatGPT's rollout of group messaging, though, consumer developers can now tap ChatGPT's 900 million user audience directly and also grow with new networks of mini apps like Wabi. As the final piece of the consumer product cycle, this new distribution channel is set to kick off a once in a decade gold rush in consumer tech in 2026. Ignore at your own peril. Boy, fortunes are going to be made and lost on what venture capitalists think about this. I, for my part, despite not really being a VC, am in the more skeptical category. I'm putting likelihood at a 2, because while I think that there will be some value to the distribution that ChatGPT can provide for apps, my instinct is that it looks a lot closer to a new version of SEO and/or a new channel for ads, as opposed to an app store where people are actively looking for things. It is powerful that people go to ChatGPT with extremely high intent and are looking for answers to their problems or particular types of information. That does create an opportunity to serve them highly targeted recommendations, which for the moment, of course, ChatGPT is not calling ads, that point people to apps that might be useful for them. Basically, I think that it is a good distribution channel, but I don't agree that it is a once in a decade gold rush channel. However, like I said, I could be very wrong about this one, and the cost-benefit analysis for app developers on at least trying to use this new channel is probably pretty favorable. Next up, a prediction from Olivia Moore called voice agents take up space. Olivia writes, in the last 18 months, the idea of AI voice agents managing real interactions for businesses has gone from science fiction to reality.
Thousands of companies, from SMBs to enterprises, are using voice AI to schedule appointments, complete bookings, run surveys, do intakes, and much more. These agents save costs for businesses, generate additional revenue, and free up human employees to do higher leverage and more enjoyable tasks. But because the space is so nascent, many companies are still in the voice as a wedge phase, offering one or several types of calls as a point solution. I'm excited to see voice agents expand into handling entire workflows, and even into managing full customer relationship cycles. I'm almost gonna take this one in a direction that's a little bit different than Olivia, or more expansive. I think the likelihood of this is extremely high. I give it a four, and I think that the value and my X factor are also pretty high. I think voice as the modality of interaction is still wildly undertapped, and you can tell it's wildly undertapped because we're still doing workarounds like using Wispr Flow instead of the native voice to text solutions that our devices offer right now, which are still unbelievably bad. I think that there is so much rich territory to explore here, and people are going to get really, really used to talking to their phones and their computers in a way that they don't currently. Next up, back to the enterprise from Seema Amble: AI creates a new orchestration layer and new roles in the Fortune 500. In 2026, enterprises will shift further from isolated AI tools to multi-agent systems that will need to behave like coordinated digital teams. As agents start to manage complex interdependent workflows, organizations will need to rethink how work is structured and how context flows across systems. The Fortune 500 will feel this shift most acutely. The transition will force leaders to reimagine roles in software. The rise of multi-agent systems isn't just another step in automation.
It represents a restructuring of how enterprises operate, how decisions are made, and ultimately where value is created. Look, this one is pretty easy for me. This is a yep, yep, and a yep. I think it's very likely. I'll give it a three because it's hard, even though if we expand out over the next two to three years, it's definitely a five. The value I'll also give a three, but again only because it's going to be hard. And the X factor I'll give a 3 on this one, because even though it's a corporate thing, some of the biggest opportunities for an increase in the satisfaction that we have at work come from these new roles and redesigning how we interact in a big way. For the sake of being a little bit more contentious, let's look at Marc Andrusko's prediction: prompt-free and proactive applications arrive. Marc says, 2026 marks the death of the prompt box for mainstream users. The next wave of AI apps will have zero visible prompting. They'll observe what you are doing and intervene proactively with actions for you to review. Your IDE suggests the refactor before you ask. Your CRM drafts the follow-up email when you finish a call. Your design tool generates variations as you work. The chat interface was training wheels. Now AI becomes invisible scaffolding woven throughout every workflow, activated by intent rather than instruction. I'm giving this a one, one, and a one. I don't think it's likely, I don't think it's valuable, and I just don't really like it in practice. Now, I understand why so many people think that there are big limitations with the prompt box as the only way that we interface with AI, and certainly I think that there are going to be interface evolutions and changes. But we keep having this discussion, and people keep trying things that are different, only to come back to the chat interface as a really good default option.
Also, at the risk of being biased by early versions, I don't know that I've ever disliked a feature as much as I dislike Gemini's interaction with Gmail right now, where it by default puts in a response to an email and I have to click around to get out of it. Basically, this might be me being a boomer curmudgeon, but I think that a lot of these suggestions, at least at this stage, are a hell of a lot closer to products trying to convince you that they're valuable than they are to actually being useful. And honestly, my suspicion is that that's not because of some big capabilities gap. I think that this idea of being activated by intent that Marc is talking about is incredibly, incredibly difficult to do well. Intent is such a subtle and multifarious thing, and when it comes to proactive suggestions, even if the intent that it's guessing at is a little bit off, that makes the whole thing completely useless. Now, I could be wrong, and I probably need to soften a little bit on this. For example, I am noticing that the vibe coding tools are getting a lot better at suggesting the next thing to do, and so maybe there's more discrete space for this than I'm giving it credit for. But yeah, in general, I think that the death of the prompt box is wildly exaggerated. Lastly, two that I'm going to combine that get my most rah-rah, hell-yeah rating. The first is building the AI-native industrial base. David Ulevitch writes: America is rebuilding the parts of the economy that create real strength. Energy, manufacturing, logistics, and infrastructure are back in focus, and the most important shift is the rise of an industrial base that is truly AI-native and software-first. This is opening major opportunities in advanced energy systems, robotics, heavy manufacturing, next-generation mining, and much more. AI can design cleaner reactors, optimize extraction, engineer better enzymes, and coordinate fleets of autonomous vehicles with a level of insight no legacy operator can match.
The same shift is reshaping the world outside the factory. Autonomous sensors, drones, and modern AI models can now give continuous visibility into ports, rail, power lines, pipelines, military bases, data centers, and other critical systems that were once too large to manage comprehensively. Erin Price-Wright adds to this with her prediction of the renaissance of the American factory. America's first great century was built on industrial strength, but it's no secret that we've lost much of that muscle, some of it due to offshoring, some of it due to an intentional, society-wide failure to build. But the rusty wheels are starting to creak into motion again, and we're witnessing the rebirth of the American factory with software and AI at its heart. By applying techniques that Henry Ford developed a century ago, planning for scale and repeatability on day zero, and layering in the latest advances in AI, we'll soon be mass-producing nuclear reactors, building housing that meets our nation's demand, constructing data centers at breakneck speed, and entering a new golden age of industrial strength. Like I said, these get a rah-rah, hell yeah from me: fives on likely, fives on valuable, fives on X factor. I don't think it's that hard to understand why I would think this is super valuable if it happens, but I want to talk about likeliness. Here's why I give this a five, maybe even a six, on likelihood. Not only is there immense financial incentive and immense demand and need for this, it is also, I believe, when done well, the best answer to the political acrimony that AI is going to face in 2026. I've said before, and I am quite sure, that '26 is going to be a rough year when it comes to AI politics. A lot of politicians who are trying to get elected or reelected in the midterms are going to use anti-AI positions as a populist cudgel. It is insane to me that we need to build all of this crazy infrastructure and that that is turning into a liability rather than an asset.
Why the companies that are building that infrastructure are not doing more to get the communities where it's happening excited, involved, upskilled, and more prosperous because of that building is just mind-blowing to me. But I think, optimistically, that we'll stop screwing that up in 2026, which is why I think all of this is so likely. So friends, there are a lot more big ideas that I didn't even get to, but those are a sampling; those are my power rankings. Hope you enjoyed this Big Think episode of the AI Daily Brief. Like I said, we've got a little bit more in terms of normal episodes before we fully settle into end-of-year content. For now, that is going to do it for today's episode. Appreciate you listening or watching, as always. Until next time. Peace.
The AI Daily Brief: Artificial Intelligence News and Analysis
Host: Nathaniel Whittemore (NLW)
Date: December 21, 2025
In this special "Big Think" episode, Nathaniel Whittemore (NLW) reviews and power-ranks a series of predictions for the most consequential AI developments expected in 2026, primarily sourced from the recent a16z partner roundup. Each idea is scored across three axes: how likely it is to come to pass (1-5), how valuable it would be if it did (1-5), and an X factor (1-5) covering anything else interesting about it.
NLW’s perspective is candid, amused, and occasionally skeptical—inviting listeners to ponder, debate, and prepare for the coming AI year.
This episode provides a whirlwind tour of AI’s most hyped, debated, and potentially transformative predictions for the short-term future—always with NLW’s characteristic blend of skeptical optimism, dry humor, and a deep respect for both the challenges and opportunities ahead. It's a must-listen for anyone mapping their own AI strategies or just hoping to keep up in an ever-accelerating field.