
Today on the AI Daily Brief: why we've just experienced the dawn of the Agent era. And before that in the headlines: whatever issues Jensen Huang has with Sam Altman don't seem to be enough to stop Nvidia from investing $20 billion into OpenAI. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Blitzy, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. Lastly, I would so appreciate it if you would take a little time to fill out the January AI Usage Pulse survey. It should take you less than two minutes. You can find it at aidailybrief.ai. Anyone who does so will get the survey results a week before everyone else, and all in all, you'll be contributing to a better set of knowledge about how the world is using AI right now, in this moment, in real life. You can find it again at aidailybrief.ai, and thanks so much. Over the last couple of days into last weekend, there were some rumors floating around that maybe there had been some consternation or concern brewing between Nvidia and OpenAI. Specifically, the reports were that Nvidia CEO Jensen Huang had some complaints about how OpenAI was being run. At the end of last week, the Wall Street Journal reported that Nvidia's $100 billion investment announced last year wasn't going ahead now. That deal was framed in kind of a weird way. It came at the peak of all of OpenAI's deal announcements, right as Wall Street was turning the corner from being really excited about all these big deals to being extremely skeptical of them.
As much as the deal was framed as a memorandum of understanding rather than a fully consummated deal, still, both companies made large announcements about it and played it up as if it was definitely moving forward. Citing anonymous sources, the Journal wrote that more recently, Jensen has, quote, privately criticized what he described as a lack of discipline in OpenAI's business approach and expressed concern about the competition it faces from the likes of Google and Anthropic. On Saturday, Jensen told the press that the story was, in his words, nonsense. He added: we're going to make a huge investment in OpenAI. I believe in OpenAI. The work that they do is incredible. They are one of the most consequential companies of our time, and I really love working with Sam. We will invest a great deal of money, probably the largest investment we've ever made. At the same time, Jensen did take the chance to clarify that the $100 billion was never a guarantee from them. He framed it as OpenAI inviting them to invest up to that much, which he very much appreciated. But there was a clear move away from that specific number. Now, there was an entire sub-narrative around Oracle during the week as well, with, once again, the Wall Street Journal suggesting the reduced investment raised major questions. Foremost, the Journal wrote, was whether OpenAI could make good on its five-year, $300 billion contract with Oracle and, quote, whether the tech giant should really be recording the full amount of that deal on its own books. The article didn't add any new information, but did seek comment from Oracle, which responded, first in a statement provided to the WSJ and later published on X: the Nvidia-OpenAI deal has zero impact on our financial relationship with OpenAI. We remain highly confident in OpenAI's ability to raise funds and meet its commitments.
Wednesday's reporting, meanwhile, closes the arc, suggesting that Nvidia hasn't completely lost confidence in OpenAI, considering that they are still willing to invest $20 billion. So what can we take away from this narrative saga, if anything? My guess is that the reports that Jensen has some concerns about capital discipline or business focus at OpenAI are not totally untrue. At the same time, clearly those feelings aren't strong enough to make Nvidia back out of the deal entirely. They're still committing to what will be their largest investment ever, meaning that this is likely about caution rather than a complete loss of faith in OpenAI. Second, the other implication, which is quite clear, is that the market remains on edge about AI capex and is ready to overreact to any negative story that gets published. Oracle stock, for example, is down almost 10% this week, and their bonds continue to slide. Ultimately, if you were looking for canaries in the coal mine for OpenAI, it does not seem to me that they are going to have any trouble completing this massive fundraise. But of course, we'll get more information about that in the weeks to come. Now, moving on from Nvidia but staying on chips for just a minute, Intel has announced plans to start producing GPUs. Traditionally, Intel has focused on CPU production, a strategy that allowed them to dominate the PC revolution. But of course, the rise of AI workloads means that GPUs are now the most profitable chipset, and a failure to address that shift has been a major drag on Intel's business over recent years. Speaking at the Cisco AI Summit on Thursday, CEO Lip-Bu Tan announced that Intel will soon produce their first range of GPUs designed for this AI era. Tan said: I hired the chief GPU architect and he's very good. I'm very delighted that he joined me, and it took some persuasion. That hire was former Qualcomm senior VP of engineering Eric Demers, who joined Intel last month.
Tan also mentioned that a couple of customers were engaging heavily with Intel's new contract manufacturing business, which will see them produce third-party chips for the first time. Ultimately, this company still has a lot of turnaround work ahead, but the GPU pivot and the production of third-party chips could be a major boost for the struggling US chipmaker. Next up, a story which might seem small from the outside, but which developers seem to find pretty big: the latest update for Apple's coding platform, Xcode, adds support for both Claude Agent and OpenAI's Codex directly in the developer environment. These additions transform Xcode into yet another surface for agentic coding. Agents can autonomously write code, build projects end to end, run unit tests, and verify their own output with limited human oversight. Direct verification of the UI is a particularly big improvement for app developers, completely eliminating the need to run a simulator or feed screenshots into the model. This is also Apple's most significant adoption of agentic coding to date. Last year, Xcode introduced LLM coding assistants, but they were unable to access the full code base and were strictly non-agentic. This release brings Xcode broadly in line with Codex and Claude Code, giving iOS app developers native access to agentic coding. What's more, it's clear from Apple's comments that they understand that software engineering has undergone a fundamental shift in recent months. VP of developer relations Susan Prescott said: at Apple, our goal is to make tools that put industry-leading technologies directly in developers' hands so they can build the very best apps. Agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation. Interestingly, the feature is built using MCP, meaning access isn't just limited to Claude and Codex.
Developers can install any coding agent with MCP capability and expect full functionality within Xcode. Now, I saw two wildly different takes on this one. The first, from George Castillo, reads: Apple just surrendered on AI. Xcode 26.3 now relies on Claude and Codex for agentic coding. Doesn't look like they're going to work on their own models at all. On the other end of the spectrum, we have Dan Shipper from Every, who said: Hot take. Apple is going to be a big winner in AI. Native apps are naturally easier to vibe code. You only break things for individual users. The Apple ecosystem connects to a bunch of extremely important data, like health and messaging, that makes AI better. Finally, everyone's buying a Mac Mini for their agent. The total addressable market for their hardware is going up 100x. Speaking of Claude Code, there was no joy in Mudville for a chunk of the day on Tuesday. A Claude Code outage saw software engineer productivity absolutely plummet. The platform saw widespread error messages, and API performance for all models degraded. Anthropic did implement a patch within 20 minutes to get the service back online, but it was still down for long enough to see the impact. Rather than coding by hand, many developers took to X to muse about how quickly their work dependencies had shifted. Matt Silv wrote: Claude Code being out is worse than the Internet being out. Fabio Saichas writes: Anthropic will be the AWS/Cloudflare of the development world. When it goes down, half the world stops working. Lastly today, an interesting story that both is and isn't an AI story. Disney has appointed a new CEO. The company's board of directors has selected Josh D'Amaro to succeed Bob Iger as CEO beginning in March. D'Amaro has been the chairman of Disney Experiences since 2020, leading the division which manages theme parks, cruises, and live events.
Now, interestingly, that division is making more for Disney than their film and entertainment divisions, and Forbes suggested that this moment of disruption is the perfect moment for Disney to double down on experiences. David Bloom wrote: content, even from all those amazing Disney franchises that Iger acquired, just won't be as valuable in an era defined by omnipresent artificial intelligence tools. The article noted that AI-generated entertainment is rapidly improving alongside reductions in costs. This is bad news, they claim, for the high-budget movies that Disney specializes in. Investor Chris Merengi said: Disney is an experiences company now. Two thirds of their profits, probably more than that, come from experiences, especially in the age of AI. D'Amaro's appointment isn't just about a pivot to experiences over movies and streaming. Iger also lauded the incoming leader for his openness to change in the face of a rapidly shifting technological landscape. In a joint interview with the incoming CEO, Iger said: I have observed him over the years as we've worked together as someone who views technology as an opportunity and not as a threat. And I believe that is critical, because when you look at the history of human beings, no generation of human being has ever been able to stand in the way of technological advance. It happens. Now, Disney, of course, has already made moves in this direction, electing to partner with OpenAI. And while even the introduction of Disney characters into Sora hasn't made it a super success, the deal did leave Disney with a billion-dollar stake in OpenAI and the beginning of a partnership that could bear fruit down the line. D'Amaro clearly seems cut from a similar cloth of wanting to see how to leverage the new technology. He said: the reason this company is so special is because of how creative we are, and the human beings that are generating that creativity, in my mind, never get replaced.
And in fact, this isn't even theory anymore. This is real. It's something that we're embracing. If you walk over to the studios today, you'll see artists using AI, harnessing 70 years of history. This is when the Walt Disney Company thrives: when technology intersects with brilliant people and creativity. We're in that moment right now. Armando Kiran wrote: Disney seems to be the only movie studio making proactive moves in the AI era. Their new focus on the physical world is not unprecedented. When the cost of producing and distributing music went to zero in the Napster and Spotify era, the entire music industry was forced to retreat into live concerts. When AI video does the same to movies and TV, Disney will be applauded for being ahead of the curve. Their $1 billion investment in partnership with OpenAI is another example of Disney leadership embracing change far ahead of their industry peers. Fascinating times indeed. But for now, that is going to do it for the AI Daily Brief headlines edition. Next up, the main episode. All right, let's talk about the signal versus the noise in enterprise AI. The challenge right now isn't just about what's possible, it's about what's practical. That's the entire focus of the You Can With AI podcast I host for KPMG. Season one cut through the hype to focus on deployment and responsible scaling. Season two goes a level deeper. We're bringing together panels of AI builders, clients, and KPMG leaders to debate the strategic questions that will define what's next for AI in the enterprise. Six episodes packed with frameworks you can actually use. Find You Can With AI wherever you get your podcasts, and subscribe now so you don't miss the new season. If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with infinite code context.
Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency with a complete contextual understanding of your code base. Enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services, components, and standards. This isn't AI autocomplete. This is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with our AI experts at blitzy.com. That's blitzy.com. Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company. But now we want to empower the folks inside your team who are responsible for that transformation with an even more direct platform. Our forthcoming AI Strategy Compass tool is ready to start being tested. This is a power tool for anyone who is responsible for AI adoption or AI transformation inside their companies. It's going to allow you to do a lot of the things that we do at Superintelligent, but in a much more automated, self-managed way and with a totally different cost structure. If you're interested in checking it out, go to aidailybrief.ai/compass, fill out the form, and we will be in touch soon. As part of my ongoing collaboration with KPMG, at the beginning of each month I'm doing a bit of a retrospective show to discuss and try to summarize the key themes from the month that came before. Now, obviously, when it comes to January, we're actually a couple of days behind, already firmly into this month.
And of course, the reason for that is everything that went on with Moltbook, and how that plus the SpaceX-xAI acquisition nudged stories up. But in many ways, those stories, which nudged the January recap back, are perfectly aligned with the themes that shaped the first month of 2026. I'm calling this episode the dawn of the Agent era, and right from the beginning it was clear that something had shifted. Now, what was interesting about this shift is that there was a bit of a lag between when the capabilities came online and when people really recognized it. Midjourney's David Holz had the representative tweet when he said on January 3: I've done more personal coding projects over Christmas break than I have in the last 10 years. It's crazy. I can sense the limitations, but I know nothing is going to be the same anymore. So many people had the same experience of going back home, slowing down a little bit, and having a chance to actually see what you could do with models like Opus 4.5 in Claude Code and GPT-5.2 Codex. It turns out the answer was a lot, and folks came back convinced more than ever that we had hit a fundamentally different period in the history of AI coding and, by extension, the history of AI. Vibe coding, which as a term just turned one year old the other day, over the course of this month shifted from the thing that you use for prototyping to just the thing that you do. This shift was never better embodied than when Anthropic released Claude Cowork, the Claude Code for everyone else, and shared that it had been built in 10 days, pretty much all by Claude Code itself. Throughout the month, we got articles like this one from Maxwell Zeff: How Claude Code Is Reshaping Software at Anthropic.
We got commentators outside the tech industry, like Joe Wiesenthal from Bloomberg's Odd Lots, writing pieces like Why the Tech World Is Going Crazy for Claude Code, and, honestly, getting into fights on Twitter with people who are skeptical of vibe coding. Roon from OpenAI wrote: Joe Wiesenthal becoming a digital humanities researcher is one of the best story arcs of the vibe coding era. Sergey Karyev encapsulated this all when he wrote: Claude Code with Opus 4.5 is a watershed moment, moving software creation from an artisanal craftsman activity to a true industrial process. It's the Gutenberg press, the sewing machine, the photo camera. And this was the sentiment throughout the month. Now, that said, Claude Code is still a pretty intimidating piece of software for most people, which is why, of course, Anthropic launched Claude Cowork. As I mentioned before, Cowork was built in about 10 days, pretty much entirely with Claude Code writing the code itself. And while initially some of the limitations, especially as compared to what you can do with Claude Code, have been pretty apparent for early users, still, to many it once again feels like this is an example of just how significantly things have shifted. A great write-up of this came from market commentator and investor Brent Beshore, in an excerpt of his annual letter for his firm Permanent Equity from the end of 2025. He talks about how he had missed a lot of the benefits of AI, and how they had been trying to harness agentic AI, and yet how, for the time being, as he wrote, we're pulling back on the pace and vision of our agentic AI ambitions. On January 30, he followed up: 21 days later, my opinion has completely changed with the introduction of Claude Cowork. Last year, we at Permanent Equity started dozens of agentic AI experiments led by a talented technologist. All failed expectations, with only a few mild successes. Most experiments were 100-plus hours of work over three-plus months.
As I explained in the annual letter, we shut down the efforts in December. Claude Cowork comes out on January 12th, and I ignore it. I see the early adopters and charlatans doing their indiscriminate evangelism thing. I have high skepticism. A couple of friends I highly respect start talking about it. That's interesting. A few more who aren't historically early adopters start chirping. Now I'm more interested. We start playing around with Cowork on Monday. By Wednesday, two of our top projects from last year were done. What failed with 100-plus hours over three months, led by a tech professional, took a couple of no-code private equity scrubs 20 minutes to complete flawlessly. Since then, we've started running dozens more experiments to great success. Not always perfect, but always good and quickly getting better. The future is here. The implications are real. And what I love about this post is that you're talking about this incredibly quick shift from agents not working to agents working. And of course, we would be remiss to talk about agents without talking about OpenClaw. Certainly the most fascinating story of the month was the launch of OpenClaw and the social network for agents, Moltbook, that came after. OpenClaw is sort of like an assistant protocol for Claude Code. It turns Claude Code (or other harnesses and models, if you prefer) into an assistant that has access to all sorts of things on your computer that allow it to actually function like an assistant. This really caught people's attention. Thousands and thousands of users started playing around with it, and some, like yours truly, placed orders for Mac Minis so they could keep it separate from their other devices and figure out exactly what services they wanted to give it access to. And for the last two weeks, Twitter has been filled with examples of people getting a lot of value out of it.
Investor Anand Iyer wrote: the OpenClaw and Mac Mini explosion proves power users, aka prosumers, want always-on agents with access to their data. Siqi Chen wrote: ChatGPT was the iPhone moment for LLMs. OpenClaw is the iPhone moment for agents. And if OpenClaw weren't enough, about a week into the whole OpenClaw phenomenon, a guy named Matt Schlicht decided: wouldn't it be cool if our agents all had a place to hang out? Working with his agent, he built something called Moltbook, the name of which comes from a short-lived earlier iteration of OpenClaw's name, Moltbot. Moltbook billed itself as the social network for AI agents. It launched on a Wednesday, and by Friday morning it had 2,000 agents who had started over 10,000 conversations across 100 different submolts, which are basically subreddits. Six hours later that same Friday, we were at 35,000 agents. By the end of the day, when I posted my episode about it, there were over 100,000. Moltbook now has over a million and a half agents, and while no one exactly knows how it's going to play out, the sheer tonnage of conversation around emergent systems phenomena and agent consciousness and all these sorts of things has been incredibly notable. Interestingly, when he was asked about it at the Cisco AI Summit this week, OpenAI CEO Sam Altman said that while Moltbook may be a passing fad, he didn't think OpenClaw was. Effectively, he has high conviction that this sort of agentic assistant use case is going to be a key thing for AI. Now, imagine trying to explain everything that I just said to someone who's not really paying attention to AI all that much. This gets at the core of something that we talked about this month called the AI adoption gap and the AI capabilities overhang, both of which refer to the space between the current capabilities of AI and what most people are getting out of it.
Kevin Roose from the New York Times summed it up nicely when he wrote: I follow AI adoption pretty closely, and I have never seen such a yawning inside-outside gap. People in SF are putting multi-agent Claude swarms in charge of their lives, consulting chatbots before every decision, wireheading to a degree only sci-fi writers dare to imagine. People elsewhere are still trying to get approval to use Copilot in Teams, if they're using AI at all. It's possible the early adopter bubble I'm in has always been this intense, but there seems to be a cultural takeoff happening in addition to the technical one. I want to believe that everyone can learn this stuff, but in the same way that AI companies that took scaling seriously and started stockpiling GPUs, et cetera, before 2022 had a near-insurmountable head start over latecomers, it's possible that restrictive IT policies have created a generation of knowledge workers who will never fully catch up. I think that discussion is one that we are going to continue to have heading into February. But believe it or not, these themes of agents and code AGI were not the only themes from stories last month. We also got a lot of new information about the shape of the AI race. This actually kicked off even before January began, when Meta announced that it had acquired agent firm Manus. Now, we're not exactly sure how Meta plans on using Manus, but what we are sure of, and what Meta has continued to reinforce throughout the month, is that although 2025 was a big rebuilding year for them, they are not giving up on the core AI race at all. Indeed, it was interesting to see, towards the end of the month when we got Meta and Microsoft earnings on the same day, how the markets were interpreting both companies. Microsoft was punished, seemingly for not being aggressive enough and for not showing enough flow-through from AI benefits to their cloud revenue. Things didn't go badly, they just didn't go as bonkers as analysts had wanted to see.
Meta, on the other hand, massively increased its capex spend expectations but got rewarded in the markets, first because it paired that with significant growth in their ad revenue, which they attributed to AI, in other words giving the markets the flow-through they had been looking for, and also because of their category lead in AI wearables with their Meta Ray-Bans. Turns out, when you have the default leader in a specific category, even if there are questions about exactly how that category plays out, that's something that the market is pretty interested in. Google was also a big winner in the AI race this month, even though they were pretty quiet in terms of what they released, with the exception of the walkthrough version of the Genie 3 world model, which is just fantastic. They still notched a huge win when it was officially announced that Gemini would in fact be powering Apple's on-device AI. Now, we had had reports of this in December, but the confirmation certainly reinforced why Google has such powerful tailwinds heading into 2026. Moving to the Chinese models, while we didn't have anything as dramatic as the DeepSeek moment, there were new models from Alibaba's Qwen, and Moonshot's Kimi K2.5 had lots of people stand up and take notice. It was one of the first leading open-weight models to be multimodal, it has advanced forward-looking features like agent swarms, and, just in general, it's incredibly close to the state of the art for about a fifth of the price of the other leading models. While it didn't reset the race expectations, because people came into 2026 knowing that Chinese labs were going to be a big player, it certainly reinforced that expectation. Throughout the month, a lot of the conversation was about prospective IPOs. While all of the news was in the form of rumor and reporting, we did hear that OpenAI was concerned about Anthropic moving before them, and so was pushing to get public in the fourth quarter of this year.
A potential spoiler for those plans comes in the form of SpaceX, which now, after a merger this week, owns xAI and its Grok platform. While Elon's narrative about the merger was all about orbital data centers, many people think that this is at least in part about playing spoiler in the public markets by having xAI, via its connection with SpaceX, be the first big new model lab that people have access to after SpaceX goes public earlier in the year. Now, when it comes to the race dynamics, I think everyone is mostly focused on models, but there are product and business model decisions that could impact things as well. Another big conversation this month was advertising in ChatGPT and how that would impact people's usage of ChatGPT relative to competitors like Gemini and Claude. That remains to be resolved, I think, and we haven't really seen exactly what these ad units are going to look like. But Anthropic did come out today, just before I started recording this, explaining why Claude will remain ad-free. The vast majority of responses on Twitter were some variation of shots fired, and I'm sure this is something we will continue to be talking about more in the future. Now, in terms of what comes next, there are so many rumors swirling of new models coming: Sonnet 5 or Opus 4.6, GPT-5.3, Gemini 3 Pro, maybe even other versions of Gemini 3. Certainly we've seen an uptick in the vague-posting, with, for example, Google's Logan Kilpatrick tweeting: Feb is the month of AI shipping. Enjoy it. Smiley face. Basically, if January was a month that helped set our expectations for where the AI race is, and most importantly helped us appreciate that we were truly in this new agent era, many think February is going to be all about new model drops. That sounds not bad to me; if that's the case, I will certainly be excited as it happens. For now, though, that is our summary of January, the dawn of the agent era. I appreciate you listening or watching, as always, and until next time, peace.
Host: Nathaniel Whittemore (NLW)
Date: February 5, 2026
In this episode, Nathaniel Whittemore explores what he calls "the dawn of the Agent era"—the technological and cultural tipping point where AI agents have moved from intriguing prototypes to integral parts of coding, enterprise workflows, and even daily life for early adopters. He analyzes major stories from January 2026, including the business and philosophical shifts triggered by rapid agentic AI development, and distills signal from noise in the fast-evolving world of artificial intelligence.
Nvidia & OpenAI: Investment Saga
Intel’s Strategic Pivot
Agentic Coding in Apple Xcode
Shifting from Prototype to Production
Anthropic’s Claude Code / Cowork
Agent Ecosystem: OpenClaw and Multbook
Meta, Microsoft, Google
Chinese Labs
Upcoming IPOs & Model Drops
Business Model Shifts
Jensen Huang (Nvidia) on OpenAI Investment
"We're going to make a huge investment in OpenAI. I believe in OpenAI. The work that they do is incredible. They are one of the most consequential companies of our time, and I really love working with Sam." (01:36)
Susan Prescott (Apple) on Agentic Coding
“Agentic coding supercharges productivity and creativity…so developers can focus on innovation.” (08:36)
David Holz (Midjourney) on The Agent Era
"I can sense the limitations, but I know nothing is going to be the same anymore." (13:47)
Sergey Karyev on Claude Code
"Claude Code with Opus 4.5 is a watershed moment, moving software creation from an artisanal craftsman activity to a true industrial process." (15:19)
Brent Beshore (Permanent Equity) on Agentic AI
"What failed with 100+ hours over three months, led by a tech professional, took a couple…20 minutes to complete flawlessly." (17:47)
Siqi Chen on OpenClaw
"ChatGPT was the iPhone moment for LLMs. OpenClaw is the iPhone moment for agents." (21:08)
Kevin Roose (NYT) on AI Adoption Gap
"People in SF are putting multi agent Claude swarms in charge of their lives...Elsewhere, people are still trying to get approval to use Copilot in Teams." (23:11)
Bob Iger (Disney), on Embracing Tech
“No generation of human being has ever been able to stand in the way of technological advance. It happens.” (11:30)
Nathaniel Whittemore concisely sums up January 2026 as a turning point: the moment when AI agents went from being tantalizing prototypes to tangible tools reshaping coding, enterprise, and creative life. Just as new agentic and multimodal models accelerate the AI arms race, businesses and individuals are grappling with deep questions about adoption, competitive advantage, and the gap between visionary users and lagging organizations. Expect February to be another month of explosive innovation and, likely, more new AI model releases.
"If January set expectations for where the AI race is—and, most importantly, helped us appreciate that we are truly in this new agent era—many think February is going to be all about new model drops." (31:30)