Transcript
A (0:00)
Today on the AI Daily Brief, the OpenClaw-ification of AI, and before that in the headlines, the fight between Anthropic and the Pentagon heats up. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, AIUC, Robots and Pencils, and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts if you are interested. To learn more about sponsoring the show, send us a note at sponsors@aidailybrief.ai. And last note today, big apologies for missing the show yesterday. Our journey back from South America ended up being significantly longer and more complicated than we expected, not only because of the East Coast snowstorms, but because we also had an emergency deplaning in Manaus, Brazil that ultimately led to the entire trip being more than 55 hours door to door. So what we're going to do this week is treat the Wednesday show as the off day, and we will have a normal show on Saturday. So get prepared for some weekend listening. Still, we have a lot to catch up on. So with that, let's dive in. We kick off today with a topic that could easily be a main episode. The Pentagon has handed Anthropic an ultimatum in what could be the most crucial battle for AI safety to date. Now, heading into this week, we had building reports that Anthropic's contract with the Department of Defense, or the DoW, depending on who you ask, was in jeopardy over AI readiness. Anthropic was insisting that their technology should not be used to power autonomous weaponry or for the surveillance of Americans. At certain points in the reporting, it seemed like Anthropic had contracted on the basis that their technology would not be used to assist in any lethal exchange.
The Pentagon, on the other hand, was insisting that tech companies should not have input on how technology is used in military operations. They have been pushing for each AI lab to provide their models with terms that allow, quote, all lawful use, basically establishing the position that only US law, rather than tech company policy, should be the limiting factor in the conduct of war. The conflict had, of course, erupted after reports that Anthropic's AI models had been used during the raid that captured Venezuelan President Nicolas Maduro. The reporting has been unclear as to what extent AI was used, but it appeared that Anthropic was furious that their technology was used in what was ultimately a lethal operation. In that context, Anthropic executives were summoned to a meeting at the Pentagon on Tuesday to settle the dispute, and things did not go well. Defense Secretary Pete Hegseth personally gave Anthropic CEO Dario Amodei a Friday deadline to agree to their terms or face a government blacklist. And this is not just that they would drop their contract. The threat was that Anthropic would be designated a supply chain risk and barred from use among military contractors as well. That designation had previously only been applied to foreign companies such as Huawei. New reporting substantially fleshes out the conflicting views. Sources familiar with Anthropic's position said they are not confident that their AI models are reliable enough to operate weapons without a human in the loop. They also note that there are no current laws dealing with how AI can be used in domestic surveillance. Hegseth reportedly told Anthropic that they have until 5:01 on Friday to get on board or not. He also warned Anthropic that he would ensure that, quote, the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of whether they want to or not.
Most are taking this to mean the Pentagon would compel Anthropic to provide a version of Claude without guardrails trained into the model. The Defense Production Act does allow the US government to dictate contract terms on goods and services deemed critical to national security. However, it has rarely been invoked against the will of individual companies and is more typically used to order entire sectors to prioritize the production of certain goods to fulfill government orders. For example, it was invoked during the 2021 California wildfires to increase the production of fire hoses, and during the pandemic to boost the supply of ventilators. Katie Sweetin, a former liaison between the Justice Department and the Department of Defense, noted that Hegseth's threats are kind of contradictory, telling CNN: I would assume we don't want to utilize technology that is a supply chain risk, right? So I don't know how you square that. What it sounds like is that the supply chain risk may not be a legitimate claim, but more punitive because they're not acquiescing. Despite being tense, sources said the meeting was cordial and that there were no raised voices. Hegseth even praised Anthropic's models, which has been one of the undercurrents of the story. The Pentagon seems to believe that Anthropic's technology is so far ahead of the competition that it can't be replaced by another supplier. The Pentagon has standing contracts with OpenAI, Google, and xAI. However, xAI is the only one to agree to the all-lawful-use standard across both classified and unclassified settings, and Claude is the only model currently cleared for use in classified settings, according to Axios. Sources familiar with Anthropic's thinking said that they have no plans to buckle to the Pentagon's demands.
Their official statement was polite but firm, commenting: We continued good faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do. Meanwhile, a Pentagon spokesperson said: This has nothing to do with surveillance and autonomous weapons being used. You can't lead tactical ops by exception. As of Wednesday, Axios reports that the Pentagon is taking preparatory steps towards labeling Anthropic a supply chain risk. They've reached out to major contractors, including Boeing and Lockheed Martin, to determine if they have any critical dependencies on Anthropic's technologies. Now, the reason that this is so important, and, like I said, could easily be a main episode and who knows, might need to be at some point, is that there's much more than Anthropic's $200 million contract at stake. The dispute goes right to the heart of the debates around AI safety that have been raging on the Internet for the past decade. We are no longer talking about hypotheticals, but a very practical question of whether AI labs or the US government should have the final say on how military AI is deployed. Semafor writer Reed Albergotti commented: The most likely conclusion of this saga is that the Pentagon forces Anthropic to comply using the Defense Production Act. In some ways, this means both sides get what they want. The Pentagon no longer has restrictions on how it can use Claude, and Anthropic gets to save face even if it ends up complying with the Pentagon's demands. For some, this would be a very unsatisfying conclusion. Lawfare Media wrote: The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints. Congress should be setting these rules, and it should do so in a hurry.
Obviously this story continues to evolve and is something we will watch. But for now, we move over to one of Anthropic's big competitors in OpenAI, where reporting suggests that their Stargate project has stalled out. Project Stargate was announced last January and was one of the first and catalyzing events that kicked off the AI infrastructure boom. Sam Altman went to the White House to announce the half-a-trillion-dollar data center investment, with SoftBank's Masayoshi Son and Oracle's Larry Ellison also in attendance. Stargate set the tone for data center investments moving forward. Prior to that, hyperscalers had sort of been hedging their bets, being cautious of overbuilding. But the announcement of half a trillion dollars in spending was Sam Altman's declaration that he was going to build AI compute on a civilizational scale and everyone else should get with the program. And yet, despite the grandiose narrative, the project had challenges from the start. Indeed, the day it was announced, Elon Musk tweeted, they don't actually have the money. And although we know Elon has a particular ax to grind here, that did seem to at least be partially the case, with reporting later in the year suggesting the project was struggling to get financing in place. Now, however, The Information reports the entire Stargate project is stalling out. Their sources claim that the joint venture still hasn't been staffed up and that there is no data center development under the Stargate banner. In fact, the sources state that the joint venture began falling apart within weeks of the announcement due to a lack of leadership and coordination. OpenAI, Oracle, and SoftBank couldn't agree on who would do what, and so the project languished. Stymied, OpenAI sought to develop data centers independently, but lenders apparently balked at the prospect of multi-billion-dollar loans tied to an unproven company.
Months later, of course, OpenAI circled back to partnerships and began developing data centers in collaboration with both Oracle and SoftBank. But none of the projects involve all three Stargate participants. There's a lot of nuance here, but ultimately the issue is that the scale and speed of the buildout has fallen short of OpenAI's ambitions. They originally targeted having 10 gigawatts in Stargate commitments signed by the end of 2025. But as we stand in early 2026, OpenAI has 6 gigawatts under contract with Oracle and 2 gigawatts with SoftBank. The projects have all been hit with delays, and only one is known to be partially operational at this stage. Ultimately, all of this may be less dramatic than it seems and just the challenge of doing things on such an unprecedented scale. Sachin Katti, who works on compute scaling at OpenAI, weighed in, writing: Stargate is the umbrella brand for our compute strategy. It's about mobilizing the full ecosystem to deliver a step change in global AI compute, and over the past few quarters that vision has become a reality. We exited 2025 with around 2 gigawatts of available compute and a model designed to scale: long-term capacity agreements, purpose-built co-located deployments, and deep collaboration on next-generation data center design. Compute leadership is the foundation of research and product velocity, and Stargate is how we build what comes next. Overall, I don't really know what to make of this one, but it's worth paying attention to, if for no other reason than that you know Wall Street, as it tries to figure out what to make of this entire space, is going to be watching it closely. Speaking of Wall Street, we turn finally today to Nvidia, who absolutely crushed earnings, delivering another record quarter for revenue. Wednesday night's earnings left no room for interpretation: the AI chip boom is still in full swing.
Nvidia's revenue was up 73% annualized to reach $68.1 billion, exceeding expectations by a massive 11 percentage points. They also guided even higher growth for the current quarter, projecting a 77% annualized revenue gain. The data center business grew even faster than top-line revenue, coming in at 75% growth. Nvidia also noted that they're still assuming zero revenue from China in their forecasts, so revenue could see another big boost if exports start flowing. CEO Jensen Huang remarked that AI demand is going from strength to strength, commenting: The demand for tokens in the world has gone completely exponential. I think we're all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up. Still, Nvidia also disclosed massive new backstops for the AI buildout. They've now provided $3.5 billion worth of guarantees to companies developing data centers, four times the amount they disclosed in the previous quarter. These guarantees allow smaller neocloud companies to access debt on more favorable terms. However, if AI demand falters, Nvidia could be required to take over leases and the operation of data centers in a struggling market. Nvidia did not disclose which companies received these new guarantees. AMD recently gave a similar guarantee to Crusoe, pledging $300 million, suggesting these arrangements are becoming more common. While the backstop adds marginal pressure on Nvidia's balance sheet, ultimately these deals are pretty modest compared to the booming profits. Jensen also weighed in on the recent drawdown in software stocks, arguing that the markets got it wrong. He restated his view that existing software firms will build agentic features on top of their products, but their value as systems of record and databases will remain. He said: All of these tools that we use today, whether it's Cadence or Synopsys or ServiceNow or SAP, these tools exist for a fundamentally good reason.
These agentic AIs will be intelligent software that uses these tools on our behalf and helps us be more productive. Nobody's going to do ServiceNow better than ServiceNow, and they're going to come up with agents that are really fine-tuned and optimized for the work that uses the tools that they have. In the end, we need the tools to finish their work and put the information back in a way we can understand. Now, despite the rosy outlook, the market response was a little underwhelming. Nvidia stock gained 3% in after-hours trading once earnings were announced, but fell below its closing price later in the evening. Ultimately, right now I just think that markets are completely unmoored in general, but especially when it comes to AI, and there's basically no better evidence for that than the response to this Nvidia announcement. In any case, friends, that is where we will end the headlines for today. Next up, the main episode. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point. Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation. Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate. There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party.
One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and who is just an absolute juggernaut right now, just became the first voice agent company to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com. Most companies don't struggle with ideas, they struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days, depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com/aidailybrief. That's robotsandpencils.com/aidailybrief. Robots and Pencils: impact at velocity. Want to accelerate enterprise software development velocity by 5x? You need Blitzy, the only autonomous software development platform built for enterprise codebases. Your engineers define the project: a new feature, refactor, or greenfield build. Blitzy agents first ingest and map your entire codebase.
Then the platform generates a bespoke agent action plan for your team to review and approve. Once approved, Blitzy gets to work autonomously, generating hundreds of thousands of lines of validated, end-to-end tested code, with more than 80% of the work completed in a single run. Blitzy is not generating code, it's developing software at the speed of compute. Your engineers review, refine, and ship. This is how Fortune 500 companies are compressing multi-month projects into a single sprint, accelerating engineering velocity by 5x. Experience Blitzy firsthand at Blitzy.com. That's Blitzy.com. Welcome back to the AI Daily Brief. I am finally back in the studio after not only a couple weeks abroad in South America, but also three nonstop days of travel. I think I can say fairly confidently that I have never, in the entire time I've been doing this show, been as disconnected from the AI news as I was for the last three days. And so, coming back to it as I was looking through all of the news that I had missed, the big question for this show was going to be: what was the most important theme that I had missed that I wanted to focus this first main episode on? The answer, I believe, is the OpenClaw-ification of AI, a phenomenon where more and more products in the AI space are starting to resemble OpenClaw in ways that I believe are about something much bigger than just trying to compete with a hot project. So let's talk first about some of the feature announcements that inspired this show. Right at the top we have Claude Code's new remote control feature. It is exactly what it sounds like, and it brings a thing that people have wanted forever, and that numerous startups have built around, to the mainstream: the ability to code with Claude Code from your phone. On Tuesday, Anthropic posted: New in Claude Code: remote control. Kick off a task in your terminal and pick it up from your phone while you take a walk or join a meeting.
Claude keeps running on your machine and you can control the session from the Claude app or claude.ai/code. Some of the first reactions were to joke about the social life implications of this. Cremio wrote: Anthropic just saved the San Francisco nightlife. And yet for most, the very clear comparison was in fact OpenClaw. AppClub's MetaPreston writes: This is the only reason I wanted Clawdbot. Investor and content creator Sarah Dietschy writes: Okay, I guess I don't need to try Clawdbot anymore. Guy L. Breton calls it OpenClaw for grown-ups, and Allie K. Miller says: This is the feature I've been waiting for. I work from my phone for hours every day. Remote control is about to turn me into a Claude Code maniac. So one of the big things that people love about OpenClaw is the ability to interact with your agents via Telegram. And that's not because Telegram is necessarily the best experience when you're sitting in front of a computer. It's because using Telegram or WhatsApp or whatever chat app you've connected it to makes it so that you can interact with your agents on the go. This was certainly one of the first things that attracted me to OpenClaw: the ability to have an on-the-go coding agent so I could keep working on my projects when I wasn't sitting in front of the laptop. Lots of other people clearly felt similarly. Nate Herk writes: Claude Code just added what everyone wanted. With remote control, you essentially have Claude Code in your pocket so that you can check in on progress and continue to assign work as you're out and about. Some even went so far as to argue that the use cases for OpenClaw and this new remote control feature might actually end up being a little bit different. They wrote: Claude Code is your work engine. It's a professional terminal tool. It lives in your codebase to write functions, run tests, and open PRs while you walk the dog. OpenClaw is your life engine. It's an always-on butler.
It's better for managing your calendar, booking flights, or texting your mom. Why the remote control is different: first, native versus messaging. OpenClaw talks to you through Telegram or WhatsApp; Claude Code's remote control gives you a dedicated web or mobile terminal bridge to your local hardware. Deep context: Claude Code is aware of your specific dev environment. It knows your repo, your node modules, and your local CLI tools. Session versus butler: you boot up Claude Code for a work session, while OpenClaw stays running 24/7 on your machine or VPS. From a pragmatic usage perspective, use Claude Code to start a heavy refactor, head to a meeting, and check the diffs on your phone. Use OpenClaw to stay on top of your notifications and general digital existence while you're away from your keyboard. If you want to code from your pocket with surgical precision, get Claude Code. If you want a 24/7 digital butler, keep OpenClaw. Now, what I don't totally agree with here is the idea that OpenClaw is only for these sort of personal use cases like managing your calendar or booking flights. In fact, as I mentioned in my video from a couple of weeks ago about my OpenClaw setup, my most useful agents are my research agents that are running 24/7 and updating information. That's useful both for this show and for some of our ecosystem projects like AIDB Intelligence. But I think two things. First of all, the point about why this is going to be so valuable to people from a coding perspective, because it plugs into the main system that people are already using in the form of Claude Code, is, I think, true. One of the reasons I believe that I found myself using my remote coder OpenClaw agent a little bit less is the context gap between it and where I'm doing most of my other projects.
The second thing that I think is relevant about Internoun's post is that even if one doesn't agree with how OpenClaw is characterized here, the actual underlying point is that we're in a broader paradigm shift of agentification, and that there are going to be different tools and different platforms that have similar types of setups and similar types of interactive modes, but with very different use cases. Whatever the use cases end up being, remote control hit like gangbusters. Claude Code PM Noah Zweibin had to come back to Twitter the day after the announcement to apologize for the capacity issues that they had run into on their end and to assure that a broader rollout was coming. For some, remote control was just further evidence of how fast Anthropic is moving. Anonymous AI rumor account I Rule the World writes: Anthropic have now shipped OpenClaw. Dario and the gang are unfathomably ahead and their velocity is insane. It's a fair observation that they may have reached escape velocity: crushing every benchmark, eating enterprise, Department of War begging. One of the labs made a huge breakthrough. Still, others, like LLM Junkie AM Will, noted that what mattered here wasn't so much Anthropic picking up on OpenClaw features, but starting to see the new primitives of a new modality of AI coming to the fore. They write: This is the first official phone-to-local-machine integration from any major AI lab, but not the last. This will become widely adopted. Anthropic, however, was not done. Just one day later, they returned to X to announce a new feature, this time for Cowork, called scheduled tasks. Once again, it was exactly what it sounds like. They wrote: Claude can now complete recurring tasks at specific times automatically. A morning brief, weekly spreadsheet updates, Friday team presentations. They also write: It gets better with plugins, which give Cowork domain expertise across design, engineering, operations, and more.
Also, we're adding a new Customize tab in your Cowork sidebar, one place to manage your plugins, skills, and connectors. Now, going back to the idea of the clawification of AI: if you've listened to my episodes about OpenClaw, or if you've tried it out yourself, you'll know that a big part of the magic is the way that it interacts with scheduled tasks. First, OpenClaw agents have something called a heartbeat, which is basically a recurring reminder whose interval you can set, but which fires every 30 minutes by default, where effectively the agent reminds itself to remember its mission and to keep working if it has perhaps lapsed since the last heartbeat. On top of that are cron jobs, where basically you can tell an agent to do a specific thing at a specific time. When you dig into OpenClaw, it turns out that a lot of the magic is just being able to schedule tasks at certain times. Across my five project manager agents, they all have different schedules for the couple of times a day when they check in with me, either to remind me of all the to-do lists or to go interact with external sources to find the state of different projects. And so this idea of scheduled tasks feels, once again, not like Anthropic trying to compete with OpenClaw, but instead recognizing a new primitive in agentic AI. That didn't stop people from making the comparison, however. Content creator Ole Lemon shared the post and said: OpenClaw for normies has landed. Akash Gupta gets that part of the paradigm shift is AI that works continuously without you having to prompt it every time. He writes: Scheduled tasks means Claude stopped being software you talk to and became software that works while you sleep. That's a category change, not a feature update.
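For anyone curious what that heartbeat primitive actually boils down to, it's essentially a loop that invokes a check-in function on a fixed interval. Here's a minimal sketch in Python of the pattern. To be clear, this is an illustration of the concept, not OpenClaw's or Anthropic's actual implementation; the function and parameter names here are hypothetical.

```python
import time

def heartbeat_loop(check_in, interval_seconds=1800, max_beats=3, sleep=time.sleep):
    """Invoke check_in on a fixed interval, like an agent heartbeat.

    interval_seconds defaults to 1800 (30 minutes), matching the default
    described above. The sleep function is injectable so the loop can be
    exercised in tests without actually waiting.
    """
    results = []
    for _ in range(max_beats):
        sleep(interval_seconds)           # wait until the next heartbeat
        results.append(check_in())        # agent re-reads its mission and resumes work
    return results

if __name__ == "__main__":
    # Stand-in check-in: a real agent would review its mission file and act.
    beats = heartbeat_loop(lambda: "mission reviewed", interval_seconds=0, max_beats=2)
    print(beats)  # ['mission reviewed', 'mission reviewed']
```

A cron job is the same idea with a calendar-aware trigger (fire at 7am on weekdays, say) instead of a fixed interval; either way, the agent does work on a schedule rather than waiting for a prompt.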
Now Gupta, along with lots of others, also pointed out that this is unlikely to make Wall Street, which was already nervous about every new announcement from Anthropic, any less scared about the implications for white collar work. Later in the same post, he writes: A morning brief that compiles overnight Slack activity, email threads, and calendar changes before you wake up. A weekly spreadsheet that pulls data, runs formulas, and drops a formatted Excel file into your folder every Friday. Contractor reminders that fire at 3pm without anyone remembering to send them. Each of those tasks used to be a SaaS product, a Zapier workflow, or a junior employee's morning routine. Now they're a single line in a scheduling interface running on a $20-a-month subscription. Chat was a toy. Scheduled tasks is a labor primitive. When a model can do work on a schedule inside your tools without you asking, it stops being help. It becomes the cheapest employee on earth. Now, this particular episode is not about white collar job displacement and the concerns there. But I will say, just for posterity, that I think the idea that the only implication of this will be the elimination of employees, rather than the mass empowerment of employees to do more, will seem, in retrospect, fairly reductionist. Developer Simon Willison wrote a note about testing these new features, both of which he notes overlap with OpenClaw. And while all the Twitter commentators were breathless, he pointed out that these features are still early; to use his words, remote control is a little bit janky right now. Now, funny enough, at the end of his post about remote control, he said: It's interesting to contrast this to solutions like OpenClaw, where one of the big selling points is the ability to control your personal device from your phone. Claude Code still doesn't have a documented mechanism for running things on a schedule, which is the other killer feature of the claw category of software.
Of course, he then had to update it: I spoke too soon, sharing that Anthropic had now announced scheduled tasks as well. He did point out one limitation: scheduled tasks only run while your computer is awake and the Claude desktop app is open. Basically, if the computer is asleep or the app is closed when a task is scheduled to run, Cowork skips it and then runs it automatically once the machine wakes up or the app is opened again. Now, it is certainly the case that local device management is a whole new aspect of this. I just experienced this by being out of the country for two weeks as the Hudson Valley got hammered by snowstorm after snowstorm, knocking the power out and my machines off, without automatic startup procedures in place. That left me OpenClaw-less for a big chunk of the time down there. Now, importantly, while Anthropic was the big company that got everyone talking about this OpenClaw-ification, the pattern was exhibited elsewhere as well. On Wednesday, Perplexity announced Perplexity Computer, writing: Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end to end. Computer orchestrates models to run agents in parallel, leveraging Opus to match each task to the model best suited for it. Perplexity Computer, they write, is what a personal computer in 2026 should be. It's personal to you, remembers your past work, and is secure by default. Hundreds of connectors, persistent memory, files, and web access, all built on top of Perplexity infrastructure. Go from a single task to hundreds of active projects. Google Senior AI Project Manager Shibam Sabhu writes: Perplexity just launched their own OpenClaw. Composable computers is all you need. Perplexity CEO Aravind Srinivas wrote: What has Perplexity been up to the last two months? We've silently been working on the next big thing, Perplexity Computer.
In a blog post accompanying the announcement, called The AI Is the Computer, he basically made the argument that the differentiator for Perplexity's version of OpenClaw, versus setups that only have access to Anthropic's models, is choice, hammering on the idea that with Perplexity you get 19 models available so that the right task can find the right model. He writes: AI models are becoming so capable that the products built around them have been a bottleneck for showing their true potential. The chat UI is good for answers, and agents are good for individual tasks. Meanwhile, the UI for entire workflows has always been the computer. AI is now firmly multimodal. It understands and generates many forms of data in a single coherent system. Jensen Huang and others have said that the future of AI must also be multi-model. They're right. Specialized models must collaborate like a team. As AI replaces more of the function of the computer itself, the central activity of the computer will be massively multi-model orchestration. So now we've got Anthropic getting clawified and Perplexity getting clawified. Who else? Well, I'm holding OpenAI aside because they obviously got Peter Steinberger, the creator of OpenClaw, to come join them, but we're also seeing it in the product companies. Also on Tuesday, Notion announced their Custom Agents feature, and sure enough, it had a lot of familiar value propositions in the pitch. Notion calls custom agents the AI team that never sleeps. They're autonomous, built for teams, and easy for anyone to build. Give them a job, set a trigger or schedule, and they'll get it done round the clock. Most AI waits for you to ask. Custom agents just go: route bugs to the right place, answer questions, update docs, draft weekly updates, and ping the right people. They're multiplayer, model-agnostic, and built for every team, not just the technical ones. Creator Andrew Werner writes: Notion just launched OpenClaw for regular people. Pros:
No need to buy a Mac Mini; very visual, so anyone can use it; runs automations based on time, Slack messages, Notion comments, etc.; and your team doesn't need to know terms like cron job, sudo, or CLI. The con, he pointed out, is that you need to buy a seat for each person on your team or no one gets it, which, when you've got a lot of people in Notion, could be problematic. A couple weeks earlier, we also saw Airtable launch something in the same family called Super Agent, which honestly feels a little bit closer to the paradigm of Manus or Genspark, but is still pushing into similar clawified territory. Now, I think it would be a mistake to view all of these moves as companies trying to, quote unquote, catch up with OpenClaw. I don't even think that they are particularly focused on the competition with OpenClaw. I don't think they're trying to harness or latch onto the lightning in a bottle that OpenClaw captured. What I think instead is that they're all an acknowledgment that OpenClaw was the first to, if not see, at least popularize what AI looks like going forward: interactive modalities that follow you wherever you are and are not contingent upon one device or another; persistent work that happens without you having to prompt it; agentic capabilities that can interact with all of your personal context and to which you can give permission to use your systems to do things. The point is that these are not features to be copied. They are new fundamental primitives of the agentic era that we're moving into. They will become ubiquitous because they unlock a new set of things that people very clearly want. Now, one thing that I do disagree with very heartily is the idea that because these features are getting easier and better productized, you should just skip the whole OpenClaw setup phase. Many will do that, and it's reasonable.
But one of the best reasons to actually do the hard work of setting up OpenClaw now is that it's going to give you a better education in understanding these primitives than will the products where they're abstracted away. Certainly most people will not take the time to do that. But if you're listening to this show, I know you are not most people. In any case, I expect that this OpenClaw-ification of AI continues throughout the year, and ultimately the claw is seen not just as some hype phase confined to the junk heap of history, but as the starting gun for a totally new era of AI. Thanks for listening or watching, as always. Great to be back with you guys. Until next time, peace.
