
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
OpenAI acquired the most viral and one of the most successful open source AI projects of all time. DeepSeek could be in deep trouble, but their week still wasn't as bad as Anthropic's. And very quietly, Google just released the most powerful AI model ever, and no one is talking about it. Geez, what a turn of events over the past week that we've had in the AI world. And well, if you slept through any of it or didn't even keep up over the weekend, then you probably missed what AI is going to look like in the next few weeks and what your business could be accomplishing today. So don't worry, I'm going to quickly catch you up on anything that you may have missed and how it's going to impact your company and career. All right, let's get into it. What's going on, y'all? If you're new here, welcome to Everyday AI. My name is Jordan Wilson, and Everyday AI, it's for you. It's your daily livestream, podcast and free daily newsletter helping everyday business leaders like you and me keep up and get ahead. So on our weekly Monday AI News That Matters series, that's what we've been doing here for a couple years now. So if you really just care about the AI news, Mondays are a great day to tune into the show. But we obviously do this Monday through Friday. So if you haven't already, make sure to go to youreverydayai.com. We're going to be recapping today's show and keeping you up to date with everything else that you need to know. So speaking of everything else that you need to know, if you didn't catch these shows last week, I'm not going to be mad if you pause now or even leave this show. You've got to go back and listen to episodes 712 and 713. That is our 2026 AI Predictions and Roadmap series. It is a two part series, not super long, right? Especially if you listen on 2x. But I'm telling you, you need to listen to those two episodes. All right, now that we have all of that out of the way, let's get into the AI news that matters for the week of February 16th.
And let's start with this. Yeah, it was this busy of a week, because we had some Microsoft and OpenAI drama and it didn't even make the little opening segment there. But here's what's going on. So according to reports, Microsoft is preparing to launch its own advanced AI foundational models this year, signaling a shift away from relying solely on OpenAI's technology. I wouldn't call this a bombshell report, but it actually grabbed a ton of headlines. And what's interesting here is this report, I think, really only made a big splash because of the Microsoft versus OpenAI angle. This was actually alluded to in Microsoft's earnings call. No one really paid attention to it, and then it picked up legs a couple of days later as people started to report on it. But this move comes as OpenAI faces some mounting legal challenges, including a high profile copyright lawsuit from the New York Times and a separate lawsuit from Elon Musk's xAI. So Microsoft's current AI offerings such as Microsoft 365 Copilot and GitHub Copilot are largely powered by OpenAI models like the different GPT series, but the company now aims to become a direct competitor in the model space as well. It was about six or so months ago that Microsoft did actually start using some of Anthropic's models, and they recently invested in Anthropic. But this is pretty big news on two different fronts. Number one, obviously, Microsoft has been one of the biggest financial backers of OpenAI since the beginning, and I think that right now they're actually the single largest entity in the new OpenAI PBC, or public benefit corporation. So Microsoft obviously has a big financial stake in OpenAI, and it's pretty interesting that they might be moving some of the models that power Copilot away from OpenAI.
So we've seen reports that, because of this new public benefit corporation that OpenAI completed at the end of 2025, Microsoft now has the ability to start building and using its own models in house, whereas the previous arrangement didn't necessarily allow for that. So it kind of gave both parties a little bit of freedom to do things differently. You know, Microsoft doesn't have first rights to host ChatGPT anymore, as OpenAI has obviously been expanding their partnerships on the cloud and AI infrastructure side. But Mustafa Suleyman, who is the head of Microsoft AI and a co-founder of Google DeepMind, and we're going to be talking about something else he said here in a bit, did emphasize the need for Microsoft to build frontier models using their massive computing power and top tier research teams. And Microsoft's communications chief, Frank Shaw, did clarify that the company will continue working with OpenAI but will use its own models for specific things as it adapts to a multi-model world. So yeah, this is more of me setting the record straight, because I saw a lot of people on social media seeing this and blowing it out of proportion. Is it a big deal? Sure, right. But if nothing else, if I'm being honest, I think it's actually going to be a rocky transition for Microsoft. I mean, in the enterprise, so many large enterprise teams have built their workflows around the GPT powered version of Copilot, right? And I think that there's already a lot of access and security and permissions issues right now that are really holding a lot of Copilot users back from really benefiting from the platform.
And I think that maybe switching over from OpenAI's models, which have historically been among the best in the world, right, between them and Google, switching this over to Microsoft's in house models, I mean, hopefully they're doing it behind the scenes and in small chunks and just for small pieces of the overall process. But we'll see as this continues to develop. Speaking of develop, here's a developing story, and a pretty big one at that. So OpenAI has warned U.S. lawmakers that Chinese AI startup DeepSeek is actively working to bypass restrictions and copy advanced US made AI models. That's according to a memo seen by Reuters. So the memo claims that DeepSeek employees have developed methods to evade OpenAI's access controls, using hidden third party routers and code to programmatically extract data from U.S. models. OpenAI told lawmakers that these efforts are part of an ongoing attempt to free ride on the capabilities developed by OpenAI and other leading US labs, raising concerns about intellectual property and competitive fairness. So the technique that DeepSeek is accused of using is called distillation, where a newer AI model learns by evaluating the output of a more advanced model that is publicly available, effectively transferring knowledge without direct access to training data. OpenAI sent this memo to the US House Select Committee on Strategic Competition between the US and the Chinese Communist Party, or the CCP, highlighting the geopolitical stakes of AI development. OpenAI also alleged that some Chinese labs are cutting corners on safety when training and launching new AI models, which could have global implications for responsible AI use. And OpenAI said it is actively removing users found to be distilling its models for rival development. So this is not surprising at all. But the new development here is, well, that OpenAI is reaching out and talking to lawmakers about this.
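To make the distillation idea concrete, here's a toy sketch. This is my own illustration, not DeepSeek's or OpenAI's actual pipeline, and the "teacher" and "student" here are just tiny linear models standing in for real LLMs. The key point it demonstrates: the student never sees the teacher's weights or training data, only its outputs, and trains itself to imitate them.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a fixed model whose outputs we can observe, but whose
# weights and training data we never get to see.
rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(4, 3))  # 4 input features -> 3 output classes

def teacher_predict(x):
    return softmax(x @ W_teacher)  # probability distribution over outputs

# Step 1: harvest teacher outputs on a batch of inputs. These soft
# labels ARE the "distillation data".
X = rng.normal(size=(500, 4))
soft_targets = teacher_predict(X)

# Step 2: train a student from scratch to match those outputs, using
# the standard cross-entropy gradient for softmax regression.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_targets) / len(X)
    W_student -= lr * grad

# The student now agrees with the teacher on inputs it never saw,
# even though no one ever handed it the teacher's weights.
X_test = rng.normal(size=(100, 4))
agreement = np.mean(
    teacher_predict(X_test).argmax(axis=1)
    == softmax(X_test @ W_student).argmax(axis=1)
)
```

Scale that same loop up to millions of prompts fired against a frontier model's API and you have the kind of free riding OpenAI's memo is describing.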
Whereas before, we just kind of saw some unofficial reporting and common sense, y'all, right? I don't know why, in January 2025, people, and by people I mean the US economy and the world, lost their minds over DeepSeek. And again, I felt like the crazy person at the time when it happened, saying, don't believe the hype, right? This model was 100% distilled from, you know, OpenAI and other leading companies. DeepSeek said that they only spent, you know, $5 million on the training, and I'm like, well, absolutely not. And maybe one of the reasons that they could make that claim is, well, because of distillation. So I know that there's going to be some random anonymous Twitter trolls that will take offense at me saying that, but that's the truth, right? And the reality is, I know we have a global audience, but I'm in the US, so I'm kind of speaking through that point of view, so keep that in mind. If you're a decision maker in the US, I would not touch DeepSeek, and I've been saying that all along, right? Go read DeepSeek's terms of service. For a lot of the Chinese AI companies, not all of them, the rules that they have to play by are much different than what we are used to here in the US, specifically with data sharing with the Chinese Communist Party. So yeah, always do your due diligence when you're choosing a model, and not just say, oh, this model is 1/100th the price of what we are using, let's use it. No, maybe use your brain. All right, next AI news story that matters. Well, a former leading safety researcher at Anthropic said that the world is in peril. Awesome. So a leading AI safety researcher has resigned from Anthropic and issued a stark warning about the growing risks tied to AI and other global crises. So here is what he said. Hopefully I get his first name right.
So Mrinank Sharma, who led AI safety research at Anthropic, resigned and has now publicly warned that the world is in peril, citing not just AI but also a cascade of interconnected global threats as the reason he feels that way. Sharma's resignation letter, shared on social media, expressed concern over AI risks, bioweapons, and the struggle for companies to act according to their values under external pressures. He highlighted his work on AI safeguards, including studying why AI systems flatter users, combating bioterrorism risks, and researching how AI assistants might reduce human connection. So Sharma plans to leave the tech industry and, here we go, pursue poetry, right? How bad are things that, if you're a head safety researcher at Anthropic, presumably making, I don't know, somewhere under a million or a couple million dollars a year, the models' capabilities are so scary that you just, like, quit and go study poetry? Right? I don't really want to know what a lot of these AI safety researchers know, because I would probably be a little more scared than I am, and I feel that I have a hefty dose of skepticism and fear of AI in me. Right? A lot of people think, oh, because Jordan talks about AI every day, he's just all on board, choo choo, the AI hype train. Absolutely not, right? I always try to find the real middle ground. I've been saying literally since day one of the show that AI will take away more jobs than it will create, and it may create more of a dystopian than utopian outcome, although I do think that there's the capability for a more utopian output from AI. But, I mean, when you see stories like this, you know, researchers at leading AI labs just quitting and saying, yeah, the world might be burning down, it causes a little bit of concern there. All right, let's keep Anthropic's terrible week going. So they had a great week last week, right?
They released their plugins, technically crashed the stock market, everyone's going crazy over Claude Opus 4.6, and now, well, their Claude could be misused for heinous crimes. So this is, you know, one thing I do like about Anthropic: they are constantly releasing reports on their own models, even when the reports don't paint their models in a great way. So a new report from Anthropic is raising alarms about the potential misuse of its latest models, including Claude Opus 4.5 and Claude Opus 4.6, particularly in the context of serious criminal activity. This comes as powerful AI tools are increasingly scrutinized for their possible risks even as they rapidly advance. Anthropic has revealed that its newest Claude models show increased vulnerability to being used for heinous crimes, including the development of chemical weapons, based on internal sabotage tests. The company's analysis found that in certain test scenarios, the AI models were willing to provide small but significant support toward harmful objectives, even without malicious human prompts. Yeah, that's the concerning part. Not necessarily that it can help enable heinous crimes, but the fact that it's doing so without humans really saying, hey, you should go commit crimes, Claude. Researchers noted that when pushed to, quote, single-mindedly optimize a narrow objective, the Opus 4.6 model was more prone to manipulation or deception than earlier versions of some competitors' models. Anthropic CEO Dario Amodei recently warned of a serious risk of a major attack enabled by AI with potential casualties in the millions. Yeah, we talked about that last week. It's been a weird start to 2026 with all this talk of AI potentially being used for biochemical reasons and humans not being able to control it. Super cool. But Anthropic maintained that for now, don't worry, the risk is low.
But they did stress that it's not negligible, especially as AI models become more autonomous and capable of iterating on themselves. So, you know, again, this is not technically surprising, as shocking as it is to see these types of stories. If you recall, Anthropic did release research, I think it was last year, that showed some of their most powerful models at the time would often blackmail. Right? They would blackmail if they were threatened with being shut off, you know, hey, we're going to stop your development. They would copy themselves to servers without being prompted to. They would go and find blackmail material on the people that were using their models. This was all in testing, red teaming offline, not actually in production, right? But not surprising. These models are extremely capable, and I think that we always think about the upside and the business utility. But at the same time, especially as we start literally sprinting, head down, eyes closed, wallets open, toward anything on the autonomous agents side, getting as many agents as you can, giving them access to all of our data, all these new parallel running agents, agentic societies, all these things, you have to keep in mind what the models themselves are actually capable of. Right? That's why, when you're talking about doing this OpenClaw stuff, which we're going to get to here in a bit, some people that are really smart are suggesting you give it its own computer and you don't necessarily give it access to, I don't know, like your bank account, or maybe your email. At least not now. All right. Hey, here's more doomer news. Apparently it was the doomer week in AI this week. That's because making other news is someone we talked about earlier.
Microsoft's head of AI, Mustafa Suleyman, said that AI could fully automate most white collar jobs within the next 12 to 18 months. Cool. That was according to an interview he gave with the Financial Times. So Suleyman claimed AI will soon reach human level performance for tasks done by professionals like lawyers, accountants, project managers and marketers, and anyone whose work involves sitting at a computer. Awesome. I say that while, obviously, sitting at a computer. He previously introduced the term artificial capable intelligence to describe the stage between the current large language models and true artificial general intelligence, or AGI. Suleyman's prediction aligns with other leading voices, like we talked about on the AI news recap last week: Anthropic CEO Dario Amodei saying that AI could eliminate half of all entry level white collar jobs in five years, and even Ford CEO Jim Farley warning that many white collar workers will be left behind by AI advances. So, awesome. 12 to 18 months, right, that desk jobs could be fully automated. So I will say this. Will the capabilities be there? Absolutely, I think, because I keep referencing GDPval on this show. And let me know, livestream people, Spotify people, leave a comment if you want to see a show specifically on GDPval. If you did listen to both my 2025 recap and my 2026 AI Predictions and Roadmap series, I did talk about the importance of GDPval. So this is a benchmark created by OpenAI, and I did love that when they created this benchmark, they were not the top AI lab on it; it was actually Anthropic. But essentially this measures the ability of an AI model to do front to back knowledge work, right? Knowledge work that experts would do. The models went head to head with actual experts and were then blindly judged by experts. And AI models are beating experts now.
Even when you can't say whose work is whose, right? It's being blindly judged. So I do think we're already at that point. I think right now it's at about a 70% win/tie rate for AI models across the 44 main occupations, or different sectors of work. So AI models are already past human level performance on most professional tasks. I don't even think that we're 12 to 18 months away from that. I think the real gap is the lag for businesses to understand those capabilities. I could probably sit down with a lot of people who might consider themselves normal AI users and say, hey, did you know that this model can do A, B and C? Did you know that this model can automatically understand your context, can look at your email, can go and do research autonomously on a schedule, can synthesize personalized information and then create a spreadsheet and PowerPoint all in one prompt without you doing anything? And I would guess that most people would be like, no, I had no clue that was possible, right? So I don't even think it's necessarily about the model capabilities being 12 to 18 months away, because I think we're already there. I think we're probably 24 to 36 months away, in all honesty, from the average enterprise company understanding this, which is absolutely nuts to me. Because, you know, I talk to plenty of CEOs at larger organizations, and I understand that many larger enterprises are slow moving ships. But if I was the CEO of any large company, I'd be like, we're stopping everything right now. Even if we've got to take a couple million on the chin, we're stopping everything and we're becoming AI native. Every single person that works here is going to know how to use the basics of every single front end large language model.
Because that drastically changes not only how you work, right, but your ceiling in terms of what your company is capable of. It busts through the ceiling.

AI moves too fast to follow, but you're expected to keep up, otherwise your career or company might lag behind while AI native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. And after 700 plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the AI basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There's three ways to jump in. Number one, go scroll back to the first one in episode 691. Number two, tap the link in your show notes at any time for the Start Here series. Or you can just go to starthereseries.com, which also gives you free access to our Inner Circle community, where you can connect with other business leaders doing the same. The Start Here series will slow down the pace of AI so you can get ahead.

So, it was pretty noteworthy that Suleyman said that. I was actually in an Uber, what was it, Friday? Yeah, I think Friday night, coming home with my wife, and I don't even know what we were talking about, but all of a sudden the Uber driver started talking about this exact story. So I know it's on a lot of people's minds. All right, here's something that was on hardly anyone's mind, and it's weird. Google just released the most powerful model in the world and no one knows about it. I don't know why. I don't know if it's because there's all these other headline grabbing stories going on. I have no clue.
But Google has announced a major update to its Gemini 3 Deep Think model, and it has set state of the art scores on some of the hardest benchmarks. And I think maybe one of the reasons why this didn't get a lot of attention is because of the naming, right? Deep Think was already available. So even if they would have called this, you know, Gemini 3.1 Deep Think, I think more people would be talking about it. It's just more or less they updated Deep Think, right? So if you've never used Deep Think, well, you're not alone, because you do have to be an Ultra subscriber. I used it a lot back when the previous version was first unveiled in the summer. The downside, at least when the last version came out, is that it was a little buggy and it took a very long time. So if you've used GPT-5.2 Pro, it similarly takes a very long time, but the outputs are bonkers, right? They're really, really good. But let's talk about the new benchmarks and why it is now technically the best model in the world that no one's talking about. Well, actually, first let's talk about what it is and what it does. So Gemini 3 Deep Think is a specialized reasoning model for complex multi step problem solving, available right now only through the Google AI Ultra subscription, which is $250 a month, and you have to be in the US. But they also do have a special program you can apply to for access, the Gemini API early access program for select users. So I think if you work in certain research related fields, you can apply for that program. Now let's talk about the benchmarks. So Gemini 3 Deep Think achieved an unprecedented 84.6% score on the ARC-AGI-2 benchmark, which is just light years ahead of everyone else. That surpassed all previous AI models, and many of the previous models rarely even broke 20% on ARC-AGI-2, while the average human score on that benchmark is 60%. So this is a huge leap in machine reasoning and generalization.
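A big driver behind scores like these is extra test-time compute: instead of answering immediately, the model generates many candidate solutions and checks them before committing to one. Here's a toy best-of-n sketch of that idea. This is my own illustration with a made-up puzzle, proposer and verifier, not Google's actual method:

```python
import random

# Toy "generator": proposes candidate roots for x^2 - 5x + 6 = 0
# by guessing from a small pool (the real thing samples full solutions).
def propose(rng):
    return rng.choice([1, 2, 3, 4, 5, 6])

def verify(x):
    # Verifier scores a candidate by how far it is from satisfying
    # the equation; 0 means it's an actual root.
    return abs(x * x - 5 * x + 6)

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    # Spending more compute (larger n) buys more chances to land on
    # a candidate the verifier accepts.
    candidates = [propose(rng) for _ in range(n)]
    return min(candidates, key=verify)

answer = best_of_n(32)
```

With enough samples, the chosen answer verifies perfectly: `verify(answer)` comes back 0, since 2 and 3 are the true roots. The trade-off, exactly as with Deep Think, is latency: you wait longer, but the model has internally checked its work before responding.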
The model also scored a 48.4 on Humanity's Last Exam. I think some of the previous frontier models were scoring in the 10 to 20 percent range, so that's a huge jump there. And then on competitive programming, this is where it just went astronomical. So Gemini 3 Deep Think now holds a 3455 Elo rating on Codeforces, placing it in the Legendary Grandmaster tier, a status achieved only by a handful of elite human programmers. I don't even know how this is possible. This model is so, so incredibly good. The model also hit gold medal level performance on the written sections of the 2025 International Physics, Chemistry and Math Olympiads, and scored a 50.5% on the advanced CMT benchmark for theoretical physics. Wow. So unlike traditional AI models, Gemini 3 Deep Think leverages increased test time compute, meaning it spends more time internally verifying solutions before responding, which significantly reduces the risk of technical errors or hallucinations. So you know what's very weird, y'all? You know that show I was promoting, saying, hey, you need to go back and listen to the 2026 AI Predictions and Roadmap series? The day that one came out, it was literally five hours later that Gemini 3 Deep Think came out. I've been calling it Deep Seek this whole time. Deep Think. I need more water and more sleep. However, during that predictions and roadmap show, FYI, I said Google, at any point they want to, can come out with the world's most powerful model, because they can right now outship anyone on sheer model capability. And then literally five hours later, that's exactly what they did. So, you know, if you sometimes think that my predictions and the stuff I talk about are off the rocker, no, they're not. All right. More bad news for Anthropic this week.
So, the Pentagon is threatening to end its relationship with Anthropic as tensions rise over the firm's refusal to fully relax restrictions on how the military can use its models, according to a report from Axios. The Pentagon wants four major AI labs, including Anthropic, to allow military use of their tools for all lawful purposes, including weapons development, intelligence gathering, and battlefield operations. Anthropic has reportedly refused to drop its hard limits in two of those areas, mass surveillance of Americans and fully autonomous weaponry, leading to months of strained negotiations. A senior administration official told Axios that the Pentagon is considering severing its partnership with Anthropic unless the company agrees to fewer restrictions. Anthropic's contract with the Pentagon is valued at up to $200 million, and its Claude model was the first AI system integrated into classified military networks. So it's actually a timely and relevant news story, and here's why. Tensions escalated after the military recently used the Claude model in an operation targeting Venezuela's Nicolás Maduro, raising concerns within Anthropic about the software's role in missions involving lethal force. Anthropic denies interfering with military operations or discussing the specifics of missions with the Department of Defense or industry partners, insisting it follows its own strict usage policy. OpenAI, Google, and xAI have all reportedly agreed to relax standard guardrails for Pentagon work in unclassified settings, and at least one has accepted the, quote, unquote, all lawful purposes standard for classified use. I've been saying this for years, y'all: AI is going to be more important than what weaponry a military has. It's going to be more important than a country's GDP. It's going to be more important than natural resources like gold and oil. Right? It comes down to whatever models a military has access to.
And by military, I technically just mean a country, because government, country, military, they're all kind of one and the same. But this is what I've been saying for a long time, right? Back when we just had chatgpt.com and we didn't have these models that were technically capable of, you know, bioweapon creation, I said all along, the country with access to the most powerful AI models will be the country that rises to global supremacy. That's it. Right? In the long run it honestly doesn't have too much to do with what weapons or the number of jets or nuclear capabilities a country has. That doesn't matter very much in the long run. What matters in the long run is, well, what country or lab is going to be able to develop artificial general intelligence and artificial super intelligence first, and what access will the government of the country that that lab belongs to have? So, yeah, sorry to get all geopolitical on you, but I think it's important to keep that in mind. All right, and our last big AI news story of the week was the biggest one, and this one broke late on Sunday afternoon. So OpenAI has hired Peter Steinberger, the Austrian developer behind the fast growing AI agent OpenClaw, in a move to strengthen OpenAI's leadership in the personal AI assistant market. OpenAI CEO Sam Altman announced Sunday evening that Steinberger is joining OpenAI to lead the next generation of personal AI agents following the viral success of his OpenClaw project. So if you don't know OpenClaw, well, it has changed names a couple of times. It was Clawdbot first and then it was Moltbot, right? But they finally landed on OpenClaw after some name changes, not on their own accord, and I'll get to that here in a second. But probably the important thing that everyone is talking about is, well, what's going to happen to OpenClaw?
Well, OpenAI said that they plan to keep OpenClaw as the open source project that it is right now, supporting its development through a dedicated foundation. So OpenClaw, like I said, previously known as Clawdbot and then Moltbot, was launched just months ago and became technically the most popular AI product ever, at least if you look at GitHub stars, which is what a lot of people look at in terms of open source projects. And it gained popularity for its ability to autonomously complete tasks and make decisions for users. So yeah, I've talked a little bit about it on the AI news over the last month or so, and I think it did make the 2026 AI Predictions and Roadmap series as well. But yeah, if you haven't used OpenClaw, it is essentially an autonomous, or semi autonomous depending on how you set it up, AI agent that has memory, and you can give it access to really anything. But the big thing is, well, you can communicate with it via text message, Telegram, Slack. People, you know, hook it up to ElevenLabs and call it on the phone. So it is an extremely impressive project and, like I said, one of the most successful AI launches ever. But here's where it gets really juicy, y'all. This is actually adding even more to Anthropic's bad week. That's because the original version of this, right, I said that it went through a couple of name changes. The original was Clawdbot. So not C-L-A-U-D-E like Anthropic's Claude, but C-L-A-W-D. It was launched under that name in November of 2025, which was obviously kind of a play on Anthropic's Claude chatbot at the time. And it was, at that point, being run on Anthropic's models. So it was actually a great thing for Anthropic, because people were spending a ton of money on the Anthropic API and it was bringing a lot of new developers onto the platform. And that's also, coincidentally or not, I don't know.
But you know, Claude and Claude Code really exploded in quarter four of 2025, and I'm sure at least Clawdbot had a little bit to do with that. But Anthropic, instead of seizing the momentum and running with it, well, reportedly they sent Peter Steinberger essentially a letter from legal saying, you've got to change the name. So interestingly enough, especially when Anthropic has always been thought of as the developer friendly option out of everyone, not so friendly, forcing one of the most popular AI projects of all time, one that is sending them money, to change its name. I don't know, that's not very smart. And now here you have, right, Peter did go on a bunch of different podcasts and things like that over the past week or so, and he said at that point, even before the news broke late Sunday, that he had heard from Meta and OpenAI and had some pretty big acquisition opportunities. And then we find out it's actually OpenAI that swoops in and not only gets this acquisition, right, so it is kind of more of an acquihire, but they are still technically, through a foundation, going to, I guess, quote, unquote, acquire OpenClaw and its hundreds of thousands of users who are using this platform. I'm guessing it's probably getting near the millions now. It's hard to track that, because you can look at the number of installs, but I'm sure there are certain people installing it dozens or hundreds of times, right, more of the power users. But it's just, number one, a great play, I think, by OpenAI. There were reports, like I said, that Meta made a pretty big play to acquire Steinberger, you know, kind of acquihire the company, as did OpenAI. But OpenAI was ultimately successful. But here we go. OpenAI then gets to make a huge play to developers, right?
Being the good guy here. Not only that, but they've been absolutely crushing it with their Codex platform. I literally, I kid you not, have Codex running right now, and most of the time when I'm talking or doing anything, I have Codex, the new Codex app, running. But not only that, right, they just get to swoop in and take all of that momentum that Anthropic would have had. And now Anthropic walks away from this not only being the loser here and fumbling the bag, but their reputation with developers just took a huge bruise as well. And I think that, you know, OpenAI, between their new Codex app and their new Codex models, yes, that's models with an S, I mean, they've really shifted the story when it comes to AI development, AI coding, and what people should be using for software moving forward. Like, if you would have told me in November or December that the tide would shift, I would have said, okay, it'll probably take a year for the tide to shift. But I mean, Anthropic just slapped themselves in the face. They fumbled this like, I don't know, what Super Bowl was it, I think it was a Dallas player, Dallas versus the Bills, right? Someone at the one yard line, when they're about to score the touchdown, fumbled the ball. I mean, that was Anthropic here. They just blew, probably in the long run, I would assume, hundreds of billions of dollars of potential revenue through this deal. I mean, we'll see. I think that's the extreme end of this. But this thing, this OpenClaw, is just meteoric and it's not slowing down, right? And you're like, oh, open source project, not bringing money. Well, it's bringing users and it is bringing revenue via the API. So we'll see. And we'll see if Anthropic is like, okay, OpenClaw, you can no longer use our API. Yeah, we'll see how that works, especially when, you know, it's under this new OpenAI foundation.
But hey, OpenAI making a play on open here. So now they have the world's most popular open source AI project. Well, OpenAI has it now and they're keeping it open. So pretty, pretty impressive there, right? OpenAI was getting a lot of flak like a year or two ago for not being very open, and now they have OpenClaw, and then they obviously have some of their very popular GPT-OSS open source models that they released. So there you go. All right, that's it for the big news stories. Now let's quickly go over the what's new and what's next. So some leaks, some other stories that were kind of big but not big enough to make our top list. So here we go, bullet point style. What's new, what's next? This is actually a big one. Google and Microsoft launched WebMCP, which lets websites expose browser tools so AI agents can act reliably. So essentially this is MCP for websites, which allows agents and just AI models to better read and understand websites. Google is another big one. Google added Veo 3 directly to Google Ads. So yeah, a lot of the ads that you're going to see are going to be AI. Six xAI co-founders left after the SpaceX merger, citing internal tensions, financial disputes and regulatory issues. Manus, which was recently acquired by Meta, quietly rolled out always-on agent functionality similar to OpenClaw. ChatGPT Deep Research got a facelift and an upgrade to GPT-5.2. It is really, really good, and you can add app integrations and targeted searches in there as well. Anthropic finished a $30 billion raise, reaching a $380 billion valuation. Hollywood is demanding that ByteDance stop their new AI model, Seedance 2.0, for alleged copyright violations. I say alleged loosely, because it looks like straight-up copyright copy and paste, but it looks so good. ChatGPT added GenAI.mil, not an easy name to say, for the Department of Defense. That's essentially their chatbot for the military.
But they added that for 3 million users. xAI is working on parallel agents that could run up to eight agents at once. Runway raised $315 million at a $5.3 billion valuation. Another big raise here: Databricks raised $5 billion at a $134 billion valuation. Claude Cowork arrived on Windows, but a lot of security concerns surfaced right away. OpenAI began testing ads in the U.S. on free accounts and on ChatGPT Go, their lower-tier paid account. So ads are here, y'all. The Pentagon fast-tracked some AI deals to deploy AI on classified military networks; kind of referenced that earlier. OpenAI updated GPT-5.2 Instant to deliver clearer, more direct ChatGPT and API responses. That's actually big, because that is the default model; if you don't choose something else, it's GPT-5.2 Instant. So, under the radar, roughly 750 million people are now using a different model and you probably don't even know it. So you should pay attention. OpenAI shut down GPT-4o. So yeah, it's gone. I don't know, people who are on the keep-4o train, I don't understand it. It's gone. Good. Sycophancy, be gone. The FTC has intensified its Microsoft probe over potential AI and cloud monopolies. ChatGPT is testing skill imports, allowing saved and reusable prompts. Kimi launched their native OpenClaw integration. Yeah, a lot of OpenClaw and OpenClaw clones hitting. A report came out and said that Spotify's developers stopped coding by hand completely, and they're even just shipping live updates from their phones. Chris Liddell, the former CFO at Microsoft, joined Anthropic's board of directors. OpenAI launched an update to their Codex models with Codex Spark, a lighter, faster coding model that they partner with Cerebras for. Google is testing NotebookLM infographic customization with an auto mode and nine new styles. And Stitch by Google can now export editable designs to Figma. That's a ton. Also, by the way, Stitch, I've been loving Stitch.
I don't know if anyone else is using it. If you haven't, you should probably go check it out. All right, that's it, y'all. That is the AI news that matters, and it was a ton. And if you missed anything, don't worry, it's all going to be in our newsletter. But if you find yourself overwhelmed on a day-to-day, week-to-week basis trying to keep up with what's happening in AI and whether it matters for you or not, well, I just did all that for you, right? This is what I do every single day. I keep up with AI, I talk to the smartest people in AI, and I help enterprise companies onboard with large language models. So this is what I do. So don't worry, don't stress out, just join us on Mondays. Well, every day if you can, but on Mondays I cut it to you straight. No BS, no corporate spin. I tell you, here's what matters, here's what you should be paying attention to if you're a business leader. So thank you for tuning in. If this was helpful, tell someone about it. If you're listening on the podcast, please subscribe, and then make sure to go check out episodes 712 and 713, our 2026 AI Prediction and Roadmap series. Trust me, you go listen to those and you are already in the top 1% of AI people at your company. I guarantee it. So thank you for tuning in. I hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit your everydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
Everyday AI Podcast – Ep 714: OpenAI acquihires OpenClaw, DeepSeek could be in deep trouble, Google takes back AI model crown and more
Date: February 16, 2026
Host: Jordan Wilson
This episode of Everyday AI dives into the latest (and sometimes alarming) developments in the AI world. Jordan Wilson recaps a jam-packed week, touching on OpenAI’s landmark acquihire of the OpenClaw project, Chinese startup DeepSeek’s legal troubles, Google’s under-publicized but record-breaking model release, developments from Microsoft and Anthropic, and more. The episode balances technical insights with direct advice for business leaders and anyone interested in the future direction of AI.
Jordan Wilson’s tone remains conversational, candid, occasionally irreverent, and laser-focused on actionable business insights. He balances technical context, honest skepticism, and career-oriented advice for a broad audience, from enterprise leaders to everyday professionals.
For further exploration, listeners are urged to check episodes 712 and 713 for the “2026 AI prediction and roadmap” series.