Transcript
A (0:00)
The Pentagon just named seven AI companies that it will let into its classified network, and Anthropic, surprise, surprise, is the one company that it shut out. This is all happening at the exact same moment that Anthropic has given its investors 48 hours to confirm if they want in on its $900 billion valuation round of funding. Before that, Elon Musk admitted in his most recent trial hearings that xAI used OpenAI to train Grok, Anthropic dropped Claude Security in public beta with CrowdStrike and Palo Alto on board, and Google's Gemini is taking over the dashboard inside of 4 million GM cars, with a ton of different brands getting this. We're going to get into all of that on the podcast. The first story I wanted to cover was this Gemini rollout into 4 million GM cars, and we've also just got news that Volvo is included in this. They announced yesterday that Gemini is replacing the Google Assistant in about 4 million cars. This isn't, like, a new deal; it's the Android Auto that's already in there. So anything from 2022 or later will have Google built in: everything with Cadillac, Chevrolet, Buick, GMC, and Volvo's in there too. So you don't have to go and, you know, swap out your hardware. I'm still remembering many painful experiences where I swapped out the entertainment system inside of a car for a new Android Auto head unit or a new screen. It's a painful experience, but, you know, it is part of the fun. You don't have to do that anymore; this is all over the air. Volvo announced the same shift is going to happen for 16 of its models going back to 2020. So this is going to be a big upgrade for a lot of different vehicles. You know, I think we're all kind of sick of trying to talk to Siri and it not working. Amazon and Alexa have made big upgrades.
Google has made some massive upgrades here specifically, like I recently mentioned on my AI Applied podcast, because Google has an incredible AI upgrade with Google Maps, and they're going to be bringing Gmail and Google Docs into that as well. So you're going to be able to just talk to your car and, I mean, technically probably get work done if it were a self-driving car. Anyways, interesting things are going to be happening. Okay, let's talk about what's happening with Anthropic. They just launched Claude Security in public beta yesterday. It's available right now to Claude enterprise customers with the Claude AI sidebar; Teams and Max plans are all going to start getting it next. Essentially, the thing that I think is interesting about this, and Ryan Naraine, who's over at SecurityWeek and also SiliconANGLE, was talking about this, is what it does: it scans your code base the same way that a security researcher would, traces all of the data flows and the read sources, looks at how components interact across files, and hands back vulnerabilities, kind of like an impact report. Then it's going to give you reproduction steps and recommend fixes. Now, this is kind of coming on the back of Anthropic saying, look, we have this new Mythos model, it's really good at finding and exploiting security vulnerabilities, and so they're giving it to kind of the top firms. This seems almost like a dumbed-down version that they're giving to everybody. And maybe dumbed down is the wrong word for it, because Mythos, essentially, you can chat with and get it to do what you want. This feels more like, look, we're just scanning, we're just going to give you a list of security vulnerabilities. My understanding on this is you're not going to just point it at some software and it's going to, you know, be able to go and hack into it just from viewing it. It's going to have to get access to the code.
So you would, you know, connect your GitHub repo and have it go scan that. You can then drop its report straight into Claude Code, and it's actually going to be able to go and patch it. This is kind of the same integration that we see with Claude Design, where it designs something amazing for your website or your app or whatever, and then it makes this little handoff document that you can just give to Claude Code to go and actually implement. So I really love this. I think this is a really, really smart move by Anthropic. They don't need an API. They don't need any sort of literal integration. It just creates a document, an MD file, right? That just explains what needs to happen, and you just give that file to Claude Code, which can go and execute on it. The beta is adding a bunch of scheduled scans, right? So beyond just getting to do it once, you can actually have it run a scheduled scan and find any sort of vulnerabilities, and then you also get to dismiss anything that's not that important. There's CVSS scoring and also a markdown export that you can route to your ticket system. So a lot of really cool things rolling out. And the partner list that they have with this right now: CrowdStrike, Palo Alto Networks, SentinelOne, Trend Micro, Wiz. All of them are integrating Claude Opus 4.7 directly into their platforms. Accenture, BCG, Deloitte, Infosys, PwC, they're all spinning up deployment practices around this. I think that's basically every name we have in enterprise security right now, plus all the big consulting firms. So I think, you know, the underdiscussed angle on this is just the timing. Cybersecurity is one category where Anthropic isn't blocked from the federal market, even with the Pentagon right now. You know, they have this big fight; they're blocked out of some other big deals coming out of the Pentagon.
But security is something that they're still allowed to do for the government, and so it feels like this is an area they're pushing on very heavily. Okay, let's talk about what's going on with the Elon Musk and Sam Altman trials. I mean, these are going to be rolling on for the next couple of weeks, and we're going to have basically daily tidbits coming out of this. But Elon was in federal court in California yesterday for the lawsuit, and he was asked whether xAI used distillation on OpenAI models to train Grok. Right, so basically asking an OpenAI model a question, getting an answer, and then feeding the question and the response back to Grok to help train it. And he said that they partly did. He tried to kind of wave it off, saying it's general industry practice, which, to be fair, it is. That's how DeepSeek did it; actually, a lot of AI model companies do it. It's against OpenAI's terms of service, but it's not illegal, right? OpenAI doesn't want you to do it, and Anthropic actually tried recently, I think, to lobby the government to force companies to stop doing it, because it's kind of a cheap way to train a good model. You can get a lot of these outputs really fast, and it speeds up the whole training process, because you're telling the model: look, when someone asks this type of question, this is the kind of response that we would like you to give. Now, it's kind of interesting, because if you want the absolute best, cutting-edge, well-thought-out answers, you're probably not going to get them per se this way. But anyways, that's another conversation, in my opinion. And speaking of getting the best results out of these AI models, if you're already paying for ChatGPT or Claude or Gemini or Grok, I would love for you to check out my software platform, which is called AI Box AI.
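To make the distillation idea concrete, here's a minimal sketch. This is purely illustrative, not xAI's actual pipeline: the `build_distillation_examples` helper and the chat-style JSONL record shape are my assumptions, loosely modeled on common fine-tuning data formats. In practice the prompts would be sent to the teacher model's API and the student model fine-tuned on the resulting file.

```python
import json

def build_distillation_examples(pairs):
    """Turn (prompt, teacher_response) pairs into chat-style
    fine-tuning records for training a student model."""
    records = []
    for prompt, response in pairs:
        records.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        })
    return records

# In a real pipeline the responses would come from API calls to the
# larger teacher model; here they are hard-coded for illustration.
pairs = [
    ("What is model distillation?",
     "Training a smaller model to imitate a larger model's outputs."),
]
jsonl = "\n".join(json.dumps(r) for r in build_distillation_examples(pairs))
print(jsonl)
```

The point is simply that each teacher answer becomes a supervised training example, which is why it's a fast, cheap way to transfer capability, and why model providers forbid it in their terms of service.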
You get access to over 80 of the top AI models, all of the latest stuff, including all of the amazing audio tools like ElevenLabs, music generation tools, video generation tools, image generation tools. All of that is over on AI Box AI, and it's currently $8.99 a month. If you get that price, you'll get locked in at that; we won't raise it on you, and you're able to get access to over 80 different models all in one place. We also have an annual plan, which makes that 20% off as well. So anyways, go check it out. You can also talk to our Vibe Builder, which links together all of these different models and automates workflows for you. I hope this is something that saves you a ton of money and is incredibly useful for you. The link is in the description; I hope you check it out and let me know what you think. Next up on the news, we have the Pentagon, which has just released a statement naming seven different AI companies that they're going to be letting into their classified network. They have OpenAI, Google, Nvidia, SpaceX, Reflection, Microsoft, and AWS, all of these getting access to kind of their classified networks. Reuters and US News both kind of broke the formal announcement, and the companies right now are being integrated at Impact Levels 6 and 7, which is basically the highest classification tier. You get, you know, secret and top secret data and anything that touches an actual active operation. So these companies are, you know, basically getting the top-of-the-line information and helping with some of the biggest projects the US government is working on, some of the most critical things. Now, what's not included in that list is the bigger story, in my opinion, which is Anthropic.
Now, this is all of course coming from the dispute that the DoD had with Anthropic. You know, they eventually labeled them a supply chain risk, which is a terrible designation for a company, the kind that usually goes to a foreign adversary, because Anthropic wouldn't give the DoD unrestricted access to its models to do whatever they want with them. And there was kind of this whole legal spat. Emily Michelle, the Pentagon undersecretary, went on X and called Dario Amodei a liar with a God complex. Anyways, there's lots of drama in that. What's interesting, though, is that with all of that, a federal judge in California already blocked the supply chain risk designation back in March. What's also interesting is that if you look at all of the data, Claude is actually one of the most used internal tools at the Department of War. And of course, this is something that is, you know, very powerful, so it's no shocker that it would be highly sought after, especially for developers, for whom this is kind of the tool of choice right now in the industry. Axios' Mike Allen actually reported earlier this week that the White House Chief of Staff, Susie Wiles, and the Treasury Secretary, Scott Bessent, already met with Amodei to try to find an exit. Right, so they're trying to figure out how they could bring them back in; the quote is "how to save face and bring them back in." This hasn't shipped yet, so we're going to see what happens there. But this is also really incredible considering the timing for Anthropic. Right now, they have a $900 billion valuation round of funding that they're allegedly looking at doing, with about $50 billion in funding that they're going to raise. And apparently, according to TechCrunch, they just gave investors 48 hours to submit allocations and agree to the amount that they're going to be putting in. So this seems like something that might be wrapped up very quickly.
You know, I mean, that's pretty aggressive, to just tell your investors, hey, look, you have 48 hours, are you in or are you out? But the reason why I think that's important is because a lot of people have been saying that Anthropic hadn't been actively seeking funding; they'd just raised some, they'd allegedly been fine, but they had a bunch of people coming to them and basically begging them to take their money. Whether that's true or just part of the narrative for all of this remains to be seen. But this is probably the last private round for Anthropic before they have their IPO. So if anyone wants to get in before the valuation probably goes up, now would be the moment. Or at least that's the pitch Anthropic is giving. So, wild times for the entire AI industry right now. One other thing that I think is not talked about enough, though, with the whole DoD deal and all the different companies they announced they're going to be working with, is that there's a company called Reflection AI that is currently on the Pentagon's list. Reflection's two founders are Misha Laskin and Ioannis Antonoglou, both formerly from DeepMind. Laskin ran reward modeling on Gemini, and Antonoglou was a core architect of AlphaGo. So, you know, these are obviously very legendary researchers. They raised $2 billion at an $8 billion valuation last October. But these aren't the biggest companies in the world, and I would also say, you know, most people probably haven't even heard of Reflection. I think the Pentagon basically putting them on the same list with OpenAI and Microsoft, and leaving Anthropic off of it, is kind of their own way of saying, look, we have some frontier lab options that you've never heard of, and we're going to fund these alternatives to Anthropic ourselves if we have to. And so I think that's kind of an interesting strategic move.
They put a lot of these big companies on there, and then they put some brand new ones on there that it feels like they're kind of funding, and they'll probably get some custom stuff done that they need that others might not be willing or able to do. And I think the Anthropic exclusion is kind of a symbol, right? Reflection is basically getting waved in as a substitute, perhaps; I don't know. But anyways, it's very fascinating to see what's going to happen here. Okay, that's it for the show today. If you got something out of it, it would mean the world to me if you could drop a comment over on Apple Podcasts, or, if you're on Spotify, hit the stars; it's on the About tab, and it would help this show out a ton. If you want to get access to over 80-plus AI models and an automation builder that runs super simply, it is $8.99 a month. Go check out AI Box AI; there's a link in the description. All right, I'll catch you all in the next episode.
