Transcript
A (0:00)
Welcome to the podcast. I'm your host, Jayden Schafer. Today on the podcast we're covering some big news out of Anthropic. There are a couple of pretty crazy stories going on right now. The first, I think, is pretty exciting: Anthropic has made a big acquisition. They've acquired Vercept, a computer-use AI startup, and this comes right after Meta poached one of that startup's founders. So there's a lot of drama, but I think there's a lot of strategic advantage in Anthropic acquiring this company. At the same time, there's some pretty wild news with Anthropic right now: they're in the middle of a high-stakes standoff with the US government, the Pentagon, basically. It came out that the Pentagon used Claude when they captured Maduro in Venezuela; they were using Claude to help with a lot of the planning. I'm not sure if that's exactly what sparked it, but at some point Anthropic said, look, we don't want the US government, or the military specifically, using Anthropic for certain things, and they essentially restricted the military's access. Then, of course, the military got upset. So anyway, there's a high-stakes standoff going on right now where Anthropic has until Friday evening to either allow the US military to use its models unrestricted, or face a bunch of consequences. We're going to get into all of that on the podcast today. But before we do, I wanted to mention that I have just completed a massive overhaul and redesign of AI Box AI, and I would love for you to check it out. It's my own startup. You get access to over 40 models in one place. You can talk with all of them without having to have subscriptions to Anthropic, Gemini, Grok, ElevenLabs, and all of the different platforms; you get them all in one place. In addition, you can prompt it to create a workflow or an AI tool for you.
Even if you're not a developer, it will link together different AI models, let you tweak the prompts inside, and automate a lot of the work that you do. So if you want to check it out, it is AI Box AI; I'm leaving a link in the description. We added a bunch of new pricing options: you can get it for just $8.99 a month, and you can get an additional 20% off if you get the annual plan. All of that's linked in the description. Now, let's talk about what's going on with Anthropic. The biggest thing happening, first of all, is just this huge acquisition, and I think the reason it's important is that Anthropic is getting deeper and deeper into this computer-use area. Personally, I use their Google Chrome extension pretty extensively. It's basically like ChatGPT's agent, except it sits on the side of Chrome, so I can open a tab and just tell it, hey, go to this tab and relabel all of the items inside this spreadsheet, doing X, Y, Z. Or recently I was trying to format something. I'm not a developer in any case, but I was trying to work on a project with Lovable, and Lovable was unable to set up the backend in Google Cloud for something I needed to do. It was giving me instructions as if I were a developer, and I was like, great. So I opened up Google Cloud, made an account, then went to Claude, the Chrome extension, and said, hey, here are the instructions, please go do these things. And it started clicking around inside Google Cloud and setting stuff up for me. I'm sure for developers this would be terrifying, but I was able to get done what I was trying to do, which was adding a capability to a personal vibe-coded tool I built for myself called podcaststudio.com. It helps me with all my podcast publishing and editing, and it now allows me to publish two-hour-long podcasts.
So anyway, I was adding that capability, and Lovable wasn't able to do it. I don't really know all the technical reasons why; it was something like needing to beef up the backend. And I was able to get Claude to do this. So obviously this is a very powerful tool, but it's not perfect, and I think Claude and Anthropic are really trying to make a big push here, and that's why they went and acquired Vercept. Now, another interesting thing about this: Vercept is a company that came out of AI2, the Allen Institute for AI, Seattle's AI-focused incubator. The co-founders have a bunch of ties to the institute specifically; they worked with a lot of its researchers. One of them, Matt Deitke, was in the news last year because he negotiated a reported $250 million compensation package to join Meta's superintelligence lab. Vercept had raised about $50 million in total funding, and according to their CEO, Kiana Ehsani, who posted about this acquisition on LinkedIn recently, Seth Bannon, from the firm Fifty Years, led the investment and was serving on the startup's board. They previously had about a $16 million seed round last January, with a bunch of awesome angel investors including Eric Schmidt, Jeff Dean, Kyle Vogt, and Arash Ferdowsi. So I think they have a really solid team, but Anthropic has now acquired them. Ehsani said that the whole team is going to be joining the company as part of the transition, though I think not everyone from the company is moving over. Oren Etzioni, the founding CEO of the Allen Institute for AI, who has previously been described as both a co-founder and an investor in Vercept, is not going to be joining Anthropic.
And beyond not joining, there's actually a ton of drama, because he went over on LinkedIn and posted about this. Vercept had raised $50 million, so it seemed like they had a lot of momentum, and Etzioni said on LinkedIn that Vercept was, quote, throwing in the towel. He was basically complaining that after just a year of running, having raised $50 million, with all that apparent momentum, they're giving their customers about 30 days to transition off the platform, because the platform is shutting down and the team is moving over to Anthropic. So on the one hand, he said, hey, congrats to the whole team over at Anthropic, but he was also quite critical. In another exchange, he actually started criticizing Bannon, one of the early investors, suggesting that the company had not hired the right business leadership. Bannon, of course, pushed back publicly on LinkedIn, defending the founders and framing the acquisition as a good outcome. So there's this back-and-forth that kind of escalated, with legal threats and a lot of accusations in the comments. It's pretty crazy, to be honest. What's interesting, beyond the drama of this whole exchange (and why did it have to happen publicly? I'm not sure), is just the fact that a startup that seemed very promising, with a lot of traction, $50 million in backing, and a lot of credible people involved, is basically folding up and going into one of the top AI labs in less than a year. They didn't say what the terms of the deal were. Etzioni said he got a positive return on his investment, but he also gave a statement to GeekWire saying he was disappointed that a company with as much momentum as they had was winding down so quickly.
Like, did they really give it a full shot, after less than a year and the amount of money they'd raised? If you compare that to what Ehsani was saying on LinkedIn, he described it as pretty straightforward: joining forces with Anthropic was the best bet they had to be successful. And I mean, this is how every company that gets acquired talks. They're always like, look, we know you guys loved us, but we're better together with Anthropic. For the customers, though, it's pretty annoying: you trust a startup, it gets acquired by Anthropic, and then the startup says, we're winding down, so any of the tools and features you liked are gone, go find something new. It leaves a bad taste in a lot of people's mouths, but obviously they're making a lot of money, so that's the direction they want to take. Now, what I will say is that this acquisition comes at a really tense moment for Anthropic. This isn't the only drama going on at the company. According to a report Axios just put out, Anthropic is facing increasing pressure from the Pentagon over access to its AI models. The Defense Secretary, Pete Hegseth, apparently told the CEO, Dario Amodei, that Anthropic had to provide the US military with unrestricted access to its models, or they would risk being labeled a supply chain risk. This is a designation that the Department of Defense typically applies to foreign adversaries. Alternatively, they could invoke the Defense Production Act, which would basically force Anthropic to tailor its systems for military use. If you want some context or background on the Defense Production Act, it basically gives the President the authority to require companies to prioritize or expand production for national defense.
The last time this was used was during COVID-19, when it was invoked to force General Motors and 3M to make ventilators and masks. So this is something that's been used in the past; whether it's warranted today is the real question. Obviously, applying it to a dispute over AI guardrails would be an interesting expansion of how it's been used before. Anthropic has been pushing back, so we'll see what happens. They have some sort of deadline now, but they've said for a long time that they don't want their technology used for mass domestic surveillance or for fully autonomous weapons systems, and they also say they don't plan to relax any of those restrictions for the US government, even with the government asking. So it's going to be very interesting to see what happens here. Officials over at the Pentagon are arguing, basically, that military use of AI should be governed by US law and constitutional constraints rather than by the internal policies of a private company. So of course now you have this whole standoff, and there are a lot of ideological tensions here, as many of you who listen to the podcast know. David Sacks, the administration's AI advisor, has publicly criticized Anthropic's safety posture as being overly restrictive. Taking the other side, Dean Ball of the Foundation for American Innovation says that invoking the DPA in this context would basically signal some deeper instability, framing it as the government using economic leverage against a company over a policy disagreement. So there are obviously two sides to this argument. I think Anthropic is in an interesting position right now, and here's why it feels like the US government, and the Department of Defense specifically, is trying to force Anthropic's hand.
Because obviously, in a free market, they'd just say, okay, fine, you don't want to work with us, we'll go find someone else, right? Maybe Google, or OpenAI, or xAI with Grok, one of these other alternatives. But apparently there are reports saying the only frontier AI lab with classified Department of Defense access right now is Anthropic. So basically the Pentagon has no immediate alternatives, and we know they're actively using it, because it came out that when they did the raid and captured Maduro, they were using Anthropic to help execute that whole operation. That was the model that essentially ran that raid, which, whether you agree with it or not, was obviously very successful: no American soldiers were killed, and it happened very quickly and efficiently. So Anthropic seems to be the only vendor right now, and this is kind of the problem. Apparently there are reports that, you know, xAI or others would come in and help instead. But we all know Anthropic is just crushing it at this kind of multi-step reasoning. All the developers use it for a reason, and Google and OpenAI are trying to catch up, but Anthropic is the winner. So yes, there are alternatives, but I'm sure for the Pentagon and the Department of Defense it kind of sucks to have the best AI model in America, not in China, not in Russia, not in another country, but in America, and not be able to use it, and have to settle for the second best.
I'm sure that rubs them the wrong way, especially considering there are probably creative ways that China or other players could figure out how to use Anthropic for their own purposes, maybe with multiple accounts or other sneaky workarounds. Wouldn't it suck if the Department of Defense got forced onto the second-best model while, say, China figured out a sneaky way to use Anthropic and had the best one? That wouldn't be great, I'm sure, in their opinion. So my personal opinion on this: yes, I would like the best AI model to power the department that defends my country. But at the same time, I can see where Anthropic is coming from on mass surveillance of Americans; nobody really wants that. As for why the government has those expanded powers, well, we all know the background: 9/11, the Patriot Act, and all the controversy and drama that came out of that. Personally, I'm not a huge fan of being spied on by the government, but I know it's probably inevitable and there's not much I can do about it, so I'm not sure having Anthropic in the mix versus any other AI model is going to change anything drastically. At the end of the day, as far as the military goes, I would like the best AI model to go to the military. But anyway, I think right now, with this Vercept acquisition and the Pentagon dispute, there's a lot of pressure facing these AI companies. On one side, you have this race to consolidate talent and accelerate technical progress: by acquiring Vercept and its team, Anthropic is going to get better at a lot of these computer-use capabilities.
But on the other hand, there's a lot of friction right now with the government, which is trying to put these models into national defense and other areas, and Anthropic is pushing back. Honestly, this reminds me of the spat Google had a while back, when it was kind of in vogue for Google employees to say they didn't want to work with the Department of Defense, and they staged a big walkout. I can't remember exactly where things stand today, but I believe they pulled back from military work and then at some point started working with the military again, on their own terms, because they wanted the money. So it goes back and forth with a lot of these Silicon Valley companies. In any case, it feels like we're going to see what happens in the very near future, because Anthropic has a deadline. We'll see if they start working with the US government. And on the bright side, I'm thrilled that the Vercept team is over at Anthropic now. That computer-use product is going to get better, and it's something I use a lot, so I'm excited about that. All right, thank you so much for tuning in to the podcast today. I hope you enjoyed this episode. If it was insightful, if you learned anything new or thought of any new perspectives, let me know in the comments, and make sure to leave a rating and review. Guys, I am so close: I'm at like 148 reviews on Apple, so if you are listening on Apple, it would honestly help the show a ton to get to 150 reviews. It would be amazing; it would make my day. I'm so close. Thank you so much for tuning in, and I'll catch you in the next episode.
