A (24:09)
Yeah, yeah. And if you think that power vacuum is bad in terms of messaging the war in the immediate term in the White House, just think about the one that's now developing in Iran in the wake of all of this. So, yeah, there's a lot of chaos, and not a lot of it seems to be adding up to anything good. We will obviously continue to follow this story very carefully in Morning Shots and in 50 other Bulwark products. I mean, it's crazy, the stuff that's going on. What a time to be alive. We can turn away from that now. I did want to talk a little bit as well. Again, like I said, it feels weird to talk about this in the midst of this actual war breaking out and maybe starting to metastasize, hopefully not too badly, across the Middle East. But I have been working on and writing about something else related to the Pentagon. Sort of a policy thing that, you never know, in the long term could in fact end up being just as important, which is this fight that has been going on between Pete Hegseth and the AI company Anthropic over the last couple of weeks. It came to a head late last week, resulting in a break by the Pentagon, which had previously been really integrated with Anthropic. Anthropic had been the only AI company licensed for its AI to be used in classified settings. So, for instance, even as late as this weekend, as the attacks that we carried out in Iran happened, and then as our commands around the world coordinated our responses or planned for Iran's responses against us, they were using Anthropic's AI. Anthropic is integrated into these systems, but it's not going to be for long. And the reason for that is because on Friday, Pete Hegseth pulled the plug. He said not only are we canceling our contracts with Anthropic, we're going to backfill those. We're going to sign similar contracts with a couple of other AI labs: OpenAI, which runs ChatGPT, and xAI, which makes Elon Musk's Grok.
That'll be good in classified settings, I think. But not only are they switching to those other AI software partners to contract with, they are also forbidding Anthropic from doing any work with any government contractor, period, in perpetuity, until Hegseth decides to take his foot off of their neck. Which is a real, you know, existential threat to the company, at least as far as Pete Hegseth has characterized it. If he got his way, Anthropic would no longer be able to partner with many of the companies that own a lot of its stock, Google and Amazon. They would no longer be able to buy the chips that power its technology from Nvidia. They would no longer be able to sell their software to any number of companies that have contracts with the DoD. So it's a real threat. And the reason for this is because Anthropic and Pete Hegseth had a disagreement about what the DoD should be allowed to use their software for. Up until now, under the terms of an agreement that was first signed under the Biden administration, but which the Defense Department under Pete Hegseth re-ratified last year, the Defense Department had very broad latitude to use Anthropic's AI for classified military purposes, even lethal purposes. But there were a couple of red lines. One of them was, Anthropic said, you can't use our models to conduct mass surveillance domestically. You cannot do broad surveillance of American citizens with our models. And the other red line was: we don't think that our models are currently reliable enough to be used to power lethal autonomous weapons systems. So like, you know, self-targeting, self-actualizing killer robots and drones out there. The models are not reliable enough to do that yet, so we do not think the Defense Department should be able to use our model for that. Hegseth basically said, we disagree with these; we don't think you should be able to tell us what to do.
Hegseth says, we don't want to do either of those things yet, but sort of on principle, he's basically saying: you can't tell us not to do those things. Anthropic said, well, we're going to anyway. So that was kind of what led to the nuclear blowout here. Obviously this has kind of continued to reverberate. It's going to take a long time. Anthropic is suing, and, you know, some of these other companies are striking these new deals with the Defense Department. But also, it's not very popular, the idea that AI would be used to surveil Americans en masse and pilot killer robots. So these other AI companies are having to sort of pretend to the public that that's not really what they're doing, that they also really have respect for civil liberties and a healthy fear of the robot apocalypse. So there are amazing things happening in the Defense Department right now. I have just been rambling a lot about the stuff that I find interesting in the world and report on, Bill. I don't know, do you have a take on this DoD-Anthropic blowup? I mean, there's so many different weird angles about the future that this implicates.