Transcript
A (0:00)
Today on the AI Daily Brief we are discussing a question that is extremely easy to ask and much more difficult to answer: who controls AI? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Insightwise, AIUC and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn more about sponsoring the show, send us a note at sponsors@aidailybrief.ai. While you're on aidailybrief.ai, you can also find out about the other projects in the AIDB ecosystem, including Claw Camp and Enterprise Claw, registration for which is going on right now. Basically, that's for enterprises that want to learn how to build agents and agent teams, or just more podcast-related stuff, like subscribing to the newsletter, which is newly rebooted. Now, if you've been listening this week, you'll know that we had something of a time getting back from South America. Door to door, it ended up being about 55 hours, and that didn't include the seven hours it took me to go drop off the rental car and pick up our old car, which was sitting in the airport parking lot. In any case, because of that I had to miss Wednesday's show, not something that I do very lightly. And so as a makeup, I had slated to do an extra show over the weekend on the day that I'm usually off. As it turns out, this was a pretty opportune week to have that slot open, because my goodness, as Ron Burgundy would say, boy, that escalated quickly. I'm referring, of course, to the skirmish turned all-out war between Anthropic and the Pentagon that came to a head on Friday night. The TLDR of what happened is that not only did the Trump administration decide to decline to work with Anthropic, they are attacking them in ways that go far beyond just declining to do business with them.
Now, for the necessary background and to get caught up with the story from where we left it, we actually have to go back to Thursday, when Anthropic CEO Dario Amodei released a statement about the dispute. Earlier in the week, you'll remember, Defense Secretary Pete Hegseth had given Amodei an ultimatum: remove the terms-of-use limits by Friday, or be blacklisted from the entire military supply chain. Anthropic's red lines were that Claude should not be used for domestic surveillance of Americans or for powering autonomous weapons. Their stated view was that Claude is not reliable enough to power autonomous weaponry, and that AI surveillance is undemocratic and, perhaps more pertinently, has underdeveloped legal safeguards. The White House's position, meanwhile, was that a technology company should not be dictating how the US government uses that technology, and should be fine accepting terminology that allows the US government to use it for all legal uses. Dario's post from Thursday begins: I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries. And it is worth noting here, especially if and as this conversation gets caught up in broader partisan talking points: historically speaking, Anthropic has been more vocal about things like China not having access to advanced technology than some of their peers, whereas some of the other AI companies have been either fine with or actively lobbying for the ability to sell into China. Think specifically of Nvidia and advanced chips. Amodei and Anthropic have been consistent that they think that is a very, very bad idea. Point being, at least based on the history, Anthropic is not a pacifist organization. Now, in the blog post, Amodei continued: Anthropic understands that the Department of War, not private companies, makes military decisions.
We've never raised objections to particular military operations, nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine rather than defend democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now. He then restates Anthropic's objections to mass domestic surveillance and fully autonomous weapons. Now, when it comes to those exceptions, he says: to our knowledge, those two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. Then, in one of the spicier sections, he writes: the Department of War has stated they will only contract with AI companies who accede to any lawful use and remove safeguards. In the cases mentioned above, they have threatened to remove us from their systems if we maintain these safeguards. They have also threatened to designate us as a supply chain risk, a label reserved for US adversaries, never before applied to an American company, and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory. One labels us as a security risk; the other labels Claude as essential to national security. Regardless, he says, these threats do not change our position. We cannot in good conscience accede to their request. Now, it is very clear that this public statement did not make Anthropic any friends in the White House. Assistant to the Secretary of War for Public Affairs Sean Parnell was diplomatic but clear: The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal. Nor do we want to use AI to develop autonomous weapons that operate without human involvement.
This narrative is fake and being peddled by leftists in the media. Here's what we are asking: allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, common sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let any company dictate the terms regarding how we make operational decisions. They have until 5:01pm on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for the Department of War. Former Uber official and Under Secretary of War for Research and Engineering Emil Michael was not so diplomatic. He wrote: it's a shame that Dario Amodei is a liar and has a God complex. He wants nothing more than to try to personally control the US military, and is okay putting our nation's safety at risk. The Department of War will always adhere to the law, but not bend to the whims of any one for-profit tech company. Now, coming into Friday, it seemed like the court of public opinion was sort of leaning in Anthropic's favor. More than 200 Google and OpenAI staff signed a petition that supported Anthropic's red lines, which you can find at notdivided.org, and you even saw a bunch of comments like this one on that post from Sean Parnell: Hi Sean, just FYI, nobody believes this and it comes off as disingenuous. I'm generally a conservative-leaning voter. I'm also pretty tech-forward. I am wildly against this. Reminder that the entire tech lobby flipped on Biden for the exact same reason in May 2024. So that's where we were heading into Friday morning. Now, outside of the substance of the argument, it was pretty weird to a lot of folks that it was being had so publicly. As quoted by Axios, Senator Thom Tillis said: why the hell are we having this discussion in public? Why isn't this occurring in a boardroom or in the secretary's office? I mean, this is sophomoric.
So that's where we were heading into Friday morning. In the morning, it seemed like at least OpenAI was lining up alongside their AI peers, or at least, as CNBC put it, trying to help de-escalate the situation. Late on Thursday night, in a memo to his team, OpenAI CEO Sam Altman said: We've long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines. In an interview on Friday morning with CNBC, Altman said: for all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters. I'm not sure where this is going to go. And while a lot of folks on social media were excited that Altman seemed to be lining up alongside Anthropic, OpenAI was clearly having conversations with the DoD at the same time. He indeed said explicitly in that memo that they were exploring whether they could deploy their models in classified environments in a way that, in his words, fit with their principles. That was the state of things until 3:47 in the afternoon Eastern time, when President Trump took to Truth Social to write, in all caps: the United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars. That decision belongs to your Commander in Chief and the tremendous leaders I appoint to run our military. The left-wing nutjobs at Anthropic have made a disastrous mistake trying to strong-arm the Department of War and force them to obey their terms of service instead of our Constitution. Their selfishness is putting American lives at risk, our troops in danger and our national security in jeopardy. Therefore, I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology.
We don't need it, we don't want it, and we will not do business with them again. There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic's products at various levels. Anthropic better get their act together and be helpful during this phase-out period, or I will use the full power of the Presidency to make them comply, with major civil and criminal consequences to follow. We will decide the fate of our country, not some out-of-control radical left AI company run by people who have no idea what the real world is all about. Thank you for your attention to this matter. Make America Great Again. Defense Secretary, or Secretary of War, or whatever the heck you want to call him at this point, Pete Hegseth chimed in: This week, Anthropic delivered a masterclass in arrogance and betrayal, as well as a textbook case in how not to do business with the United States government or the Pentagon. Our position has never wavered and will never waver. The Department of War must have full, unrestricted access to Anthropic models for every lawful purpose in defense of the Republic. Instead, Anthropic and its CEO, Dario Amodei, have chosen duplicity cloaked in the sanctimonious rhetoric of effective altruism. They have attempted to strong-arm the United States military into submission, a cowardly act of corporate virtue signaling that places Silicon Valley ideology above American lives. The terms of service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander in Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic's stance is fundamentally incompatible with American principles.
Their relationship with the United States Armed Forces and the federal government has therefore been permanently altered. In conjunction with the President's directive for the federal government to cease all use of Anthropic technology, I am directing the Department of War to designate Anthropic a supply chain risk to national security, effective immediately. No contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final. Immediately, the lawyers jumped in to start figuring out what the heck the implications of all this were. Senior research fellow Charlie Bullock wrote: Hegseth claims that this declaration, that no Pentagon contractor or supplier can do business with Anthropic, is effective immediately, which seems absolutely insane. Under 10 USC 3252, which is almost certainly the authority Hegseth has to rely on here, there are multiple requirements that the DoW has to fulfill before the SCR declaration becomes effective. They have to complete a risk assessment. They have to make a written determination that declaring Anthropic a supply chain risk is necessary for national security and that there's no less intrusive way to address the risk. And they have to notify Congress. It's possible the DoW has already done some of that behind the scenes, quick work if so, but it's hard to believe that they fulfilled, e.g., the congressional notice requirement in the time between 5pm Eastern and Hegseth tweeting. In all likelihood, it's just not true that the declaration is effective immediately, as Hegseth claims. Prinz writes:
To put a finer point on what just happened: Hegseth's post says that no contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic serves its models through the cloud. Its primary partner is AWS, but it also serves its models through Google Cloud and Azure. All of Amazon, Microsoft and Google do business with the US military. If we take Hegseth's post literally, Anthropic should now find itself unable to serve its models via any of these providers. This is what Dan Primack from Axios wanted to know as well. He tweeted: practically speaking, does this mean Amazon, Nvidia, etc. can't do any business with DoD? What about Palantir? Dean Ball, who, to be clear, was integral in writing Trump's policy on AI, wrote: Nvidia, Amazon, Google will all have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor. I could not possibly recommend starting an AI company in the United States. A little bit after that, Anthropic dropped a response statement that mostly sought to assure customers that they could just chill for now. They noted that so far all of their information is coming from the same source as all of our information, which is social media. Anthropic writes: we have not yet received direct communication from the Department of War or the White House on the status of our negotiations. They of course promised to challenge any supply chain risk designation in court. The business section was titled What This Means for Our Customers, in which they write: Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. Legally, a supply chain risk designation can only extend to the use of Claude as part of Department of War contracts. It cannot affect how contractors use Claude to serve other customers.
In practice, this means if you are an individual customer or hold a commercial contract with Anthropic, your access to Claude through our API, claude.ai, or any of our products is completely unaffected. If you are a Department of War contractor, this designation, if formally adopted, would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected. Now, unfortunately, I'm sure Anthropic knows, as does anyone who has studied either of the Operation Choke Points over the last decade, that when it comes to governments exerting pressure on private sector companies to not work with other private sector companies, all you need is a little push and an implication for those companies to ditch the offending vendor. A few minutes later, and by the way, this is all happening within the span of an hour or two, Fortune magazine's Sharon Goldman wrote: Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the Department of War to use the startup's AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed. According to Goldman, Altman said the government is willing to let OpenAI build their own safety stack, that is, the layered system of technical, policy and human controls that sits between a powerful AI model and real-world use, and that if the model refuses to do a task, then the government would not force OpenAI to make it do that task. A few hours later, Sam Altman confirmed that a deal had gotten done. He tweeted: we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission.
Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We will also build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will have forward-deployed engineers to help with our models and to ensure their safety. We will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, terms which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serving all of humanity as best we can. The world is a complicated, messy and sometimes dangerous place. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision: do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk and readiness, and how to scale agents with the right trust, governance and orchestration foundation. Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate. As a consultant, responding to proposals can often feel like playing tennis against a wall: you're serving against yourself, trying to guess what the client really wants. That all changes with Insightwise. Now you've got an AI proposals engine that thinks just like your client.
It returns to the brief time and time again, picking apart your work, identifying key evaluation criteria and win themes, and making recommendations to ensure you stand out. Suddenly you're on center court, but this time you've got a secret weapon. Insightwise gets rid of all the time-consuming manual work so you can focus on winning more business more often. Generate reports, pull insights from your own data, build competitive advantage, and go to sleep before 2am. When it comes to proposals, you only get one shot. With Insightwise, make yours an ace. There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com. If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with infinite code context. Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency with a complete contextual understanding of your code base.
Enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services, components and standards. This isn't AI autocomplete. This is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with our AI experts at blitzy.com. That's blitzy.com. Now, we'll come back to the reactions to that, but first let's try to summarize all the different strands of conversation that I saw going on all over the Internet. Slightly reductively, last night I summarized the positions that I was seeing as: 1. Anthropic is right; there should be AI red lines we don't cross. 2. Anthropic might have a reasonable moral take, but the government can't be constrained by them. 3. It doesn't matter whether Anthropic is right or not; a private company shouldn't set government policy. 4. Not only should a private company not set government policy, Anthropic's moral stance is wrong too. 5. Whatever. I think if the US government doesn't want to work with a vendor, they should just not work with a vendor. But maybe don't try to kill them. And 6. Punish the infidels lest the other uppity AI CEOs get ideas. As you might imagine, there were comparatively fewer of that last one, but they were in fact there. One interesting example of the Anthropic-is-right camp came from Erik Voorhees. Erik is the founder of Venice AI and has long been an actual libertarian, willing to call out policies he didn't like on the left and the right. He tweeted: Anthropic is definitely woke and lefty, but their refusal to permit Washington to use their tech to carry out warrantless mass surveillance of Americans is eminently based. Dime pointed out that the language of left and right kind of didn't belong here. They wrote: None of us voted for dystopian AI spyware surveilling us in a way that makes the Patriot Act look quaint. None of us voted for fully autonomous weapons on robots.
I understand wanting these things in the AI arms race with China, but Trump's actual comments are shocking. It is not left-wing to want less domestic surveillance and no fully autonomous murder bots. I think it's pretty safe to assume that most of America doesn't want AI used like this. Now, among those who are really against Anthropic, mostly it came down to some version of: yeah, but China. Mike3 writes: People cheerleading for Anthropic either want China to win the AI supremacy war, or they're so politically brain-rotted they don't fully understand what's at stake and think the US government just wants to use it as a tool of oppression. Geiger Capital writes: wanted to jump on here quick and say, China doesn't give a crap about Anthropic's moral red lines. We can argue both sides, but they won't. They are implementing AI into their entire military chain, and they are doing it with zero democratic or civilian oversight. Now, while a lot of the chatter was from the chattering class, one person who, agree or disagree with his positions, has been living in these questions for much longer than basically any of us is Anduril founder Palmer Luckey. Palmer writes: Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like "you cannot target innocent civilians," are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer. Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really.
In addition to the value judgment problems I list above, you also have to account for questions like: What level of information, classified or otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executives happen to like the dictator or dislike the president? At what level of confidence does the cutoff trigger, both in writing and in reality? The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, "but they will have cutouts to operate with autonomous systems for defensive use," but you immediately get into the same issues and more. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it is so simple, please, bro" is an untenable position that the United States cannot possibly accept.
And again, even if you disagree with where Palmer is coming out on this, I think he is rightly identifying that this, at core, is a question of control and, by extension, and this is where it gets complicated, a question of checks and balances. Part of the problem, and why I think people are sympathetic to Anthropic, is that the checks and balances on executive power, that is, the folks in Congress, don't really seem to be doing their job. Sure, a bunch of them, like Senator Ed Markey and Senator Mark Kelly, took to Twitter after this to say that Congress needed to be involved. But if that's the case, I think people can be forgiven for being a little bit cynical as they ask: why weren't you involved before now? For some, all of this is just a little highfalutin. Commentator Sucks on Twitter writes: if you build a superweapon and it lives in a data center in the USA, it's not your superweapon. You don't own or control it. The people with the aircraft carriers and nuclear weapons do. This is how the world has always worked. Roman helmet guy, who you might have seen on social media, writes satirically: hi, I'm a private citizen who developed a superweapon potentially a thousand times more powerful than nukes, and now I'm selling it to the government, but I get to choose who they fire it at and how. Everyone, please respect my decision. For many, like Nathan Lands, this was inevitable. He writes: people don't get it. AI is becoming critical infrastructure. It will power defense, finance, intelligence, everything. If a private company can decide how the US government is allowed to use it, that's not ethics. That's corporate leverage over a sovereign nation. And yet still, for all the people who are sympathetic to that point, and there are many, it feels to me like where the majority of them get uncomfortable is not with the US government's decision to not work with Anthropic, but with all of the threats and the retaliatory action that seem to be coming with it.
Lindy founder Flo writes: 1. The government is rightly annoyed at a very important vendor thinking they can tell them what to do with their technology. 2. Any company has a right to refuse service to anyone, including the government and army, at least when not in wartime. 3. This does not justify the government going ballistic and treating them as an enemy of the nation. Adam Holter writes: as a conservative, I do not support labeling Anthropic as a supply chain risk for refusing to comply with an all-legal-purposes clause. I can also see why the Pentagon can't set a precedent of letting contractors dictate terms, so they should walk away from the deal and cancel the $200 million contract. However, threatening to label Anthropic as a supply chain risk is an unprecedented action against an American company. Nothing Anthropic is doing is dangerous for government contractors to use. All they're doing is providing two lines, two restrictions regarding how their technology can be used. You can't punish a company for not providing a service, especially not in peacetime. If you disagree, as a conservative, I want you to think: if the Biden administration were doing the same thing, would you be against it? Dean Ball, again, perhaps feeling a little betrayed given that he shaped AI policy initially for this government, went farther: Think about the power Hegseth is asserting here. He's claiming that the DoD can force all contractors to stop doing business of any kind with arbitrary other companies. In other words, every operating system vendor, every manufacturer of hardware, every hyperscaler, every type of firm the DoD contracts with, all their services and products can be denied to any actor at will by the Secretary of War. This is obviously a psychotic power grab. It is almost surely illegal. But the message it sends is that the United States government is a completely unreliable partner for any kind of business. The damage done to our business environment is profound.
No amount of deregulatory vibes sent by this administration compares to this arson. And some version of this, I think, was what a lot of people felt. Gale Wiener writes: the whole reason Silicon Valley dominated for decades was the promise: build here, we'll protect your intellectual property, we won't interfere with your business, the courts work, the rule of law holds. That was the deal. If you're a brilliant AI researcher in London or Seoul or Berlin or Bangalore right now, and you're watching the President of the United States threaten criminal prosecution against an AI company for having ethics, why would you build in America? Why would you incorporate there? Why would you put your IP under that jurisdiction? Trump just blew that up. On X, in all caps, Growing Daniel writes: all of this is just so bad for the defense tech ecosystem. Like, who wants to deal with this? What a crappy customer strategy. Professor Kevin Bryan writes: moral of this story is that no smart company is going to do business with this government. Anthropic built literally the world's best AI and integrated with the military as a national service. They fulfilled their contract precisely. Result: they are being treated like Huawei. Now, one part of the story that'll be interesting to watch over the next couple of days is how this will shake out narratively for both Anthropic and for OpenAI. Self-proclaimed AI security hawk Peter Wildeford writes: I think it's important to circle back to Sam Altman here. About 20 hours ago, people including me were applauding his moral clarity. But that moral clarity lasted barely half a day. Altman sees a short-term way to torch a competitor, and he's going to take it no matter what happens to OpenAI, Anthropic, the US, or us. Trader Mark Valorian writes: I don't know who needs to hear this, apparently all of Twitter, but OpenAI did not just magically get the DoD to agree to the terms Anthropic was asking for.
Sam is blowing smoke to distract from the fact that OpenAI just took the terms Anthropic considered so egregious that it warranted jeopardizing an enormous part of their business. Assume all OpenAI data will now be used for what Anthropic deemed mass domestic surveillance of Americans. And while I think there is of course massively, massively more nuance to whatever was going on behind the scenes with OpenAI and the DoD that none of us commenting on Twitter have the actual context for, one part of this story is going to be the court of public opinion. Signal writes, as of this writing, Claude is now number two in the App Store. And there's a real, non-trivial downside scenario here for OpenAI that many aren't really grasping. It's low probability, but structurally interesting: if a clean meme forms on TikTok and Instagram tying OpenAI to the Department of War, and that framing hits mainstream liberal users, the reaction won't be analytical, it'll be visceral. Most people won't parse contract scope, defensive use cases, or historical precedent. They'll respond to timing and symbolism. And if this perception hardens, the competitive alternative becomes emotionally obvious. An association that feels morally dissonant could trigger switching behavior, employee discomfort, media amplification, and even long-tail brand drift. I'm not arguing companies shouldn't work with the War Department. That's not the point. The point is that in a memetic environment, perception compounds faster than facts. And if that perception locks in among a politically concentrated user base, the second- and third-order effects on consumer AI could be far more significant than most people expect. Now, what I think Signal might be missing here is that this is already starting to happen. It has been widely shared in progressive circles that OpenAI President Greg Brockman is one of Trump's biggest donors this cycle, which has already led many to shift. 
For some, this is going to be confirmation that this is not a one-time thing, but an actual pattern. Katy Perry, for one, has already switched. Before all this went down, Mike Solana got at the damnable complication of all this, writing: am I wrong, or is the situation just frozen at: 1. We don't want to force private companies to do something they don't want to do. 2. We don't want private companies running the military. 3. We are in an AI arms race with a country that controls its AI labs. I don't really see any satisfying answer here for a free society that also needs to maintain an edge against a successful authoritarian country racing towards a potentially, probably, eventually brand new doomsday weapon, to be honest. Ultimately, Kristin Faulkner nails it when she writes, the Anthropic-Pentagon standoff is not a tech story. It's the moment AI ethics stopped being theoretical and became geopolitical. As AI becomes more powerful, the power to dictate how AI can and should be used will become even more sought after. Whoever decides the ethics of AI will be deciding the ethics of society. And so here is my positive note to end on: the situation right now, for Anthropic and for OpenAI and the Pentagon and everyone else, is messy. But for all of the rest of us, it's an opening. It is yet another reminder, a big blinking reminder that is cascading from our little corner of the world into mainstream consciousness, of just how important these conversations are. As the forces of partisanship always do, many will try to wrestle this narrative into confirmation bias for their particular partisan story. That is in spite of the fact that, at least right now, while the left and the right may in general have nudging impulses in different directions on AI, it is not in any way a hardened or calcified partisan conversation. That, I believe, is a good thing. 
It's too important to just be eaten up as another culture war issue, and so my plea is to ignore anyone who's trying to do that to this conversation. I'm sure there will be a lot more to cover as things evolve, but for now, that is where we are going to conclude this AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.
