A
It's only getting every customer's order right. It's only a point-of-sale system connected by Spectrum Fiber-powered Business Internet, helping you track hundreds of secure transactions, and it's all backed by 24/7 US-based customer support and local technicians. It's only everything. Get Business Internet Advantage free forever when you get four mobile lines from Spectrum. Visit spectrum.com/freeforlife to find out how. Restrictions apply. Service is not available in all areas. Get that Amex Gold Card ready. I'm way too tired to cook tonight.
B
You read my mind. With the Gold Card we can get up to $120 a year in statement credits. Are you feeling the Cheesecake Factory or Five Guys? Either of those sound good?
C
Yes.
B
Which one? Both! Pay with the Gold Card to receive up to $10 a month in statement credits at participating partners. Uncover more ways to reward yourself at americanexpress.com. Explore Gold. Enrollment required. Terms apply.
C
ChatGPT uninstalls surge by over 295% after their Department of Defense deal. ChatGPT is in some hot water; people are not very happy about this. And Claude is now being installed more than any other app out there. It's reached the number one spot. There's a big shift going on, and I think it's for a lot of reasons besides just the Department of Defense deal, but I think that really turned a lot of people off to ChatGPT, and they're worried about their security and their privacy. And Claude has already been on the come-up because of Claude Code. Lots of people are starting to use Claude, and they may end up taking the number one spot for the most-used AI company eventually. We'll have to see about that. But Jaeden, why don't you tell them what's going on here in this story?
D
Okay, I actually have a contrarian take on that. If you are easily upset about things, you will no doubt be upset with me, but I'll give you my contrarian take. And when I say contrarian: if you go to TechCrunch's AI section to catch up on the latest AI news, every other article for the last 20 articles has covered this topic from one specific point of view. It's like: "ChatGPT Uninstalls Surge," "No One Has a Good Plan for How AI Companies Should Work With the Government," "Users Are Ditching ChatGPT for Claude: Here's How to Make the Switch," "Tech Workers Urge Department of Defense to Withdraw Anthropic's 'Supply Chain Risk' Label," "Anthropic's Claude Reports Widespread Outage" (presumably from so many people using it), "Anthropic's Claude Rises to Number One in the App Store Following Pentagon Dispute," "OpenAI Reveals More Details About Its Agreement With the Pentagon," "The Trap Anthropic Built for Itself." I'm reading you all of these because something like 70% of the AI articles being written by TechCrunch and other places are covering this, all from the same angle, and it goes on and on for like two weeks. This is the only thing anyone will talk about. So what's my take? I'll tell you. TechCrunch's take is: Anthropic good, they're standing up to the government; OpenAI bad, Sam Altman is now bad. And I love to see this in tech, where it's whatever that quote is, something like "you either die a hero or live long enough to see yourself become the villain." I'm sure I'm butchering it. Jamie, if you know it, shout it out correctly. But basically ChatGPT was the hero, Sam Altman was the hero, and now we've come full tech cycle: once your company gets big enough, you become the villain. Same with Google and Microsoft and Mark Zuckerberg.
When Mark Zuckerberg came out and made Facebook, he was this cool indie hacker in his bedroom, and now we're like, Mark Zuckerberg is the spy king, giving you all the spyware with the ads. You come full circle to a villain because your company's big enough and the news will slander you. And is that a bad thing? I mean, they have billions of dollars; I like to see a lot of pushback on the billion-dollar companies. So is it good or bad? I don't know. I appreciate the opposition to it. You would hate for them to have a billion dollars and have everyone say they're amazing all the time; they'd get a kind of God complex, right? So all this to say, it's just the predictable cycle of what's happened, and evidently Sam Altman, because his company is bigger than Anthropic's, is the target of it. So it's really funny for me to watch this. I actually don't think he's in the wrong. I think Sam Altman played the ultimate 4D chess move. Let me give you the layout of how the story happened. Anthropic and the Department of Defense got into a beef. Anthropic was the only company approved for classified work with the Department of Defense, or Department of War, or whatever we call it anymore; I don't really know. Anyway, they were the only ones cleared to be integrated into its tech stack for a lot of these classified operations. So what happened? Well, the US goes and takes out Nicolas Maduro, the president of Venezuela, and then it leaks that, oh my gosh, they used Anthropic's Claude in that mission. Which sort of could have been a great marketing thing, and this probably sounds terrible, but on the one hand, that mission was executed in the middle of the night before anyone knew it was happening.
They captured the president, they brought him back, and not a single American died. So if Anthropic orchestrated it, love it or hate it, that military mission was executed perfectly in the sense that no Americans died. I'm sure the Venezuelan guards may not have fared as well, but no Americans died in that mission, so you could call it a success. That could have been Anthropic saying, look how smart our AI is. Instead, Anthropic heard about this, got upset, and said, look, we don't want you guys using this in the Department of Defense, and they added a couple of stipulations, which personally I sort of agree with. I don't know if you saw these, Jamie, the red-line items Anthropic made?
C
No, I didn't. I'd be interested to hear, though.
D
So they said: you can't use our AI models for mass surveillance of American citizens, or for fully autonomous AI systems. Like, you can't have Anthropic powering an AI weapon that goes and shoots down an Iranian jet without a human in the loop, right? And to be honest, I sort of agree with those things. Mass surveillance of American citizens, which I fall under? Yeah, that doesn't sound good to me; I don't like the idea of that. And fully autonomous AI that just runs around shooting people, Terminator-style? I don't really like that idea either. The problem, though, isn't necessarily the red-line items. The problem is that they had a government contract, they were the only authorized classified AI system, and then after the fact they came up with terms-of-service red-line items. The ones they chose, sure, I like. But what if they made one I didn't like? What if they said this can't be used in any battle planning that's going to result in a human casualty? You could make a moral, ethical case for that as well, right? But then it's like, okay, well, our whole military just got nerfed. The funny thing about this, which is crazy to think about: with the US and Israel doing their whole Iran bombing campaign, no comment on that whole situation, they literally had this public beef with Anthropic evidently because they needed an AI model to power a mission. So it's like, we're not even going to launch an attack on a country unless we're 100% guaranteed that we have our AI model locked in. To me that is pretty crazy. That's the point the US government is at: before they can run any sort of military campaign, they have to make sure they have their AI models locked in.
Which is kind of crazy if you think about it.
C
Yeah, it's kind of a mind-blowing thought, and really pretty scary if you think about it. But I see where Anthropic's coming from, especially when it comes to mass surveillance. Back in the day Apple kind of shut down the government too when it came to surveillance, with all the encryption stuff, and honestly I think they gained a lot more trust in their product because of that whole situation. I kind of feel like the same thing's going to happen with Anthropic. Let's say you're coding a banking app. Am I going to trust Anthropic with that, or am I going to use OpenAI, who I would assume is agreeing to the mass surveillance idea? I don't know the exact details, of course, but I personally would put more trust in Anthropic if I was making, say, a banking app.
D
Yeah, and I think that's fair. A lot of people will have different use cases and opinions on what AI model they use now, which I think is good; it's healthy to have a lot of competition in the market for those use cases. I will say it feels like Sam Altman pulled the ultimate heist in this whole deal, though. Anthropic had this beef with the government. The government said, no, you just have to let us have basically unrestricted access, don't put guardrails on us. Anthropic said nope. The government said, okay, we're designating you a supply chain risk; everyone that works with the government has to pull Anthropic out of their stuff within six months, and you lose your $200 million contract with the Department of Defense. So Anthropic said okay. And this whole time, Google and OpenAI are both saying, we support Anthropic's decision, Anthropic is doing a good thing. So Anthropic gets the boot, and that same night, Sam Altman tweets: we have just come to an agreement with the Department of Defense, basically to take over the contract. So Sam Altman's like, yeah, we agree with you, Anthropic, and then as soon as Anthropic gets kicked out, he's like, and we'll take that $200 million, thank you very much. So anyway, I believe Sam Altman pulled off the ultimate heist here for a $200 million deal. Now, in his announcement of that $200 million deal, and I'm sure if he's listening he feels I'm representing him very unfairly: okay, I'm being a little sarcastic, obviously. Guys, relax. Sam, relax. Don't send me a cease and desist. So yeah, obviously I'm a little sarcastic in my take on his response.
He actually said: we reached an agreement with the military, and we've also had them agree that they won't do mass surveillance or autonomous AI drones, blah blah blah, and so now we're going to power them. The skeptical part of me, and I'm a very big conspiracy theorist, so forgive me if you believe everything Sam Altman says, thinks Sam Altman could be lying. Maybe those are part of the rules, but no one's actually ever going to verify it, and if he really wanted $200 million that badly, he could just say they're going to abide by those terms and then not do it. The skeptical part of me thinks that's a reality that could exist, because the government lies to us all the time, let's be real. The other side of the coin could be that maybe he truly does have those terms and they truly are going to abide by them; the government just didn't like Anthropic making rules while they were already in there, adding new terms of service after the fact. If OpenAI comes in with those terms up front, that's fine, they're just not allowed to add more later. That's probably part of the deal. So one of those two things is likely the reality. Either way, there's the fallout in PR and publicity: Jamie mentioned that ChatGPT uninstalls are up 300%. But what does "uninstalls up 300%" mean? Does it mean a huge chunk of all ChatGPT installs are uninstalling? No. It means that if a thousand people uninstall every day, that number is up 300%. If the baseline was small, it's really easy to say "this is surging 300%." It could be a very small subset of people is all I'm saying. I don't actually think it's going to make any material impact on OpenAI, especially if they got an extra $200 million from the deal.
There's no way whatever number of people are unsubscribing, which I think is probably relatively small, is going to impact their bottom line. At the same time, Claude is surging to number one in the App Store, but I also don't think that's going to make a huge impact on Anthropic's revenue. There's a small subset of people that probably care a lot about this, and a lot of it is probably concentrated in Silicon Valley; New York and California are where I hear the loudest voices on this issue. I think a small number of people are moving, and I don't think it's going to move the markets. I'm not saying that's good or bad, I'm just being realistic. For all of TechCrunch's 20 articles on this topic, I don't actually think the framing that this is the end of OpenAI and the rebirth of Anthropic is super accurate. I do think it's great for the market, yada yada; I like lots of competition. I'm just trying to be realistic. So if I popped anyone's bubble at TechCrunch, I'm sorry, but realistically I don't think it's going to make a big impact on the industry.
C
Yeah, I do have a couple of stats here for you. ChatGPT, despite all this, has officially reached 900 million weekly active users, which is an insane number. Like you said, if that goes down to 899 million weekly active users, it's probably not going to affect the company too much. But I will say Anthropic is now number one in the App Store, which I'd call fairly significant. And there's a chart I found that I'll put up on the screen if you're watching this: installs have actually gone up quite a bit for Claude. Claude was at about 50,000 downloads per day, but then starting around February 27th it jumped to over 150,000 per day; that's a 100,000-per-day jump from before. ChatGPT, meanwhile, looks to be holding pretty steady. It did dip from around 150,000 per day down to 100,000 per day after the news, but that's still pretty steady. I would say it is significant that Claude has now jumped up to be about on the same level as ChatGPT, so their growth may be increasing more than ChatGPT's. But again, 900 million weekly active users is a pretty significant chunk of the world's population.
D
And I'll also mention, in regards to this chart: that is a US-based chart. This is kind of a US-based beef, the US government versus the company, so I don't think it's going to spill over to India or a lot of other countries that probably don't care about US politics as much. That being said, going from 150,000 down to 100,000 installs per day for ChatGPT is a drop of 50,000 a day against their 900 million weekly active users. And by the way, to have 900 million weekly active users you'd need way more installs than that, because a lot of people have the app and maybe use it once a month, or once every two months, or downloaded it once and that's it. I'd guess maybe 75% of people who have the app on their phone don't use it every week, so they wouldn't be counted as weekly active users. All I'm saying is: a dip for ChatGPT and a jump for Claude of 50,000 installs a day, of whom maybe 25% will use it every week, these numbers are just not that big in the grand scheme of things. If the trend continues forever, then it will be a big deal. If I were going to bet on Polymarket, and I'm not a betting man, my bet would be that this doesn't make a big impact on the overall industry. It's hard for any news story to make that big of an impact. But who knows? Even with Mark Zuckerberg and Facebook: we all hate how much data they collect on us, yada yada, and yet everyone is glued to Instagram. Does any of that stuff make a long-term impact on the industry if you like the tool? Not really. And Anthropic doesn't even have image generation. Guys, come on.
Yeah, it's not going to be a realistic long-term thing, but I do like to see the trend; it's kind of fun. End of the day, I don't think it'll make a huge impact. All right, guys, we've talked way too much about this. Thank you so much for tuning into the podcast. We appreciate you all, and hope you have a fantastic rest of your day. Leave us a review on the show if you enjoyed it. If you have personal beef with my opinions, send me a message on LinkedIn; I'm happy to chat about them. I'll catch you all in the next episode.
E
Did you know you can save up to 70% on the best brands just by shopping from rebel.com? We're talking strollers, car seats, high chairs, espresso machines, cookware: everything you need for way less. Here's how it works: every single day, Rebel drops thousands of new products on the site for up to 70% off. It's a constant stream of endless deals from top brands like UPPAbaby, Nuna, BabyBjörn, Breville, Nespresso, KitchenAid, Le Creuset, and more. But you have to act fast, because every deal is one of a kind, so if you see something you love, add it to your cart fast. Stop paying full price when you don't have to. Whether it's baby gear, kitchen upgrades, or a treasure for your home you didn't know you needed, Rebel has it for way less, up to 70% less. Shop from rebel.com and save big.
F
You're listening to a podcast right now: driving, working out, walking the dog. If you're into podcasts, chances are you have something to say too. With RSS.com, starting your own podcast is free and easy. Upload an episode and we distribute it to Apple Podcasts, Spotify, Amazon Music, and more. Track your listeners, see where they're from, and start earning from ads just like this. If you've been thinking about starting a podcast, this is your sign. Start your new podcast for free today at RSS.com.
Podcast: AI Hustle: Make Money from AI and ChatGPT, Midjourney, NVIDIA, Anthropic, OpenAI
Hosts: Jaeden Schafer, Jamie McCauley
Date: March 13, 2026
This episode dives into the high-profile split between Anthropic and the US Department of Defense (DoD), the immediate pivot by OpenAI to secure a major government contract, and the public's shifting trust and usage patterns with large language models (LLMs) like ChatGPT and Claude. Jaeden and Jamie dissect recent news buzz, PR fallout, industry trends, and implications for entrepreneurs relying on AI tools.
"Anthropic is the... only company approved for classified [DoD]... And then it leaks that, oh my gosh, they used Anthropic's Claude in that mission."
—Jaeden, 04:25
"Anthropic had this beef with the government... as soon as Anthropic gets kicked out, [Sam Altman is] like, and we'll take that $200 million, thank you very much."
—Jaeden, 09:25
"Back in the day Apple kind of shut down the government too when it came to surveillance... honestly that gained a lot more trust in their product."
—Jamie, 07:55
"Of all the TechCrunch's 20 articles on this topic, I don't actually think that all of them are super accurate that this is the end of OpenAI..."
—Jaeden, 12:35
"To have 900 million weekly active users... a dip [of] 50,000 in a day of their 900 million weekly... these numbers are not that big in the grand scheme of things."
—Jaeden, 14:46
“It's just the predictable cycle... once your company gets big enough, you become the villain.” (Jaeden, 03:10)
About tech darlings like Sam Altman becoming targets as they reach scale.
“It could have been like a great marketing thing... the mission was executed perfectly in a sense that no one died. So you could call that a success.” (Jaeden, 04:55)
On the PR framing of military AI partnerships.
“I would put more trust in Anthropic if I was making a banking app, for example.” (Jamie, 08:20)
Linking corporate AI policies to consumer trust in sensitive fields.
“Anthropic doesn’t even have image generation, guys, come on... it's not going to be a realistic long-term thing.” (Jaeden, 16:00)
On the functional limitations impacting mass adoption versus fleeting PR moments.
Summary by AI Hustle Podcast Summarizer, maintaining the hosts’ candid, analytical tone and attribution throughout.