Claude Surges to No. 1
A
Jason.
B
I'm Jason Heiner, filling in for Leo Laporte, and I've got our co-hosts, Paris Martineau and Jeff Jarvis. We have a conversation with Dan Patterson, and it's a huge week for AI news. We talk about the Anthropic-Pentagon showdown, Claude becoming the most popular app in the world, and Perplexity out-OpenClaw-ing OpenClaw. That's what's coming up next on Intelligent Machines. Podcasts you love, from people you trust.
A
This is TWiT.
B
You're watching Intelligent Machines, episode 860, recorded on March 4, 2026: "You Gotta Get a Computer."
A
Hello.
B
It's time for Intelligent Machines, where we cover artificial intelligence, robotics, and all aspects of the AI revolution and the AI industry. I'm not Leo Laporte. I'm Jason Heiner, editor in chief of the Deep View, filling in for the inimitable, irreplaceable Leo Laporte, who's off this week. And of course, I'm joined by our regulars. We'll have Paris Martineau joining us in a moment, but I have the ever-reliable Jeff Jarvis.
C
What else you need to say?
B
I love it. I love it. The Jeff Jarvis, so distinguished that his intro needs no intro; it has its own intro, which is outstanding. And joining us as our special guest this week is Dan Patterson. Dan, welcome.
D
It's great to be here and great to see you both. Thanks.
B
Yes, thank you. It's always a pleasure. Thank you for making the time. And Dan, you and I go way back, so I'm thrilled that we get the chance to come together and talk a little bit about some of the stuff that you've been doing — some really important work, some really valuable things. The stuff that you do, Dan, and Blackbird AI, the company that you work for — I feel like it only gets more valuable every day right now, the way that the world is moving. So really appreciate you being here.
D
Well, likewise. Thank you for having me. And both of you do equally important and interesting work and really fascinating work. Jeff was just talking about his latest book, which is going to be pretty mind blowing. And Jason, thank you.
A
Thank you.
D
I think what you're doing at the Deep View is just... every day I check in on the site and the newsletter, and it's always innovative.
B
Thanks, Dan. Yeah, I've never seen anything like this news cycle. We're going to talk about the AI news. I mean, we were talking about it as we were getting ready for the show, and it's unreal. We were even looking at the stories for this week and going, wait, that happened this week too? Oh yeah, that one was also this week. It's unreal. So we're going to get to all of that, of course. And Jeff, who is a connoisseur of the news on these things, is going to be... I'm going to go to Jeff for a number of these stories as well; he has great context and amazing insights on all of it. Before we do that, before we get to the news, Dan, let's talk a little bit about where you're at and what you're doing. You're very familiar with the show. You've been coming on TWiT for a long time, and I think you're very well known to the audience. But just in case there are a few people who don't know, why don't you talk a little bit about what you do at Blackbird AI. And for those who don't know, I'll just say Dan is a longtime journalist. He worked with me at publications that I've worked for multiple times. We go back a long way, and Dan is an incredible investigative reporter, news anchor, and news person going back multiple generations, now working in the AI space, the cybersecurity space, and the disinformation and misinformation space. So Dan, talk a little bit about Blackbird for those who don't know.
D
Yeah, it is great to be here, and it does now feel like multiple generations. I mean, with you both — you guys have seen so many different generations, right, with TWiT. I feel like there's just been evolution through different generations of tech.
B
Yeah.
D
Going back to, you know, podcasting almost predating social media. So really, we've seen a lot of change here, and I've been pretty fortunate to know you both and to be on the network. So, the kind of short version of what we do at Blackbird... right, I think for both of you guys, once a journalist, always a journalist. That's a lot of what I do.
C
Like a Marine. Yeah. Yeah.
D
I hope we can aspire to more, although I've known many great Marines too. So Blackbird — the short version that we like to share is that we protect organizations, executives, and governments from narrative-based disinformation attacks that can cause operational, financial, and of course sometimes physical harm. What we do — I think many listeners are probably fairly familiar with the concept of social listening, or tag clouds, or categories: you can get a sense of what's happening on social media by using these tools that gauge conversations on social networks. But of course, we all know that social media is atomized now. It's not just one dominant network; there are many networks, many different chat applications. There's the dark web, which is old news by this point, but there are still a ton of bad actors on there. And we call it a disinformation attack or a narrative attack because we're not just listening — and there are many jokes, you know, "we hear you." We're not just doing social listening. We're tracking the narratives, the conversations, the bad actors who will sometimes use automated tools. All of us are familiar with bots, but now, in the age of generative AI and different forms of artificial intelligence, they use generative tools or AI tools to amplify a narrative attack. And these can target people or governments or organizations. There's a very famous example of a beer brand a couple of years ago — I forget exactly which one — but I think all of us can think about, you know, maybe the experience of being doxxed, or the type of media that we encounter online that is often coordinated and amplified by bad actors who have agendas. And sometimes that is at a tremendous scale. So I think both of you have probably heard the narrative that, well, you know, AI is just good for slop.
And, you know, what good is it doing? It's costing jobs and taking all this energy. And there's some truth to some of those narratives. But we use artificial intelligence to find narratives — and we call it a narrative because it traces and moves across different applications, from chat apps to social media on different platforms. So we use that to find the actors, the narratives, what they're saying, who they're targeting, and importantly, how they're being amplified: the tools that are being used to amplify these narratives, and who they're targeting and why — whether different people or governments or organizations. There are some things that we definitely can't or won't talk about because they're confidential. Some of our partners are organizations like NATO and large governments and people — not representatives; we don't want to get into politics with specifics. But we make sure that we are protecting fairly important actors and organizations from this kind of innovative new form of attack.
C
Dan, is it just attacks or is it also. You know, I've argued that journalists have to learn new skills to listen better.
D
Yeah, for sure.
C
Is it also... do your clients hear things that, before the Internet and everything, they couldn't have heard, and learn from them, act on them?
D
Yeah, that's exactly right, Jeff. It is. And that's one reason we use "narrative" — in this case, a narrative attack or a disinformation attack. We use that word, although the words disinformation and misinformation don't have a lot of meaning to the general public anymore. Right. And especially now, there are more specific ways to talk about disinformation, like a deepfake. But no, Jeff, you're exactly right. We try to use the term narrative fairly often because there really is a story in everything, right? There's a story being told — even one post can tell a large story. And the person behind it — again, maybe not precisely metadata, but the idea of metadata: the person, place, and thing talking about something, and the way that they talk or shape or craft a conversation. Communicators kind of inherently understand this, but that is almost as important as what is being said. So yeah, I think a lot of organizations, executives, and companies are interested in learning the narrative. It's not just social listening, which feels a little dated. It is the narrative of what is happening now.
B
Yes. And I want to say, we also now have Paris here. Paris Martineau.
E
I'm a little late.
B
No worries. Investigative journalist Paris has a boss.
E
You know, sometimes you can't. It's unfortunate. When you have a job, you can't simply be like, I can't be in this meeting. I must podcast. You have to sit there politely and participate and then frantically message your podcast chat that you're going to be a little late. But I'm happy to be here.
C
Were you. Were you twitchy, Paris? I imagined you being twitchy.
E
I was definitely twitchy. It's been a strange day. I was like, yeah, absolutely.
B
And that kind of chaos is perfect for the week that we've had in AI, which we'll get to, because it has been such a week in the news. So we're glad you're here, Paris. And Dan Patterson, our special guest for this week. Dan, I wanted to double click on one of the things that you talked about — your CEO Wasim Khaled talks about this a lot. We just had him on our show, the Deep View Conversations, and he talked about this idea that you also referenced, which is essentially that perception itself has become an attack surface.
D
Exactly.
B
And that is really almost a little bit mind-blowing, but it helps conceptualize the level of challenge that we're dealing with, and that Blackbird especially is trying to help with: companies, executives, high-profile people who are in danger of being doxxed, or in danger of potentially being physically attacked. You all have signals where, as I understand it — and you can double click on this for us — if you see a certain amount of chatter, you can present levels of risk, even, as I understand it, different lights, you know, green, yellow, red, in terms of the level of risk of someone in your organization potentially being physically attacked, based on the chatter that's out there. All of that was barely on the radar, I think, 10 years ago when Blackbird started this. But now you have a lot of clients that depend on that kind of intelligence day in, day out, week in, week out. Maybe you could just talk a little bit more about that.
D
Yeah, Jason, that's exactly right. What Wasim was referencing — and he goes into great detail in that Deep View podcast — is, right, perception is the attack surface. We all have our own realities, especially when we spend time in these algorithmic silos; that becomes our own reality. And yeah, we do have these... I mean, this is fairly nerdy, but this audience will understand: we did just release this API, and we have this platform called Constellation. We use that metaphor for a reason, because you can kind of see clusters of conversations. And right, we do present — I mean, everybody has a dashboard, and it is a dashboard, but it presents information in a vastly different metaphor and a different type of view structure, because the information is far more like a narrative. And you can see, as those lights go up, you can get a sense of actions that might happen. It is really fascinating, because perception is the attack surface much like in a cyber attack. People who work in IT or in cybersecurity can see different risk signals happening across their network. When you see similar signals — and again, I'm kind of mixing metaphors — you can see different signals happen in narratives, and then get a very similar sense that an attack is about to happen, or that one in progress could lead to physical or other types of harm.
B
You know what's really interesting about this too, Dan — where I think it really gets to the intelligence aspect of what you do — is that it would be really easy for you all to be the boy who cried wolf. Anytime signals happen, you could make the company freak out, right? Look out, something bad is going to happen. But one of the things that you all do, as I understand it, is you will also tell companies, don't respond to this. There's something happening right now, but what we can tell from the patterns is that some of this is bot traffic, some of this is not actual people, or the number of people involved who have an alternate view is small. And you all will give companies advice where you'll say, do not engage, because if you engage, you will potentially amplify this to a level where you could increase the risk. So it's not just always telling people that they should be freaked out; sometimes it's telling people, this is not worth rolling in the mud over. You should let this go, let it play out, and our intelligence tells us that this is likely to just play itself out quietly over the next 24 to 48 hours, or whatever the case. Am I characterizing that correctly?
D
Yeah, for sure. Although I think we might only do that on a macro level, and that's just good comms strategy. Every journalist knows that: don't feed the trolls, don't get involved. But we probably don't advise companies on how to respond in specific cases; we will give them the tools that allow their teams to make better response decisions — not just better decisions, but faster decisions. Because as everybody here knows, sometimes this happens very quickly. I don't know if any of you have had this experience. Years ago, I was covering stories that were sometimes pretty prone to pick up different types of bad actors. And sometimes it would happen very fast, and they can find out a lot of information about you, your family, your friends, where you work, what you do. I just remember from personal experience that it happened within seconds. So we probably advise companies to pay attention to certain risk signals or look out for types of behavior, as opposed to "do this in this particular instance," because everybody and every organization is different. But again, my advice is always what you said, Jason: don't respond, don't get into it.
C
What about cases where... I mean, the attack scenario is somebody comes after you; they don't like you, they think you're vulnerable. There are various scenarios. I'm gonna go to my favorite story of the week. And it's not AI, and it's not Anthropic. It's McDonald's CEO eating the new Archburger. Did you all see that?
D
I did not. Oh.
C
Oh, it's brilliant.
E
It's a man taking a bite of a burger in a way that makes it very clear he would rather be doing anything else in the world.
C
He's the CEO of McDonald's. The tiniest bite you can imagine. And
E
he's like, this is delicious.
C
And, you know, he had multiple takes, because the number of fries in the fry container went up and down. Maybe not, though. I just want to get that in there because I thought it was so funny. Then Burger King came along, and the CEO of Burger King took a monster bite of his Whopper, which he's reintroduced. My point is, finally: there was obviously no one in that room at McDonald's who had the courage to say, boss, I don't think you want to do this. I think something's going to happen here. This is self-inflicted damage. There was a management issue there, in terms of not understanding how to tell the boss something. But there's the larger question: what are you going to say about the company in this case? What narrative are you creating? How does that dynamic work when it's self-inflicted and management doesn't know what to do? How much are you in a position of kind of educating them about their own companies and their own selves?
D
Well, you know, we don't have to say anything to a company, but what we do is provide a spectrum of tools and technologies. On the one hand, like I was talking about earlier, we just released an API that's hyper-nerdy; the engineers are going to understand the API. But we also have this technology called Raven Recon, Jeff, which is easy to understand and easy to use. Anybody from an engineer to an executive can understand this tool. We call it Recon because it's built for finding information that is happening to or about individuals. So even without listening to your own comms team — even though in this scenario they might be sitting there cringing — you could give it to an executive and have them... in fact, my phone's going off with a likely scam right now. You could give it to an executive, and they could easily understand: okay, these narratives are happening about you right now. Make whatever decision you want to make, but here are the risk signals; here is what's happening. And again, because there are more technical capabilities with the technology — with our Constellation platform, like I referenced earlier, called that because, like stars in the sky, it touches a lot of different points — you can then say, hey, engineering team, let's learn a lot more about these narratives. Who's pushing them? Are these anomalous? Are they bots, or are these actual humans saying actual human things? That can give you a lot of information, whether it's bots or real people reacting. Again, Jeff, in that scenario, anybody can react to that and say, okay, I need to learn more about what's happening. I see the risk signals accelerating.
B
Yeah. You know, Dan, that's probably one of the reasons why a lot of your early customers were crisis comms and other organizations that were dealing with some kind of crisis and wanted to figure out: how can we manage this? How can we be smarter about understanding it and really stay on top of it? And like I said, there's a level of intelligence that your company provides that was just not even on the radar a decade ago. Now you help companies be a lot smarter about this area. Since then — you mentioned this at the top too — you've also started to engage other clients: nation-states, NATO, others. So can you talk a little bit about that? The evolution of the companies that are coming and asking for your services and the intelligence that you all are offering, and how that's changed and evolved both the mission of the company and maybe the tools and the tool set that you have to offer.
A
Yeah, right.
D
I mean, it really is about making more strategic, faster, and more intelligent decisions, and enhancing those capabilities. You know, I've been with Blackbird just under three years, and our CEO Wasim, who you spoke with, and our CTO Nishad started working on these problems about a decade ago, back in the era — I'm sure you're all familiar with the term fake news — when that was kind of the term du jour for what was happening in the media ecosystem and the social media ecosystem. And they, along with some of our other engineers, have been working with artificial intelligence since long before it was fashionable. Our technology advanced as those — no pun intended — narratives and ideas advanced. Right. We went from an unsophisticated concept — we had the words "fake news," which really didn't do a good job of explaining the phenomenon — and their technology looked at, okay, here is a good use case. Maybe it's crisis comms, because we can figure that out using AI, or, at the time, probably machine learning and other technologies. And as we advanced through maybe the crisis comms era — and I know we had APIs and different ways of tapping in — by the time I came on, we had developed this tool called Compass, and we still use it. This is the only consumer tool — I mean, if you have technical abilities, you can use any of our tools, but any consumer can use Compass from Blackbird AI. We don't actively promote this to consumers, but it's very easy to understand. You do have to create a login, and that's mostly to prevent spam and other junk. But it will check any claim that you see online.
You know, often if you're scrolling social media, you'll see a lot of very confident claims, and you'll see something that could be disinformation, could be accurate information, or could be intentionally or unintentionally misleading information. Often we share stuff we don't mean to — stuff that is misleading misinformation. And you can put anything in there, literally anything: post a link to something, or just type something in — "I saw so-and-so talking about such-and-such." It will not just give you a yes/no answer; it will give you the context, with links to where you can learn a lot more about it. And it will do it fairly quickly — a paragraph or two. We have a fast version that will give you a sentence or two, but the longer version will give you good context. Now we've built it out — so, kind of to answer your question, Jason, about the trajectory — it will check videos, it will check photos and images. So you can tell: was this a deepfake, or was this a manipulated... you know, a cheap fake? Was this something that was manipulated to advance a narrative? Those technologies and tools, I think, help us look into the future. And like I said, we built this about two and a half, three years ago, when I joined the company. But now, with this new API and Recon, it really does take things that are, on the one hand, very technical, but on the other hand, for executives or individuals, pretty easy to understand. It does require a technical deployment, but once it's deployed, it's easy to use and understand, and it can allow you to make very fast strategic decisions — to make better decisions faster and, in theory, stay safe. Whether you're in comms or government, an executive in an organization, or an individual, it helps you make decisions that are better informed.
E
How do you make sure that tools like that aren't themselves unduly influenced by disinformation or deepfakes, or just the general low-quality nature that much of our information ecosystem has taken on, especially in this age of AI?
D
That's a great question.
B
Very generous. Yes, go ahead.
D
Yeah, I mean, that's very interesting as well. Right. So I think, if I understand your question, Paris, it's: if the tools are dependent on the ecosystem, and the ecosystem itself is being manipulated, how do you make sure the tool then isn't manipulated?
D
Yeah, yeah, right. So that is, again, where — and also, I don't have the engineering chops to tell you technically how it works, but it is why we have engineers who really do. You know, we don't have one whitelist of sources where we make sure this is a good, pure list of sources that will always tell you the truth. It is pretty dynamic and robust. We don't just look at all the social networks or all of the news websites — I said this a little bit before you joined, but we look at chat applications, the dark web, the entire information ecosystem. We have a pretty good understanding of what's happening. And because we are full of experts who are building systems that can look for this, we do see the actors that are pushing manipulated narratives, and we see the behaviors. So we also understand the trends, the tactics, the techniques — there are very technical words for this. Working with partners, again like NATO, understanding these tactics is not a new practice. They will inform our engineers about the signals, the types of behaviors, and the platforms on which information is manipulated. And again, I'm not an engineer, but I know that we take many of those signals and build them into the system, so we have a better understanding and are not manipulated ourselves.
B
Dan, I want to be respectful of your time, so one last question. This Compass from Blackbird AI is a great resource for everyone in the audience to use if they have questions about the veracity of an image, a report, or a video — is it a deepfake, is it manipulated, all of those things. Could you talk a little bit about how the company itself is using AI? Are you developing your own models for that tool? Also, when I talked to Wasim, one of the things he mentioned that sort of scared me straight a little bit was that if you're a leader and you're not working with and thinking about AI agents right now, and you're really just still using chatbots, you're already behind. You need to really be thinking about the ways that agents can transform your organization, the ways that you work, the ways that you operate, all of those things. So I thought this would be a great opportunity to talk a little bit about the way you all, as a company — even though you are and have been an AI company since before it was fashionable, as you said — are finding that AI itself, as a tool, is changing some of what you do and the ways that you do it.
D
Yeah, yeah, for sure. So we do build and train our own models. And on the second part of your question, Jason: Wasim probably spent quite a bit of time articulating what executives and decision makers should be doing when it comes to agents. But I think the reality is that many of these tools are so accessible that they're being used by your teams, and they are transforming the business — above you and below you, they're transforming the business. And, I don't want to speak for him, but my guess is that you would just have to have the vocabulary and the capabilities to use these tools — which are advancing so much more rapidly than almost any other technology we've seen prior — to be able to manage, and to lead teams to make good strategic decisions for your own companies: to know the difference between homegrown and home-built systems, what you can have with your own generated or trained systems versus, you know, maybe the old technology challenge — do we build it, do we buy it, do we integrate something? And I think these things are happening so rapidly that decision makers and executives must have the same vocabulary as the rest of their team, their clients, their partners, and the rest of the players in the ecosystem.
B
Such great insight. And that's one of the things that podcasts like Intelligent Machines are trying to do: help people have that understanding, that knowledge and awareness of how these tools are advancing, so that they can work with them, learn about them, and be able to lead from the front, as it were, in their companies. Dan Patterson, thank you so much for being here. Always a pleasure. Thank you for the important work that you and Blackbird are doing — really providing people with tools that were not possible before, making the world safer, making us smarter about the level of threats and risks that are out there. And also, just a pleasure to have you. You are one of the best people in this industry, one of my favorite humans, and it's such an honor always to be with you.
D
You too, Jason — and Jeff, Paris, and Benito. It's great to be with you all. I really appreciate being able to talk about this stuff. Thanks.
B
Amazing. Take care, Dan.
A
Take care.
D
We'll see you. Talk to you soon.
C
Okay.
B
All right. Dan Patterson, what a powerhouse. I mean, the stuff that they're doing — I just couldn't have conceived of some of those things even five or ten years ago, you know? And it's rapidly accelerating, right? The ability to do the things that they're talking about. It's empowering threat actors in ways that we never could have... well, I guess we could have anticipated it. Science fiction has anticipated some of it, but it's at a level and a speed that's just out of this world right now.
C
I mean, that's kind of a chicken-and-egg thing, though, right? Because science fiction is also responsible for some of this stuff.
E
I was going to say, a lot of the people doing these sorts of things are directly influenced by science fiction.
C
True.
B
Chicken and egg. Chicken and egg.
E
Chicken and egg indeed.
B
Well, Paris, great to see you, and likewise. And Jeff, thanks for letting me, you know, sit in the Leo seat for this week. Enjoying taking over.
C
We get in trouble when we do it.
E
We do laugh. I'm gonna say Jeff and I are on hiatus because we bring up too many spicy stories when Leo's away.
B
I'm here to take all the bullets this week.
D
Thank you.
B
So thank you. Everything that's great, I will tell them, was all you guys' idea. You know, all of the mistakes were mine.
E
So now you're speaking our language.
B
Excellent. Excellent. We have so much to cover. I want to start—
E
It's been a big week in AI.
B
Oh, my gosh.
C
Yeah.
B
Oh my.
C
I want to start with money.
E
I was going to say, we have a lot of ads this week.
B
Let's pause and send it over to Leo to talk about one of the sponsors for this week.
A
This episode of Intelligent Machines is brought to you by DeleteMe. If you have ever wondered how much of your personal data is out there on the Internet for anyone to see — please do me a favor, don't look, because it's more than you think. It's appalling. Your name, your contact info, your Social Security number, in many cases your home address, even information about your family members, all being — completely legally, I might add — compiled by data brokers. And they are, completely legally, selling it online to anyone: foreign governments, marketers, law enforcement, hackers — anyone on the web can buy your private details. And that can mean the worst: identity theft, phishing attempts, doxxing, harassment. But there is a way to protect your privacy: DeleteMe. Look, I live online, and I know how important this is. In fact, we use DeleteMe. I think every company should use it for their management, because we were getting phished; people were able to find out all sorts of information about our team and use it to send very credible phishing texts trying to rip us off. We immediately signed up for DeleteMe, and we've been using it for years. It really works. That's why I recommend DeleteMe, that's why we use DeleteMe, and it's why you should use DeleteMe. It's a subscription service. Now, that's important, because it doesn't just do it once. It removes your personal info from hundreds of data brokers.
D
This is the key.
A
There are more than 500 data brokers, and there are more every single day. So what do you do? You go to DeleteMe and you sign up. By the way, it's joindeleteme.com; make sure you use the right address. You sign up, you provide them with exactly the information you want deleted. They need to know what you don't want, and that way they don't delete stuff you do want. They take it from there. Their experts know exactly where to go and how to delete it. They'll send you regular personalized privacy reports. We just got one the other day showing what info they found, where they found it, and what they removed, so you know what they're doing. And this is important: it's not a one-time service. DeleteMe is always working for you, constantly monitoring and removing the personal information you don't want on the Internet. And you need that, because these data brokers are not the nicest people, and they're constantly rebuilding those dossiers even after you have them deleted. They have to delete them by law, but nothing stops them from recreating them. Plus there are new ones all the time. In fact, the sleaziest thing they often do is change the business name so they can start over with a clean slate and all your information. To put it simply, DeleteMe does the hard work of wiping you and your family's personal information from data broker websites. And no one does it better. Take control of your data. Keep your private life private. Sign up for DeleteMe. We've got a special discount just for you today: you can get 20% off your DeleteMe plan when you go, and this is important, to the right site, joindeleteme.com/twit, and use the promo code TWIT at checkout. The only way to get 20% off is to go to joindeleteme.com/twit and enter the code TWIT at checkout. If you just Google DeleteMe, you'll go to the wrong place; there's another company in the EU and they don't do the same thing. You want to go to this one.
It's joindeleteme.com/twit. Don't forget that offer code, TWIT. Now back to Intelligent Machines.
B
Okay, well, we have to talk about this weekend.
E
It's very rare that all of my conversations with normal people who don't care about AI begin with "oh my God, the AI news," and this was one of those weeks.
B
Certainly in AI, it's the most consequential news weekend I've ever seen. But I almost think, even in terms of tech overall, I don't know that I've ever seen a weekend like this, where tech was the story, THE story, even when something as consequential as the US invading another country was happening.
C
Well, that's almost coincidental.
B
Right.
C
Oh, and by the way, we also invaded a country or bombed a country.
E
And we use Claude for that.
C
And we use Claude for it. Right. So the whole anthropic open AI saga here would be huge on its own.
B
Yes.
C
Added a war, too.
E
Just context for anyone who doesn't know what we're talking about: on Friday, Trump directed every federal agency to immediately cease use of all Anthropic technology. This was the culmination of a simmering brouhaha between Anthropic and the Department of Defense, which we spoke about in part last week. It's this kind of paradoxical thing where Pete Hegseth has simultaneously designated Anthropic a supply chain risk to national security, and they also used Anthropic, and Claude in particular, as part of their operations to enact war in Iran.
B
Yes, yes. The number of aspects of this to unpack are so many.
C
So we discussed this a bit last week, where I think Leo's starting point was similar to Stratechery's this week: kind of, the government has to decide how to use these tools. I disagreed, and I disagree with Stratechery as well. I think there is a need, especially in unusual times, shall we try to be not too political and call these unusual times, to speak one's conscience and decide what's used and what's not. The analogy I make is that certain pharma companies will not sell certain drugs to certain states if they're used in executions.
B
Yep, yep.
C
And so companies have some rights there and have that ability. So Anthropic came along and said, you can't use our stuff to autonomously kill people, and you can't use our stuff to surveil Americans. And there was a moral aspect to that, but there was also a practical aspect of that. Like, this stuff ain't ready.
B
Yeah.
C
You don't want to use it for that. You know, I wouldn't trust it to go kill people. What are you doing? Then Hegseth got all macho, chest-beating. Yes. And then Trump got all macho, chest-beating. As Paris recounted, they're out.
E
It's worth noting that, as of when I checked today, the Pentagon has not formally issued this supply chain risk designation through any official channels. All this messaging has been done on social media, as is kind of the norm in this administration, which really adds an unusual aspect to all of it. The Washington Post also reported this weekend that a hypothetical around nuclear ballistics might have been what, for lack of a better term, blew this whole thing up. The Post reported that Emil Michael posed an extreme hypothetical during a meeting in January 2026: if an intercontinental ballistic missile was launched, could the US military use Claude to help shoot it down? And the accounts as to what happened next diverge sharply. The Pentagon's version is that Anthropic responded, you could call us and we'd work it out, and officials were really mad at that, because they were like, that is ridiculous. Anthropic's version is, they say that's totally false; we said we've always agreed to allow Claude for missile defense. Crazy.
B
It wasn't part of the red lines, as they call them.
E
Anthropic says the red lines are two specific categories. Mass domestic surveillance and fully autonomous weapons.
C
Yeah, right.
B
Yes.
C
So that was crazy enough as a story right there on Friday. Huge implications. Is the government going to destroy Anthropic? How far could this ban go? A friend of mine at the University of Virginia said, do we have to stop using it, because the university gets grants from the federal government? Those are earth-shattering questions.
B
Yeah.
C
And then along comes Sam Altman.
B
Yes.
C
Who comes in and says, okay, I'll do it. And apparently was doing this all along, and at various points supposedly had his own rules that agreed with Anthropic's rules, but really didn't, because otherwise they wouldn't have done it. Then he admitted that it was opportunistic and sloppy. Then he whined to his staff that this was really painful. Give me a break. And so that's added a whole nother layer here to where this goes. I've got to ask you both, because you're younger than I am, which is not hard to be: does the name Eddie Haskell mean anything to you?
B
Oh, yeah, no.
C
Oh, Paris, Paris, Paris.
E
But I also couldn't remember most names that I've heard.
C
Well, no, this is a. This is an old, old guy TV show. This is Leave it to Beaver.
E
Oh, I know Leave it to Beaver.
C
Well, then you should know Eddie Haskell. Eddie Haskell was the friend of Wally's who was the two-faced, slimy ass-kisser. "Oh, Mrs. Cleaver, you look absolutely lovely today." Right. And I got AI to make a GIF for me of Sam Altman meeting Eddie Haskell, but it doesn't mean anything to you, so I won't even bother showing it. But Sam Altman proves himself to be a two-faced, traitorous ass-kisser to the government.
E
Is this surprising to anybody? Not to oversimplify this, but is this not the sort of behavior that led to Sam Altman's original ouster from OpenAI?
C
Exactly. So now what happens? Now it gets more interesting, because he whines about, well yeah, this could be damaging to the brand. But there's a movement to delete ChatGPT. Anthropic goes to the top of
B
the downloads in the App Store and the Google Play Store. It has skyrocketed to the number one app in the world, passing ChatGPT, which had been the number one app in the world for, you know,
E
three years. Anecdotally, as listeners of this podcast will know, but Jason, for context: I'm in a lot of subreddits for all the various models, and I really enjoy being in the OpenAI ones, in part because up until recently my main source of joy was whenever OpenAI would deprecate a model, people would freak out because they were going to lose their girlfriend. Now all of those people are aggressively organizing to switch to Anthropic.
C
They're angry. There have been hundreds. Just so I understand, Paris: these were adamant ChatGPT fans?
E
These were ChatGPT fans, such adamant fans that they're attuned to different models, making lengthy hundred-word-plus posts whenever a brief change happens to a model or some sort of tweak is made to the system. And these people are not only switching en masse, and it overwhelmingly seems like to Claude, but they are really relishing the experience of not being in the ChatGPT ecosystem. I mean, maybe this is just the anecdotal experience that I'm seeing, but I've seen probably 20 to 50 posts in the last day or two, not looking for it, of people being like, wow, I like Claude so much more. I think I've seen one or two that said otherwise.
C
So here's my question: did Sam Altman do permanent damage? Did he shoot his company in the foot? Or does this blow over?
E
I mean, I'm of two minds. One: yeah, ChatGPT is in a position now where they are the dominant market leader. Head start isn't even the way to describe it; they have so much more market penetration than any of the other companies, significantly more. Both they and Gemini have so much more penetration than Anthropic that it's hard to compare the two. But one consequence of that is, when you're overexposed, you are increasingly likely to have your brand reputation be tarnished. And I do think that there have been a number of instances that have increasingly tarnished the ChatGPT and OpenAI brand, starting with everything going on at the time: the sycophancy, the suicides. A common complaint I see in all these forums is that, as a counter to the kind of sycophancy and AI-psychosis-inducing tendencies of these models, now you'll be asking ChatGPT for help sorting through some emails and it'll be like, you're not crazy, you're not broken, take a deep breath and we'll work through this together. And people are like, what the heck? I'm just asking for my emails to be sorted. So I think that this exists in that context.
C
Meanwhile, they didn't report a suicidal person in Canada, or rather, of course, a homicidal person.
E
I think this is a huge sticking point. Like, I was out to dinner last night with a friend who is not plugged into AI stuff at all, I'd say an anti-AI person in every sense of the word. She was like, oh yeah, me and a bunch of other people have signed this stop-using-ChatGPT thing, and I've been seeing that being shared around everywhere. I do think that it's a notable moment so far in this company's history.
C
What do you think, Jason?
B
Yes. So I think it reminds me a little bit of the boycott-Uber movement, you know, because of all the things that came out about them; it actually caused them to change CEOs. And there was a similar boycott of Instagram. It does remind me a little bit of those, for sure. They both were consequential, and I think they were moments where brand damage was done, but both brands did recover. And so I expect this to be a bit like that. Here's why I'm thinking so, and then there are a couple parts of it I'd love to unpack with you all and get your thinking on too. I think that what OpenAI still has going for it is that, with the level of talent they have at the company, they are also making these tools the easiest to use. Even lately I've heard some programmers talking about the fact that developers are using Codex instead of Claude Code, because they're like, it's actually gotten better over the last few weeks. And that kind of surprised me, because Claude Code has had it figured out among developers for a while. But ChatGPT itself, some of its controls, user customizations, things like that, are just a little bit better. I think their browser is also just a little bit easier to use. People often go to the path of least resistance, as we all know about human beings. And so, as long as they don't have, and there's the risk of this, some people have left OpenAI kind of publicly, right, even in the past week, over this, employee sentiment
E
is a huge aspect of this.
B
The employees. If they start to lose, and they have lost several important people over the last few days, if that exodus becomes more acute, then I'm going to be more worried. But I do think right now they still have the people and the staff and this mission of making these tools better and easier and faster, to the point that I think they will likely still own a lot of the consumer sentiment, and this will be a little bit of a blip. I do have some bigger questions, though, and I'd love to get all of your thoughts on this: the Sam Altman thing. I do think that Altman is very much almost the mirror image of Dario Amodei. You know, Dario, and my sense has always been this way, has a sort of single-mindedness; he's steadfast on an idea that he pushes forward, and I want to talk about one of those ideas first. Whereas Altman, I think, takes in a lot of things, reads the tea leaves, and then makes decisions about which way things are going: let's listen to our audience.
C
Is it unfair to say he's a little Trump-like, in that he's impulsive?
B
I mean, he will act very quickly.
E
I don't even consider this move impulsive. I think this was calculated. And, okay, I assume that Sam Altman, OpenAI, Google, I'm sure even Meta, were all salivating at the idea of getting
B
Anthropic's contract, that $200 million contract.
C
But it comes with a load of not just baggage but bombs.
E
Well, none of them care, because Google rolled back its internal prohibition on AI for weapons and surveillance.
C
Yeah, and now you have 900 employees of Google who have signed a letter.
E
Well, that's what I think is the most interesting part here. Yes, all these consumer reputational blights will continue to exist, but the real place where this could actually make a difference is in the employee talent wars that these AI companies are waging. And this is something that Karen Hao got into in her book Empire of AI: all of these companies are paying their employees crazy amounts of money. They kind of have the pick of the litter, in a sense, and it's been a real struggle for all of them to figure out, hey, how can we attract the best possible talent that can give us the edge? A lot of these people were attracted to these companies by lofty promises about ethics, doing the right thing, building a technology that's going to change the world. And you're starting to see that in the way employees of OpenAI and Google are reacting negatively to this and being like, hey, why aren't we standing up to these absurd demands like Anthropic is? And I think that's the sort of thing people are going to listen to when they make their decisions about where to work. Anthropic now has a literal list of a thousand employees that they could possibly poach from these companies.
B
They could go pick them off. So here's the thing that I think is interesting for us to consider, if we go back to those two red lines for a quick second, because there's been a lot of commentary. You mentioned the Stratechery piece. There's also Altman, who tweeted out, and he said, I don't feel like the executives of private companies should be making these decisions in a democracy; they should be made by elected officials, not unelected executives of private companies. And to a degree, Ben Thompson of Stratechery unpacks that and is essentially saying much the same thing. But here's my read, and you all tell me if you understand this the same way. I don't feel that Dario and Anthropic are necessarily saying this is wrong and nobody should do it. They're saying, we are not comfortable with it. As you said, Jeff: we know this technology, we know the reliability of it, we know the challenges of it, and we are not comfortable with this technology being given a weapon where it can choose that humans should be taken out; there should always be a human in the loop on that. That seems like a pretty reasonable ask. And then the other red line is mass surveillance, which is illegal. What the government has said, and the very important part in the contract, was essentially: we can use this technology, we want to be able to use this technology, and, as I understand it, this is standard language in all Pentagon contracts, for all legal purposes. And what they said verbally was, we don't have any intention to use your technology for killing humans autonomously or for mass surveillance. And yet:
We're not going to allow any carve-outs of language saying we won't agree to those things specifically, because those things are illegal, and the contract already covers only legal uses. And Anthropic said, we're not comfortable with that. We want that carved out, because these are areas where, well, we've seen this movie, right? When robots and AI can kill humans without any human intervention, the potential consequences are very, very negative, and we believe we don't want the technology we've built to be a part of that. That to me seemed like a very reasonable stance, and I feel like it has gotten mischaracterized as them moralizing to the government and trying to tell the government what to do. Am I understanding it correctly? What do you think?
C
Yeah, I mean, this is what we discussed last week too. But Ben, who does great analysis, though I will confess I always go to Gemini and ask it to summarize him first because he's so long, Ben argues strenuously that you can't have companies deciding what's what. You've got to have elected officials deciding how to use these tools; otherwise you'd have a dictatorship of companies.
B
Well, right.
C
There are a few issues here. One is, we are in exigent circumstances with the government we have, and individual responsibility and accountability will matter. Just as the six members of Congress who did the video said, reminding members of the military that they should not follow illegal orders, because they are ultimately responsible under the precedents established at Nuremberg. Pardon me, I'm not going full Godwin's Law here, but I'm going to end up going there a little bit for a minute. Sorry, it's going to get worse for a second, then it'll get better. So there's responsibility for the military person. Is there not also responsibility for the company? As I mentioned earlier, pharma companies choose not to sell their drugs to states that are going to use them in executions. IG Farben, the manufacturer of Zyklon B, has been held responsible by history, and by others, for selling that poison to the Nazis for the concentration camps. And we say to that company to this day:
E
I was thinking of that exact comparison, Jeff.
C
You are held responsible for that. You are accountable for that. And so Stratechery is worried, Ben's worried, about the dictatorship of the company. Okay, I get that. But I'm worried about the dictatorship of the government.
E
Leadership of a dictatorship.
C
Exactly. And if you're going off and doing things, what. And we constantly are saying to the AI companies, you need to be careful about how your stuff is used. You need to put in guardrails, though. I think that's impossible. But we still say that. People say that, right?
You are ultimately responsible. We say the same thing to social media companies. We say to all these companies: you are responsible, you are accountable, you have to be moral in these decisions. Well, okay. So Anthropic comes along and says, yeah, we have a moral line, and here it is, and we don't want our stuff used in this way. And then they're being accused by people like Ben of trying to be dictators. No, they're trying to be accountable and responsible in, again, exigent times, where the risk is very high that their technology could be used in a way that would, at the least, shame them in history. So I think they've got an opportunity and a need and a responsibility and a right to say no. It's a really interesting issue of where you go. And then if we go to Google: I think Google's right now trying to hide, like, just forget us for a while, we're not ready.
E
It's very interesting. Google and Meta, let them fight this out.
C
And Microsoft, I think, and Amazon, right. Amazon's... well, we'll get to that in a minute. But they're all kind of trying to hide. But Google now has employees rising up again, as they did in the robot days, that is, the days when Google had a robot company, saying don't use this stuff for war at all. And so where do these other tech companies go, for all the reasons that you mentioned, Jason, but also for their moral and legal responsibility to themselves and their legacies?
E
I also want to point out Lt. Gen. Jack Shanahan, who was the inaugural director of the DOD's Joint Artificial Intelligence Center, the guy who led Project Maven, the Pentagon AI program that famously caused that Google employee revolt in 2018. He weighed in on the subject this weekend as well, and he said painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end. He called Anthropic's red lines reasonable and said, quote, no LLM anywhere, in its current form, should be considered for use in a fully autonomous lethal weapon system. It's ludicrous to even suggest it. I think that speaks for itself. I'm just baffled as to how this situation got to the point that it's at now, because you're dealing
C
with some nut jobs.
E
I mean. Yes.
C
I mean, Hegseth, at the same time, went to the Boy Scouts and said, okay, you can keep girls for a little while, but you have to get rid of all the DEI. So I'm sorry, I'm going to do it again here, I'm going to go to the same place. So it's now the Hegseth-Jugend, right? And they're dictating to the Boy Scouts. What do you do in that case? It's a macho thing: you don't dare disagree with me, I'm going to get you, I'm going to destroy you. And they have the mechanisms to do so.
B
Yeah. What was maybe incredibly unexpected was that they did this and they told them. Now, to your point, Paris, they haven't actually executed on the promise, which is to designate them, you know, a supply chain risk, which is normally reserved for foreign adversaries.
C
Right.
B
So it was definitely a DEFCON 1 move. It's like, we're going straight to DEFCON 1. We're going to term you an adversary, you know, to the Republic. And there was some concern, I think, that that could really damage Anthropic. Right? That could almost kill them, couldn't it? It could kill the company, or at least it could make a lot of people uncertain about whether they could still do business with them.
C
Yes.
B
And, for those who aren't familiar, and I'm sure most of our audience is, they make most of their money on their API, on their enterprise business: companies paying to use their services. So if a bunch of those companies have questions about whether it's legal for them to use it, that causes a lot of problems. And then something happened that we didn't expect, which is that American citizens, and maybe people around the world, were so inspired by the principled stance that Anthropic had taken that they started downloading Claude. Most people, if you said Claude, they didn't even know it was a chatbot until last week.
C
Give me pretty pictures.
B
Yeah.
C
What does this go for?
B
I mean, nobody really knew what Claude was. Not nobody, obviously; it had its fans. A lot of the technorati, the people who are really into AI, have for months, it feels like years, but this industry moves fast, been saying Claude is the best. And probably you all have seen this too: a lot of people who are really deep into the AI ecosystem have been using Claude, because there are just parts of it that are a lot better, more accurate, fewer hallucinations, safer, all that. So fine. But the broad spectrum of people did not know what Claude was until this weekend. And all of a sudden, it went from like 40th to 100th on a lot of the charts.
C
of AI
B
It shot to exactly that, the top, number one, within hours, essentially, after the word came out on Friday.
E
It surged so much that Claude also went down twice this week, in part because of the surge in downloads and usage, but also, I think, because a data center got hit by a missile. So, you know, a complicated weekend.
B
Very complicated. So Claude rises to the top, and I don't know about you all, but I'll share an experience I had. I have a friend, doesn't work in tech, works in nonprofits as an educator, very smart person, wonderful friend of mine, who came to me on Saturday afternoon. He had downloaded an AI chatbot for the first time, and he downloaded Claude. He had to do two things: he had to make a social speech, and he was working on a presentation that he needs to give in an academic forum. And he told me, I have to show you this. I asked it to make the basics of a speech for me, and it helped me think through the arguments that I need to make in this presentation. And he said, I have to tell you, I'm both really upset and incredibly impressed. I'm upset at how good it is, and I'm also impressed that it could help me think through some of these things in such a powerful way. So he showed me, and he had done a great job with these things, and I was so blown away that the first place he went was Claude, because he had seen all this news. He subscribes to our newsletter, I think only because I started there in December, and has been keeping up with it. But that told me that Claude has truly broken through to the mainstream. It has skyrocketed into a level of attention, and not only that, but usage, in a way that really is giving it a moment. And to your question, I do wonder: is it durable?
Is it just a cultural moment? But I think we have to acknowledge that, at least in this sort of three-year AI boom that we're in, we haven't seen anything like this before: something coming out of nowhere and going straight to number one.
E
I had a very similar experience this weekend. A friend of mine who's always been, I guess, a big Gemini head, but has kind of waxed and waned, mentioned to me this week, yeah, you know, I downloaded Claude and I'm just playing around with it. And by Monday he was like, Claude's helped me reevaluate my whole current professional progress; I've vibe-coded a CRM for the custom outreach that I'm going to be doing this week. I literally just got a text five minutes ago; he was talking about some sort of camera that he wants to buy, and he's like, I've got to ask my new best friend, Claudio. Good for you, bro, I guess.
C
So, the other thing. I didn't even put this in the rundown, but Nvidia's Jensen Huang said they're going to invest the $30 million in OpenAI. But he said that's probably it, for both OpenAI and Anthropic. And what he hid behind was that they're likely to do IPOs.
B
Yes.
C
So that becomes another interesting wrinkle in this: they're both headed that way, I think. But I think that OpenAI just got delayed.
B
Yeah, I think you're right, because Anthropic's
C
might have just gotten accelerated.
B
Yes. Sentiment. The markets are based on sentiment, in addition to earnings. And the sentiment on both of these companies just shifted so dramatically in the last 72 hours, over the weekend, that it really changes the game, at least in the short term. But longer term, these are both going to be public companies. They're destined to be some of the biggest companies in the world; I think they're on the way. Nvidia, OpenAI, and Anthropic: it feels like, three to five years from now, they are the sort of Apple, Microsoft, and Google of this era. Now, we shouldn't count Google out either, right? It was just a few months ago that Google was sort of the belle of the ball with Gemini 3 and Nano Banana, and they do have some things going in the right direction, so we can't count them out either, I guess.
E
Yeah.
C
So Paris, you raised in our private little chat. It's probably time, Jason, to earn some more money.
B
It is.
C
And then come back for some more stuff, because you've got three ads to get in before we turn into a pumpkin.
E
You know, everybody loves to advertise on our show here, Intelligent Machines.
C
They do.
B
We love them for today.
C
Leo's gone, and look what happens: the money comes ka-chinging in. Jason's here.
B
Amazing. All right, so let's send it over to Leo to talk about another one of this week's sponsors.
A
This episode of Intelligent Machines brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic: customer calls, agent conversations, and, sad to say, fraud attempts. Unfortunately, in most cases, that audio is still treated like text, flattened into transcripts, which strips it of tone and, more importantly, strips it of intent, strips it of risk. Modulate exists to change that. First proven in gaming, Modulate's technology has supported major players like Call of Duty and Grand Theft Auto. These games really needed it to separate playful banter from intentional harm, and they do it at scale. Today, Modulate helps many enterprises, including Fortune 500 companies, understand 20 million minutes of voice every single day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's very powerful model. They call it Velma 2.0. I love it. Velma is a voice-native, behavior-aware model built to understand real conversations, not just transcripts. It orchestrates 100-plus specialized models, each focused on a distinct aspect of voice analysis, so it can deliver accurate, explainable insights in real time. This is an amazing technology. Velma ranks number one across four key audio benchmarks, beating all the large foundation models in accuracy, cost, and speed, because it's designed to do exactly this. Velma's number one in conversation understanding, number one in transcription accuracy and cost, number one in deepfake detection, number one in emotion detection. Built on 21 billion minutes of audio, Velma is 100 times faster, cheaper, and more accurate than LLMs at understanding speech, and that includes the best: Google Gemini, OpenAI, xAI. Nobody does it better than Velma. Most LLMs are just, you know, black boxes. Velma doesn't just assess a conversation as a whole.
It breaks it down for greater accuracy and transparency by producing timestamped scores and events tied to moments in the conversation, which means you can see exactly what's going on when risk rises, when behavior shifts, when intent changes. With Velma, you can zoom right in. You can improve your customer experience. You can reduce risks like fraud and harassment. You can detect rogue agents and more. Go beyond transcripts. See what a voice-native AI model can really do. Go to Modulate's live ungated preview of Velma. It's at preview.modulate.ai. That's preview.modulate.ai. See why Velma ranks number one on leading benchmarks for conversation understanding, deepfake detection and emotion detection. Again, that's preview.modulate.ai. Now back to the show.
B
All right, thank you, Leo.
C
And we have more war news.
B
We do. So before we talk about this whole Anthropic-Pentagon-OpenAI thing for another hour, I'm sure we could, the last thing I wanted to do to wrap it is: we ran a poll on the Deep View. Our audience is about half a million people every day. We run the top stories in AI and we have a poll. And in our poll we asked: should Anthropic have acquiesced to the Pentagon's request to remove safety restrictions?
C
All right, before you give us the results. Paris, have you cheated and looked?
E
No.
C
What do you think the answer is going to be?
B
Yeah, what do you, what do you think the answer is going to be?
E
Overwhelmingly no. I'm going to be optimistic.
B
Okay. What do you think, Jeff?
C
Yeah, I'm gonna be optimistic too. We're siding with anthropic.
B
79% said no, they should not have acquiesced. 17% said yes. 5% said, you know, other.
E
Do you have any demographic, any other details on the 17% who said yes? Like, do they have a sub-bucket that's like corporate shill or something?
B
I don't know. But you know, our, our audience is pretty diverse. It's mostly professionals who work in the AI industry or who work with AI. And it's pretty diverse across the US and Canada. And so the Canadians skew it then.
C
All those good guys.
B
Yes. Yes. So I was, I was surprised by that.
E
What about it was surprising to you?
B
What was surprising to me? I figured it would be no, the majority, but I thought 55, 60%. Right. People will say most anything these days.
C
You will find a minimum of 35 to 40% who will side with the administration.
B
Exactly. Exactly.
E
Well, as someone who's spent probably many hours at this point with our survey team at Consumer Reports, I've had to write these sorts of things. I think that part of the reason why you get such an overwhelming response is the way the question, the
B
way the question was asked.
E
Asked. It's, it's definitely asked in a way that makes clear what the moral choices are.
B
It's true. It was a little bit of a leading question, so.
E
But it's also kind of a leading scenario. You know, I think that that's a. I would argue that that's an accurate way to describe the situation. Even if it is leading.
C
How would you. How would you otherwise word it? Did anthropic do the right thing or the wrong thing? Check one.
E
Well, no, probably. I mean, the thing I've learned from talking with, like, we have this whole team of, like, professional, I don't even know what the profession is called, survey writers. And it's a million different caveats. It would be like a paragraph that dryly summarizes the debate, providing arguments on both sides, and then asks, do you agree or disagree with Anthropic's stance as stated, or something like that. It would be making it a lot more boring, opaque and kind of hard to parse, which I think is perhaps a disservice too.
B
It's like: Anthropic made its stance on the two items that it believes should not have been left to LLMs, whereas the government, you know, believes that they are elected officials and should be the ones that decide. Where do you stand? Boom, boom. Something like that would have been a little more.
E
Yeah, but I think also, I mean, part of the thing is there's like a million ways to slice and dice surveys. I'm not sure that it's entirely a useful endeavor.
B
Well, I thought this was the. Because I wrote, full disclosure, I wrote the question. So I thought this was the most obvious question to ask, ultimately: should they have.
C
Well, I'd argue that all surveys are biased by their nature, period.
E
Yes, that's fair. And I'd also argue that anyone who's reading your newsletter and responded to the survey already has a more robust understanding of the situation than the average survey respondent.
B
That's right. That's right. So that's fair. Like, they understand what an LLM is. They understand the risks. They understand hallucinations. They understand how often they get things wrong. And they probably are less likely to trust them to do really, you know, important and sort of existential kinds of things.
E
Is this one of the most overwhelming responses you've gotten?
B
It is. It actually is one of the most overwhelming responses we've ever gotten to a question that went sort of one direction.
C
Or did you ask about whether the
E
What are the others, off the top of your head?
C
Claude, should Claude. I mean, should ChatGPT 4o be, be killed?
E
I mean, should we, should we be allowed to marry ChatGPT 4o? 85% say yes.
B
Yes. Yes. Now, I do want to go to the Amazon questions. There are some other war, you know, related things that we should touch on.
E
War section tonight, guys.
B
Oh my God. We do a whole, a whole one. So Jeff, why don't you talk a little bit about the Amazon.
C
This is straightforward: Amazon says the drone strikes damaged three facilities in the UAE and Bahrain, and no one is saying directly that they were targeted. However, things are pretty well targeted these days. And Amazon is an American institution, bigger than Kentucky Fried Chicken, in these foreign places. With the Internet and technology, with everything that's going on, it's really interesting to me that I think American tech now becomes a pretty clear target.
B
Yeah, it's a really interesting development that some of the things most known about America and Americans are these companies, these global tech companies, the biggest companies in the world. They are in some sense the biggest symbols of what is American, in the same way that Coca-Cola might have been, or Nike, or, you know, other companies in past generations. The most iconic things. Kentucky Fried Chicken, you mentioned, Jeff. You know, McDonald's. In a very real sense, the tech companies are the emblems of what America is. And so, in that sense, they are also, what we've learned now, the biggest targets if you want to make a statement about your feelings.
E
Rest of World had good reporting on this as well this week. It kind of captured the larger stakes, which is that, I mean, the Gulf has basically positioned itself as a safe harbor for the world's data, like, to attract Silicon Valley. There's been, like, over $2 trillion in investment pledges made during Trump's Gulf tour last May. And it's been kind of positioned as, quote, unquote, the third global center for AI alongside the US and China. And now, I mean, there was a researcher at Qatar University who told Rest of World the security frameworks behind the US-UAE AI partnerships were built for supply chain control and political alignment, not for protecting buildings during a military crisis. And now, I mean, this just makes it increasingly complicated.
C
And I want to go back to what we were talking about before, too, in terms of Google, Microsoft, Meta and Amazon. They were all scared of pissing off the administration. Now they're scared also of pissing off the populace, of pissing off nations in Europe that are not necessarily aligned with what's happening out of America and Israel right now. And they're scared of pissing off Middle East powers. They're hot under the collar. This is not easy.
B
There are hard decisions to be made, you know, in those cases. Yeah, this is where. I remember somebody telling me, this is when you actually have to be a leader. Most of the time, when things are going well, your job is just to sort of keep the trains running on time, right? When things get hard and there are difficult decisions, that's when you need a leader, right? Those are the times when leaders have to earn their money; they have to make very difficult decisions. And this is one of those moments where there are signals to sort out and try to understand. And I think the one that you mentioned, Jeff, that maybe was the X factor is the populace. I don't think we expected it. We saw it in terms of this poll, but also in terms of the way people voted with their downloads, in overwhelming fashion, with Claude over the weekend. People have made a large statement about where they stand on this in ways that have really been, in one sense, encouraging, right? From a, maybe a democratic process standpoint, I don't know if you want to call it that. And in another sense, in terms of almost a level of engagement and activism. And I don't mean activism in maybe the traditional sense, but maybe just a level of not being, you know, just whining bystanders.
C
I'm serious about Jimmy Kimmel. I think it's a Jimmy Kimmel moment for AI, okay. Where Viacom, when it lived, thought, okay, no big deal. You know, I, I'm sorry. Disney. Disney. Wrong.
E
Wrong. Megacorp.
C
Yeah. Wrong. Wrong. Wrong. Late Show. Disney thought, well, okay, so this is the obvious thing we got to do, so we'll do it. Okay? No big deal. We'll, you know, we'll. We'll eat some crow with kibble and then figure it out.
B
Nope.
C
Got into much hotter water then, and that gave them the cover and the courage to say no to the administration. So that's what's going to be interesting in all this.
B
Yeah, Yeah.
C
I do think European regulators are going to start speaking up too, and saying, no, we don't want to use tools that are used to autonomously kill people. We don't want to use tools like that. And why are you just saying don't surveil Americans? Why don't you have the same standard for the rest of us?
B
Sure, sure.
C
What gives here? They're going to get caught in that
B
vise. If they could encode some of that. I mean, what's been happening, what we've been seeing, because of the gridlock in the US, because of the, you know, political division and the gridlock in passing laws: the legislative branch has really been in gridlock and not functioning well for a couple of decades. And because of that, the European regulators have really been setting the standards on a lot of these things. And when they do, these companies, which are global companies, often prefer not to have two different sets of rules they play by. And so the European standards will often be propagated. Although we have started to see that splinter in some ways in the last few years. There are some features or products that aren't available in the same way in Europe that they are in the US, and so we'll see how sustainable that is long term. But this could get codified. Some of these things Anthropic has brought up, to your point, Jeff, could get codified by the EU and/or other places, and that could have this sort of global impact on some of these companies. That's going to be really interesting to
C
watch. If you're Anthropic, or if you're a company that now doesn't know what to do, the question is: who can give you cover? Oh gee, we'd love to do this, but we really can't, look at all the implications.
E
Yeah, one aspect of this that has been interesting to me, and I'm probably bungling the precise details of it, but I've always heard that there's some part of the terms of Anthropic employees' equity packages that says, like, by working at Anthropic you have to recognize that we may very well make choices that reduce your equity to absolutely nothing, make it absolutely worthless, based on the moral and ethical standards that we have built as central principles of the company. And this is a perfect example of that. An example of the competing values inherent in trying to combine morality or ethics with not only a capitalist ecosystem, but perhaps one of the most hyper-capitalist ecosystems we've ever seen, in terms of the AI race.
B
Yeah, it's interesting. This came up, you know, over this weekend as well. Paris, I'm glad you surfaced this, or elevated this, because I hadn't seen that before or heard it or read it. But basically some people took the language and put it on Twitter, and in the employment agreement it says we may make decisions, just as you said, that could take the value of the company, you know, to zero, but that we will make these decisions based on, essentially, our company mission. So now I'm going to play devil's advocate for a minute.
C
That's the proper Leo role. Yes.
B
Thank you. So Nat Rubio-Licht, who works for the Deep View, wrote a perspective or a commentary. We have this at the end of all of our stories; it's called our Deep View, where we're trying to, you know, really get to what's the thing, right? Not just report the news, but also get to what's really going on here. And in one of these, over the past few days, what she wrote was that in one sense, what else was Anthropic going to do? They built the company on this mission: we are the safe AI, we are the principled AI, and we are the ones that are going to put guidelines in place. And so did they really have any other choice? You know, like, acquiescing would have literally.
E
Absolutely they did. Every, every single company in existence has basically made the other choice. Google used to have a core principle of don't be evil. And they went so far the other way that they were like, we're scratching that out, buddy. It's too hard.
B
So this is why, you know, we published that argument just as it was. And I'm really proud of the way that she wrote it. I thought it was very well reasoned and very clearly stated. When we talked about it, one of the things that I said was similar to what you mentioned, Paris. What we've seen throughout history is, when these moments come, what the technology companies have all said is: we make the tools; how other people use them is up to them. They sort of wash their hands of it. This goes all the way back to IBM in World War II, selling technology to Germany. So I know we keep going back to Germany and the 1930s, but here
E
we go, we are back to Germany.
B
All roads always lead back there, sadly, for the disaster that it became. But that was what IBM said in the 30s. This is what Microsoft said in the 80s and 90s. This is what Google said, as you said, in more recent history, Paris. The technology companies always default to being sort of morally neutral: we make the tools; what people do with them we can't really control. And so one of my, you know, counterpoints was that Anthropic is actually saying, no, we will not do that. We have certain red lines that we can't cross, because we are not confident that the technology can do this and do it well, and the consequences of it not working correctly are disastrous, and are core to democracy, core to human rights. Like, we can't do it and we won't do it. I find that pretty unique in history in terms of all these tech companies. I can't think of another example. Jeff's example of the drug companies is a good one. The pharmaceutical companies, that's pretty close. But that is, I mean, had you
E
asked me a week or two ago what I thought Anthropic would do, I would have said, oh yeah, they're going to cave, keep their military contract, make sure they're not cut off from the supply chain like every other company. I will say it is very startling to me, surprising, that they decided to literally put their money where their mouth is.
C
So let me, let me go devil's advocate again.
B
Okay, thank you.
C
So let me, in this hall of yea-sayers here, contend, as I've done on the show often, that the idea of guardrails is a lie. It's a general-purpose machine. My example always (hello, Gutenberg; and thank you, Benito, for the plug there for the full screen, for those watching, you can go back to the three-shot now) is that the printing press was a general machine. You couldn't have said to Gutenberg, okay, you can do this, but there's this guy who's going to be born called Martin Luther, keep him away from the damn thing. You can't, because it's a general machine. And AI, though I don't believe in AGI and all that, is a general machine to the extent that anybody can make it do anything they want, and the guardrails are a lie. So in a sense, on the one hand, Anthropic could have said, yeah, we have no control over how people use our tools. Exactly the way you put it, Jason. Exactly. We have no control. It can be used any way. But then on the other hand, they say that it's not up to the tool to be controlled, it's up to the people. And so it's up to us to tell the customers: you may not use it for this. And there are plenty of other examples out there, right? I mean, when I recorded the audiobook for Magazine, on sale now, right at the end they were going to have me read a statement saying no company may use this in any form, in any universe, now or in the future, for AI, or we're going to kill you. And I said to the producers, I can't say that; that's not coming out of my mouth. So they're setting a restriction. What are terms of service? They're all restrictions that are put on products. Whether we read them or not, whether we follow them or not, it doesn't matter. But companies all the time say you may not use this in this way.
B
Okay?
C
You may not do these things. So it's not about the tool itself being foolproof, it's about the need to tell the people who use it how they may use it, in your view. And "if you don't like it, don't buy it" should be what's said. I think it's also interesting here. I know we're going back to our first story again because it's so big.
B
It's so big.
C
One thing I don't understand about the timeline of all this is that this was in their contract and in their rules right from the beginning: you may not use it for these two things. So what was it? Was it just the war game that motivated them? Was it just wanting to be hardasses with Anthropic?
E
Yeah, how did it get this bad?
B
Yeah, we are missing some information on how that unfolded, and why and when. I think it's likely to come out in the coming days, weeks, months. But yes, we don't know that, and it will potentially, I think, be helpful to us in understanding the story. For now, I'm going to send it back to our good friend Leo to give us another one of this week's wonderful sponsors.
A
This episode of Intelligent Machines brought to you by Zscaler, the world's largest cloud security platform. Look, we here at IM know the potential rewards of AI, and you probably should know about them too. It's just too great for your company to ignore. But we're also aware, and I hope you are too, of the risks. Not just, I mean, loss of sensitive data, attacks against enterprise-managed AI, but also, frankly, threat actors. Generative AI increases their opportunity, helping them to rapidly create phishing lures, to write malicious code, to automate data exfiltration. There were 1.3 million instances of Social Security numbers leaked to AI applications last year. ChatGPT and Microsoft Copilot alone saw nearly 3.2 million data violations. You don't want that to happen to your company. You gotta rethink your organization's safe use of public and private AI. Just check out what Siva, the director of Security and Infrastructure at Zuora, says about using Zscaler to prevent AI attacks.
B
With Zscaler being inline in a security protection strategy, it helps us monitor all the traffic. So even if a bad actor were to use AI, because we have a tight security framework around our endpoints, it helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements.
A
Thanks, Siva. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com/security. That's zscaler.com/security. Now back to the show.
B
So I want to talk about one more big thing that happened. There's so many big things. Like we could go and talk about a lot of other things, but there
E
is another three hours of this podcast.
B
I know, we could do a lot. I've never seen a week like this, and I feel like I've said that about three or four times so far in 2026. Here we are. I want to talk about a product that Perplexity announced over the past weekend. Perplexity had been flying a little under the radar so far in 2026, but they released a product in, you know, what has become the year of AI agents. The personal AI agent has become the thing. Claude Code and Claude Cowork, the Anthropic products, were a big part of this. Leo has talked a good deal about that and the success he's had in automating some things in ways that he has found incredibly helpful and powerful. And then of course we've had the whole OpenClaw, Clawdbot, Moltbot, Moltbook phenomenon all of its own. And then OpenAI, of course, hired Peter Steinberger, and OpenClaw has become its own foundation, and that has again raised this concept of AI agents to a new level of consciousness. And when OpenAI hired Peter Steinberger, they basically said, we're gonna have Peter come here and create AI agents for everyone. We're gonna make AI agents that are just so much easier to use, because you have to be a bit of a techie to use either OpenClaw or Claude Code and Claude Cowork. And so they're like, we're going to make this a lot easier. And then one week later, Perplexity released exactly the product that they talked about. And obviously they didn't do it in a week; they've been working on it for a couple of months. But clearly the level of acceleration in this space, and the level of being able to use these coding tools to elevate what engineers are capable of and the speed at which you can ship new products, has just taken the velocity of this industry to a level we've never seen before. And Perplexity Computer. So, full disclosure, I had a bit of an exclusive on that. 
I published the story. It was on the top of Techmeme when they released it at the end of last week, and I got a chance to use it a little bit right away. So I can speak to it. But there was something else interesting. There are like three things that this does that other AI agents don't, and we can talk about those. But the thing that happened with this that was really wild was on Twitter. Essentially, the Perplexity team, and I have this on good authority from the Perplexity folks, said to their team, like they do every time they launch a new product, hey, this is our new product; if you want to tweet about it, you're welcome to. And usually you get, like, a handful of people who do it. Well, unforeseen by Perplexity, their team, which had been trying this thing and loving it, went on Twitter and just exploded Twitter with it. And so this thing spread far and wide really quickly and gained a bit of a viral moment. It got helped by one other thing, which is that there were some people who combined Perplexity Computer, which is what they call their AI agent, with Perplexity Finance, their sort of Yahoo Finance competitor. And they basically were like, I used this to make a Bloomberg terminal competitor. I one-shotted it. I used this AI agent to build my own terminal, and I just canceled my $30,000 subscription. And so that gave it a whole nother level of interest and buzz. But this Perplexity Computer thing is really interesting. You know, Jeff, when you were talking about it before, you were like, so is this just an OpenClaw that everybody can use? And I thought that was perfect. My very first headline had a little bit of that in it too. 
It's a great way to think about it.
C
Let me ask you two questions about that. One, I think I asked whether it was OpenClaw but ready for prime time. Is it in some way better, safer? Not just slicker, but better, safer than OpenClaw? Is that possible?
E
And then second, I mean, I think almost anything is. It would be safer than Open Claw.
C
Is it safe enough? Yeah, that's. That's true. That's true.
B
Yeah.
C
Though I just saw that there was a story I don't think I can put in the rundown. That cursor is it. What do they call their browser?
B
Oh, Comet. Comet.
C
Comet. Thank you.
B
Comet.
C
Until about a month ago, I think one calendar invite could corrupt everything you have, which they fixed, but that was an issue. But my other question is this: the one thing Perplexity has always done well is stay on top of PR. They're really good at it. Comet as an example, where they knew everybody was going to come out with these things, and they came out with theirs first. They do these sometimes outrageous things. Do you think they wished they had released Computer before OpenClaw, or did OpenClaw open the door for them, saying, we've got something better?
B
Yeah, I think you're right, Jeff. They like to move fast, and they've pioneered a bunch of the things that OpenAI and Anthropic eventually ended up doing, right, as you're getting at. And so I think they do like being first. In this case, I do think that OpenClaw gave them a chance to ride that buzz a little bit, in a way that maybe AI agents would have otherwise felt a lot nerdier. I mean, they still feel nerdy, but OpenClaw created a lot more curiosity. But it's very hard to use. You have to be very technical. It's very command-line oriented. There are some hosted versions of it you could get that are a little easier. But did you get to play with.
C
With, with computer?
B
I did. So you can only use it if you have a Perplexity Max subscription, which is their $200-a-month, you know, subscription. So I had a version of that I could test it with, and there are just a few things that it does really, really well. But Paris, I can see you're dying to.
E
My issue is that it's a terrible name because Jeff just said, did you get to play with computer? And that made me involuntarily laugh. Like, they might be good at pr, but naming your product Computer, it's not gonna work. How am I gonna tell my mom to download computer?
B
Download computer.
E
Mom, you gotta get on computer.
B
I know, I know. I had the same first reaction, Paris. The funny thing is how quickly I'm like, okay, Perplexity Computer is like PC, right? And so it's basically.
C
It's even worse.
B
So generic. Yeah, it's bad. But also, at the same time, you don't have to remember a whole lot about it. But the product itself is interesting. It does three things really well that are interesting, I think, in these AI agents. So this is where it gets super nerdy. With these AI agents, you have to have an API key, which is essentially where you pay per use, because these things use a lot of what are called tokens. That's AI inference. And I know this is a lot, but basically every time you use one of these AI models, it's expensive. And right now, if you pay for your $20-a-month plan, you're typically somebody who probably doesn't use it a lot; most people don't. But these AI agents use a lot of computing power, if we just put it that way. And so if you're using them, OpenClaw, or even Claude Code, you have to have an API key, because basically you're paying per use. If you use a bunch more, you're going to pay a bunch more. The first thing that Perplexity did to make AI agents easier is they do away with all of that, and that's why they only have it on the expensive plan for now, because they essentially give you a bunch of usage with that plan. Now, if you go over this massive cap, so if you're a really hardcore coder or something, fine, you'll probably still have to pay. But if you stay under that, you're just going to use this like everybody would. That was the first thing. The other thing it does that's really unique and interesting is, Claude Code only uses Anthropic's models, right?
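The pay-per-use token billing being described can be sketched roughly as follows. This is a minimal illustration, assuming made-up per-token prices and model names, not any provider's actual rates:

```python
# Rough sketch of pay-per-use API billing: cost scales with tokens consumed.
# The prices and model names here are hypothetical placeholders.

PRICE_PER_MILLION_TOKENS = {
    # (input_price, output_price) in dollars per million tokens, illustrative only
    "big-reasoning-model": (3.00, 15.00),
    "small-cheap-model": (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call."""
    in_price, out_price = PRICE_PER_MILLION_TOKENS[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A single chat turn is cheap, but an agent looping over many
# long-context tool calls burns tokens fast:
single_chat = request_cost("big-reasoning-model", 2_000, 500)
agent_session = sum(
    request_cost("big-reasoning-model", 50_000, 4_000) for _ in range(30)
)
print(f"one chat turn: ${single_chat:.4f}")
print(f"agent session: ${agent_session:.2f}")
```

The point of the sketch is the shape of the curve, not the numbers: agents re-send large contexts on every step, which is why flat-rate subscriptions cap agent usage while casual chat stays well under the limit.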
E
Yeah, I think that's the actually really notable and interesting thing here is that you can use.
B
You could have like different models.
E
Claude 4.6 for core reasoning, Gemini for deep research, Grok for yelling at someone on the Internet, ChatGPT for sycophancy. You could have it all.
B
And it's pretty good. It routes your queries to the best models. Just like you're saying, it knows which ones are good at which things. And that part is pretty good. That's the second thing. The third thing it does: say you want it to build an app. Say you want it to build the Paris app for scanning, you know, Twitter and giving you story ideas that relate to topics X, Y and Z, and then you also want to share it with, like, one person on your team. Perplexity Computer can do it. It can deploy it to the web, and then you can share that URL. Whereas if you had Claude Code, if you had Codex, that's OpenAI's program, if you had Replit, you know, Cursor, if you had one of those, you have to make the code, and then you have to go deploy the code to a server somewhere. They learned this from Lovable, for those who are familiar with that. That's an AI agent sort of coder where you just make the thing and it deploys it right away on Lovable. You can make your thing in 10 minutes and then you can send a URL. It just made a web app, and you can share it.
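The "routes your queries to the best models" idea can be sketched like this. This is a toy illustration, not Perplexity's actual routing logic; the model names, task categories, and keyword classifier are all hypothetical:

```python
# Hypothetical multi-model router: classify the request, then dispatch it
# to whichever model is configured as strongest for that kind of task.
# All names here are illustrative placeholders.

ROUTES = {
    "coding": "claude-code-model",
    "deep_research": "gemini-research-model",
    "general": "gpt-general-model",
}

CODING_HINTS = ("function", "bug", "refactor", "compile", "deploy")
RESEARCH_HINTS = ("sources", "literature", "in-depth report")

def classify(prompt: str) -> str:
    """Very crude keyword-based task classifier.
    A real router would likely use a small model for this step."""
    text = prompt.lower()
    if any(hint in text for hint in CODING_HINTS):
        return "coding"
    if any(hint in text for hint in RESEARCH_HINTS):
        return "deep_research"
    return "general"

def route(prompt: str) -> str:
    """Return the name of the model that should handle this prompt."""
    return ROUTES[classify(prompt)]

print(route("Fix this bug in my scraper"))             # coding model
print(route("Write an in-depth report with sources"))  # research model
print(route("What's the weather like?"))               # general model
```

The design point is that the user talks to one front door while the dispatch table decides which lab's model does the work, which is what makes a cross-lab agent possible without the user ever picking a model.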
C
And it uses Gemini. Right? Lovable uses Gemini.
B
I'm not positive about that, but I think you're right. I think it does work with Gemini. It's a company based out of Northern Europe, but we like that. Yeah, they are a company that figured out that one piece: oh, if we let people make the thing and deploy it right away, that's a big plus. Well, Perplexity Computer learned that, and it does that as well. So, for example, I had it make an app. I have this app test where I want it to go every morning and scan all of the sources, and I tell it, here are a bunch of sources, scan these and a bunch more like them, and it'll find, like, you know, 20 sources for me. And I had ChatGPT and Claude, you know, write this query for me to make, essentially, a morning news scanning app. Most of these programs break on it. When I do it, it either doesn't do it right or it messes something up. Lovable did the best job of making it and then putting something on the Internet that I could use right away. The only other one that could do that was Perplexity Computer. The very first thing I gave it was this kind of somewhat complex, make me a morning, you know, AI news gatherer. It did it right away and it deployed it, and I could send the link out. And I was like, whoa, okay, so this is more powerful than Lovable.
C
It deployed it as well. You didn't have to go into Terminal, you didn't have to put it on a server. You didn't have to do any of that.
B
Didn't have to do anything.
C
I've been arguing about this with Leo, because Leo is nerdy and he loves nerding out and he wants everybody to be using Terminal. And I'm saying you're not going to scale at that level. You're not going to scale if people have to install things to run them. You want to say: look what I made, world, with a link.
E
I get nervous when I'm in the terminal. Yeah, I mean, I still do it, but it's just... even the barrier of going from Claude Cowork to Claude Code is a lot for the average person.
B
For sure. For sure. So, Perplexity Computer: the name is a little questionable, but the project is really promising for the reasons you just mentioned, Jeff. The ability for an average person to go in, describe what they want, and have it spit out a thing you can just send to anybody as a link, that is really powerful. And as you mentioned, Paris, the fact that it can essentially use best-of-breed models across all of these labs is also a bit of a superpower. So, really interesting. I'm going to send it back to Leo now for our last sponsor of the show, and then we'll come back and talk a little bit more about some tools.
A
This episode of Intelligent Machines is brought to you by OutSystems, the number one AI development platform. The agentic shift is happening; you know that if you listen to the show. We're really moving beyond simple chatbots, and here's the good news: OutSystems is leading the agentic conversation. OutSystems helps businesses build AI agents that can actually do work. It's amazing: things like taking actions, making decisions, and integrating with data rather than just answering questions. OutSystems is solving the talent gap. There really aren't enough AI engineers in the world, but OutSystems empowers the developers your company already has to build at an elite level. It's like a superpower for devs. OutSystems is the secret weapon behind the world's most successful companies, and not just for little apps. These are massive, complex systems: systems that run banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare. And I can give you an example. I can give you several. They helped a top US bank deploy an app that lets their customers open new accounts on any device, delivering 75% faster onboarding times. They even helped a global insurer accelerate the development of a portal and app for their insurance agents, delivering a 360-degree view of customers and enabling those agents to grow policy sales. That's just a small sample of what OutSystems can do. OutSystems combines the speed of AI with the guardrails of low code. It's a marriage made in heaven: the safest and fastest way for an enterprise to go from "we need an AI strategy" to "we have a functioning AI application." Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises are using AI-powered low code to transform. That's O-U-T-S-Y-S-T-E-M-S dot com slash twit to book a demo and see the future of software development. Outsystems.com/twit. We thank them so much for supporting Intelligent Machines. Now back to our intelligent hosts.
B
Thank you, Leo. All right, so now it's time for the picks of the week. Paris, would you like to get us started?
E
I will. And this was an important pick for me to do in the week that Leo's not here. Listeners of the show will know that last week one of my picks was the New York Times Crossplay app, which I'm somehow even more addicted to than I was last week. I have more than 12 games going on.
C
Leo has been complaining mightily in our chat.
E
Leo, because I am trouncing him. I was really worried at first, because I got such a bad rack of tiles when we started playing, but I beat him in the end, and we are rematching. One of the reasons I think I'm beating him is that over the last couple of years I've gotten really into Scrabble strategy theorizing. And there's a great book and online resource called breakingthegame.net that has all the beginning, intermediate and advanced Scrabble and Scrabble tournament strategy. I'm choosing to share this on the show because I shared it last week with some of my other friends, the ones I'm absolutely dominating in Scrabble, and it has not improved their ability to play. So I feel safe sharing it in a place where Leo could hear it. I don't know, check it out if you want to beat your friends more in Scrabble. It gives you some good stratagems to be thinking about.
C
I quote Leo from our WhatsApp.
E
Yeah, we're a WhatsApp family now.
C
It says: "Gah, you're a stone cold killer. I've been sandbagged."
E
I mean, our board right now is carnage. We've really played ourselves into several corners, none of which are particularly good. But I was very worried that I wouldn't beat him in the last game, because on the last turn he was up by like 30 points, but I think I ended up beating him by two in the end. So I don't know, get on the New York Times Crossplay app and use breakingthegame.net to trounce your friends even harder. That's my pick of the week.
B
"Stone cold killer." Very good. All right, Jeff, how about you?
C
All right. I have more than one. I want to mention this story just for the record: News Corp did a big deal, 150 million, with Meta. And Robert Thomson, who I disagree with constantly about all matters of the Internet, said that News Corp is now basically an AI input company, which I found amusing. But that's not my pick. I could do a few different ones here. I could do a paper that's out that says we don't know how social media bans will affect youth, but we're doing it anyway. I'll leave that aside. I could do a nice New York Times feature about Bell Labs and all that has happened there. This is why I wrote an op-ed a couple of years ago begging for the soon-to-be-vacated Bell Labs in Murray Hill, New Jersey to be turned into a museum.
B
Yeah.
C
But instead I'm gonna do Walkman land. So, Paris, are you too young for Walkman?
E
No, I had one once and they also came back. I feel like in the last five, they.
C
They have to a bit.
E
10 years.
C
So if you go to Walkman land... well, I guess we can't show it. Can we show it, Benito? No, we can't. He's working on it. Sorry, I should have warned you. But okay. I knew Paris would do exactly that. And, ooh,
B
why don't you.
C
So this is pages of Walkmen. I always did the plural as Walkmen. Not Walkmans, Walkmen.
E
Walk men is right.
B
Walkman.
C
Good, good. That's correct. I had this one. I had this one. I had this one. Which you did.
E
The Aiwa HS-PS008 is so pretty. Honestly, a lot of these are very pretty. The Philips AQ 6492 is gorgeous.
C
I'm up to 17 pages. I'm trying to see how many pages there are. 20 pages. It goes on and on and on and on.
B
All The Walkman models. 52 pages.
C
Yep.
B
That.
C
How many?
B
That is crazy. 52.
C
Jeez. Jeez. And it was life-changing.
E
Oh, some of my friends have the Sony TCM4500 from the My First Sony range. That's a real popular one among the Brooklyn crowd nowadays.
B
Oh, you mean like today they have
C
one like right now?
E
Today, I know at least two people who have this in their home right now. I'll put it in the chat. I always see it at people's apartments and take a photo of it, and then never look it up.
C
So now, I know this was a huge change, right? People could take their music anywhere. The bigger change, of course, came before that. I went to Greenbrook Electronics today; I can't wait to take Paris and Leo there, because it's this weird kind of dusty museum store. I had to buy a transistor for a class I'm teaching Friday. I'll put this in front of the camera.
B
Wow, look at that little thing.
E
Oh,
C
This, of course, replaced the tube.
B
The tube.
C
As Benito pointed out earlier, he still uses tubes because he's an audio freak. Anyway, this is what changed everything, because this is what enabled the portable radio. This is what made it so you could take music with you anywhere.
B
The transistor.
C
Right.
B
Amazing.
C
But then you were stuck with radio, stuck with DJs, stuck with all of that. The Walkman gave you the first control. So I think it's important. That's that. I'll mention one other thing: according to Edison Research, podcasts now lead AM/FM in spoken-word listening.
B
Really? For the first time? It's crossed a Rubicon, I'm kind of thinking.
C
I'm surprised that didn't happen before, but a lot of our grandparents are still listening to AM radio.
B
In factories and stuff like that, they still leave it on all day. There are lots of those. And even restaurants: the back kitchens and the back offices and things like that.
E
A lot of the Brooklyn girlies also have AM radios. The coolest one is one of those under-cabinet AM radio setups that also has a cassette player in it. One of my friends who has one of those is moving to Chicago, and I'm hoping I get it in the move.
B
Oh, okay. You know, there's also this kind of comeback of the iPod, because it has no notifications. It's so hot right now. Yes.
E
It's, like, actually expensive to get an iPod.
C
So that's not just the New York Times making up a trend.
B
No, as a matter of fact, I saw Tony Fadell, who was one of the creators, or was on the team, tweeting out. He's like, look, I don't know if Apple's going to start making it again, but it's official: this is getting pretty big. And I think a lot of it is the sort of anti-screen-time device, you know, where there are no notifications.
C
Yeah, that's when I roll my eyes.
B
Yeah, I do a little bit. Like, could I imagine going there? Probably not. But anyway, it's also the flip phone. I know some people that have done the flip phone thing as well.
E
One of my friends has a flip phone and I ridicule him every single time.
C
As you should.
B
It would be like going back to the tubes when you have the transistor, you know. It just doesn't make sense.
E
He also got one and he, like, tries to text from it. And we're in group chats and I'm like, Rick, you can't be doing this. It's rude.
C
The iPod thing is so weird because, like, for the longest time we were just like, just put this thing in my phone, please. Put this in my phone already, please. And now we're just separating it again. That's funny.
B
Now we're like, I want the iPod back. All right, well, for my pick: mine is something that I feel should be so obvious on its face, and really should be a feature and not a company, and yet I almost can't live without it on a daily basis. It's this app, Wispr Flow, which I use on Mac and on my phone as well. You just hold down one button and dictate to it, and it puts what you said into clear text; it'll correct it and turn it into complete sentences. And I feel like Alexa, Siri, Google Home, Google Assistant should have done this really well a long time ago. It's funny, for all of the challenges Apple has in AI, if it would just buy Wispr Flow and make it so that when you talk to Siri it actually works every time, because this thing works essentially every time, or 90% of the time, the perception of Siri would go up so dramatically. It would be incredible. And that's what makes me think this is really something. There are a couple of other ones like this, a couple of competitors as well, but Wispr Flow is probably the best known, and I find it one of the easiest to use, especially on the computer, because all you have to do is hit the function key on a Mac, it pops up, and you can use it for anything. There are two things that I love about it. One is that it tracks how fast you go. Most people, if you're really fast, type like 75 words a minute; the average person is like 45 to 50 words a minute, and I think I'm about in that range. When you speak, you're up to about 125 to 150 words per minute.
So I've noticed there are times when I can't do it, where I'm in a cafe or something and it's a little weird. But the whisper mode is the answer, because I can just whisper into Flow and it actually works, which is pretty cool. The other aspect that I really enjoy is that it gamifies it a little bit: you can get the stats on how fast you're going. I can be in bed when everybody's asleep and whisper notes into it. And then the last thing I love about it: I do my best thinking when I walk. I get my best ideas to write when I walk, and I always wished I could type then. Sometimes I would type notes on my phone, in Apple Notes and so on. But with this, I have actually started writing some things while I'm walking, and it's been super handy. So that's mine. It's like the AI feature that really shouldn't be an AI feature; I feel like every one of these platforms should just have it.
E
But AI is so good at it. I mean, I've talked about it on the show: I have a thing on my computer, MacWhisper, that runs WhisperKit transcriptions, and that's how I transcribe a lot of interviews. And it's phenomenal. It's local. I love it.
B
So good. So good. So if you're still typing most of your stuff out there, folks, you could use this tool, or use another one.
C
No, no, I gotta type. I can't. I can't think without that.
B
You gotta use your fingers.
C
I gotta use my fingers. That's part of my book.
B
Yeah.
C
Hot Type. It's a typographical autobiography where I talk about how the keyboard changed the way I thought, and then the computer changed the way I thought. But unless my fingers are poised over the home keys, I can't...
E
Think.
C
No.
B
Interesting. Interesting.
C
Be able to do it.
B
So it's an interesting world. It is an interesting world. Yeah, those are all really fun picks. Well, Paris and Jeff, thank you for letting me be here and do this with you. What a pleasure.
C
Really, really good job, Foley.
E
Thanks so much for steering the ship. This was phenomenal.
C
Phenomenal. It was great.
E
And we're finishing before a large cane emerges from offstage to drag you out of the podcast studio booth.
B
That's right. There's the cane. All right, well, appreciate it. Thank you, everybody, for tuning in to Intelligent Machines. Leo Laporte will be returning, you can count on that, and thank you for a great week. This show is back every week; Paris and Jeff will be here again, and Leo Laporte will be back. And you can count on even more news. However big the news was this week, it's going to be even bigger, I'm sure. It never stops.
E
And where can people go to follow your work, Jason?
D
Yeah.
B
Thank you. So, thedeepview.com. You can find me there; subscribe. Thedeepview.com is how you can get our newsletter. Every day we send the newsletter with the top stories in AI; we pick three stories and try to unpack them. And you can also find me, if you want my updates in real time, on Twitter, God help us all, at x.com/JasonHeiner. Thank you again, and have a great rest of the week.
A
Hey, everybody, Leo Laporte here, and I'm gonna bug you one more time to join Club TWiT. If you're not already a member, I want to encourage you to support what we do here at TWiT. You know, 25% of our operating costs come from membership in the club. That's a huge portion, and it's growing all the time. That means we can do more; we can have more fun. You get a lot of benefits: ad-free versions of all the shows, access to the Club TWiT Discord, and special programming like the keynotes from Apple and Google and Microsoft and others that we don't otherwise stream in public. Please join the club if you haven't done it yet. We'd love to have you. Find out more at twit.tv/clubtwit. And thank you so much.
B
I'm not a human being. Not into this animal scene. This episode is brought to you by Nespresso. Introducing Vertuo Up, the latest in a long line of innovation from Nespresso. It's innovation you can touch, sense and taste in every single cup. With a three-second start, easy-open lever and dedicated brew-over-ice button, it's even easier to enjoy your coffee your way. See for yourself: shop Vertuo Up exclusively at nespresso.com.
D
If you're the purchasing manager at a manufacturing plant, you know having a trusted partner makes all the difference. That's why, hands down, you count on Grainger for auto-reordering. With on-time restocks, your team will have the cut-resistant gloves they need at the start of their shift, and you can end your day knowing they've got safety well in hand. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger: for the ones who get it done.
Host: Jason Heiner (filling in for Leo Laporte)
Co-hosts: Paris Martineau, Jeff Jarvis
Special Guest: Dan Patterson (Blackbird AI)
Date: March 4, 2026
Main Theme:
This consequential episode of Intelligent Machines dissects a week jam-packed with transformative news in the AI world: the Pentagon's dramatic standoff with Anthropic, Sam Altman and OpenAI's opportunistic maneuvering, the meteoric surge of the Claude chatbot, Amazon data centers as wartime targets, and the debut of Perplexity's powerful new AI agent. The episode explores both the ethical dilemmas and the technical innovations surrounding advanced AI as it permeates society and geopolitics.
IM 860 is a landmark episode, capturing how AI innovation is increasingly synonymous with political power, business ethics, and citizen activism. The team’s layered discussion, from Blackbird’s defense against weaponized narratives to the public surge behind Claude, exposes how quickly public sentiment and technology can pivot when ethical lines are drawn. Add in Perplexity’s clear-eyed focus on making agentic AI mainstream and you have an episode both timely and prescient.
Listen if:
You want to understand why AI is no longer just about tools but about power, ethics, and the shape of tomorrow’s society.