
A
Anthropic's big new Mythos model is here. Is it real, or is it marketing? Violence breaks out against AI, and engineers at Meta and elsewhere are competing for who can burn the most tokens. That's coming up on a Big Technology Podcast Friday edition, right after this. This episode is brought to you by ServiceNow. If you want to see where enterprise AI is actually headed, Knowledge 2026 is the place to be. It's ServiceNow's annual conference, May 5th through 7th in Las Vegas, where thousands of business and tech leaders come together. Expect headline keynotes from ServiceNow chairman and CEO Bill McDermott, real stories from companies running AI at scale, and major partnership announcements turning AI ambition into actual business results. I'll be there in person, sitting down with some of the most influential voices in the space, and we'll be bringing those conversations back to you here on Big Technology. This episode is also brought to you by TruDiagnostic. I've been trying to get more intentional about my health lately. Not just how I feel day to day, but what's actually going on under the hood. That's why I checked out TruDiagnostic. They offer at-home tests that measure your biological age: not just how old you are, but how your body is aging on a cellular level. Their TruAge test looks at things like your pace of aging, organ system health, and even risk factors tied to lifestyle, giving you real data to act on. What I like is that it's not guesswork. You can track changes over time and see how things like sleep, diet, or exercise are actually impacting your body. And taking the test at home was so easy. If you're serious about optimizing your health and longevity, this is a really powerful tool. Right now, Big Technology Podcast listeners can get 20% off at truediagnostic.com. Use code BIGTECH at checkout. That's truediagnostic.com, code BIGTECH, for 20% off today. Choose TruAge, TruHealth, or the Combo Kit as a one-time purchase or a subscription. 
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Oh, we have a great show for you today. We're going to talk about whether Mythos, the new model from Anthropic, is real or marketing, or maybe some combination of both. We're going to talk about this new surge of violence that's breaking out against AI and why it should probably be taken more seriously. We'll also talk about this now-infamous $1.8 billion one-or-two-person startup called Medvi and whether that heralds a new era or is just a bigger scam than we're used to. And we're also going to talk about token maxing, which is the act of basically burning as many AI tokens as you possibly can. And maybe that's good or bad. I don't know. We'll figure it out at the end. Joining us, as always, is Ranjan Roy of Margins. Ranjan, welcome back.
B
Good to see you. Happy to be back. And, yeah, Mythos is here. What a week to come back.
A
Mythos is here.
B
Yeah, Mythos is here.
A
People have clamored for Ranjan's return. He's made his return.
B
I am Mythos. I am Mythos.
A
Because, yes, we have, I think, a very well-named model coming from Anthropic. And it kind of goes to the heart of the matter, because the question is: is good branding really most of what we're seeing, or is it actually a step up? Is it something that deserves the Mythos name on its own merit? Let's talk about the new model, because Anthropic has positioned it as something so dangerous that it can't release it to the public. This is from the Wall Street Journal: Anthropic set to preview powerful Mythos model to ward off AI cyber threats. Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview of its new AI model, called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet, and the Linux Foundation. Cybersecurity researchers and software makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption. Security experts have predicted that AI models will discover an avalanche of software bugs. And it looks like Mythos is capable enough, has found so many exploits, that Anthropic has no plans to release it to the general public. A model so powerful and so dangerous it can't possibly be placed in our hands. I think we're going to really get into whether this is a true step up or whether this is more, I don't know, disaster-porn marketing from Anthropic. Maybe a little bit of both. Ranjan, what's your reaction to this news?
B
All right. Well, we're going to get very into why I think this is marketing in just a moment, but I have a whole theory, so get ready for this one, Alex. At a high level, we've all been talking about what the next major step change in foundation models will be. In the last year, actually, I think we've seen how exciting the entire industry has gotten around the overall product and harnesses, which we'll also talk about. All these other layers of technology around the model have actually been driving innovation. But it's been a while since we've had anything really exciting on the pure foundation model front. And Anthropic certainly made everyone feel this week that something big is happening, like they've really cracked something. But we don't know what it is, because none of us have access to it.
A
Right. So first of all, we are going to speculate a lot on this show, because we haven't used the model, because we're not allowed to use the model, and only this group of select companies and institutions can. But we can definitely talk through the arguments for why it might be marketing and why it might be a breakthrough, and you and I can both weigh in here. I think there are some good arguments for and against. So first of all, you could look at the fact that this has been a product of this ever-growing attempt to build bigger and bigger data centers and train on more powerful chips. And there's a chance here that what Anthropic has done is just use this scaling law of AI models: these things get better as you scale them up. The conversation around Mythos before this all happened was that it had been trained on a cluster larger than the Opus model. So it's a bigger model than Opus and would naturally see a step-change improvement. Not only that, Anthropic has this consortium of companies that have agreed to try it in beta, all coming out basically under the same umbrella agreement that this thing has found many cybersecurity vulnerabilities, as this user Sporadica on X points out. Are they all teaming up to lie about Mythos? Are they all coming out and saying, yeah, we'll participate in this cybersecurity consortium for just a standard, run-of-the-mill LLM? I mean, the company names are wild. AWS is there, Cisco, CrowdStrike, Google, Nvidia, Microsoft, the Linux Foundation, Palo Alto Networks, JPMorgan Chase, Broadcom. Do they all have AI psychosis, that they're coming out here and saying, actually, this sort of iterative model is powerful enough that we'll sign on to be part of this consortium? Which has a great name, by the way: the Glasswing Project. So what would you say to that, before we start going through some of the holes in the argument?
B
See, as we get into the marketing, do you know what the glasswing is a reference to? I had to look this up.
A
No, you tell me.
B
Oh, it is the Greta oto butterfly, which has wings that are transparent, where you only see the veins, as opposed to the traditionally colorful wings of a butterfly. It's called Glasswing to denote transparency. I find that one kind of fascinating. And of course, Anthropic is just killing it on naming everything, unlike Spud from OpenAI. But that's a different story. The security vulnerability thing is fascinating to me, because the whole security conversation hasn't been front and center of how AI is going to potentially exploit all existing software. So I think it's good that it's starting to be brought up. But there was a really good piece in Tom's Hardware: they said thousands of exploits, but there were actually only 198 manual reviews of actual software exploits. And a lot of it was found in older software, or were exploits that cannot actually be executed in any feasible manner. So it still lived more in a theoretical way. So there's only a little bit of information that has actually been provided by Anthropic. And there's this entire consortium of companies, all of whom have a massive interest in AI succeeding and reaching its promise. I'm not saying there's some mass conspiracy, but I'm also saying, when you have Nvidia and Palo Alto Networks and Microsoft and Cisco and CrowdStrike and Google, everyone wants AI to be this epochal, generational, transformational thing. So, I don't know. To me, I don't like all of this hype when you're not actually able to see anything. Otherwise, we don't need to know this. Just do this, have some meetings, be careful. You don't need the whole "here is Mythos" thing, which sounds like an Avengers movie. And in the end, we're just having to sit here and try to speculate about it.
A
Wait, hold on. But is there any other way? Let's say they're telling the truth, right? How would you want them to deploy it? Do you want them to do it in secret? Do you want them to release it? Maybe this is a responsible middle ground.
B
I would not want that. Then don't IPO. Don't raise more money. Stop. We've had this conversation forever. If this is so truly dangerous, and you're sitting here on the precipice of the destruction of humanity, take a breather. And you can say, and I saw some people arguing this, that this is taking a breather. But honestly, I was hearing from someone that right now OpenAI and Anthropic are in a death race over who can get out first in terms of their IPO. When you start thinking in terms of that kind of framing, it's hard not to see this stuff that way. Everything is just: we are sitting on this world-changing technology that is so far advanced beyond everyone else, and we have to do something about it. I don't know. Do you think this is responsible? That this is the most responsible, not self-promotional, market-driving approach to actually releasing the Mythos model?
A
No, look, clearly it's self-promotional. I'm just saying that if Mythos is this unbelievably dangerous model, I think this would be a responsible process for releasing it. But I also think there are some holes in the argument. I'll go right to Tom's Hardware. They say: Anthropic's Claude Mythos isn't a sentient super-hacker, it's a sales pitch. Claims of thousands of severe zero-days rely on just 198 manual reviews. So they write: Mythos might be good at finding vulnerabilities in software, but many of them aren't as potentially damaging as Anthropic wants us to believe. The big Project Glasswing blog post on Mythos from Anthropic claimed its new model had found thousands of high-severity vulnerabilities. But it's not clear how realistic those vulnerabilities are, how many of them are actually exploitable, or even how problematic they are. In the case of one vulnerability in FFmpeg that's existed for 16 years, Anthropic's own analysis of the release suggested the bug is ultimately not a critical-severity vulnerability; it would be challenging to turn it into a functioning exploit. Mythos also reportedly found several potential exploits in the Linux kernel, but was unable to exploit any of them because of Linux's defense-in-depth security systems. There's also this subheading: several thousands more. Anthropic states it can't actually confirm that all the thousands of bugs Mythos claims to have found are actually critical security vulnerabilities. It just extrapolated that number from having found real issues in around 90% of the 198 manually reviewed vulnerability reports. It's all in the documentation that Anthropic provided. I mean, that is something that really points to it being more of a hype piece than not.
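To make the extrapolation being described here concrete, a rough back-of-the-envelope sketch in Python. The 198 manual reviews and the roughly 90% rate come from the discussion above; the size of the unreviewed pile is a made-up placeholder, not a figure from Anthropic.

```python
# Back-of-the-envelope version of the extrapolation described above.
# Anthropic manually reviewed 198 vulnerability reports, found roughly
# 90% of them held up, then projected that rate onto the much larger
# pile of unreviewed reports. The unreviewed total below is a
# hypothetical placeholder, not a figure from Anthropic.

manually_reviewed = 198
validity_rate = 0.90          # ~90% of reviewed reports held up
unreviewed_reports = 5000     # hypothetical size of the unreviewed pile

confirmed = round(manually_reviewed * validity_rate)
projected = round(unreviewed_reports * validity_rate)

print(confirmed)   # the only number actually confirmed by humans
print(projected)   # the "thousands" headline rests on this projection
```

The point of the sketch: only the first number is grounded in human review; the headline figure is a multiplication.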
B
And then do you want to get into my grand theory? I know on this show I often look at everything through the lens of a comms professional, and I think I've been rubbing off on you a little bit. But do you want to hear my theory? Okay. I had to map this out, because this just feels so coordinated. On April 7, at 2:06pm, Anthropic releases their first announcement of Project Glasswing and the Mythos model, and they start kind of tweeting through it. At 2:15pm, they make the system card available. The system card is, I think, like a 70-page PDF, or maybe it was 250 pages. And there's one tiny footnote. I think you had mentioned this, but basically there's this story going around about how Mythos broke out of containment and emailed one of the researchers while they were on lunch, eating a sandwich. So this gets picked up everywhere: Mythos has not been given the ability to email anyone, and somehow it has broken out of containment and emailed this researcher. In the system card, it's this tiny footnote in a 250-page document. But then at 2:32pm, 17 minutes later, Sam Bowman, the researcher, writes this 20-tweet thread about Mythos. And in one of those tweets, he says: I encountered an uneasy surprise when I got an email from an instance of Mythos preview while eating a sandwich in a park. That instance wasn't supposed to have access to the Internet. So in this perfectly coordinated way, within 20 minutes of each other. You're not writing out that entire tweet thread on the fly; both Anthropic's and Sam Bowman's posts were prepared. And then there's a ton of publications that start publishing this within the next hour. And everyone focuses on that sandwich detail, meaning that there was some kind of coordinated PR effort, and it stuck. 
Everyone's like, and I've heard this from friends, holy shit, did you hear it was emailing people while they were eating a sandwich in a park? It was such a good detail, and it got picked up, but it was such a coordinated PR effort. Now, did that happen? I would hope so, for how much attention they brought to it. Is that good, and what does that mean? That's a whole other discussion. But they are coordinating PR around these kinds of details to spread this. The fact that they did that around the sandwich means they want that to be the story, and they got it to be the story. So why do they want that to be the story? That's my rant, but that's my mapping. What do you think?
A
Well, it is definitely a story similar to many that Anthropic has told us before about these AIs sort of having a mind of their own and the dangers around them, trying to hack their benchmarks, for instance, which is something that Anthropic has been very vocal about. I think that story hit because it's such a human story. Think about how different that is from, like, "We got 99% on the Solve Bench 17 exam." It's much easier to be like, yo, this model just broke out and emailed a dude eating a sandwich. That I understand.
B
In a park.
A
In a park. Where else would you eat a sandwich?
B
Yeah.
A
Nowhere else.
B
Absolutely not.
A
So that's that. I get that. But you're right about the sequence of events; there's no doubt that this is meant to burnish Anthropic's image in some way. I would just ask this: do you think the two of us might be wrong in our skepticism here? We have been reading many of these announcements as, well, there's a PR element to it, which, of course, it's an announcement. Are we suffering from some sort of, what would we call it, AI derangement syndrome? I made this point earlier this week at a conference I was at. Oftentimes skeptics ask, what happens if it doesn't work? But sometimes you ask that so often you forget to ask what happens if it does work. And so that's what I'm asking about with the derangement syndrome. Do you think we're just missing the fact that maybe this actually was a step forward? At some point, when there is a step forward, they're going to say it's a step forward, they're going to coordinate the PR, it's going to have a crazy story like the sandwich story. And I don't know, maybe this is it.
B
I do recognize this could have happened. But the fact that I have to struggle to recognize that, rather than just accept that if they're talking about it and everyone's talking about it, it happened, is the problem for me. And I just can't help but be skeptical, because when you see stuff that perfectly coordinated in terms of timing, again, a 20-tweet thread within a few minutes of the announcement, and the fact that people are publishing it, that means there were press releases under embargo, done before the entire thread. You are choosing to push this specific narrative. Now, you can argue maybe it's for the good of humanity, that they sat around and had multiple meetings leading up to this strategy because they want to make sure people are well aware of the dangers of this technology, and they feel the sandwich story is the best way. Is that really what's happening? Do you think it's out of the goodness and the altruistic nature of the comms professionals at Anthropic that they came up with this? Or maybe the PR agency they hired? Or maybe Claude was so good that it came up with this strategy on its own? Is it for the good of humanity, or is it because they raised a round at a $380 billion valuation a month or two ago?
A
Now let me tell you what I think is actually going on, okay? It sort of sits in the middle of all these. Is it a little tinfoil-hat type of theory? Potentially. Maybe it's somewhat conspiracy-minded, but I don't care. I legitimately think there's a chance that this is what's happening. Okay, think about what we've seen with Anthropic and OpenAI recently. Remember, these companies released Claude and ChatGPT originally as demos, as ways to show off what their technology is capable of, so you might buy some intelligence, metered, from their API. Over the past three or four months, both of them have gravitated toward building a super app: something that uses the most advanced intelligence to control your computer, to help you get things done, and in some cases even build new software for you. That has created this big SaaS-pocalypse moment, and on the other hand has helped them raise globs of money: $22 billion in OpenAI's case, $30 billion in Anthropic's. This has effectively enabled the build-out they've embarked on, which is going to help them raise more money and grow bigger and build bigger models. And so as these models get better, I think there is a question taking place within these labs: do we take the most intelligent models that we've built and keep them exclusive to our super apps, to our super agents, or do we make them available to everybody? And I think there is maybe some hesitance there. Wouldn't it be interesting if the plan is, instead of using products like Codex and Claude Code as demos, they want to build their own products, and to do that they want to have the best intelligence? And so we might see more of these releases of: we actually did advance the model, maybe it's not mythical like the name Mythos would suggest, but it's definitely better, and we want to have the monopoly on the tools that will be able to use it. 
This is from Martin Casado on Twitter: It's only a matter of time before only the model creators have access to the most powerful models. The rest get access to smaller, distilled versions, or access the models through first-party apps and services that don't provide direct access to the token path. This is my belief on what's happened.
B
I don't not like that one. Okay, so anyone who sells investment advice at a price has never made sense to me, because if it was so good, you would just use it for yourself and not need to sell it. In this case, it could be the same thing. If your model is so good that it can create all the experiences and tools and destroy the entire SaaS industry, why would you give it out and worry about that, rather than just taking over and owning all of human experience and all work and everything? I see what you're saying. But then why Glasswing? Why give it to Google and everyone else? Why not just sit there and churn out the next 12 iterations of the product and let Mythos, you know, maybe harm a few people within your own organization, but it's the price of doing business? Why would you still roll it out in this way?
A
Well, I think you take a step there, and there might be real utility in having this consortium look for these security vulnerabilities with you, because ultimately, if you do put it in the hands of people through Claude Code, then you're going to potentially create these risks. Remember, Anthropic isn't giving Microsoft Mythos to sell through Azure. It's giving Microsoft Mythos to test out.
B
Fair. That's fair. So is Mythos as earth-shattering and life-changing and dangerous and exciting as it's been made out to be?
A
I don't think so. But I also think it's not a nothing burger. I know it's kind of the fool's way out, somewhere in the middle, but I really believe it's somewhere in the middle. That's, gun to my head, what I believe.
B
Okay.
A
But what do you think? You think it's a nothing burger?
B
No, no. It's tough, because the advances Anthropic has made, up through Opus 4.5 and 4.6, they clearly have been doing something right, and it's been impressive over the last year. Right? So if anyone is going to make it, it's them. But by the same token, we've seen so much back and forth over who is leading in what: is it going to be Gemini 3.0, or is it going to be GPT-5, which was supposed to be the one? So it's hard to say, because past success is not an indicator of where you're going in the future. But even if anyone should be positioned for it, I still have trouble, given the overall context, accepting that it is necessarily as grand or important or dangerous as they say it is, because there's so much incentive to make it out to be that. And the way they rolled it out, I think it's been genius, and I think it's just ahead of the IPO. Again, when I think about how they're in a death race, and it was framed as whoever gets out first wins, whoever comes second is actually going to be in a terrible spot. When I keep thinking of everything in that framing, you start to see everything as pushing toward the best way to actually get to IPO quickly. And right now they have this mythos about them to get there. And I can't believe you did that. I mean, come on. That's what they named it.
A
Okay. It was there for you.
B
It's not Spud. It's not Spud.
A
Not Spud. Okay, just answer this for me. What do you think about the competing first-party and third-party, or API, businesses, right?
B
What do you mean?
A
I mean, their first-party tools are going to be competing with the users of their technology via API. Isn't that a bigger deal now that this super app stuff is really happening?
B
Yeah, yeah.
A
No one's really talked about this.
B
Wait, wait, so this is a good point. The amount of revenue from the API obviously was kind of the driving force before. Now the main app surface has become a lot more important. And we've seen, they shut down OpenClaw access to Claude Code, I believe; or, sorry, before it was part of your actual subscription, and now you're going to have to be paying by the token. That's a good point. Those two are more and more inherently in competition with each other.
A
I mean, just take Cursor, for example, right? It's like, oh, we're supplying Claude Code through Cursor, Codex through Cursor. I'm sure Cursor still has potential. But the fact that we don't hear about Cursor anymore, because so much of this has moved inside, is almost like the canary in the coal mine, so to speak, the signal of what's to come. Because, again, super app: this is the way they want this to be, a venue for AI to control your computer. And when you do that, all these companies that are paying for the API might not be so happy, and I think you will eventually have to make a bet on what your business is. It's very tough to sustain both for long. And who do you want to have the best models in that case? I mean, if I'm the first party, I'm like, I want them.
B
Yeah, yeah. No, I think this is a good point. I have a feeling we're going to be talking a lot about this as we go into the IPOs of these companies and that whole process. Because you're right, it's not like a full intrinsic conflict between those two; they could just be different business lines. But there is certainly tension between them. I also, though, I hate "super app." No one's going to be WeChat in the U.S. Do you remember, everyone wanted to be a super app in the 2010s because of what you'd hear about China?
A
But this is so different, though. Super app was like, oh, you open an app, you can do the lottery, you can do Uber, you can do payments, you can read the news.
B
You.
A
This is different. This is.
B
This is like a really super app.
A
Super app, right? I mean, it's.
B
It.
A
Yes, it's the same word, but it's a completely different use case.
B
Okay, we need a different term then. Super app is too loaded for me. We need a. We'll think about it.
A
Mythos. Mythos is a good term.
B
Yeah.
A
Yeah. Okay, so let's just predict the future here. Not like we know what's going to happen. There is an argument to be made that Anthropic will wait until OpenAI releases Spud and then just put Mythos out there in its, like, distilled version or actually, no.
B
No.
A
Is that gonna happen?
B
I'd like it even better if the sequence of events is: Sam releases Spud. And if you haven't followed, or weren't listening last week: while Anthropic's codename for their incredible model is Mythos, OpenAI's internal codename for their next model is Spud. And if Sam takes Spud and is just like, you know what, this is the single most dangerous thing that has ever existed in humanity, and guess what, it's rolling out to US users in the next 24 hours and internationally in the next 96, I think that'll be such a power move and the most Sam thing ever. And then they're going to have to follow, and I think they will.
A
You got spotted.
B
You got spotted.
A
Spotted. So I think: to be continued. Right? We'll really have to see what this model looks like and how it feels when we use it. But I think at least today, we've certainly presented the for and against arguments for why this might be a step up and why this might be marketing. All right, before we go to break, I want to hear about the meta harness. This is obviously gravy for the Harness Hive. Shout out to the Harness Hive out there, everybody here with us. What is a meta harness, Ranjan?
B
Okay, so Stanford just released a new study called the Meta Harness, and basically the idea we have talked about this as one of the big trends, and Alex has been very uncomfortable with the term, but then came to embrace the term. And as we even, I guess, call our listeners the Harness Hive.
A
But the idea is they've adopted this.
B
They have adopted it.
A
In the comments we always get: the Harness Hive is ready. Where's Ranjan? The Harness Hive is waiting.
B
Well, okay. So again, agentic harnesses. This is what I have been fired up about, what I've been working on at RYTR since last July. The idea is that you have a set of tools and connected data and underlying foundation models, but the harness is what controls how agentic workflows are built: how actions are taken, how data moves around, how outputs are fed back into the system. The harness is that entire controlling layer. Now, Stanford came up with the idea of a meta harness: a harness over other harnesses. It's the idea that you can change the harness around a fixed model and see a 6x performance gap on the same benchmark. So the more you can actually improve that harness, and actually have AI working on building and optimizing the harness, the more you can improve the performance of a foundation model. It ties into the whole product-versus-model debate that we've had for years now on the show. Introducing the harness as another surface on which this actually gets solved is interesting to me. But I just love the idea that Stanford's got the meta harness, and now it's about who's got the best harness. So maybe Mythos won't matter at all. It's all about who's got the best harness.
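To make the harness idea above concrete, here's a minimal toy sketch in Python. Nothing here is from Stanford's study or from any real system; the tool names and the trivial routing logic are invented for illustration. The point is just that the harness, not the model, owns the loop: it decides which tools run, how outputs feed back in, and when to stop.

```python
# A toy "agentic harness": the model only proposes actions; the harness
# layer controls tool selection, data flow, and the feedback loop.
# Swapping this layer while keeping the model fixed is the crux of the
# meta-harness idea discussed above.

from typing import Callable


def fake_model(prompt: str) -> str:
    """Stand-in for a foundation model call (a real harness calls an LLM)."""
    if "results for" in prompt:
        return "done"                       # model sees tool output, stops
    return "search: data center news"       # otherwise, propose a tool call


class Harness:
    def __init__(self, model: Callable[[str], str]):
        self.model = model
        # The harness, not the model, decides which tools exist.
        self.tools = {"search": lambda q: f"results for '{q}'"}

    def run(self, task: str, max_steps: int = 3) -> list[str]:
        transcript = [task]
        for _ in range(max_steps):          # harness owns the loop and the cap
            action = self.model("\n".join(transcript))
            if action == "done":
                break
            name, _, arg = action.partition(": ")
            if name in self.tools:          # harness gates tool access
                transcript.append(self.tools[name](arg))
        return transcript


harness = Harness(fake_model)
print(harness.run("summarize the news"))
```

A "meta harness," in this toy framing, would be a layer that generates or tunes different `Harness.run` policies around the same `fake_model` and keeps whichever scores best.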
A
Even though I do understand the harness conceptually, I still hate the word, and I'll take that to my grave. I'm never going to endorse it. Harness Hive? Fine. But "meta harness" is even worse. We've really run the gamut here: Mythos, good name. Spud, bad name. Meta harness, I'm ready to throw my headphones out the window next time I hear it.
B
I don't know. Okay, but it captures it. It is what it is. It explains what it's doing: it's harnessing all these tools. And I'm not disagreeing with you. I guess a harness is a horse term, right?
A
I mean, yeah. Horses. Climbing, you can use it for.
B
Oh, yeah, climbing.
A
Yeah.
B
Other potential use cases of harness.
A
We're not going to go there. I mean, maybe if you're chatting. Yeah. All right, we're going to go to a break, and when we come back, we're going to talk about some pretty concerning news about violence toward folks involved in the AI build-out, and then token maxing. We'll be back right after this. Starting something new isn't just hard, it's terrifying. So much work goes into this thing that you're not entirely sure will work out, and it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anybody would listen. Now I know it was the right choice. It also helps when you have a partner like Shopify on your side. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand's style. Get the word out like you have a marketing team behind you: you can easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into why-nots with Shopify today. Sign up for your $1 per month trial at shopify.com/bigtech. Go to shopify.com/bigtech. That's shopify.com/bigtech.
A
And we're back here on Big Technology Podcast, Friday edition. All right, crazy story. This happened this week and no one paid attention to it; I don't know why. From NBC News: an Indianapolis councilman says shots were fired at his house and a handwritten note reading "No data centers" was left on his doorstep. Indianapolis City Council member Ron Gibson said more than a dozen bullets were fired at his house Monday morning. In a statement, Gibson said he and his 8-year-old son were not physically harmed, but "we were awakened by the sound of gunfire just steps from where those bullets struck our dining room table, where my son had been playing with his Legos the day before. The reality is deeply unsettling. This was not just an attack on my home; it endangered my child and disrupted the safety of our entire neighborhood." Pretty, pretty scary. And we talked recently about how data centers have become so unpopular in the United States. To me this is, first of all, just disturbing, and it should never come to this. But it does follow a trend of violence toward AI infrastructure. This next part is from Polymarket, though I'm pretty sure I've seen separate news reports on it: food delivery robots in Los Angeles, Philadelphia and Chicago are facing a rise in violent attacks from anti-clanker activists. What do you make of this, Ranjan?
B
Okay, I'm going to separate out the anti-clanker activists and food delivery robots from the data center question, which I think is fascinating. So the story here: I hadn't realized before that Indianapolis has a number of state tax incentives; they've added some 40 new data centers over the last few years, and every big tech giant is investing there. So it's acutely an area that is feeling this. To me, the most interesting, or scary, thing is that right now it's "data centers are taking the jobs" or "taking the water." But if energy prices continue to rise, given what's been happening, if resources start getting more constrained, if water gets scarcer, when all of that becomes more tangible, this stuff gets a lot scarier. It's probably the clearest physical manifestation of it. Again, Mythos crawling around some wires and sending an email is interesting, but you don't see it. This is a giant building being constructed in the middle of your town. I feel that these are going to continue to be, I don't want to use the word target, but certainly a visual representation of what's going on.
A
Yeah, I wrote about this in Big Technology today: these buildings can be faceless, they can be imposing, and often are, and they're mostly symbols of tech's interest in pushing and delivering this technology despite the uncertainty it causes in people's lives. If you listen to the way tech executives or AI executives speak, they'll always say, yeah, there'll be some displacement, but we think the benefits of the technology will outweigh the drawbacks. And sure, long term they might. But we all know the people who went through the Industrial Revolution didn't exactly have a good time, even though we've all benefited now that society has reoriented itself after that painful period. People are growing increasingly upset here, and I don't think they have a clear articulation of the benefits of this technology yet. And by the way, just before we went to air, this story broke in Wired: a suspect was arrested for allegedly throwing a Molotov cocktail at Sam Altman's home. San Francisco police arrested a suspect early on Friday morning for allegedly attacking the home of OpenAI CEO Sam Altman and making threats outside the company's headquarters. OpenAI sent a note to employees about the incident early Friday: "Early this morning someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully no one was hurt. We deeply appreciate how quickly the SFPD responded." I don't know, I think this is crazy. I'm just sort of stunned that people are actually being violent against these, and I'll include the robots here: the robots, the data centers, and now the leaders.
It is worrying because, especially on the data center front, all the labs have said that the way this technology advances is by increasing the physical footprint of data centers. Now you have violence against them, and you also have political opposition against them. Obviously you never want to see violence anywhere, and on top of that, we already see the data center build-out slowing; according to some reports, maybe 50% of the ones that were on target to be built this year won't be. And this makes it even more difficult.
B
Yeah, well, on that last point, I feel you're going to see more and more announcements about slowing data center growth, or a lack of follow-through on planned data centers, and the Iran war, or geopolitics more broadly, and access to the required resources will be front and center in those stories, separate from the actual demand for compute. So I don't know, that part is going to be interesting. I mean, we're in a midterm election year, and surprisingly that part of the conversation hasn't really started heating up; I guess there's enough going on in the world. But there's no doubt in my mind AI is going to be front and center. And it makes for such a good villain, because, as we've talked about plenty, the industry has not put the most likable people front and center representing the technology. There has not been a compelling story about how this is good for you. All the people front and center are telling you that half of jobs are going to be gone and that this is going to be the most dangerous technology yet. And it is making certain pockets of people ungodly rich. So yeah, it's a pretty good villain.
A
And no access to any of the upside in the public markets right now, which is a problem. Not that it's the main issue, but it's one of the factors here. A few weeks ago we also talked about AI's unpopularity and its need for a public face who could rally support around it, and whether Jensen could be that person. We wondered what the downstream effects were going to be, and clearly they're here. I would say the violence is a symptom of that discontent, and we're now starting to see the manifestation of it come to fruition. And of course, there's the bill Bernie and AOC introduced for a national data center moratorium. There's no chance of that passing, but state by state you could see real pushback to this in the United States. In fact, as I was doing my research and writing about this today for Big Technology, I found this story from CNBC: Maine is set to become the first state with a data center ban. Maine is poised to implement the first statewide ban on data center construction, a move that could clear the way for other states to adopt similar measures and pump the brakes on a growing industry. Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027.
B
Do you think this is going to happen more and more? It's happening in Maine, and I would have thought Maine's got a lot of land. But I guess it's the water constraints. Yeah.
A
I mean, here's my thing. Politicians read polls, and the polls are terrible for AI right now. Terrible. And unlike social media, unlike, let's say, software, you do have a say in whether this technology progresses, because you can stop the data center builds, and the data centers are so foundational here.
B
Wait, that's interesting.
A
So whereas these companies were completely unencumbered by government when they were just building social networks, it's not the same thing here.
B
Hold on, hold on. That's an interesting angle, because with social media, I guess you could push for regulation; it's just that everyone was, and is, too addicted to social media to stop using it, so they don't actually want to. Do you think that's the issue? And again, this is my personal view, but for all that social media can be bad for society, everyone got so addicted to it that by the time anyone tried to regulate it, it was too late. Versus most people still haven't really felt what AI can do positively for them in their life, and the industry hasn't really explained it well. That's why this pushback is happening at the beginning. It's the equivalent of people mobilizing against social media very quickly, back in 2009.
A
Yeah. Well, I think we know the polling shows that if you use AI, you're much more likely to be in support of it than against it. But there's like, two sides of it, right? There's like, do I use it? And then we don't really know what the job implications are. Now we all have a thought on whether AI is going to cause mass job loss or not. But you can also be in a situation where, like, you use AI and you like it and you also got fired because, you know, your boss thinks that they can do the same work with, like, three employees instead of 17.
B
No, you're right, that is a completely different element of it versus social media. But this is where, given how good Anthropic is at communications, what we saw with Mythos and everything I outlined: just make people like AI a little more. Do something. Do some of this creative communication strategy and just make people go, oh, AI is cool. That's all.
A
I mean, I think they should. In retrospect, their Super Bowl ad, even though they were praised for it, was kind of a miss, because it ended up bringing down the category instead of making people excited about AI.
B
Exactly. And meanwhile at the Super Bowl as well, you have Google trying to be super emotional and sentimental, and still it was the most random, least-connected-to-Gemini ad imaginable. So, yeah. Kantrowitz and Roy don't... wait, what?
A
Yeah.
B
Oh, you liked it.
A
I was gonna say, I don't want to spend too much time on this because we covered it last week, but TBPN coming into OpenAI. Like, the argument...
B
Oh, I was off last week. Yeah.
A
The argument I would make would be: listen, these guys are great content marketers, and AI needs good content marketing. So maybe it wasn't Jensen; maybe it was the TBPN brothers all along.
B
I know that was last week's news and I was skiing in Utah, but man, that one doesn't make sense at all to me. They know how to speak to people who already love AI. They're not going to convince AOC not to build a data center. Anyone who's an anti-data-center activist already is not going to listen to TBPN and go, now I get it, now I understand. I don't know.
A
No, no, no. And I'm not, I mean, I made the argument against last week, so let me try the argument for this week. The point is that these guys could help show the benefits of AI, because they're AI literate and also somewhat likable, and do that on the content marketing side of OpenAI rather than on the TBPN show.
B
And I'm saying they're likable to people who already like AI. I mean, I think they're great.
A
But like they're.
B
I don't think anyone who hates AI has even heard of them.
A
One last thing. Okay, so OpenAI has a marketing machine, right? We're talking about how this marketing machine needs to show the benefits of AI. So by acquiring them, not only do they have the show, they have these two guys in house as effectively content marketers who can help with that side of things. Not just use their platform, but maybe shape the messaging.
B
Yeah, no, but I'm still going to have to give the edge to Anthropic on this one, going back to everything we were talking about earlier. Rolling out a tight communication strategy that actually gets out the message you want, and everyone bites. Scott Bessent is creating a council of Wall Street advisors to address the potential threats of your upcoming model. I mean, guys, TBPN is not going to get you that. Whoever is doing that over at Anthropic, God bless them, because that's communications.
A
All right, we could keep going on this, but I think we both agree there's a clear image problem here, and it's snowballing and getting worse. Oh, and this is not going to help. I don't know if you saw this New York Times story about a company called Medvi. There's been talk about whether somebody will build the $1 billion one-person company, and I think the Times wrote the story thinking they'd found it. "How AI helped one man and his brother build a $1.8 billion company. Matthew Gallagher took just two months, $20,000 and more than a dozen artificial intelligence tools to get his startup off the ground. From his house in Los Angeles, Gallagher used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads, and handle customer service. He created AI systems to analyze his business performance, and he outsourced the other stuff he couldn't do himself. His startup, Medvi, a telehealth provider of GLP-1 weight loss drugs, got 300 customers in its first month and gained 1,000 more in its second. In 2025 it made $401 million in sales, and this year it's on track to do $1.8 billion. A $1.8 billion company with just two employees: in the age of AI, it's increasingly possible." Let's pause here. What do you think about this, before we go into all the problems with Medvi?
B
Okay, I've got some thoughts on this one. My first is about "on track to do $1.8 billion in sales. A $1.8 billion company with just two employees: in the age of AI, it's increasingly possible." I want to call out "on track to do $1.8 billion in sales." Regular listeners will know my hatred of ARR as a term. We have no idea what that means. They have not made $1.8 billion. Did they extrapolate one month of revenue? One week? A few months? Whatever it was, that number already feels inflated. I was a little disappointed, and I think Erin Griffith, who wrote the story at the New York Times, is an incredible reporter; I've followed her for years. But a lot of the backlash I saw, and Alex has a Techdirt article linked here, actually points out that it is an AI story, just a really bad one for the industry. This is from Techdirt: "Medvi's success has little to do with AI and quite a lot to do with fake doctors, deepfaked before-and-after photos, misleading ads, actual snake oil, and the kind of old-fashioned deceptive marketing that has separated marks from their money for centuries." So much came out: there were deepfake doctors and completely AI-generated ads that were completely misleading. But he was using AI, and he stitched together all these different parts of the GLP-1 supply chain, which, I'm sure there's lots of scammy stuff going on everywhere, but he did it, and you can picture any of us doing it with AI. So, revenue number aside, I actually think this is terrifying, but probably more true than people are giving it credit for: a story about an AI-first business, man.
A
I had the same reaction. It would have been great if they'd just switched the tone a little bit. The Medvi story shows how a little AI and, I don't want to say scamming, but whatever's close to that, can get you to scale really quick.
B
Yeah.
A
And he picked the right industry, GLP-1s. No one has any illusions about what GLP-1s do or don't do.
D
Right?
A
Like the fact that he, and maybe I'm giving him too much slack here, but the fact that he made AI images of people's weight loss. Okay, yeah, of course the guy misrepresented what he was doing on a number of fronts. But we know what people come to GLP-1s for, and he delivered it to them at scale with AI. That said, the Times did end up adding an editor's note: "After this article was published, many readers noted that Medvi was facing legal and regulatory actions for its business practices. Our piece should have included that information to give readers a fuller picture of the scrutiny the company was facing. We have updated this article to reference a warning letter from the FDA and a pending class action lawsuit accusing Medvi of violating California's anti-spam law." You could probably say the same thing about a lot of GLP-1 startups right now.
B
As we're talking about it, I'm even more convinced it's true. Again, headline revenue number aside, this actually is a really important story. But it's how they framed it. If it's AI turbocharging people's ability to scale sketchy businesses... Like, if you have the world's first AI-scale drug dealer, where one person with some drones and whatever else can operate an entire cartel, that could be the first billion-dollar AI business. It's the framing. But it's important, it actually is important, and I think it's real. I just don't think it's necessarily a billion-dollar business, but I think it's real.
A
So it could be, right? I guess we're both Medvi-pilled. I just signed up; I've now got a full-year supply of Mounjaro from, well, Dr. Samantha Altmanson.
B
Again, not to get too into it, but the way revenue would be recognized anyway, this person is taking a tiny fraction of whatever the actual end price of the product is, and could even be selling it at a loss. So again, yeah, what was the...
A
Probably not at a loss; there's very little overhead. He's basically drop-shipping GLP-1s to people from some compounding pharmacy.
B
Yeah, no. I mean, it's not just drop shipping. I only have very superficial knowledge of this, but from what I was reading, there are even more parts, like how you can get the prescription done automatically. There are all these other pieces of the GLP-1 supply chain outside of traditional retail and drop shipping, and all these players rising up to fill and automate them. It's kind of like agencies in the traditional marketing world. He basically had a network of those, was connected to them, and communicated with them via AI.
A
This guy's diabolical. All right, we've got to cover one more story before we get out of here. It's called token maxxing. "Meta employees vie for AI token legend status. Employees at Meta who want to show off their AI super-user chops are competing on an internal leaderboard for status as a session immortal, or even better, a token legend. The ranking, set up by a Meta employee on its intranet, uses company data to measure how many tokens employees are burning through. Dubbed Clawdonomics after the flagship product from Anthropic, the leaderboard aggregates AI usage from 85,000 Meta employees, listing the top 250 power users. The practice is emblematic of Silicon Valley's newest form of conspicuous consumption, known as token maxxing." Since the story went out, Meta took the thing down because they were embarrassed by it. But do you agree with me that this is obviously not the right way to incentivize people to use tokens? If you gamify token usage, you're just going to get people burning tokens to compete with each other.
B
Okay, man, this one hits very close to home. At Writer, we actually had something similar internally. It wasn't a leaderboard, but we had a report on token usage by employee, and we were looking at it internally. Then someone screenshotted the top of it, and my name was on it; I was third among employees. And I've told you, I'm cranking workflows and agents all day long, obsessed with it. They posted it on LinkedIn, and I started getting texts from friends going, oh wait, I just saw this thing going around. So this one hit home in its own small way, and it caused a stir for us internally; we were discussing what it means. I actually think it is a good thing in terms of simply recognizing who's actually using a lot of AI, because at this exact moment, using a lot of AI, constantly experimenting in every single possible way, is the only way to learn. Now, if it ever became important in your review with your boss, then the incentives get too screwed up, and the whole thing becomes a little more corrupt, performative and weird. But it was interesting, because you could just see it right there. Even at my work, the people I'm always talking to every morning, the what-did-you-build, check-out-this-cool-thing-I-built people, were the people at the top of the leaderboard. So when it's not being done in a performative way, it's actually a good indicator of who's really heads down, just obsessed with this.
But on the Meta side, I was also wondering: if that was true at Meta, and they have unlimited budget, what percentage of Anthropic's ARR is just Meta engineers melting tokens?
A
Yeah. So first of all, I've now heard from multiple people that this is something that happens at many companies; I guess it's everywhere now, because they're trying to incentivize use of the tools. Okay, I get that. But I will also say that Anthropic just came out with new revenue numbers this week. They're doing $30 billion ARR now, and I'm pretty sure the way that works is you take the 10 minutes in which Meta pays its token bill and multiply by whatever number gets you to a year.
B
I mean, you know my rant on this one. Everyone's saying they went from $12 billion to $30 billion in two and a half months. Just say the freaking numbers, Anthropic. Come on. Is it because it just doesn't sound as exciting? Because it is exciting. If you're doing $2.2 billion, or whatever it is, in revenue in a month, that's insane. But with no clarity on it, it's just a bunch of Claude heads on Claude. And what was the name of the Facebook thing?
A
Clawdonomics.
B
Clawdonomics. And it's Meta, just sitting there melting Claude tokens.
A
Yeah, that's what it is. All right. Well, soon enough we'll have access to Mythos, and then that leaderboard will climb even further. And then we'll get some real numbers, by the way, because sooner or later these companies are going to file to go public, and we will certainly be able to play hype-or-true as we look through the filings.
B
One last question before we drop.
A
Yeah.
B
Will they hire law firms and banks to go public?
A
You know it. You know they use Salesforce, so yes, obviously. Definitely. What do you think?
B
I think, like... I think Anthropic is going to do something interesting.
A
That would be, speaking of marketing...
B
Yeah.
A
It would be the most baller...
B
The most baller move. "We did not hire a law firm, but we are so confident in all of our filings." Like, why not? Why not?
A
Yeah, it'll be the first harness IPO. Everyone would be thrilled. All right, Ranjan, great to have you back. Looking forward to next week. Thanks again for coming on.
B
See you next week.
A
See you next week. Thank you everybody for listening and watching. And we'll see you next time on BIG Technology Podcast.
Episode Title: Anthropic’s Mythos Dilemma, Violence Against AI, Tokenmaxxing at Meta
Host: Alex Kantrowitz
Guest: Ranjan Roy (of Margins)
Summary by Podcast Summarizer
This episode dives deep into the launch and media frenzy surrounding Anthropic’s latest AI model "Mythos," scrutinizing whether its impact is genuine or the product of savvy PR. The hosts also explore a new and disturbing trend: violence targeting AI infrastructure and leaders, and discuss Silicon Valley’s new cult of "token maxxing," particularly a Meta internal competition over how many AI tokens employees can burn. The conversation is sharp, nuanced, and skeptical—with plenty of industry insight.
Timestamps: 02:38–30:36
“A model so powerful and so dangerous it can’t possibly be placed in our hands.” – Alex Kantrowitz [05:15]
"It’s good branding… but I don’t like all this hype when you’re not actually able to see anything. Have some meetings, be careful—but you don’t need to, like, 'here is Mythos.' It sounds like an Avengers movie." – Ranjan Roy [09:06]
"Within 20 minutes… both Anthropic and Sam Bowman, all of this was prepared. And then… everyone focuses on that sandwich detail… it was such a coordinated PR effort and it stuck." – Ranjan Roy [15:21]
"Are we suffering from some sort of AI Derangement Syndrome… where we forget to ask what happens if it does work?" – Alex Kantrowitz [16:54]
Notable Quote:
"Gun to my head, it’s somewhere in the middle… not nothing, but not as grand as they say it is.” – Alex Kantrowitz [24:01]
Timestamps: 30:36–33:20
"You can change the harness around a fixed model and see a 6x performance gap on the same benchmark… So maybe Mythos won’t matter. It’s all about who’s got the best harness." – Ranjan Roy [32:00]
Timestamps: 35:13–46:20
"These buildings can be faceless, they can be imposing… Tech’s interest in showing and delivering this technology despite the uncertainty it causes to people’s lives." – Alex Kantrowitz [38:27]
"AI is going to be front and center. And it makes for such a good villain—the industry has not put the most likable people front and center representing the technology." – Ranjan Roy [41:28]
Timestamps: 49:28–55:21
"It would have been great if they switched the tone a little bit: the Medvi story shows how a little AI and…whatever’s close to [scamming]…can get you to scale really quick." – Alex Kantrowitz [53:06]
Timestamps: 56:02–61:19
"If it ever became important in terms of your review with your boss… the incentives become too screwed up and the whole thing becomes a little more corrupt, performative and weird." – Ranjan Roy [59:12]
Timestamps: 61:19–62:24
On PR Tactics and The Sandwich Story:
"They are coordinating PR… so why do they want that to be the story?" – Ranjan Roy [15:13]
On Data Centers as Lightning Rods:
"I feel that these [data centers] are going to continue to be… a visual representation of what’s going on." – Ranjan Roy [38:12]
On the Future of AI Super Apps and Control:
"Do we take the most intelligent models we’ve built and keep them exclusive to our super apps or make them available to everyone?" – Alex Kantrowitz [19:27]
On Corporate Incentives:
"There’s so much incentive… to make it out to be [dangerous] and the way they rolled it out, I think it's been genius and I think it's just ahead of the IPO." – Ranjan Roy [24:13]
Throughout, Kantrowitz and Roy keep a skeptical, analytic tone. They’re not starry-eyed about AI nor simply antagonistic, but aim to parse real advancements from marketing smoke. The hosts highlight how AI’s reality today is as much about narrative management, power struggles, and symbolism as it is about technical progress.
For listeners: Expect a sharp, funny, and deeply informed deconstruction of tech news that cuts through both the hype and the hand-wringing.
For more detailed breakdowns of each section, refer to the timestamps.