Podcast Summary: This Week in Startups
Episode: Anthropic’s Mythos is a cyber-weapon, so you can’t have it | E2273
Host: Jason Calacanis
Date: April 9, 2026
Main Theme:
The hosts and guests react to Anthropic’s bombshell announcement: their new AI model, Mythos, is so powerful—and so adept at finding and exploiting software vulnerabilities—that they consider it a potential "cyber-weapon." Anthropic is not releasing it publicly, and is partnering only with the largest and most critical infrastructure players to use Mythos defensively and harden digital infrastructure. This kicks off a wide-ranging and at times urgent conversation about AI security, the AGI trajectory, open-source vs. proprietary models, deflationary tech, startup opportunities, and the existential stakes of the global AI arms race.
Key Discussion Points & Insights
1. Mythos: The AI Superweapon the Public Can’t Have
[00:00–08:53]
- Anthropic’s new LLM, Mythos, is not being released to the public.
- The model demonstrates unprecedented capabilities in finding, chaining together, and exploiting security vulnerabilities—including in systems previously thought highly secure (OpenBSD, FFmpeg).
- Its expertise in “cyber” is a side effect of its exceptional coding ability.
- Security Risk:
- Mythos could empower anyone, including hostile actors, to rapidly discover and weaponize zero-day exploits across global infrastructure.
- Anthropic is working with a consortium (AWS, Azure, Nvidia, etc.) as part of “Project Glasswing” to use Mythos in a controlled, defensive way to patch and secure software.
- $100M in compute credits offered to partners for defensive hardening.
“Basically the gist is with this model anyone can go to any piece of software and find zero day exploits quickly and then basically go to war with them.” —Alex, [01:33]
“Obviously, capabilities in a model like this could do harm if in the wrong hands. And so we won't be releasing this model widely.” —Anthropic team in video, [08:13]
2. Anthropic’s Approach vs. OpenAI: Trust, Pace, and IPO Signals
[09:59–14:22]
- Dario Amodei (CEO of Anthropic, ex-OpenAI) left OpenAI over issues of trust and is now overtaking OpenAI on capabilities, influence, and revenue.
- Anthropic’s move marks a shift from open, democratized AI releases to a “two-tier” AI world where powerful models are restricted to critical players.
- The release strategy is also viewed as an IPO/PR move, showing “investor-facing” caution about future capabilities.
“I think Anthropic has significantly passed OpenAI on a lot of things. The reason is they’ve been more focused.” —Rob May, [12:00]
“Product market fit is the ultimate arbiter... Clearly no one has more PMF than Anthropic today.” —Alex, [14:22]
3. AI Arms Race, Game Theory, and the National Security Angle
[17:35–29:59]
- Mythos’ capabilities raise “cyber weapon of mass destruction” concerns: even a few months’ lead over global competitors could be decisive for US digital security.
- The possibility that China or another nation is already developing or deploying similar tools is raised.
- Discussion moves to whether such powerful models should be nationalized or controlled by government, likening the situation to the Manhattan Project and atomic weapons.
“This is a super weapon.” —Jason, [26:43]
“I think we should consider this Mythos model to be essentially a cyber weapon and perhaps a cyber weapon of mass destruction.” —Alex, [21:11]
“There is an argument you have to nationalize this technology. There’s an argument it’s too powerful for a private company to own.” —Jason, [24:21]
4. Societal Trust, Governance, and the Limitations of State Control
[27:37–29:40]
- Both hosts and guests question whether the American public or tech sector would actually trust the government to safely and effectively handle such a model.
- References are made to plummeting trust in journalists, AI labs, and governments, invoking “The X-Files” and the need for intergroup collaboration under existential threat.
“Nobody trusts anybody because we’re in literally ‘The X Files’... The truth is out there.” —Jason, [27:52]
5. Open Source, SLMs, and the Coming Hyper-Deflationary Wave
[38:28–58:30]
- Rob May (Neurometric) introduces the world of Small Language Models (SLMs), which are getting more powerful and can now often be run on consumer hardware.
- The panel discusses how SLMs, fine-tuned for specific tasks and constantly becoming more capable (thanks to architectural advances and distillation), are poised to undercut many SaaS businesses and even frontier LLMs, rendering them commoditized or obsolete.
- AT&T is cited as an example: by shifting 90% of its AI workloads to SLMs, it cut costs by 90%.
- The future may be “hyper-deflationary” for software powered by LLMs—with ultra-low costs driving down margins everywhere, requiring new business models focused on orchestration, analytics, and support.
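The AT&T figure above is simple blended-cost arithmetic. A minimal sketch, using hypothetical per-token prices (not from the episode or any vendor), shows how moving 90% of traffic to a much cheaper SLM approaches a 90% cost cut:

```python
# Back-of-envelope blended-cost math for the "shift 90% of workloads to SLMs"
# claim. Both prices below are hypothetical, chosen only for illustration.

FRONTIER_COST_PER_1K_TOKENS = 0.010   # hypothetical frontier-LLM API price
SLM_COST_PER_1K_TOKENS = 0.0005       # hypothetical self-hosted SLM price

def blended_cost(slm_share: float) -> float:
    """Average cost per 1K tokens when `slm_share` of traffic runs on SLMs."""
    return (slm_share * SLM_COST_PER_1K_TOKENS
            + (1 - slm_share) * FRONTIER_COST_PER_1K_TOKENS)

before = blended_cost(0.0)   # everything on the frontier model
after = blended_cost(0.9)    # 90% of workloads moved to SLMs

savings = 1 - after / before
print(f"cost per 1K tokens: {before:.4f} -> {after:.4f} ({savings:.0%} saved)")
```

With these made-up prices the savings land near 86%; the cut reaches exactly 90% only as the SLM’s cost approaches zero, which is why the headline number tracks the workload share so closely.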
“Building a business in this AI future is going to be about estimating probabilities…using your compute to figure out code generation versus code checking.” —Rob, [18:03]
“I think this could collapse the value of frontier models…they may not realize it, but they might have just created their own demise.” —Jason, [56:22]
6. Death By Claude: AI-Powered Defensibility Scores & Startup Moats
[64:15–71:13]
- Guest “G Man” (Gyani), creator of "Death by Claude," introduces a tool assessing the AI-defensibility of companies and products, humorously roasting those most at risk of being replaced by AI.
- “Moat” factors that protect against AI disruption:
- Hardware/atoms business
- Network effects
- Deep science or regulated industries
“If you’re doing hardware, we don't have physical models yet. Now, hard means moat, highly fundable. Very interesting.” —Jason, [70:03]
“If you can get replaced, then you score high; if you cannot, then you score low.” —Gyani, [65:19]
7. VC, Startups, and the New Reality of AI
[71:46–76:44]
- Rob May comments that angel/seed investing has gotten dramatically harder; valuations are higher, and defensibility is weaker.
- The value in startup building is now more about relentless execution, rapid iteration, and refining product-market fit, rather than mere capability to build.
- Networks, proprietary data, and unique hardware or regulatory advantage may be the only real long-term moats.
Notable Quotes & Memorable Moments
- On the existential race for AI security:
  “This is becoming the equivalent [of the atomic bomb]. It might not seem as much because a nuclear bomb can cause such a mass destruction of life, but this could cause a massive financial devastation across the economy.” —Jason, [22:04]
- On AI model release strategy:
  “More powerful models are gonna come from us and from others. And so we do need a plan to respond to this.” —Anthropic team in video, [09:21]
- On the impact of SLMs & startup costs:
  “If only there was a product called Claw Pack that for $8 a month, got you unlimited inference…” —Alex, [48:50]
- On software and company defensibility:
  “This is making everybody a little more like a CEO, because as a CEO, you realize you have to review stuff, coordinate stuff...these tools help you do stuff.” —Rob, [71:46]
- On the deflationary future:
  “This is so deflationary that we need a new word for deflationary. There’s deflationary…what is hyper deflation?” —Jason, [56:39]
Key Timestamps
- [00:00] — Introduction, Anthropic’s restriction of Mythos and cyber capabilities
- [08:01] — Anthropic video: Why Mythos isn’t public, Project Glasswing
- [14:01] — OpenAI vs Anthropic: speed and focus
- [17:35] — Polymarket predictions: Mythos release timelines
- [19:02] — National security: What if China already has this?
- [21:11] — “Cyber weapon of mass destruction”
- [24:21] — Game theory: Should Mythos be nationalized?
- [38:31] — SLMs, AI cost deflation & use cases
- [42:55] — Customizing/tuning SLMs for specific tasks
- [45:17] — Making intelligence free: Neurometric’s model
- [56:39] — The hyper-deflationary future of LLMs and SLMs
- [64:15] — Death by Claude: AI defensibility scores and startup moats
- [71:46] — The changing landscape of early-stage investing
Tone & Style
The episode is fast-paced, irreverent, and veers between urgent concern (about the real security threat posed by Mythos and the global AI arms race) and optimistic fascination with the business and technical innovation happening at the frontier. The hosts and guests balance serious analysis with typical “TWIST” banter, dense with memorable, sometimes biting quips and a running thread of startup/VC humor.
Conclusion
In this landmark TWIST episode, the discussion zeroes in on AI capabilities outpacing our security infrastructure, the arrival of “cyber-weapons” through LLMs, how C-suites, governments, and startups may react, and the impending deflationary shockwave poised to hit the tech sector as SLMs and open source models threaten to commoditize AI everywhere. For founders, operators, and investors, it’s a call to rethink everything—especially what it means for software to be defensible in a world that’s on the verge of abundant, cheap, and sometimes dangerously powerful intelligence.
