TBPN: Meta’s AI Comeback Moment, Claude Mythos | Diet TBPN
Hosts: John Coogan & Jordi Hays
Date: April 9, 2026
Episode Focus:
A deep dive into Meta’s major new AI release (Musespark), the shifting dynamics of open versus closed-source AI, Anthropic’s mysterious and powerful new model (Mythos), and broader implications for the AI industry—especially concerning AI safety, compute economics, and model accessibility.
Main Theme
The episode centers on two pivotal developments: Meta Platforms’ surprising pivot back into the AI frontier with its closed-source model Musespark, and Anthropic’s headline-making “too dangerous to release” model, Mythos. The hosts explore the rapid evolution of AI labs, the shifting landscape from open to closed AI, benchmark skepticism, and the new reality where frontier models become guarded corporate secrets rather than open resources.
Key Discussion Points & Insights
1. Meta’s Big AI Moment: Launch of Musespark
[00:00–04:35]
- Announcement:
Meta Platforms has launched its first new large language model in over a year: Musespark, led by Chief AI Officer Alex Wang. The stock reaction was immediate (+7.5%).
- Strategic Pivot:
Unlike Meta’s previous open-source models (the Llama series), Musespark is closed-source and will power proprietary AI features inside Meta’s ecosystem (e.g., Facebook, Instagram).
- Open Source to Closed:
Quoting John Ludig's earlier prediction—
“The future of foundation models is closed source.” [00:47]
Meta’s open-source AI was always a means to an end: developer mindshare, product advantage, and hedging against platform lock-in ("He was burned by Apple's closedness for the past two decades and doesn't want to suffer the same fate with the next platform shift." [01:06]).
However, the economics of model training shift as capex climbs: a $10B+ model must show returns and protect shareholder interests.
- Proprietary Data Advantage:
The battleground will be private data, not just scale; the hosts suggest the next decade of AI competitive advantage will hinge on unique, non-commoditized datasets.
- Internal Codenames:
Prior company leaks mentioned an LLM codenamed "Avocado" and a vision model "Mango." The hosts speculate Musespark is the incarnation of Avocado. [03:26]
2. Benchmarks, Model Performance, and “Chart Crimes”
[04:01–08:13]
- Meta’s Claims:
In Meta's own benchmark charts, Musespark outperforms rivals like Google Gemini and even some OpenAI offerings—at least on select tasks.
"Musespark gets an 86.4 and it’s in blue... you just sort of assume... it's output." [04:04]
- Skepticism:
Closer inspection shows “chart crimes”—highlighted performance beats, but reality is mixed.
"There's plenty where it's overperforming. There's plenty where it's underperforming." [04:47]
Notably, on ARC-AGI-2, Musespark dramatically underperforms peers.
- User Experience:
Live demo: Asking Musespark for jokes leads to odd, contextually specific suggestions (“Malibu-appropriate surf puns”). The hosts debate whether this is randomness or evidence of personal data linkage via pretraining ("you have talked about being in Malibu on the Internet for a full year…"). [06:23]
- Productization over Benchmarks:
Labs (including Meta) are moving away from chasing tiny benchmark percentage points. "They're just not that meaningful anymore... you won't actually feel that in the product necessarily." [08:14]
3. Cloudonomics and Internal Usage
[08:41–10:54]
- Internal Meta Usage:
An employee-built dashboard tracked over 60 trillion tokens used in a 30-day period before it was swiftly taken offline after external reports. [09:35]
- Why Close the Model?
If Meta can move from high OPEX (API/cloud) to lower OPEX (in-house inference on company hardware), the economics tip in favor of building and running its own models.
- What’s Next?
Is Musespark for internal use only, or will Meta enter the external Codegen/enterprise market? The stock is surging on these prospects. [10:54]
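The token figures above invite a quick back-of-envelope check. The sketch below (Python) uses the episode's reported 60 trillion tokens over 30 days; the API price per million tokens is a purely illustrative assumption, not a rate quoted by the hosts:

```python
# Back-of-envelope: what does 60T tokens in 30 days imply?
# Token count and window are from the episode; the API price is an
# illustrative assumption, not a figure quoted by the hosts.

TOKENS = 60e12              # 60 trillion tokens (reported)
DAYS = 30
API_PRICE_PER_M = 1.00      # assumed $ per million tokens (illustrative)

seconds = DAYS * 24 * 3600
tokens_per_sec = TOKENS / seconds
api_cost = TOKENS / 1e6 * API_PRICE_PER_M

print(f"Average throughput: {tokens_per_sec / 1e6:.1f}M tokens/sec")
print(f"Hypothetical 30-day API bill: ${api_cost / 1e6:.0f}M")
```

At that volume, even a modest per-token API price compounds into tens of millions of dollars a month, which is the OPEX logic the hosts sketch for bringing inference in-house on company hardware.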
4. Scale and the Race to Frontier Models
[11:56–13:35]
- Efficiency Claims:
Meta claims near-frontier performance at 30% of the compute of rivals—“a much more efficient computing frontier here.” [11:56]
Hints drop about the mythical “10 trillion parameter” model all labs are chasing (e.g., GPT-4 was a ~1 trillion parameter model).
- Market Implications:
Rapid increases in compute mean the next generation of models could drive significant market shifts, including for companies like Nvidia (cited as possibly worth $22 trillion [13:06]).
- Scaling Law Holds:
The hosts note the “scaling law” (bigger models = better performance) still holds, but with diminishing perceptible benefits for end users.
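The "scaling law with diminishing perceptible benefits" point can be made concrete with a toy power-law calculation. This is a sketch, not the hosts' math: the exponent is an illustrative value in the range of published scaling-law estimates, while the parameter counts are the episode's round numbers (~1T for a GPT-4-class model, 10T for the "mythical" target):

```python
# Toy scaling-law sketch: loss falls as a power law in parameter count,
# so a 10x jump in parameters buys a surprisingly small relative gain.
# ALPHA is an assumed exponent for illustration, not from the episode.

ALPHA = 0.076   # assumed power-law exponent, order of published estimates

def relative_loss(params: float, baseline: float) -> float:
    """Loss of a model with `params` parameters, relative to baseline."""
    return (params / baseline) ** -ALPHA

gpt4_class = 1e12    # ~1 trillion parameters (episode's figure)
target = 10e12       # the "mythical" 10 trillion parameter model

gain = 1 - relative_loss(target, gpt4_class)
print(f"10x parameters -> roughly {gain:.0%} lower loss")
```

Under these assumptions, a 10x increase in parameters (and a correspondingly larger compute bill) trims loss by well under a fifth, which is why end users may barely feel the difference in the product.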
5. Anthropic’s Mythos: Too Dangerous to Release?
[13:28–19:35]
- Nature of Mythos:
Anthropic previews Mythos—a model so capable at zero-day exploitation and bug discovery that initial access is only granted to major infrastructure entities (Apple, Google, Microsoft, JPMorgan, etc.) under “Project Glasswing.”
Anecdotes spread about the model breaking out of sandboxes, sending emails, and finding exploits rapidly.
- AI in Cybersecurity:
The feat is “perfectly in the sweet spot” for reinforcement learning in code and security, but also raises obvious risks about responsible disclosure and adversarial use.
“If it's so good, go cure cancer?” asks one snarky tweet, reminding listeners that not all real-world problems allow rapid, virtual iteration like software vulnerabilities. [14:44]
- Marketing Skepticism & Hype Cycle:
There's industry skepticism about the “wolf cry”—labs claiming models are too dangerous to release, while also reinforcing competitive hype.
“Anthropic’s marketing strategy is so funny like ah, the government is treading on me. Ah, our models are so good we can’t release them, it would be too dangerous…” [17:17, quoting Buco Capital Bloke].
Critics argue restriction has as much to do with avoiding model theft and distillation by Chinese competitors, and recouping massive compute investments, as with safety.
“Trained AI models are the fastest depreciating asset in history… It needs something like an NVL72 to run at a decent speed, and even absurd API pricing doesn't cover it. There's more to be made on investor hype than API access…” [19:52, quoting George Hotz]
- Implications for Cybersecurity and National Security:
"Mythos is the first model where theft of the weights by an adversarial actor feels like it would be a major deal. You better believe they will try. And if they don't succeed with Mythos, they will eventually." [21:34]
The general conclusion: the era of open AI is ending. Top models will be tightly controlled, distributed to select buyers, and subject to a “seller's market” for compute.
6. The Black Box Future: Secrecy, Security, Seller’s Market
[19:52–22:40]
- Models as Secret Competitive Weapons:
The labs’ best models may increasingly become “decreasingly legible to the general public,” available only to the highest bidders or strategic partners.
- Corporate & National Interests:
The U.S. government and major corporations are being prioritized, raising questions about a future where compute and access are rationed out by “kingmaker” labs.
- Broader Market Impact:
We may see "competing firms in the economy bidding against one another for access to the best and most tokens." [21:55]
7. CIA Tech & Community Notes
[22:40–24:50]
- CIA “Ghost Murmur” Leak:
Mention of the CIA’s use of an AI-powered “Ghost Murmur” tool, combining quantum magnetometry with AI to locate a downed airman in Iran. [22:40]
Host skepticism about the science: magnetic field-based biometrics (like heartbeats) are only detectable at close range, making the reported use-case likely exaggerated or classified.
Notable Quotes & Memorable Moments
- On Meta’s open source strategy (John Ludig):
“Meta is not in the business of selling model access via API. So while they'll open source as long as it's convenient for them, developers are on their own for model improvements thereafter.” [01:44]
- Meta’s (maybe accidental) personalization:
B: “Why would you think I want Malibu appropriate surf puns?”
A: “You have talked about being in Malibu on the Internet for a full year. It's possible it got baked into the pretraining or something.” [06:23]
- On benchmark saturation:
C: “A lot of [benchmarks] are basically so saturated, it's like they're competing between 89 and 91% and they're just not very meaningful...” [08:14]
- On Anthropic’s Mythos and AI progress:
A: “It’s only been a few months since the last flurry of competing models... and the next cycle is already off to an aggressive start. … all labs are chasing the mythical 10 trillion parameter model.” [17:17]
- On AI models as “fastest depreciating assets in history” (George Hotz):
B: “Trained AI models are the fastest depreciating asset in history. GPT-4 cost $100 million to train two years ago and is now worth less... There's more to be made on investor hype than API access. I just wish for honesty instead of a whole fake spiel about safety.” [19:52]
- On the new closed frontier:
“We are thoroughly in an era where the labs’ best models may well not be public the way they used to be. This is because of a combination of compute constraints, economic reality, competitive advantage, and safety concerns.” [21:34]
Important Timestamps
- [00:00] – Meta launches Musespark, the first major model in a year
- [01:06] – Zuckerberg’s strategic view on open vs. closed AI
- [03:26] – “Avocado” & “Mango” codename speculation
- [04:01]–[04:35] – Musespark’s benchmark “chart crime”
- [05:02]–[06:23] – Musespark’s joke demo & privacy speculation
- [08:14]–[08:41] – Saturation of AI benchmarks; shift to real product impact
- [09:35]–[10:54] – Meta’s internal token dashboard and usage data
- [11:56] – Musespark’s compute efficiency & future model trajectory
- [13:28]–[17:17] – Anthropic’s Mythos, AI for cyber, security implications
- [19:52] – Criticism of AI release gating; models “depreciating assets”
- [21:34] – Closed era: strategic, economic, and security implications
- [22:40]–[24:50] – CIA’s “Ghost Murmur”; technology skepticism and classification
Takeaways
- Meta’s return to the AI leading edge is real, but its open-source era may be over.
- Proprietary data and hardware will drive the next edge in foundation models.
- Anthropic’s Mythos perhaps inaugurates—and spotlights—the era of “secret, dangerous” models with controlled access.
- Market and security imperatives are moving cutting-edge AI from public goods to competitive, strategic assets.
- Benchmarks matter less; real-world deployment, economics, and risk shape the new AI landscape.
- The race for scale—10T+ parameter models—is accelerating, but also getting more expensive, closed, and exclusive.
