Hard Fork – Episode Summary
Episode: Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing
Date: April 10, 2026
Hosts: Kevin Roose (The New York Times), Casey Newton (Platformer)
Overview
This episode dives deep into the shockwaves rippling through the tech world after Anthropic’s new, unreleased AI model—Claude Mythos Preview—was found to be so adept at finding software vulnerabilities that it triggered an unprecedented pre-release lockdown, with only select companies given access for defensive cybersecurity reasons. The hosts then welcome New Yorker writers Ronan Farrow and Andrew Marantz to discuss their extensive new profile about OpenAI CEO Sam Altman, exploring his trustworthiness, power, and the swirling questions about AI governance. The episode closes with the “One Good Thing” segment, which features both lunar inspiration and a rainbow-detecting weather app.
Key Discussion Points & Insights
1. Anthropic’s Cybersecurity Shock Wave
[03:15–22:49]
Unprecedented Model Withheld ([03:15])
- Anthropic announced "Project Glasswing," unveiling a new model named Claude Mythos Preview.
- Unlike common practice, Anthropic is not releasing the model broadly, saying it is "too dangerous to do that" ([04:49]). Instead, access goes only to a consortium of large tech and infrastructure companies (excluding OpenAI and Meta) so they can shore up defenses before wider release.
Model Capabilities—and Alarm ([05:38])
- Mythos has uncovered vulnerabilities in major operating systems and browsers. One notable find: a 27-year-old bug in OpenBSD that had survived decades of professional vetting ([06:15]).
- It also found an exploit in FFmpeg that had eluded security tools after 5 million scans ([06:47]).
- Casey contextualizes: much of the Internet is held together with "spit and glue," with cybersecurity long predicated on what’s effectively luck ([08:01]).
Perspectives from the Security Community
- The hosts cite Alex Stamos (ex-Yahoo, Facebook), who calls this a “big deal,” validating the need for a cross-company firewall strengthening blitz ([08:58]).
- Casey asks whether this is savvy marketing or a genuine risk; Kevin argues it is not a PR stunt, seeing the strategy as a rational corporate refusal to sell "cyber weapons on the open market" ([09:46], [10:16]).
Potential Scenarios ([11:47])
- Stamos identifies two futures: 1) A finite number of critical vulnerabilities exist, fixable by an all-hands push, or 2) This model finds/invents endless new exploits, creating a perpetually unstable landscape.
- Kevin predicts a “forced reset” for the cybersecurity industry, noting most software will require updating, but human bottlenecks and lagging patching practices—especially outside the critical 1% of infrastructure—are big hurdles ([12:45]).
National Security & Regulation ([14:23])
- The U.S. government, ironically, is not allowed to use the model, as Anthropic is designated a supply chain risk for federal agencies ([14:47]).
- Casey expresses discomfort at the minimal regulation: “Model development of this scale and seriousness remains essentially unregulated in this country” ([15:58]).
- The hosts note this marks a return to a model access gap reminiscent of GPT-2’s original withholding for safety in 2019 ([17:11]).
Tension Between Secrecy & Safety
- Kevin: “As hostile and suspicious as people feel toward the AI industry, that only gets worse if they think that there are secrets being kept in a basement that they can't access” ([18:36]).
- Casey points out Anthropic’s founding premise: gain enough influence at the AI frontier to guide outcomes toward safety, even if building dangerous tech was necessary to do it ([19:13]).
Cybersecurity Hygiene for the Public
- Casey’s current advice:
- Use a password manager
- Don’t reuse passwords
- Enable multifactor authentication ([21:02])
- Memorable moment: “I am planning to deal with the possibility of a massive cybersecurity breach by just sort of selectively dribbling out incriminating things about myself” —Kevin ([22:26])
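To illustrate the spirit of Casey's advice in code, here is a minimal sketch of generating a strong, unique password using Python's standard `secrets` module (in practice, a password manager does this for you; the length and character set below are illustrative assumptions, not the hosts' recommendation):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Generate a cryptographically random password, one per site."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Using `secrets` (rather than `random`) matters here: it draws from the operating system's cryptographically secure randomness source, which is the appropriate choice for credentials.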
2. Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation
[23:46–52:55]
Can Sam Altman Be Trusted?
- The New Yorker’s 16,000-word piece investigates Altman’s honesty, leadership, and the swirl of internal (and external) criticism and support.
- Ronan: “Even against [Silicon Valley’s] backdrop...there is an extraordinary preponderance of people who emerge from interactions with Sam Altman...with really active complaints and allegations that he lies repeatedly about things big and small.” ([27:13])
- Notable moment: The “gray sweater” detail—Altman’s claim to always wear a gray sweater for decision fatigue, disproven in his next interview appearance ([29:08])
Altman Dossiers: Hype, White-hot Rivalry, and the ‘Rap Sheet’
- There exist several “rap sheets” or dossiers on Altman’s behaviors, one compiled by Dario Amodei while at OpenAI, and another possibly by Musk-aligned rivals ([30:13]).
- Elon Musk’s camp is identified as circulating unsubstantiated rumors, underscoring a sometimes toxic atmosphere ([30:52]).
- Ronan and Andrew stress they filtered their reporting to sift real, evidence-based critiques from competitive mudslinging.
What’s New in the Reporting?
- Clarifications:
- Altman didn’t just leave Y Combinator; he was likely pushed out ([33:08]).
- Relationships with Gulf royalty are deeper than previously reported.
- The famous board coup: There was never a formal outside investigation report, but rather an 800-word press release, and this was intentional ([34:59]).
Damning and Nuanced Source Quotes
- Microsoft executive: “...a small but real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried level scammer.”
- Board member: Altman is “unconstrained by truth” and has “an almost sociopathic lack of concern for the consequences” ([36:30]-[37:07]).
Why Personality Matters in AI Leadership
- Ronan: “...the way the entire enterprise was structured when it was founded as a nonprofit was...to avoid an AGI dictatorship.”
- The lack of “guardrails” means the CEO’s integrity still matters enormously, even if “structures around these individuals” are crucial ([44:41]-[46:23]).
Succession and Altman’s Power
- The notion that OpenAI is unthinkable without Altman is fading; board and executive shuffles are ongoing ([48:07]).
- Altman has hired many ex-CEOs as lieutenants, intensifying leadership rivalry and possibly setting up his own succession ([49:40]).
Favorite Detail
- Whether Sam’s Uber actually crashed, per his 2015 dinner delay text to Dario Amodei, is left for listeners to ponder ([51:41]).
3. One Good Thing
[53:36–61:30]
Kevin’s Good Thing: The Artemis II Moon Mission ([53:57])
- NASA’s Artemis II mission—first human lunar orbit since Apollo—has filled Kevin with “childlike glee and wonder”.
- Notable stat: “252,756 miles from Earth. You would need a chain of 2.37 billion of Nathan’s famous hot dogs to cover the distance that this spacecraft has gone.”
- Introduced to the “terminator line” (the boundary between sunlight and shadow on the Moon).
- Closing wish: “We should go to the moon every single year...This has reignited my faith in humanity.”
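The hot-dog statistic quoted above is easy to sanity-check. Assuming a Nathan's hot dog is roughly 7 inches long (an assumption not stated in the episode), the implied per-hot-dog length from the quoted figures comes out plausibly close:

```python
# Sanity check: how long must each hot dog be for 2.37 billion of them
# to span the 252,756 miles quoted in the episode?
distance_miles = 252_756
hot_dogs = 2.37e9

distance_inches = distance_miles * 5280 * 12  # miles -> feet -> inches
inches_per_hot_dog = distance_inches / hot_dogs
print(f"{inches_per_hot_dog:.2f} inches per hot dog")  # about 6.76 inches
```

That works out to roughly 6.8 inches per hot dog, consistent with a standard hot dog, so the quoted figure checks out.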
Casey’s Good Thing: Acme Weather App ([57:09])
- Acme Weather is a new weather app built by the creators of Dark Sky.
- Unique features:
- Probability-based forecasts for rationalists (“weather app for Bayesian statistics”),
- Push alerts for lightning, beautiful sunsets, rainbows (“They will tell you when there is a rainbow in your neighborhood.” —Casey [59:46])
- Aurora Borealis alerts.
- Community reports (crowdsourcing).
- Personal note: “Some companies are forcing the world to rewrite all software. Others are making a system to tell you where there’s a rainbow. Those are the people I want to highlight today.” ([61:03])
Notable Quotes and Memorable Moments
- Casey: “So I could imagine there being a business benefit to Anthropic of coming out and saying we have the most powerful model in the world and we're not releasing it. Like, yes, I'm sure that there are plenty of businesses that are salivating over the chance to get their hands on it, but they can't unless they are part of this consortium.” ([11:19])
- Kevin: “As hostile and suspicious as people feel toward the AI industry, that only gets worse if they think that there are secrets being kept in a basement that they can't access.” ([18:36])
- Ronan: “An extraordinary preponderance of people...with really active complaints and allegations that he [Sam Altman] lies repeatedly about things big and small.” ([27:13])
- Casey: “It just strikes me, though, that everyone who digs into this winds up coming back with essentially the same story... I feel like we now sort of know, like, the broad outlines of this person's psychology.” ([38:47])
- Casey: “Your passwords should not be, you know, the name of your pet or whatever...and then use multifactor authentication.” ([21:02])
- Kevin: “I'm planning to deal with a massive cybersecurity breach by just sort of selectively dribbling out incriminating things about myself.” ([22:26])
- Casey (on Acme Weather): “Who does not want to be sitting at your wage slave job...and then Acme Weather tells you, hey, guess what, there's a rainbow in your neighborhood. You're going to book it outdoors and you are going to behold the majesty of creation.” ([59:53])
Timestamps for Key Segments
- Anthropic’s Model Announcement & Cybersecurity Context: [03:15–22:49]
- Farrow and Marantz on Sam Altman Investigative Profile: [23:46–52:55]
- One Good Thing (Artemis II and Acme Weather): [53:36–61:30]
Tone and Style
“Hard Fork” maintains its trademark balance of incisive analysis, skepticism, and humor. The interplay between Casey and Kevin, and the witty asides from Ronan and Andrew, provides levity even while dissecting existential-scale tech risks and deep corporate intrigue.
This summary offers an immersive guide to the episode, capturing all pivotal themes, takeaways, and key moments for listeners and non-listeners alike.
