Hard Fork Podcast — Episode Summary
OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop
Date: March 6, 2026
Hosts: Kevin Roose (The New York Times), Casey Newton (Platformer)
Special Guest: Arietta Laika (NYT)
Episode Theme
This multifaceted episode unpacks the latest drama at the intersection of artificial intelligence and geopolitics: OpenAI’s controversial Pentagon deal, the sudden surge of prediction markets during the US-Iran war, and the bizarre explosion of AI-generated “slop” in kids’ YouTube content. The hosts dive deep — with humor, skepticism, and lived experience as tech journalists and parents — into the ripple effects of each issue, their ethical and practical implications, and what they reveal about the present and future of tech, society, and power.
Contents
- OpenAI’s Pentagon Controversy — “Fog of War”
- Prediction Markets and War Profiteering
- AI-Generated Kids Content: The “Slop” Review
- Interview: Arietta Laika on AI Slop for Children
OpenAI, Pentagon, and the “Fog of War” (00:34–25:48)
Main Storyline
- OpenAI, Anthropic, and the Pentagon have entered a public, high-stakes battle over military artificial intelligence contracts and the ethical boundaries of AI deployment during wartime.
- OpenAI’s rushed and secretive Pentagon deal triggered a major public and internal backlash — especially over potential surveillance and weapons use.
- Internal dissension, public trust issues, and the historical echoes of nationalization and tech-worker activism are central themes.
Key Points & Insights
- Disclosure and Context (03:37):
- Kevin: The New York Times is suing OpenAI (and others) for alleged copyright violations.
- Casey: His fiancé works at Anthropic.
- The OpenAI-Pentagon Deal (03:39–06:22):
- Sam Altman announced a Pentagon partnership, with supposed “red lines” around domestic mass surveillance and autonomous weapon systems—similar to conditions that made Anthropic’s deal collapse.
- Huge backlash ensued, with users canceling ChatGPT subscriptions, many calling the company’s legal assurances “window dressing.”
- OpenAI did not release the full contract, only “the relevant portion,” further fueling distrust.
- “Until we see the entire contract, it’s just very difficult for us to take at face value the idea that this is the only relevant language.” (Casey, 05:46)
- Damage Control and the Language of Limits (06:22–08:56):
- Altman admitted to acting “slopportunistically” and amended the Pentagon deal to rule out use for domestic surveillance of US persons.
- Skepticism persists over legalistic loopholes: “There is always a high risk here for what I would call Jedi mind tricks... whether or not you personally are surveilled will come down to semantics.” (Casey, 06:58)
- Employee unrest spilled onto X (formerly Twitter). Senior OpenAI VP Max Schwarzer left for Anthropic, suggesting values-based discontent.
- CEOs vs. Employees, Tech Power Shifts (10:12–12:03):
- Discussion of a rekindled sense of tech-worker leverage among highly skilled AI employees, akin to the Google walkout of 2018.
- OpenAI needs its “original core,” the idealistic technical experts, to keep advancing its models.
- “If you’re going to build a GPT-6 and 7... those are the people you’re going to need. And so, yes, almost everything... is aimed at those people.” (Casey, 11:58)
- Anthropic’s Paradox: Crisis and Boom (12:08–14:49):
- Anthropic is simultaneously battling Pentagon pressure and basking in unprecedented revenue growth (from $1B annualized in early 2025 to $20B projected in early 2026).
- The State Department switched from Anthropic’s models to outdated OpenAI models under a Trump administration order, despite the administration having no clear legal authority to ban software in this way.
- Nationalization Anxiety (17:50–25:48):
- Ongoing social media debates about whether the US government will (softly or overtly) nationalize AI labs, given their increasing strategic value.
- “Eventually ... the US government will step in and say ... ‘We run these now.’” (Kevin, 17:50)
- Parallels drawn to the Manhattan Project and WWII, with the twist that today’s AI models have been developed by private, not public, actors.
- Growing “soft nationalization”: subtle government pressures to shape AI to official needs, rather than an explicit governmental takeover.
- “I think it’s going to be ... soft nationalization ... a little pressure to build your models differently ... removing some safeguards ... more likely direction.” (Kevin, 25:17)
Notable Quotes
- “Most Americans just don’t like AI very much. They didn’t like it for all the normal reasons—my media feed is filling up with slop, my manager is telling me I have to use it or I’ll get fired. Add: the government may use it to spy on you or kill you with a murder bot. Of course, Americans are going to say, well, this fricking sucks.” (Casey, 09:05)
- “To me, this seems like your dream come true…what does any man want more than his wife taking an interest in his hobby?” (Casey, 01:25, riffing on Kevin’s story of his wife discovering coding with Claude AI)
Prediction Markets and War Profiteering (27:52–42:42)
Main Storyline
- The US-Iran war has propelled prediction markets—such as Polymarket and Kalshi—into the mainstream, enabling bets on the outcomes of world-changing events, including military strikes and political assassinations.
- This raises profound new ethical, regulatory, and societal concerns about war profiteering, incentives for violence, and the role of inside information.
Key Points & Insights
- Rise of War-Linked Prediction Markets (28:16–29:09):
- Mainstreaming: “You cannot walk down a street…without seeing…ads for prediction markets like Kalshi and Polymarket.” (Kevin, 28:57)
- The more tightly regulated Kalshi avoided direct bets on war or assassinations but allowed surrogate wagers like “Ali Khamenei out as supreme leader.”
- When Kalshi voided that market and reimbursed all bettors, both would-be winners and losers were angry.
- Polymarket and Less-Regulated Betting (30:35–31:13):
- Offshore/crypto-based Polymarket enabled betting on exact dates of strikes and more, only drawing a line at nuclear detonation bets.
- Insider Trading & National Security (31:43–35:26):
- Lawmakers and the public express alarm: Sen. Chris Murphy called it “insane this is legal.”
- Real-world harms: Israel arrested people for betting on war operations with classified info; the Times reports suspicious, well-timed trades before US actions in Iran (33:22).
- The CFTC (Commodity Futures Trading Commission) is tasked with regulating these markets, but its enforcement capacity is limited.
- Are Prediction Markets Useful? (36:08–37:34):
- Hosts are skeptical that such markets deliver useful “wisdom of crowds” or improve public knowledge—usually it’s mostly speculation and insider gambling.
- “What we have is a bunch of vibes plus some insider trading.” (Casey, 37:17)
- Before one US strike, the markets were actually wrong, pricing the event as a low likelihood (see the calibration sketch after this list).
- Ethical Quandaries (38:07–40:18):
- The key danger is the directness of the incentive: a bet that pays out on an individual’s death or a specific act of violence could itself encourage that outcome.
- Assassination markets could function as bounties, incentivizing murder.
- Regulatory Futures and Political Entrenchment (40:18–42:28):
- Unlikely that the Trump administration will crack down—some prediction market platforms are closely tied to the administration.
- If left unchecked, these platforms could become as influential as the crypto lobby.
- Kevin proposes old-school regulation: “If you want to bet on war, you should have to go to a seedy, OTB betting place…put in some effort.” (42:03)
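The “wisdom of crowds” debate above is, at bottom, a calibration question: a yes-share trading at $0.12 is conventionally read as a 12% implied probability, and a forecast scoring rule measures how far those probabilities landed from what actually happened. A minimal sketch in Python, using invented prices rather than actual Kalshi or Polymarket data:

```python
# Illustrative only: scoring prediction-market prices as probability
# forecasts. All prices and outcomes below are invented, not real data.

def brier_score(forecasts):
    """Mean squared error between predicted probability and the 0/1 outcome.

    0.0 is perfect; always guessing 50% scores 0.25; lower is better.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A yes-share priced at $0.12 implies the market thinks the event is ~12%
# likely. If the event then happens (outcome = 1), the market scores badly.
hypothetical = [
    (0.12, 1),  # strike priced as unlikely; it happened (the "wrong" case)
    (0.80, 1),  # a well-calibrated call
    (0.30, 0),  # priced as unlikely; it didn't happen
]
print(f"Brier score: {brier_score(hypothetical):.3f}")  # 0.301 here
```

On this measure a market can be badly wrong on a single event yet still well calibrated over many events, which is why the one-strike example the hosts cite hints at, but doesn’t settle, the usefulness question.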
Notable Quotes
- “Imagine the worst thing you could do on our platform. You can do that.” (Casey on Polymarket, 30:49)
- “If you are one of the traders who did not get your expected winnings from the death of the Ayatollah, I just want to say I don’t care and it doesn’t matter.” (Casey, 30:23)
- “This is not theoretical at all, Kevin…last Friday, more than 150 accounts placed hundreds of bets…correctly predicting there would be an American airstrike on Iran by Saturday.” (Casey, 32:53)
- “The assassination prediction market is…out of bounds…could actually create a bounty.” (Kevin, 39:02)
AI-Generated Kids Content: The “Slop” Review (44:41–53:17)
Main Storyline
- A new wave of AI-generated children’s content—dubbed “slop”—is flooding YouTube and YouTube Kids, targeting toddlers with surreal, overstimulating, and sometimes disturbing short videos.
- The Hard Fork hosts review several of these videos live on air, discuss their effects and the motivations behind them, and debate whether they’re merely innocuous weirdness or something more insidious.
Key Slop Moments
- AI-Generated Alphabet Animals (45:34–46:49)
- “Why are ducks coming out of toothpaste tubes?” (Kevin, 46:09)
- “If you’re three years old, you need to know this: that is not how a duck is made.” (Casey, 46:19)
- Color Injection Animal Videos (47:15–48:41)
- “The key theme in Slop for Kids…is the transformation of animals.” (Casey, 49:10)
- “Whoever is making the slop knows needles are scary to children…an engagement hack.” (Casey, 48:18)
- Animals as Armored Vehicles, Fruit Bed Lullabies (49:10–52:21)
- “This is just another one that feels a little more surreal than I am comfortable with…it seems designed to confuse children more than to educate.” (Casey, 52:30)
- “Kids like lots of things that are bad for them. That’s why we don’t let them use cocaine.” (Casey, 53:02)
- Larger Observations
- “Why are people doing this? We already had a lot of videos teaching kids the alphabet.” (Kevin, 50:10)
- “At least when you’re watching a normal cartoon, there could be moments of relative calm…this just doesn’t seem like it’s probably that good for them.” (Casey, 50:27)
- “We’ve crossed the Rubicon a while ago. The big difference now, it’s just easier and cheaper to create this stuff.” (Kevin, 51:32)
Interview: Arietta Laika on AI Slop for Children (53:29–67:41)
Main Storyline
- Arietta Laika of the New York Times explains her recent investigation into AI-generated children’s videos on YouTube and YouTube Kids—how prevalent they are, how difficult they are to filter, and what risks or dysfunctions they might introduce for young viewers.
Key Points & Insights
- Discovery & Methodology (53:33–55:27)
- Arietta analyzed what kinds of videos are algorithmically recommended after clicking on high-quality staples (like Ms. Rachel, Bluey, Cocomelon).
- Used a custom-coded tool to scroll through recommendations without influencing the algorithm, then analyzed videos frame by frame for AI tells (see the sketch after this list).
- Prevalence & Detection (55:37–56:25)
- In one 15-minute session, >40% of recommended videos were AI-generated.
- Telltale signs: morphing objects, distorted text, seamless mixing of real & fantastical elements.
- “Some channels had made very low-quality cartoons for years—now, very suddenly, they were uploading much higher quality, endlessly produced video…from off-the-shelf AI.” (Paraphrased, 56:09)
- YouTube’s Response and Responsibility (57:16–58:25)
- Under YouTube policy, only “realistic-looking synthetic media” must be labeled, a bar most AI-generated kids’ cartoons don’t meet, so they go unlabeled.
- No comments allowed on YouTube Kids, so parents can’t crowdsource warnings.
- Impact on Kids (59:45–60:41)
- “When videos contain all these bedazzling elements, kids can’t learn as well…missing narrative arcs, familiar characters, comprehensible phrases.”
- Short-form, non-narrative videos overload and confuse young viewers—potentially stymieing healthy development.
- Elsagate Redux, Ongoing Risk (63:11–65:00)
- Arietta draws parallels to previous YouTube scandals (“Elsagate”)—bizarre, disturbing, sometimes violent re-uses of kids’ IP—now much easier with AI.
- Parental Dilemmas and Platform Gaps (65:52–67:30)
- There’s currently no practical way for parents to filter out AI slop besides avoiding the platform or constant vigilance.
- YouTube is adding some controls (e.g. time limits for Shorts), but no comprehensive fix is available.
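The scroll-without-clicking methodology described above can be approximated with a headless browser: a fresh browser context carries no cookies or watch history, so the recommendations reflect the algorithm’s defaults rather than a personalized profile. A rough sketch assuming Playwright for Python; the seed URL and CSS selector are illustrative guesses, not the Times’ actual tooling:

```python
# Sketch of a scroll-and-collect tool like the one described: gather
# recommended video links from a clean browser session so the session
# itself doesn't train the algorithm. Assumes Playwright is installed
# (pip install playwright && playwright install chromium).
from playwright.sync_api import sync_playwright

SEED_URL = "https://www.youtube.com/results?search_query=ms+rachel"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()  # fresh context: no cookies, no history
    page = context.new_page()
    page.goto(SEED_URL)

    # Scroll a few screens to surface more results without clicking anything.
    for _ in range(5):
        page.mouse.wheel(0, 2000)
        page.wait_for_timeout(1500)  # let lazy-loaded results render

    # Collect video links for later frame-by-frame review (e.g., sampling
    # stills with ffmpeg and checking for tells like morphing objects or
    # distorted text). "a#video-title" is an illustrative selector guess.
    links = {
        a.get_attribute("href")
        for a in page.query_selector_all("a#video-title")
        if a.get_attribute("href")
    }
    for href in sorted(links):
        print("https://www.youtube.com" + href)

    browser.close()
```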
Notable Quotes
- “At least before, you had to have some animation skills…now, anyone can do this in a few minutes. We’re going to be seeing a lot more.” (Arietta, 64:21)
- “All of these recommendation algorithms find the video that is the closest to going over the line and find that it gets more engagement.” (Casey, 65:33)
- “We are creating a lifelong pipeline of whatever slop is going to be most engaging to you.” (Kevin, 66:29)
Overall Tone & Takeaways
- The episode balances skepticism, dry humor, and genuine concern as new AI and tech layers slam into each other: private and public power, regulation and resistance, and the rapid “sloppification” of online life.
- The recurring question: When do these disruptions go from merely weird and unsettling to deeply consequential for democracy, childhood, and the future of technology?
- The answer: we’re already living through the early chapters.
Timestamps for Key Segments
- 00:34 – OpenAI/Pentagon controversy begins
- 06:22 – OpenAI's damage control, internal unrest
- 12:08 – State of Anthropic: crisis + boom
- 17:50 – Nationalization anxieties in AI
- 27:52 – Prediction markets and war
- 31:43 – Insider trading and ethical dilemmas
- 44:41 – Hard Fork Review of Slop (AI kids content)
- 53:29 – Interview: Arietta Laika on AI slop for children
Memorable Moments
- “Slopportunistic” — OpenAI’s rushed and clumsy crisis communications.
- “...my wife has fallen in love with an AI.” (Kevin, 00:36) — Tech’s infiltration of everyday life, both comic and cosmic.
- “If you’re three years old…that is not how a duck is made.” (Casey, 46:19)
- “Kids like lots of things that are bad for them. That’s why we don’t let them use cocaine.” (Casey, 53:02)
For those who missed the show, this episode is an in-depth, highly topical exploration of how AI’s ascent is colliding with power politics, public trust, and the most primal battleground of all: the attention of children.
