Odd Lots Podcast Summary
Episode: "This Is How to Tell if Writing Was Made by AI"
Date: April 2, 2026
Hosts: Joe Weisenthal & Tracy Alloway
Guest: Max Spiro (Founder & CEO, Pangram Labs)
Episode Overview
This episode dives deep into the question: How can we tell if a piece of writing was generated by AI? With the rise of AI-generated content across the internet, Joe and Tracy welcome Max Spiro, founder of Pangram Labs, a company that builds AI-detection technology. The conversation ranges from detection methodology and AI "tells" to the implications for journalism, trust, and the nature of the internet itself as more of its content is AI-written.
Key Discussion Points & Insights
1. The Challenge of Recognizing AI Writing
- Joe admits that much AI writing is "pretty good"—often indistinguishable from human writing, with "perfect" punctuation ([02:24]).
- Tracy finds that AI-generated text is clear and grammatically strong but lacks style and distinctive voice ([03:38]).
- Both hosts note a developing trend: books and articles disclosing "no AI used" statements, signaling shifting expectations in content authenticity ([04:16]).
2. Why Detecting ‘AI Slop’ Matters
- Max Spiro introduces "AI slop": easily generated, low-effort content that looks legitimate, flooding the internet and diluting quality information ([07:46]).
- The traditional heuristic—good writing signals serious, thoughtful authors—no longer holds when AI can generate perfect prose ([08:21]).
- The information ecosystem's signal-to-noise ratio is threatened by AI slop, harming trust and potentially enabling malicious actors ([08:21]).
3. How Pangram Detects AI Writing
Human vs. AI: The Methodology
- When Pangram started, humans could guess AI vs. human writing with ~90% accuracy; the company aimed to surpass this ([07:10], [09:45]).
- False positives (human labeled as AI): Currently about 1 in 10,000 ([09:45]).
- False negatives (AI labeled as human): Around 1% in non-adversarial cases ([10:26]).
- Technique: The model is trained on millions of human and AI samples—including pairs like “5-star Denny’s review” written by both a person and an AI—so it learns the minute statistical differences between them ([12:53]-[13:23]).
- The differentiators are not always obvious—often they're implicit patterns in word choice and phrase construction that a deep learning model can detect, even if a human can’t articulate them ([12:03]-[13:44]).
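The training idea in the bullets above—learning subtle statistical differences from paired human/AI samples—can be illustrated with a toy sketch. This word-level Naive Bayes classifier stands in for Pangram's actual deep-learning model, which the episode does not detail; the sample data is invented for the sketch.

```python
# Toy illustration (NOT Pangram's model): a classifier fit on *paired*
# samples of the same task written by a human and by an AI, so it picks
# up small differences in word choice. Data below is invented.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical paired "5-star Denny's review" samples, one voice per side.
HUMAN = [
    "great pancakes honestly, we go every sunday, kids love it",
    "cheap, fast, coffee refills forever, what more do you want",
]
AI = [
    "I recently had the pleasure of dining here and the experience was delightful.",
    "Overall, this establishment offers a truly wonderful experience for everyone.",
]

def train(texts):
    counts = Counter()
    for t in texts:
        counts.update(tokens(t))
    return counts, sum(counts.values())

h_counts, h_total = train(HUMAN)
a_counts, a_total = train(AI)
VOCAB = len(set(h_counts) | set(a_counts))

def log_likelihood(text, counts, total):
    # Add-one smoothing so unseen words don't zero out the score.
    return sum(
        math.log((counts[w] + 1) / (total + VOCAB)) for w in tokens(text)
    )

def classify(text):
    h = log_likelihood(text, h_counts, h_total)
    a = log_likelihood(text, a_counts, a_total)
    return "human" if h > a else "ai"

print(classify("the pancakes were great, kids love the coffee"))  # → human
print(classify("overall a truly delightful dining experience"))   # → ai
```

Even with two examples per class, word-choice statistics separate the registers; the episode's point is that at the scale of millions of pairs, a deep network can exploit differences far too subtle for a human to articulate.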
Style & Distribution
- AI writing sits as a "small point" in the distribution of all human writing; no matter how varied the prompting, AI doesn't stray far from its training ([16:03]-[16:46]).
- Humans who "mode collapse"—write in the most average, generic way—can sometimes trigger false positives ([16:03]).
- Models inadvertently learn to distinguish not just AI vs. human, but even different AIs (e.g., ChatGPT, Claude, Qwen) ([17:47]-[18:25]).
Handling AI Editing/Assistance
- Pangram distinguishes between AI-generated and AI-assisted content (e.g., Grammarly or spellcheck) to avoid blanket-flagging any text that has had minor machine editing ([28:16]-[28:55], [12:26]).
- Measures the "distance" between the original and edited texts to categorize the level of AI assistance ([28:59]).
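The "distance" idea above can be sketched as follows. The episode does not specify Pangram's metric; difflib's similarity ratio is used here purely as a stand-in, with made-up thresholds and example texts.

```python
# Sketch of bucketing AI assistance by how far an edited text has moved
# from the original. difflib's ratio() is a stand-in metric; thresholds
# and examples are invented for illustration.
from difflib import SequenceMatcher

def assistance_level(original: str, edited: str) -> str:
    # ratio() is 1.0 for identical strings and near 0.0 for unrelated ones.
    sim = SequenceMatcher(None, original, edited).ratio()
    if sim > 0.9:
        return "lightly edited"   # spellcheck / Grammarly territory
    if sim > 0.5:
        return "AI-assisted"      # substantial machine rewriting
    return "AI-generated"         # little of the original survives

draft = "teh market rallied hard on tuesday after the fed meeting"
light = "the market rallied hard on tuesday after the fed meeting"
heavy = ("Equities advanced sharply on Tuesday following the Federal "
         "Reserve's policy announcement.")

print(assistance_level(draft, light))  # → lightly edited
print(assistance_level(draft, heavy))
```

The design point carries over regardless of the real metric: a fixed typo should land in a different bucket than a wholesale rewrite, so detectors don't blanket-flag texts that merely passed through spellcheck.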
4. Societal & Philosophical Implications
- Intent matters: Using AI to flood discourse is more troubling than AI-assisted journalism, though reputational risks exist for both ([21:27]).
- The provenance of training data is a growing issue: it gets harder to be sure what’s “human” as more of the internet is written by AI ([22:17]).
- Estimate: As of early 2026, up to 40% of new web content is AI-generated (“AI slop”); on Medium it's over 50%, Reddit is now over 10% ([23:08]).
5. The Growing Problem of Content Farming and Manipulation
- Companies now use AI bots to simulate “organic” posts on platforms like Reddit to recommend products—motivated by both direct marketing and influencing AI models’ outputs via training data ([24:23]–[25:31]).
- Affiliate marketing and SEO spam have accelerated the spread of AI-generated slop, making quality information harder to find ([25:54]).
6. Counteracting AI Slop—Norms, Regulation & the Future
- Norms are key: Max advocates that undisclosed AI responses are rude—asking ChatGPT for an answer and passing it off as your own erodes trust ([41:48]).
- Platforms face mixed incentives—Google both promotes AI-generated content (e.g., via Gmail’s suggested replies) and tries to suppress AI slop in search ([43:18]).
- Long-term, an Internet overloaded with bots could force genuine, meaningful communication into "walled gardens"—private, authentic communities ([35:38]).
- Advancements in watermarking and hardware signatures (C2PA) may help certify genuine video/image content ([34:44]).
7. Can Pangram Be Outwitted?
- Attempts to “trick” Pangram by excessively optimizing AI prompts often result in degraded, incoherent text—passing as “human” to detectors but failing as useful communication ([31:44]-[32:31]).
- As AI models add randomness and complexity, detectors must also evolve; Pangram relies on "deep learning," surpassing older burstiness/perplexity metrics that gave high false positives, especially for non-native speakers ([39:32]-[40:44]).
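The older "burstiness" signal mentioned above can be sketched with a crude proxy. Real tools measured variation in per-sentence perplexity under a language model; sentence-length variation is used here only to keep the sketch dependency-free, and the example texts are invented.

```python
# Rough sketch of "burstiness", one of the older detection signals the
# episode says deep-learning detectors have surpassed. Proxy used here:
# variation in sentence length (human prose tends to mix short and long
# sentences; AI text is often more uniform). This is a weak heuristic,
# which is exactly why it produced high false-positive rates.
import re
import statistics

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: spread of sentence lengths relative to
    # their mean. Higher = "burstier", i.e. more human-like under this
    # heuristic.
    return statistics.stdev(lengths) / statistics.mean(lengths)

human_like = ("No. Absolutely not. The whole point of going to Denny's at "
              "2am is that nobody expects anything from you, least of all "
              "good pancakes.")
ai_like = ("The restaurant offers a pleasant atmosphere. The staff members "
           "are friendly and attentive. The menu provides many delicious "
           "options.")

print(burstiness(human_like) > burstiness(ai_like))  # → True
```

A threshold on a single hand-crafted statistic like this is easy to fool and punishes anyone who naturally writes uniform sentences (e.g., many non-native speakers), which is the failure mode the episode contrasts with learned deep-learning detectors.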
Notable Quotes & Memorable Moments
Max Spiro:
- “You would have to just happen to make the same exact decisions that the LLM does hundreds of times.” ([11:23])
- “Our false positive [human flagged as AI] number right now is about 1 in 10,000.” ([09:45])
- “We want to be able to differentiate between AI assisted and AI generated.” ([28:18])
- “About 40% [of internet content is AI-generated slop].” ([23:08])
- “It is rude to send other people undisclosed AI outputs.” ([41:48])
Joe Weisenthal:
- “AI writing... it’s pretty good. When you consider that the majority of the population... doesn’t know where to put a comma within a sentence.” ([02:24])
- “There is this complete severance of sort of, like, craft and output.” ([47:32])
- “We’ve created an unlimited stream of basically cranks with really good grammar.” ([48:42])
Tracy Alloway:
- “What I notice about it is it doesn’t do style very well.” ([03:38])
- “No one’s going to yell at you for using Spellcheck, right? It’s crazy to think that reputational risk is going to hinge on whether you used a chat platform for basic copy editing.” ([06:24])
- “If we assume the world is collectively concerned about AI slop and wants to do something, what would be the single biggest change?” ([41:29])
Important Timestamps
- 01:18 – Start of actual content; hosts discuss “the feeling” of AI-written text
- 04:34 – Disclosure trends (“no AI used” labels on books)
- 06:56 – Introduction of Max Spiro and Pangram Labs
- 07:46 – Why AI slop is a problem: the ease of flooding info channels
- 09:45 – Pangram’s accuracy and false positive rates explained
- 10:55 – What the models are actually “looking for” in detection
- 13:44 – “Delve,” “tapestry,” and obvious AI-output tells
- 16:03–17:15 – Distribution of writing and why humans sometimes trip detectors
- 18:25 – AI detection by model type (e.g., Claude vs. GPT)
- 19:00 – Discussion of impact and false positive experiences
- 21:27 – Debating the role of intent: malice vs. convenience
- 22:17 – Will future detectors run out of genuine human text to train on?
- 23:08 – 40% of internet content is now AI-generated
- 24:23 – How and why Reddit is targeted by AI content farms
- 25:54 – Affiliate links and the changing economy of internet content
- 28:16–28:55 – How Pangram handles AI-assisted vs. fully AI-generated text
- 34:44 – The future of AI detection for images, video, audio (C2PA)
- 35:38 – “Dead Internet Theory” and where genuine conversation may move
- 41:48 – The biggest solution: building new norms against AI slop
- 43:18 – Mixed incentives among major platforms
- 44:08 – Personal anecdotes on forced hand-writing, eCards, and authenticity
- 46:11 – Host reflections on intuition and AI detection
- 47:32 – The changing meaning of good grammar
- 48:42 – The internet: “an unlimited stream of cranks with really good grammar”
Final Thoughts
This episode underscores the message that AI-generated writing is pervasive, often indistinguishable from human work, and poses a challenge to how we evaluate, trust, and navigate information online. Technology like Pangram may provide a stopgap, but the long-term solution—according to the guest—lies in building social norms and expectations about content authenticity.
Quote to remember:
"We’ve created an unlimited stream of basically cranks with really good grammar."
— Joe Weisenthal ([48:42])
(For further content, visit Odd Lots at Bloomberg.)
