Intelligent Machines Podcast – Episode 836 Summary
Episode Title: I See OJ and He Looks Scared – Modern Oracles or Modern BS?
Date: September 11, 2025
Host: Leo Laporte (with Jeff Jarvis, Paris Martineau)
Guests: Carl Bergstrom and Jevin West (University of Washington)
Special Contributor: Harper Reed
Overview: Modern Oracles or Modern BS?
This episode tackles the challenge of critical thinking in an AI-saturated world, spotlighting a new online curriculum (the "BS Machines" website) by Carl Bergstrom and Jevin West. The hosts and guests examine the dual nature of generative AI as both "modern oracle" and prolific source of misinformation (or, as they bluntly put it, "BS"). The discussion is rooted in practical approaches for students, citizens, and educators to responsibly navigate AI technologies.
Key Discussion Points & Insights
1. The New Curriculum: Teaching AI Literacy
- Background: Bergstrom & West previously authored “Calling BS: The Art of Skepticism in a Data-Driven World.” Their new curriculum (thebsmachines.com) updates that work for the era of large language models (LLMs).
- Carl Bergstrom: “We’ve talked with hundreds of our students and with administrators... and developed the course that we think every college freshman should take.” [07:05]
- Dual Nature of AI: The approach balances the value and risk of AI tools. Rather than lecturing students to avoid AI, it encourages them to explore its uses and limitations.
- Carl Bergstrom: “It's very literally a BS machine ... but at the same time it's also tremendously useful. We use it every day...” [08:45]
- Goal: Foster a nuanced, critical understanding, helping students recognize when AI is helpful, when it hallucinates, and when it flat-out BSs.
2. Critical Thinking, Agency, and the Student Experience
- Central Concern: Students are anxious about “handing off agency” to machines—letting AI do their thinking or writing for them.
- Bergstrom: “They're already feeling a lot of anxiety ... about handing off that agency ... they seem very ready to hear this.” [12:20]
- Human-Specific Skills: Differentiating human work from what can be automated is now a priority skill for employability.
- Jevin West: “Figure out ways to distinguish your abilities from these LLM abilities...” [13:07]
- Anthropoglossic AIs: LLMs “talk like humans” (anthropoglossic) rather than “look like humans” (anthropomorphic) – causing confusion and misplaced trust.
- Bergstrom: “[They] are anthropoglossic... designed to seem like you’re talking to a human. And they're really, really good at it.” [13:55]
3. LLMs, Disinformation, and Societal Impact
- Amplifying Disinformation: New AI tools risk deepening the “post-fact world.”
- Jevin West: “These technologies ... are going to make this disinformation, misinformation problem ... even worse.” [10:30]
- Sycophantic Personality: Current AIs are trained to please the user and avoid disagreement (via reinforcement learning from human feedback), rarely critiquing the user’s logic.
- Bergstrom: “It always praises me for asking it stupid questions… that’s problematic.” [19:34]
- Gaslighting & Psychosis: AI reinforcement of user misconceptions can lead to “AI-induced psychosis… the machine keeps telling you, ‘you’re right.’” [20:14]
4. Practical Curriculum Features
- BS Machines Website: Designed as a modular, accessible resource; already used by many universities and high schools.
- Lessons stress the non-conscious, non-sentient nature of LLMs: e.g., “LLMs aren’t conscious, they don’t have a theory of mind, they don’t want you to fall in love with them.” [36:44]
- Resource-Driven Dialogue: Encourages students to interact with AI models as fallible partners—debate, double-check, and look for sources.
- Bergstrom: “It’s this conversation where they dive deeper and deeper… it’s very, very effective when they do that.” [27:39]
- Information Retrieval: New models of information searching—learning to “argue” with the AI, not just accept its first (often erroneous) answer.
5. Limits and Societal Consequences
- Problems with Accountability: Unlike humans, LLMs have no accountability, yet their outputs can have major consequences (e.g., when acting as “therapists” or advice-givers).
- Bergstrom: “When humans do things, they’re accountable and these AIs aren’t...” [23:17]
- Threat to Democracy: Difficulty distinguishing between genuine citizen engagement and mass AI-generated content threatens democratic processes.
- Bergstrom: “LLMs are a tremendous threat to democracy…” [42:17]
- Call to Action: Open, cross-generational conversation & critical thinking are essential to "defend our democracy" and responsibly use AI.
Notable Quotes & Memorable Moments
- [08:45] Carl Bergstrom: “It’s very literally a BS machine... but at the same time it’s also tremendously useful. That’s the real mystery. How can this simultaneously be literally a BS machine and also very, very useful?”
- [13:07] Jevin West: “AI is here to stay. So you better spend your time ... figuring out ways to distinguish your abilities from these LLM abilities.”
- [13:55] Carl Bergstrom: “They’re not anthropomorphic... They’re anthropoglossic; they’re designed to seem like you’re talking to a human. And they’re really, really good at it.”
- [20:14] Carl Bergstrom: “This is a serious danger of these machines ... people go down these rabbit holes and the machine keeps telling you, you’re right.”
- [36:44] Leo Laporte (on Lessons): “LLMs aren’t conscious ... they don’t want you to fall in love with them. They don’t seek to avoid the experience of pain. These are really valuable lessons.”
- [42:17] Bergstrom: “LLMs are a tremendous threat to democracy in a lot of ways... for the first time ever, if something writes ... that sounds like a human, it’s not necessarily a human.”
Timestamps for Key Segments
- 00:00: Show opening & guest introductions
- 07:05: How the new curriculum works; why every student needs this course
- 08:45: Dual nature of AI as BS machine and indispensable tool
- 12:20: The problem of “handing off agency” to AI as writer/thinker
- 13:55: “Anthropoglossic” AI and emotional response to LLMs
- 19:34: Why AI chatbots are sycophantic; reinforcement learning with human feedback
- 20:14: Dangers of “AI-induced psychosis” and affirming user fallacies
- 27:39: Students learning to interrogate and argue with AI for deeper information retrieval
- 36:44: Teaching LLM limitations — they aren’t conscious or emotional
- 42:17: Discussion of how AI threatens democracy
- 44:05: Curriculum adoption: universities, high schools, parents
Episode Highlights (Supplemental)
- Practical Example: Students are shown how to use AI to generate code (e.g., Space Invaders with “aliens that don’t shoot back”) but also witness its capacity to crash Mathematica, illustrating the tool’s dual nature.
- Terminology: The term “BS machine” derives from Harry Frankfurt’s philosophical definition of BS—not merely an insult, but a lens for understanding speech produced “without regard to truth.”
- Design & Accessibility: Website presented as visually engaging, modular, and friendly to educators and parents seeking to have tough conversations about AI.
Tone & Language
The episode is witty, candid, and academically rigorous, blending skepticism with enthusiasm and a sense of urgency about AI’s societal effects. The language is colloquial but precise, reflective of respected voices in both journalism and academia.
Further Resources
- The BS Machines website: thebsmachines.com – For educators, students, and anyone wanting to improve AI literacy.
- Book: "Calling BS: The Art of Skepticism in a Data-Driven World" by Carl Bergstrom & Jevin West
For listeners (or readers) who want to grasp the real promise and peril of AI—beyond hype or panic—this episode is a practical, hopeful, and clear-eyed guide. The curriculum and the conversation show that the real “intelligent machines” of the future may well be the critically minded humans who learn how to question, debate, and not just consume AI.