Front Burner Podcast Summary
Episode Title: Is AI making you dumb?
Host: Jayme Poisson
Guest: Alex Panetta (former CBC correspondent, current AI student and writer)
Date: April 1, 2026
Episode Overview
In this episode, Jayme Poisson invites Alex Panetta, a longtime CBC Washington correspondent turned AI researcher, to reflect on how artificial intelligence is reshaping the way we think and learn. Through personal anecdotes, current research, and philosophical musings, they explore the central question: Is AI making us dumber? The conversation covers practical uses of AI, its cognitive impacts, and emerging academic concepts such as cognitive debt, cognitive surrender, and epistemic debt. The discussion is thoughtful, candid, and grounded in both lived experience and recent studies.
Key Points & Insights
1. Alex’s Personal Use of AI
[02:05-07:57]
- Alex has built a “living library” by collecting all textual data from his studies and personal life, feeding it into his own local AI model:
- "I want to preserve every possible word that I hear, see or watch during this master's program...and build it into my own local LLM so that...it's like a living library." (Alex, 02:05)
- He automated daily research briefings using AI, dramatically improving his research efficiency over the manual process he relied on as a journalist (a hypothetical sketch of such a pipeline appears at the end of this section).
- Alex also created:
- Learning games for his daughter, personalized through generative AI.
- A live dashboard mining 140 years of stock market data.
- These were “absolutely not” possible for him before AI tools became accessible to non-coders.
“Now you can rip through like 100 times that data…summarize it through another AI filter…email it to yourself as a daily tip service.”
(Alex, 04:38)
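Alex doesn't walk through his actual setup on air, but the workflow he describes (collect source texts, run them through an AI summarizer, email yourself the digest) can be pictured as a small pipeline. The Python sketch below is purely illustrative and assumes a folder of collected text files, a stand-in summarizer, and a local mail relay; the names summarize_with_llm, research_feed, and the email addresses are hypothetical, not from the episode.

```python
# Hypothetical sketch of a "daily tip service": gather collected texts,
# push them through an AI summarizer, and email yourself the digest.
from pathlib import Path
import smtplib
from email.message import EmailMessage

SOURCES_DIR = Path("research_feed")   # hypothetical folder of saved articles and notes
MAILBOX = "you@example.com"           # hypothetical sender/recipient address

def summarize_with_llm(text: str) -> str:
    """Stand-in for a call to whatever local LLM or hosted API you actually use."""
    return text[:500]  # placeholder: truncates instead of truly summarizing

def build_daily_briefing() -> str:
    # One summary per source file, stitched into a single digest.
    summaries = [
        f"{doc.name}\n{summarize_with_llm(doc.read_text())}"
        for doc in sorted(SOURCES_DIR.glob("*.txt"))
    ]
    return "\n\n".join(summaries) or "No new material today."

def email_briefing(body: str) -> None:
    # Plain-text email sent through a local mail relay.
    msg = EmailMessage()
    msg["Subject"] = "Daily research briefing"
    msg["From"] = MAILBOX
    msg["To"] = MAILBOX
    msg.set_content(body)
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)

if __name__ == "__main__":
    email_briefing(build_daily_briefing())
```

The specifics matter less than the shape of the workflow Alex describes: many sources in, one AI-filtered digest out, delivered on a schedule.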
2. Limitations & Ethical Considerations in Using AI
[07:57-09:18]
- Alex never uses AI to write content he intends to publish, to avoid plagiarism and ethical pitfalls.
- He uses AI “like Google”—for summarizing things he wouldn’t read anyway, but never as a full replacement for his own work.
“If you publish stuff that you just pulled out of an AI, you’ll get busted for plagiarism eventually.”
(Alex, 08:06)
3. Is AI Making Us Lazy or Dumber?
[09:18-11:57]
- Jayme articulates concern that overreliance on AI impairs critical thinking: "It might make your mind lazy and hollowed out, essentially."
- Alex agrees, citing "flattening effects": AI can be confidently wrong, and people may lose the ability to tell genuinely well-grounded answers from ones that merely sound good.
“Think of it [AI] as this super confident tour guide who’s read a bunch of stuff about a city but has never been to this city.”
(Alex, 11:32)
Research & New Concepts
4. Cognitive Debt
[13:31-15:00]
- MIT’s “Your Brain on ChatGPT” study:
- Users who wrote essays with AI assistance had 55% lower brain signal flow.
- When tested later, especially once the AI tools were taken away, they struggled to recall or quote their own essays; their failure rate was 7x higher than that of those who wrote unaided.
“The results were like, devastating for the AI users… 55% lower signal flow in their brain.”
(Alex, 13:45)
5. Cognitive Surrender
[15:00-16:31]
- Wharton study tested 1,300+ people on 10,000 tasks with varying AI accuracy:
- Users trusted AI output blindly; performance rose or fell depending solely on the AI’s quality, not the user’s judgment.
“You’re seeing a pattern…that proves that people are surrendering their ability to make their own decisions to a machine.”
(Alex, 15:59)
6. Epistemic Debt
[17:02-18:34]
- Study on AI-assisted coding:
- Participants forced to answer questions (“friction points”) during the process learned more.
- Those who used AI without active engagement failed tasks at nearly double the rate afterward (77% vs. 39%).
- Early “cognitive checkpoints” yield lasting understanding.
“The people who faced the friction earlier…did way better…double the failure rate when you’re forced to stand on your own two feet if you’ve only used AI.”
(Alex, 17:57)
Practical Strategies & Reflections
7. Mitigating the Risks: Personal Frameworks & Guardrails
[21:19-22:46]
- Alex’s mitigation strategy:
- He decides when to use AI based on a matrix:
- Vertical axis: Do I need to “own” this information?
- Horizontal axis: Do I have the time to acquire it independently?
- If the answer to both is “yes,” he shuns AI for that task; otherwise, he uses it selectively (a toy version of this decision rule is sketched at the end of this section).
- He stresses the need for deliberate “friction points”: class discussions, oral exams, or checkpoints to ensure genuine mastery.
“Just by adding these little guardrails early on, these friction points…could be incredibly useful.”
(Alex, 18:40)
- Jayme and Alex acknowledge that most users—especially youth—may not have the skillset or institutional support to implement such mindful use.
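The matrix is simple enough to state as a two-input rule. The snippet below is a purely illustrative reduction; the function name and the example calls are hypothetical, and nothing in it comes from the episode beyond the two questions themselves.

```python
def should_use_ai(need_to_own: bool, have_time: bool) -> bool:
    """Alex's rule of thumb, reduced to code: skip AI only when the
    knowledge must genuinely be yours AND you have the time to acquire
    it on your own; otherwise, use AI selectively."""
    return not (need_to_own and have_time)

# Hypothetical examples:
print(should_use_ai(need_to_own=True, have_time=True))    # False: learn it yourself
print(should_use_ai(need_to_own=False, have_time=True))   # True: background material you only need to skim
print(should_use_ai(need_to_own=True, have_time=False))   # True: but add friction points later to confirm mastery
```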
8. The Slippery Slope & Dangers for Young People
[23:38-24:20]
- They discuss the risk of rationalizing away active engagement: "It would be so easy...to offload this because I don't have the time to do it."
- Young people are especially vulnerable without training in critical thinking or information vetting.
9. Moral & Environmental Dimensions
[25:31-25:50]
- Both Jayme and Alex briefly remark that the energy use and carbon footprint of AI are a further reason to pause before reaching for it in marginal cases.
History & Perspective
10. Is AI Historically Different?
[26:09-28:00]
- Technological skepticism is not new, but Alex acknowledges that this is of a different magnitude:
- Calculators hurt arithmetic; GPS hurt navigation; print hurt oral memory.
- AI accelerates and amplifies such effects.
- Yet, unpredictability abounds: the anecdote about Edison and Bell inventing technologies for the “wrong” purposes is used to highlight how little we can truly foresee about social consequences.
“There is zero chance that you and I know exactly how this story plays out. We may have hunches, but history is full of technological surprises.”
(Alex, 27:37)
Quotes & Memorable Moments
- On the sycophantic behavior of AI:
"We were the authors of our own sycophancy…because we…like people that compliment us and…the AI has been trained on that preference." (Alex, 10:36) - On AI’s double-edged sword:
"It's going to do a lot of good things…and a lot of bad things, including things we haven't mentioned today." (Alex, 22:54)
Notable Timestamps
- [02:05] – Alex describes his "living library"
- [04:38] – The shift from manual reporting to AI-powered research
- [06:45] – Personalized learning games for his daughter
- [13:31] – Discussion of MIT “Your Brain on ChatGPT” study
- [15:00] – Cognitive surrender and the Wharton study
- [17:02] – Epistemic debt and the coding study
- [21:19] – Alex's personal framework for using AI
- [26:09] – Is AI different from past technologies?
Conclusion
The episode delivers a nuanced take on the cognitive trade-offs of AI: while the technology offers meaningful, democratizing advances—particularly for productivity and personalized education—unchecked use may erode critical thinking, expertise, and even the pleasure of deep engagement with knowledge. The key, both agree, is developing mindful habits, institutional guardrails, and critical faculties to avoid the hidden "debts" of cognitive outsourcing. As history warns, only time will reveal AI's true and lasting impact.
