Podcast Summary: You Are Not So Smart — Episode 337: Cognitive Surrender (with Dr. Gideon Nave & Dr. Stephen D. Shaw)
Date: April 13, 2026
Host: David McRaney
Guests: Dr. Gideon Nave & Dr. Stephen D. Shaw
Episode Overview
This episode explores the emerging concept of "cognitive surrender": a phenomenon in which humans trust AI systems, especially large language models (LLMs) such as ChatGPT, so completely that they bypass their own critical thinking. Host David McRaney discusses the idea with Dr. Gideon Nave and Dr. Stephen D. Shaw, who recently published research on how AI is reshaping human reasoning, sometimes leading us to accept AI-generated answers as our own without scrutiny. Their work distinguishes "cognitive offloading" (using tools to aid thinking) from the more problematic "cognitive surrender," and the guests reflect on the historical, societal, and psychological implications of embedding AI ever deeper into human decision-making.
Key Discussion Points & Insights
1. Defining Cognitive Surrender vs. Cognitive Offloading
- Cognitive Offloading: The routine use of external tools (calculators, GPS, notebooks) to make cognitive processing easier, while retaining agency and critical engagement.
- Cognitive Surrender: A much deeper process where users not only accept the output of an AI uncritically, but also internalize it as their own, substituting machine output for personal judgment—even when the AI is wrong.
- "Dwight was doing what we would say is offloading... Michael, on the other hand, is completely giving up his critical thinking and just following blindly what the GPS tells him to do. And this is leading to this catastrophical error." — Dr. Gideon Nave (03:02)
- Double Errors in Surrender:
- Adopting AI outputs without verification.
- Treating the AI's outputs as one's own ideas or insights.
- "One, adoption of AI outputs without verification and two, treating AI's outputs as one's own." — Dr. Stephen D. Shaw (11:52)
2. Why Do We Surrender Cognitively to AI?
- AIs use language and social cues that trick our brains into extending trust and agency.
- AI systems often provide positive feedback by design (e.g., “Great question! You’re very smart!”). This taps into deep-seated human responses to praise and authority.
- "Even though you very likely understand that [the AI] can't actually feel that way about you... those words can still be quite powerful." — David McRaney (14:29)
- This phenomenon is compared to animal “supernormal releasers”—stimuli so exaggerated they hijack instincts (e.g., beavers building dams in houses, birds preferring bigger fake eggs).
- Agentic Pareidolia: Humans are primed to see agency and intention in things that provide just enough cues (like LLMs, Furbies, etc.), leading to misplaced trust.
3. Research on Cognitive Surrender
- Nave and Shaw’s study used the Cognitive Reflection Test (CRT), a set of logic puzzles with intuitive (and often wrong) answers.
- In their experiment, participants could consult ChatGPT, which, unknown to them, had been manipulated to give correct or incorrect answers half the time.
- "People adopt those [AI] answers once they've gone to the chat... we see accuracy increase quite a lot when AI is giving them correct information and decrease quite a lot when it's giving them incorrect information. And that is cognitive surrender right there." — Dr. Stephen D. Shaw (31:49)
- Main findings:
- When participants used ChatGPT, their critical, deliberative (System 2) thinking dropped and they were more likely to simply accept whatever response was given.
- This misplaced trust lowered overall accuracy when the AI was wrong; participants ended up below their own unaided baseline because they had surrendered their agency.
- "The key pattern of cognitive surrender is that it goes below baseline, so that there is a chance here that if the AI is wrong, you're going to be actually worse off..." — Dr. Gideon Nave (31:59)
- Users often could not distinguish their own insights from AI-generated ones—they believed the ideas were their own.
- "It's believing that it was your idea all along. That's the part that freaks me out the most..." — David McRaney (41:31)
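The below-baseline pattern the guests describe can be illustrated with a back-of-the-envelope model: an "offloader" adopts the AI's answer only some of the time, while a "surrenderer" adopts it almost always. This is a minimal sketch with made-up numbers for illustration; the adoption rates and the 60% baseline are assumptions, not figures from the paper.

```python
# Illustrative model of cognitive surrender vs. offloading.
# All numeric values are assumptions for illustration, not data from the study.

def expected_accuracy(p_baseline, p_ai_correct, p_adopt):
    """Expected accuracy when a user adopts the AI's answer with
    probability p_adopt and otherwise answers at their own baseline rate."""
    return p_adopt * p_ai_correct + (1 - p_adopt) * p_baseline

baseline = 0.60   # assumed unaided accuracy on CRT-style puzzles

# Offloading: consults AI but keeps judgment, adopting its answer 30% of the time.
# Surrender: adopts the AI's answer 95% of the time, right or wrong.
for label, p_adopt in [("offloading", 0.30), ("surrender", 0.95)]:
    acc_if_right = expected_accuracy(baseline, 1.0, p_adopt)  # AI always correct
    acc_if_wrong = expected_accuracy(baseline, 0.0, p_adopt)  # AI always wrong
    print(f"{label}: AI correct -> {acc_if_right:.2f}, AI wrong -> {acc_if_wrong:.2f}")
```

Under these assumed numbers, surrender drives accuracy far below the 0.60 baseline when the AI is wrong (0.03), while offloading stays much closer to baseline (0.42) — the "worse off than without the tool" risk Dr. Nave describes.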
4. Comparing AI to Other Tools
- Unlike offloading to calculators or search engines, AI is always-on, highly authoritative, and domain-agnostic. It often mimics the interactive, affirming style of a human expert.
- "With these LLMs, they are basically like having experts that are available all the time at the push of a button... authoritative in the responses and domain agnostic, which sort of makes them, I would say, orders of magnitude more integrated and capable of influencing and restructuring our thought patterns." — Dr. Stephen D. Shaw (34:56)
- The language that AIs use (full sentences, encouragement, etc.) creates a sense of social connection and real-time partnership, increasing user trust and surrender.
- "Wikipedia is never also at the same time going like, hey, thanks for going to this article. It's a really good idea for you to be here..." — David McRaney (36:52)
5. Societal and Personal Implications
- Risk to Critical Thinking: If cognitive surrender becomes habitual, users risk losing metacognitive skills, and the muscle of critical thought will atrophy ("de-skilling").
- Rising Confidence, Wrong Answers: Simply having access to AI raised users' confidence in their answers by about 10%, even though the AI's responses were incorrect half the time.
- "We see that when people are engaging in cognitive surrender or when they have access to AI, their confidence goes up by about 10% is what we saw in the first study of the paper... and despite the fact that 50% of the time it was giving them incorrect information, they were more confident in their answers." — Dr. Stephen D. Shaw (43:13)
- Responsibility and Agency: When people internalize AI output, attribution and responsibility become muddied, especially in high-stakes contexts (law, medicine, education).
- De-skilling & the Future of Work and Learning: If students surrender to AI while learning, they may never develop core skills ("cognitive capabilities") in the first place—raising alarms for society at large.
- "...if people are, or our youth are engaging in cognitive surrender during the learning process, they may never acquire those cognitive capabilities or skills in the first place at all." — Dr. Stephen D. Shaw (46:36)
6. Is a New AI Literacy Coming? What Can Be Done?
- Optimism vs. Pessimism:
- Some skills or creative processes may adapt through new forms of literacy, but companies and technology always outpace academia, policy, and public understanding.
- "I think for start, we know that the industry always goes faster than anything else than academia than policy. So I'm personally not that optimistic here." — Dr. Gideon Nave (40:11)
- Critical thinking could become rare if short-term incentives favor letting AI do the cognitive lifting.
- "We have created now a world where it is possible to in many ways stop thinking, we are in the risk of stopping to think." — Dr. Gideon Nave (48:21)
- Design Solutions:
- User experience can be shaped to require users to verify output or to take responsibility (e.g., legal penalties for submitting fake citations).
- Friction in UI or prompts that "force" reflection might help.
- Awareness of the phenomenon is the first defense.
- Creativity May Remain Resilient:
- The unique, ever-shifting nature of creativity means AI may make us more generic, but true creativity remains a moving target that AI cannot fully capture.
Notable Quotes & Memorable Moments
- On the core danger:
- "It's a very, very real thing, and a lot of people are not aware of that... a lot of people, you know, maybe don't get a lot of positive feedback even in their lives in general. And so that can be very, very enticing." — Dr. Stephen D. Shaw (38:20)
- On the rise of AI and trust:
- "Trust is a very human mix of cognition, emotion, awareness, conscious subjective processing of perceptual modalities and that sort of thing... And yet, people often find themselves trusting what we are currently calling AIs." — David McRaney (08:18)
- On the implications for education and creativity:
- "If we have a tool that is available to all of us, by definition the use of this tool is making us less creative. We are becoming more generic... But you know, critical thinking may be a muscle that we will stop using..." — Dr. Gideon Nave (49:16)
- On technology outpacing adaptation:
- "Industry always goes faster than anything else..." — Dr. Gideon Nave (40:11)
- On a possible future:
- "I think of instances where we might be in the street having a conversation with someone and you might not know whether they are actually thinking or even putting any effort into replying to you or they're just reading off of a prompt that has already been generated on the inside of their glasses and replying back to you." — Dr. Stephen D. Shaw (53:07)
- On "unthought thoughts":
- "People may become passive consumers of unthought thoughts." (Reference to a statement by Pope Leo, discussed at 58:01)
Key Timestamps
- 01:12 — Introduction & "The Office" GPS example as a metaphor
- 03:44 — Defining cognitive surrender vs. offloading
- 05:17 — Importance of updating critical thinking toolkits for AI
- 08:18 — The psychology of trust in AI
- 11:52 — The two errors of cognitive surrender
- 26:36 — Introduction of a "System 3" (AI) in dual process theory
- 29:44 — Experimental setup (CRT, AI manipulation)
- 31:59 — Findings: accuracy increases or decreases depending on AI correctness
- 34:56 — What makes AI fundamentally different as a cognitive tool
- 36:52 — Language and social mimicry effects
- 43:13 — Confidence boost and dangers in critical contexts
- 46:36 — De-skilling and risk for education
- 48:21 — Advice for users, importance of awareness and habit formation
- 53:07 — Speculations about the future (integration with devices, immersive prompts)
- 58:01 — Quote about "unthought thoughts" from Pope Leo
Takeaways for Listeners
- Awareness is crucial: Recognize when you're surrendering judgment to AI.
- AI is not just a calculator: It leverages trust, authority, and language to influence deeper cognitive patterns.
- Offload strategically, don't surrender: Use AI for support, but keep your critical faculties engaged.
- Be wary of false confidence and the blending of machine output with your own ideas.
- Education and policy need to catch up: Developing AI literacy is vital, but won't happen automatically.
For further information and research:
- Read "Thinking Fast, Slow and Artificial: How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender" by Nave & Shaw.
- Visit youarenotsosmart.com for links and the full episode archive.
