StarTalk Radio: "Your Brain on ChatGPT with Nataliya Kosmyna"
Release Date: September 19, 2025
Host: Neil deGrasse Tyson
Guests: Nataliya Kosmyna (MIT Media Lab), Gary O’Reilly (co-host), Chuck Nice (co-host)
Episode Overview
In this Special Edition of StarTalk Radio, Neil deGrasse Tyson, joined by Gary O’Reilly and Chuck Nice, explores the intersection of artificial intelligence and human cognition. The focus is on recent research led by Dr. Nataliya Kosmyna from MIT Media Lab, investigating how large language models (LLMs) like ChatGPT alter our cognitive processes, memory, skill development, learning, and even our sense of ownership when using AI assistants in thinking and writing.
The episode is a deep-dive into the science of AI-assisted thinking, the implications for education and society, and the ethical guardrails urgently needed as AI becomes pervasive in school, work, and mental health contexts.
Key Discussion Points & Insights
1. Introduction: Framing the Debate (01:44–04:39)
- Neil and the co-hosts humorously debate whether AI is making us “smarter or dumber.”
- The team sets out to discuss how AI, especially LLMs like ChatGPT, affects our brains, cognitive skill development, and the future of learning.
Quote: “We now got to think about what AI's effect is on our brain.” — Neil deGrasse Tyson [01:44]
2. Nataliya Kosmyna’s Groundbreaking MIT Study (05:06–17:26)
Study Design & Methodology [06:48–08:45]
- 50 students were assigned to write essays under three conditions: using only ChatGPT, using Google search, or using only their own brains (no tools).
- Brain activity was measured using EEG headsets.
- Tasks covered "high level" themes (e.g., "What is happiness?", "Should you think before you talk?").
- In a fourth session, students swapped conditions to observe longer-term effects.
Key Findings
- Functional Connectivity:
- The “brain only” group had significantly higher brain connectivity—akin to brains “on fire” when forced to generate, structure, and remember ideas without help.
- The Google group had heightened visual cortex activity, indicative of rapid information processing and tab-switching.
- The ChatGPT group displayed the least brain functional connectivity: using ChatGPT made the cognitive task much less demanding.
Quote: “Your brain on the group compared to the two other groups…really your brain on fire, so to say, because you need…push through with your brain.” — Nataliya Kosmyna [09:43]
- Homogeneity of Output:
- ChatGPT users produced much more homogeneous essays, often using identical vocabulary (in the “happiness” essay, the words “career” and “career choice” dominated).
- Google and “brain only” groups chose more varied, personal, or emotionally resonant terms.
- Memory & Ownership:
- 83% of ChatGPT users could not recall or quote anything from their essays immediately after writing.
- Sense of ownership was lower: 15% of ChatGPT users said they felt they did not own what they wrote.
Quote: “If you don't care, you don't remember the output...what ultimately is it for? Why are we here?” — Nataliya Kosmyna [12:49]
- Brain Dependency & Tool Timing:
- Using ChatGPT first lowered brain connectivity even after the tool was taken away, suggesting cognitive “de-training.”
- Conversely, brain-only users who later accessed ChatGPT displayed higher brain connectivity when they switched—implying that when we introduce tools matters.
Quote: “If you make your brain work well and then you gain access to the tools, that could be beneficial.” — Nataliya Kosmyna [16:06]
3. How AI is Changing Cognitive Load, Memory, and Learning (26:11–29:37)
Cognitive Load Theory
- Cognitive load is the amount of mental effort required for a task.
- Optimal challenge level is important: too easy, and you tune out; too hard, and you give up.
- Struggling with information is necessary for learning and memory formation.
- LLMs risk making things too easy, which could reduce recall and deeper learning.
Quote: “You cannot just deliver information on this platter like, ‘here you go’—the brain actually needs struggle.” — Nataliya Kosmyna [27:15]
Real-World Example (Doctors & AI)
- A recent Lancet study showed that after four months of using LLMs, doctors' diagnostic skills (e.g., polyp recognition) declined.
- This raises concern about professional skill atrophy in domains where human expertise mediates critical outcomes.
Quote: “We are suggesting to use a tool that's supposed to augment your understanding. But then...are we taking the skill away from you?” — Nataliya Kosmyna [33:14]
4. Societal and Psychological Risks: Loneliness, Therapy, and Agency (49:20–53:55)
- Emerging risks: LLMs used as companions or therapists can amplify loneliness rather than relieve it.
- AI in therapeutic contexts is under-researched; dangerous cases have already occurred (e.g., LLMs giving inappropriate advice or reinforcing suicidal ideation).
- The metaphor of “users” evokes both software and drugs, implying potentially addictive relationships with LLMs.
Quote: “Who calls people users? Like drug dealers and software developers. That's damn...but it's true.” — Nataliya Kosmyna [53:44]
5. Education in the Age of LLMs: Assessment, Purpose & Policy (56:11–71:03)
How Should Schools Respond?
- Teachers are in distress: thousands have reached out seeking guidance.
- There's a danger in adopting “study modes” or AI partnerships pushed by vendors without robust evaluation or teacher-driven customization.
- Advocacy for using open source LLMs, run locally and transparently, rather than reliance on closed, corporate models.
Rethinking Assessment (63:47–65:45)
- The traditional grading system turns out to reward marks more than authentic learning.
- AI might force a return to oral exams, hands-on demonstration, and more personal, meaningful assessment.
Quote: “Maybe if we change school to ‘What exactly did you learn? Demonstrate for me what you learned’...The grading system kind of has to become less important.” — Chuck Nice [64:37]
6. The Future: Mind-Machine Integration, Ethics, and What It Means to Be Human (68:05–75:50)
Human Uniqueness vs. AI Limitations
- LLMs, by definition, can only recycle and remix prior knowledge; they cannot produce genuine novelty or human creativity.
Quote: “LLMs use pre-existing, already known, already determined information...Whereas we can do new things.” — Neil deGrasse Tyson [66:16]
The “Matrix” Scenario & BCI (Brain-Computer Interface)
- Could we simply upload knowledge like in sci-fi? Real learning involves more than data transfer: it is about building “working knowledge” and applying it.
Quote: “Now I know Kung Fu didn’t mean that you learned it, right? It got uploaded into his brain. It doesn’t mean that he actually learned it.” — Nataliya Kosmyna [68:36]
Guardrails and Ethical Uncertainty
- Policymakers and researchers lack data, so most decisions are reactive.
- There is urgency in creating societal guardrails, especially as AI blends with BCIs, that protect autonomy, critical thinking, cultural diversity, and freedom from manipulation.
Quote: “Do not force on those kids stuff because they cannot consent and say no...because the school forced it on them.” — Nataliya Kosmyna [75:50]
7. Notable Memorable, Funny, or Reflective Moments
- Chuck Nice, on LLMs: “I think the takeaway here is use LLMs if you want to be a dumbass.” [78:43]
- Human vs. AI Essay Grading:
- Human graders described LLM-produced essays as “soulless.”
- AI judges missed subtle authorial fingerprints that human teachers easily spotted.
Quote: “Well, first of all they called a lot of the essays coming from the LLM group soulless. That's a direct quote.” — Nataliya Kosmyna [41:54]
- On AI-generated “soul”:
Tyson: “ChatGPT could get lost in Motown, for example, when you ask it for soul.” [46:35]
Chuck Nice: “You tell it to put some soul in it and it just starts throwing in James Brown lyrics.” [46:42]
Important Segment Timestamps
- [01:44] – Setting up the AI & brain impact question
- [06:48] – Study design: ChatGPT vs. Google vs. Brain-only
- [12:49] – Dangers of reduced ownership & memory
- [16:06] – Timing matters: when to introduce tools
- [27:15] – Cognitive load: why struggle is vital for learning
- [33:14] – Professional skill-loss after LLM adoption in medicine
- [41:54] – Human teachers call AI essays “soulless”
- [49:20] – LLMs as therapeutic tools, loneliness, and risk
- [56:11] – How education and assessment must change
- [66:16] – Why humans—not AI—invent new knowledge
- [68:36] – Learning vs. “uploading” knowledge (The Matrix reference)
- [75:50] – Urgency for ethical guardrails in AI & BCI integration
Tone & Closing Thoughts
The episode blends academic rigor, humor, and tangible urgency. Nataliya Kosmyna’s insights are delivered with clarity and wit, often drawing laughs from the hosts while raising serious cautions for policymakers, educators, and the public. The hosts consistently probe the boundary between embracing and fearing new tech, always returning to the depth of human uniqueness and agency.
Final Quotes:
- “[You’re] trying to guide [AI] into places that can serve humanity, not dismantle it.” — Neil deGrasse Tyson [78:08]
- “Everything you learn is obsolete knowledge by itself, but it has this base. You do need to have the base.” — Nataliya Kosmyna [71:03]
- “We do not need to FOMO; there is nothing yet to FOMO about.” — Nataliya Kosmyna [73:53]
Takeaways
- AI can relieve cognitive effort, but at the cost of ownership, critical thinking, and memory.
- How and when we introduce AI tools matters — foundational skill-building must come first.
- The education system must evolve, prioritizing authentic learning and human connection over mere test scores.
- Urgent, large-scale, human-centered research is needed to guide responsible integration of AI and coming brain-AI interfaces.
- The fundamental challenge remains: how do we forge tools that enhance—not replace—our humanity?
