Podcast Summary: Click Here – "Reverse engineering us"
Podcast: Click Here (Recorded Future News)
Episode Date: April 10, 2026
Host: Dina Temple-Raston
Overview
This episode of "Click Here" dives into the intersection of neuroscience and artificial intelligence, exploring how new advances in AI—particularly large language and reasoning models—are enabling researchers to better understand the human mind. Host Dina Temple-Raston visits Dr. Evelina (Ev) Fedorenko’s MIT lab to uncover how machines might help solve the mysteries of human thought, especially where traditional neuroscience methods have fallen short. The episode examines whether and how AI models can shed light on the workings of our brains, and what that means for science, technology, and society.
Key Discussion Points & Insights
1. The Challenge of Understanding the Human Mind
00:02–06:19
- AI's New Role: The episode opens with the idea that AI is now being used to study humans—not just behavior, but emotions and impulses.
- “People are using AI to study us now. Our choices, our emotions, our impulses. The hope is if a machine can model human behavior, it might help explain it.” (A, 00:02–00:17)
- Limits of Neuroscience: Traditional neuroscience is constrained by access—the brain can be observed and measured, but direct experimentation on living human brains is nearly impossible.
- Brain dissection post-mortem offers only limited insight: “You have to wait until someone dies to study it… The problem…is that it doesn’t necessarily tell you much about how thinking works in real time.” (A, 05:10)
- Ev Fedorenko’s research focuses on mapping how the brain processes language and thought, challenging the assumption that they are inseparable.
2. Language vs. Thought: A Neuroscientific Breakthrough
06:19–08:37
- Testing the Relationship: Fedorenko’s team used fMRI to investigate if language and thought really depend on the same brain regions.
- “So we ask, do these language regions, are they active when you’re doing these things? And they’re not active, they’re basically silent.” (B, 07:35)
- Surprising Discovery:
- “In humans, language and thinking are separate. They rely on distinct parts of the brain.” (B, 07:54)
- Implication: Opens potential for targeting language disorders and deeper understanding of the neural basis of language.
3. Enter: Artificial Intelligence as a Research Tool
08:37–11:52
- Problems with Animal Models:
- “I’ve always been jealous of my colleagues who study vision…they have this abundance of animal models…But you can’t use rats to study human language because…they don’t speak it.” (B, 08:27)
- Technology’s Shift: With the rise of large language models (LLMs), neuroscientists suddenly found something akin to a new ‘lab rat.’
- “Suddenly these large language models started producing language that was incredibly well-formed… So of course me and a lot of people in my group got really excited.” (B, 10:52–11:10)
- AI as a Comparative Platform: By giving the same language tasks to LLMs and human volunteers, researchers are able to directly compare how each processes language.
4. AI Mirrors Human Language Processing
11:52–13:59
- Key Finding:
- “When people process sentences, those representations are actually quite similar to what you see inside LLMs.” (B, 12:01)
- Mechanisms Align:
- Both the brain and LLMs act as “predictive systems” — constantly guessing what comes next. (B, 12:24)
- Advantages of Digital Models:
- Digital ‘neurons’ (units in the AI models) can be isolated, altered, or even “damaged”—something nearly impossible in living brains.
- “With LLMs, you can destroy certain components that deal with certain cognitive capacities in the model…” (C, 13:40)
- Enables rapid, controlled experimentation: Hypotheses about language and cognition can be tested far more efficiently.
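The “digital dissection” idea above can be illustrated with a toy sketch (this is an illustration of the general technique, not the lab’s actual method): in a tiny feed-forward network, a group of hidden units is zeroed out (“lesioned”), and the change in output measures how much those units mattered. All names and sizes here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 8 inputs -> 16 hidden units -> 4 outputs.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))

def forward(x, lesioned_units=()):
    """Run the network, optionally zeroing ('lesioning') hidden units."""
    h = np.tanh(W1 @ x)
    h[list(lesioned_units)] = 0.0  # the digital analogue of damaging neurons
    return W2 @ h

x = rng.normal(size=8)
intact = forward(x)
lesioned = forward(x, lesioned_units=range(8))  # knock out half the hidden layer

# The size of the output shift measures how much those units mattered.
print(np.linalg.norm(intact - lesioned))
```

In a living brain, a comparable lesion study would require rare patients or irreversible interventions; in a model, the same “damage” can be applied, measured, and undone thousands of times.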
5. Extending AI Models to Study Reasoning
14:01–16:35
- The Limits of LLMs for Thought:
- Fedorenko initially doubted LLMs could model human reasoning:
- “They just clearly don’t reason like humans, at least not at first.” (B, 14:41)
- Emergence of Large Reasoning Models:
- Models trained not just on language but also on math and logic problems—these models “show their work,” breaking reasoning down step by step and letting researchers trace each micro-decision.
- Demonstration:
- “For instance, how much is 25 minus 4?” (C, 15:19)
- The model takes hundreds of “tokens” (steps) to solve even a simple problem—“tokens are like the model thinking out loud, one tiny step at a time.” (A, 15:45)
- Harder problems require more steps, mirroring human problem-solving pace.
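The “thinking out loud, one tiny step at a time” idea can be mimicked with a toy sketch (a deliberately crude stand-in, not how a real reasoning model computes): solve a subtraction by emitting one small step per “token,” so that harder problems naturally produce longer traces.

```python
def solve_by_steps(a, b):
    """Toy 'reasoning trace': subtract b from a one unit at a time,
    emitting one step (a stand-in for a token) per decrement."""
    steps = []
    value = a
    for _ in range(b):
        value -= 1
        steps.append(f"{value + 1} - 1 = {value}")
    return value, steps

answer, trace = solve_by_steps(25, 4)
print(answer, len(trace))  # -> 21 4: four small steps for 25 - 4
```

A real model’s trace is far richer, but the shape of the observation is the same: researchers can count and inspect the intermediate steps, and the step count grows with problem difficulty.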
6. Broader Implications: Understanding Ourselves—And Our Predictability
16:35–End
- Wider Possibilities:
- Neuroscientists can now map not only language, but potentially the fundamental nature of human thought.
- These models are also being deployed to anticipate human choices in commerce, behavior prediction, and more.
- “Because the same kinds of models scientists are now using to understand the brain are also being used to understand us. What we want, what we feel, what we buy… It doesn’t just observe us. It can anticipate us.” (A, 17:25–18:00)
- Raises questions about the distinction (or lack thereof) between natural and artificial intelligence.
- Philosophical Note:
- “For centuries, the brain has been the most mysterious machine we know… Now we may have built another one, …not made of cells, but of zeros and ones. And by studying it… we’re learning the patterns underpinning our own thinking.” (A, 18:17)
Notable Quotes & Memorable Moments
- On the Challenge of AI Modeling the Mind:
- “We’re asking machines to solve a mystery we haven’t solved ourselves.” (A, 00:21)
- On Separating Language and Thought:
- “In humans, language and thinking are separate. They rely on distinct parts of the brain.” (B, 07:54)
- On the Breakthrough of AI as a Research Tool:
- “In a sense, she had found a new kind of lab rat. Except this one lived inside a computer.” (A, 12:40)
- On Predictive Mechanisms:
- “The human brain is definitely a very predictive kind of system, always guessing, always anticipating what comes next.” (B, 12:24)
- On the Implications for Predicting Humans:
- “It doesn’t just observe us. It can anticipate us.” (A, 17:55)
- On Artificial Models vs. the Brain:
- “Now we may have built another one, one not made of cells, but of zeros and ones… And maybe the gap between how we think and how machines do is smaller than we imagined.” (A, 18:20–19:04)
Timestamps for Key Segments
- 00:02 — Introduction & Theme (AI studying humans)
- 05:34 — Dr. Evelina Fedorenko’s Background & Language/Thought Separation
- 07:17 — fMRI Experimentation for Language/Thought
- 08:37 — The Limitations of Animal Models for Language Study
- 10:52 — Introduction of Large Language Models in Neuroscience
- 12:01 — Discovery: Similarities in Representation (Human Brain vs. LLMs)
- 13:16 — Digital Dissection: Probing ‘Digital Neurons’
- 14:41 — Reasoning Models & AI's Leap to Modeling Thought
- 15:19 — Demonstration: Model Solves Math Problem Step by Step
- 17:25–19:04 — Implications: Anticipating Human Behavior, Blurring Line Between Human & Machine Intelligence
Conclusion
By melding neuroscience with artificial intelligence, this episode examines profound shifts in how we investigate, understand, and ultimately predict the workings of the human mind. The blending of digital and biological models suggests new frontiers for both science and technology, highlighting a future where humans and machines may not just interact, but think with striking similarity.
Produced by: Erica Gaeda, Sean Powers
Written & Produced by: Megan Dietre, Sean Powers, Erica Gajda, Zach Hirsch, Casey Giorgi
Host: Dina Temple-Raston