Podcast Summary: Sean Carroll’s Mindscape Ep. 343 | Tom Griffiths on The Laws of Thought
Episode Details:
- Podcast: Sean Carroll’s Mindscape
- Episode: 343 — Tom Griffiths on The Laws of Thought
- Date: February 9, 2026
- Guest: Tom Griffiths, cognitive scientist; author of The Laws of Thought: The Quest for a Mathematical Theory of the Mind and co-author of The Rational Use of Cognitive Resources: A New Approach to Understanding Irrational Behavior.
Episode Overview
This episode explores the enduring question: Is there a set of “laws of thought” analogous to the laws of physics that govern human cognition, reasoning, and intelligence? Sean Carroll and guest Tom Griffiths trace the intellectual history of formal logic, probability, and computational models of the mind, culminating in a discussion of resource-constrained rationality and the implications for both human cognition and artificial intelligence.
Key Discussion Points & Insights
1. The Analogy between Physical Laws and Laws of Thought
- Carroll sets up the key contrast: While physics seeks simple laws for matter, is it even possible to distill “laws” for something as complex as human thought? (07:26)
- Carroll: “As a cognitive scientist, as someone studying literally the most complex thing that we know about in the universe, is there any hope that … we should even talk about something like laws of thought?” (07:26)
- Griffiths responds that this has deep historical roots; early scientists like Descartes and Leibniz aimed for mathematical models of both thought and the natural world. (07:57)
2. Logical Foundations and Historical Context
- Griffiths traces logic back to Aristotle’s syllogisms and the quest to formalize good argument.
- “Aristotle did that by thinking about syllogisms, these simple arguments… and so he did some theorizing about, first of all, trying to identify what are good syllogisms, and then second…what makes a good syllogism.” (15:51)
- Leibniz tried, and failed, to reduce syllogistic logic to arithmetic, an attempt that foreshadows how modern language models encode meaning numerically as vector embeddings. (19:18)
- George Boole made logic mathematical by treating it as an algebra of sets and their relations, yielding Boolean logic (true/false algebra) and building a bridge to probability theory. (22:58)
- Even Boole saw himself as a kind of psychologist, but focused on how thought “should” work, not how it actually does—a split that would haunt later cognitive scientists. (25:18)
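Boole's move can be illustrated with a tiny sketch: treat the truth values as the numbers 0 and 1, and logical connectives become ordinary arithmetic. (The function names and the De Morgan check are illustrative choices, not from the episode.)

```python
# Boole's insight, sketched: truth values as the numbers 0 and 1,
# so logical operations become ordinary algebra.
def NOT(x):
    return 1 - x

def AND(x, y):
    return x * y

def OR(x, y):
    return x + y - x * y

# Verify De Morgan's law: not (x and y) == (not x) or (not y)
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds on {0, 1}")
```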
3. From Certainty (Logic) to Uncertainty (Probability)
- The big leap was recognizing that real-life thought is mostly about reasoning under uncertainty and updating beliefs probabilistically, not just deriving certain conclusions from fixed premises.
- Boole dedicated half his treatise to “uncertain inference.” Laplace and Bayes further developed probability as a theory of belief. (27:07)
- “In logic…possible worlds…logic is really about what conclusions you can draw with certainty based on the information you have… Probability theory takes one more step… assigning a likelihood…which reflects our degree of belief.” (37:03)
4. Levels of Analysis in Cognitive Science (David Marr’s Framework)
- Griffiths highlights Marr’s three levels:
- Computational: What is the goal/problem?
- Algorithmic: How is the problem solved in abstract steps?
- Implementation: What physical machinery does the solving? (10:25, 78:35)
- Carroll: “So what are the laws of thought?” Griffiths: “At that computational level, I think it’s pretty clear…the things we should be thinking about are things like logic and probability theory…Then [decision theory] associated with that, reinforcement learning…” (12:16)
5. Bayesian Reasoning, Resource Constraints, and Human “Irrationality”
- Carroll: “Should we think about perfect Bayesian reasoning as once again aspirational…or is this meant to be a description of how people actually think?” (39:18)
- Griffiths: Humans are not perfect Bayesians—they use heuristics, not always rational—but their “irrationalities” can often be explained by bounded cognitive resources (time, memory, attention), leading to “resource rationality.” (42:03)
- Griffiths: “...we are both, from the perspective of a psychologist, error-prone decision makers… and from the perspective of computer scientists, these aspirational agents that are doing the kinds of things we’d like our AI to do… We are good at solving the kinds of problems we face, with the resources we have.” (42:03)
6. The Origin of Priors & Differences with Artificial Intelligence
- Where do our “priors” – initial beliefs about the world – come from? For humans: evolution, experience, and learning. For AI (LLMs): mostly massive data. (49:01)
- A key difference: children acquire language from comparatively little data thanks to strong inductive biases; LLMs need vastly more. (57:24)
- “Where the big difference between human minds and brains and large language models that we have today is about inductive bias. It’s the thing that comes from our prior distributions, broadly construed as human beings, that allows us to close that gap.” (49:01)
- “If you want to make systems that are more human-like in their ability to learn… you’ve got 4,995 years to make up in terms of the content of that inductive bias…” (57:24)
7. Neurocomputational Models and the Role of Representational Spaces
- 20th-century cognitive science came to model human concepts and cognition as points or structures in high-dimensional spaces, an idea that fed into neural network models. (69:00)
- “If you think about objects as points in space, then how close things are…is a way of characterizing…their belonging to a category...” (69:00)
- Neural networks, especially deep learning, operationalize this idea, transforming points through learned layers and weights. (72:09)
- The challenge: reconciling the computational/theoretical models (logic, probability) with the messy approximations used in real neural systems.
8. How Should AIs and Humans Think? And the Limits of Formal “Laws”
- Should we build AIs to think like humans? Griffiths notes two main differences between humans and current AI:
- Inductive bias (few data vs. big data)
- Generalizability—AIs sometimes “fail spectacularly” on nearby tasks because their solutions aren’t “human-like.” (63:17, 64:42)
- Having AIs with more “human” inductive biases may help them generalize—and be more interpretable to us.
- There is not a final, closed list of “laws” like in physics; rather, we have levels, frameworks, and constraints. Marr’s levels remind us there are multiple valid perspectives. (78:35)
Notable Quotes & Memorable Moments
What Are the Laws of Thought?
- Griffiths: “I think at that computational level…it’s pretty clear…the things we should be thinking about are things like logic and probability theory…decision theory…reinforcement learning.” (12:16)
Humans vs. AI: The Data Gap
- Griffiths: “A human child learns to use language in about five years of exposure, [LLMs use] the equivalent of between 5,000 and 50,000 years of continuous speech… The thing that makes up that gap is inductive bias.” (49:01)
On Cognitive “Irrationality”
- Griffiths: “We are both, from the perspective of a psychologist, error-prone decision makers… and from… computer scientists, these aspirational agents… If you want to resolve that paradox… [look at] consequences of adaptation—evolution as well as learning.” (42:03)
Multiple Levels of Explanation
- Griffiths: “When we come to thinking about information processing systems, it’s really clear that we’re going to have to have these nice, mutually reinforcing perspectives… understanding those different levels and then how the explanations… relate to one another.” (78:35)
The Limits of Silver Bullet Theories
- Griffiths: “My expectation is… it’s unlikely we’ll find…just one way that the brain does Bayesian inference… When we think about something like the laws of thought, the level at which…I think we can be most successful is at that computational level.” (80:25)
Timestamps for Key Segments
- Intro: Framing laws of thought & cognition vs. computation — 02:09–07:25
- Historical origins: Aristotle, Leibniz, Boole, logic as math — 09:44–22:58
- Probability, induction and abduction in thought — 25:01–32:39
- Levels of explanation: computational, algorithmic, implementation — 10:25 (detailed at 78:35–82:08)
- Bayesian reasoning, human heuristics, and bounded rationality — 39:18–45:37
- Priors in humans vs. LLMs, inductive bias, language acquisition — 49:01–62:27
- Neural network models, from points in space to computation — 69:00–77:20
- Synthesis: Plurality of theories, no single “laws of thought” — 78:11–82:20
Takeaways
- “Laws of thought” exist at the most abstract, computational level: logic and probability theory govern what ideal reasoners “should” do.
- Human cognition is bounded and resource-constrained; “irrational” behavior is often rational given those constraints.
- AI models and humans overlap but also diverge: humans rely more on inductive bias; AIs are (currently) data-hungry blank slates.
- There’s unlikely to ever be a single, simple set of “laws” for thinking; instead, multiple mutually compatible levels and approaches are necessary.
- Understanding those levels can help improve both our science of the mind and the design of artificial intelligence.
“There’s a long history of people wanting to have a sort of single explanation for things…when we come to thinking about information processing systems, it’s really clear that we’re going to have to have these nice, mutually reinforcing perspectives…”
— Tom Griffiths [78:35]
