A Book With Legs | Smead Capital Management
Episode: Tom Griffiths – Laws of Thought: The Quest for a Mathematical Theory of the Mind
Date: March 2, 2026
Episode Overview
In this episode, host Cole Smead sits down with Tom Griffiths, Henry R. Luce Professor at Princeton, to discuss his new book, Laws of Thought: The Quest for a Mathematical Theory of the Mind. Griffiths guides listeners through a sweeping intellectual history—from Leibniz’s earliest ambitions to mathematically formalize thought, through philosophers, logicians, computer scientists, and psychologists who tried to make sense of the mind using rules, symbols, and, ultimately, artificial intelligence (AI). The discussion explores the philosophical and mathematical journey to understand the mind, the emergence of computers and neural networks, and the enduring mystery of human intelligence versus AI. The episode blends philosophy, science, business, and investing, reinforcing the value of mental models and worldly wisdom.
Key Discussion Points & Insights
The Origins of Mathematical Thought and the Mind
The Ambition of Leibniz ([02:49])
- Leibniz’s 1679 vision: A mathematical language to resolve debates and model thought.
- Griffiths: "[Leibniz] had this vision of having some kind of a mathematical language that you could use to write down the thoughts that you had. ...Put them into a machine and turn a crank and then you'd be able to get out the answer about who was right and who was wrong..." ([03:19])
- Leibniz failed to realize this vision, but inspired future generations.
Philosophy and Formal Reasoning ([04:26])
- The role of Aristotle’s syllogisms—identifying good arguments—and the later quest to formalize them mathematically.
- Transition from philosophical speculation to scientific study of the mind.
Developing Formal Systems and Logic
Wilkins and the Universal Language ([06:39])
- Reverend John Wilkins's 17th-century attempt to create a taxonomy-based artificial language with no room for falsehood.
- The idea of mapping meaning through ontological structures, a precursor to modern knowledge ontologies.
What is a Formal System? ([08:41])
- Three pillars: digital (discrete states), medium independence, and token manipulation (rule-based transitions).
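Not from the episode, but a minimal sketch of the third pillar in action: a toy rewriting system whose states are discrete strings of tokens and whose behavior is fixed entirely by rules. The rules themselves are invented for illustration.

```python
# A toy formal system: discrete tokens, rule-based transitions.
# The two rewrite rules are invented for illustration.
RULES = [
    ("AB", "B"),   # an 'A' immediately before a 'B' is absorbed
    ("BB", "A"),   # two adjacent 'B's collapse into an 'A'
]

def step(state):
    """Apply the first rule whose pattern occurs in the state."""
    for pattern, replacement in RULES:
        if pattern in state:
            return state.replace(pattern, replacement, 1)
    return None  # no rule applies: the system halts

state = "AABBB"
while state is not None:
    print(state)           # AABBB, ABBB, BBB, AB, B
    state = step(state)
```

Because the tokens are abstract, the same system could be realized in ink, relays, or silicon, which is the medium-independence pillar.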
Boole and Mathematical Logic ([10:17])
- George Boole’s creation of algebraic logic that laid the groundwork for computers and later, psychological theories.
- William Stanley Jevons critiqued Boole’s system, refining logical operations further.
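A rough sketch, not from the episode, of Boole's core move: treating truth values as the numbers 0 and 1 so that logic becomes ordinary algebra. The inclusive reading of "or" below reflects, loosely, the kind of refinement Jevons pushed for.

```python
# Boole's insight, sketched: logic as algebra over {0, 1}.
def AND(x, y): return x * y          # conjunction is multiplication
def NOT(x):    return 1 - x          # negation is subtraction from 1
def OR(x, y):  return x + y - x * y  # inclusive disjunction

# Check De Morgan's law, NOT(x AND y) == (NOT x) OR (NOT y),
# over the whole truth table.
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds for all inputs")
```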
Boole's Family Legacy ([12:42])
- Boole’s wife and daughters achieved remarkable scientific and literary success, influencing generations in science and mathematics.
- Griffiths: “His wife...went on to teach psychology, philosophy, and logic at university...each of [his daughters] did remarkable things...” ([12:54])
Psychology, Behaviorism, and Scientific Challenges
Behaviorism and Its Limits ([14:08])
- Early 20th-century shift to behaviorism—studying observable behavior, not the mind.
- The challenge: minds are not observable objects.
- Jerome Bruner’s return to mentalist experiments showed that perception was shaped by context and expectation, not just stimulus.
Example: Coin Value and Perception ([16:37])
- Rich vs. poor children perceived coin size differently, revealing how values and experience bias perception.
Computation and the Birth of Modern Computing
Turing and the Turing Machine ([20:30])
- Alan Turing conceptualized computation as a set of discrete operations akin to a mathematician’s thought process.
- Griffiths: "If you watch a mathematician solving a problem... that basic operation... you have something that you're thinking about already, it changes... You write something down and you move on. That's exactly what a Turing machine does." ([21:20])
- Turing proved not all problems are solvable by machines, foreshadowing inherent computational limits.
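As an illustrative sketch (the machine below is invented, not from the book), Turing's read-write-move loop fits in a few lines; this machine simply inverts every bit on its tape and halts. Turing's unsolvability result says, among other things, that no program can decide for every machine and input whether a loop like this one ever reaches its halt state.

```python
# A minimal Turing machine: read a symbol, write a symbol,
# move the head, change state; repeat until the halt state.
def run_turing_machine(tape, program, state="scan", blank="_"):
    cells = list(tape) + [blank]
    head = 0
    while state != "halt":
        write, move, state = program[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells).strip(blank)

# (state, symbol) -> (write, move, next_state): flip every bit.
INVERT = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", INVERT))  # -> 01001
```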
Von Neumann and Stored-Program Computers ([24:17])
- Von Neumann’s architecture: an improvement on Turing’s abstract model, allowing practical, reprogrammable computers—true ancestors of all modern computers.
Mathematical Models of Mind: Logic, Language, and Cognition
Bruner and Concept Learning ([27:53])
- Mathematical rigor applied to cognition: using logic to model how people learn and distinguish categories.
- Bruner’s experiments showed that conjunctive concepts (“and”) are easier for people to learn than disjunctive ones (“or”), which Bruner dubbed “intellectual nightmares”.
Shannon’s Information Theory ([30:59])
- Claude Shannon connected Boolean logic to practical electronics, enabling efficient information transmission—pivotal for both communication technology and cognitive modeling.
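Shannon's central quantity is simple enough to sketch directly (the example distributions are invented): entropy, the average number of bits of information per symbol, computed as H = -Σ p·log₂(p).

```python
import math

def entropy(probs):
    """Shannon entropy in bits: average information per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit per flip
print(entropy([0.9, 0.1]))   # biased coin: ~0.47 bits (more predictable)
print(entropy([0.25] * 4))   # four equal outcomes: 2.0 bits
```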
Miller’s “Magic Number Seven” ([32:10])
- George Miller demonstrated that human channel capacity, whether in memory or perception, is limited to roughly seven chunks of information, plus or minus two.
Simon's Heuristics and the Path Model of Reasoning ([35:01])
- Newell, Simon, and Shaw explored how humans navigate decision spaces using heuristics—leading to the modeling of problem-solving as navigating a search tree.
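A minimal sketch of the search-tree idea (the toy puzzle and heuristic are invented for illustration): rather than trying every possibility, expand the state the heuristic rates most promising first.

```python
import heapq

def successors(n):
    """Legal moves in a toy puzzle: subtract 1, or halve if even."""
    moves = [n - 1]
    if n % 2 == 0:
        moves.append(n // 2)
    return moves

def best_first(start, goal=0, heuristic=abs):
    """Greedy best-first search: always expand the lowest-heuristic state."""
    frontier = [(heuristic(start), start, [start])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in successors(state):
            if nxt >= 0 and nxt not in seen:
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))

print(best_first(11))  # e.g. [11, 10, 5, 4, 2, 1, 0]
```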
Language, Grammar, and Learning
Chomsky and Generative Grammar ([37:48])
- Noam Chomsky’s revolution: formal, mathematical systems can represent the underlying structure of language.
- Demonstrated that natural language is more complex than finite-state systems; the mind constructs abstract, generative grammars.
- Chomsky (quoted): "The child who learns a language has in some sense constructed the grammar for himself..." ([68:06])
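The generative idea can be sketched in a few lines (the grammar is invented for illustration): a finite set of rewrite rules produces an unbounded set of sentences, including the center-embedded structures that finite-state systems cannot capture in general.

```python
import random

# A toy generative grammar: finite rules, unbounded sentences.
# The recursive NP rule yields center-embedding
# ("the cat that the dog sees sleeps"), the kind of structure
# Chomsky used as evidence against finite-state models of language.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "NP", "V"]],
    "VP": [["V", "NP"], ["sleeps"]],
    "N":  [["cat"], ["dog"], ["linguist"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol using a randomly chosen rule."""
    if symbol not in GRAMMAR:  # terminal word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the cat that the dog sees sleeps"
```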
Induction vs. Deduction ([41:59])
- Chomsky was skeptical that language could be learned just from data, positing “universal grammar” as an innate constraint.
- Induction, unlike deduction, deals with inference from incomplete information; Hume regarded it as mere habit, while Griffiths suggests it can be systematized as inductive bias.
Rosch and Family Resemblance ([46:14])
- Eleanor Rosch argued categories are fuzzy and based on similarity, not strict logical boundaries—updating the classical view of concepts.
Tversky on Similarity and Dimensionality ([48:08])
- Amos Tversky challenged the idea that mental similarity maps onto geometric/spatial rules like symmetry or the triangle inequality.
Neural Networks, Learning, and the Roots of AI
Perceptrons and Neural Networks ([51:35])
- Frank Rosenblatt’s Perceptron: an early neural network capable of linear discrimination but limited; Marvin Minsky (with Seymour Papert) famously critiqued it for failing on non-linearly separable tasks such as XOR (see the sketch after this list).
- Ideas of spreading activation by Collins, Loftus, and Quillian modeled associative memory as a network of connected concepts.
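The perceptron's learning rule fits in a dozen lines. In this sketch (invented data, not from the book) it learns AND, which is linearly separable; swap the targets to XOR's [0, 1, 1, 0] and the same loop never converges, which was the heart of the Minsky-Papert critique.

```python
# Rosenblatt's perceptron: weighted sum, threshold, error-driven update.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            prediction = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            error = target - prediction  # -1, 0, or +1
            w0 += lr * error * x0
            w1 += lr * error * x1
            b  += lr * error
    return w0, w1, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND_DATA)
for (x0, x1), _ in AND_DATA:
    print(x0, x1, "->", 1 if w0 * x0 + w1 * x1 + b > 0 else 0)
```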
Hinton and Deep Learning ([56:20])
- Geoffrey Hinton (a descendant of George Boole) helped revive neural networks with backpropagation, a gradient-descent training method (sketched after this list), leading to breakthroughs in deep learning once sufficient data and compute became available (circa 2009 and 2012).
- Modern AI—massive data + deep neural nets—now solves previously “unsolvable” problems.
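A minimal sketch of what backpropagation adds (illustrative code, not Hinton's own): a single hidden layer trained by gradient descent solves exactly the XOR problem that defeated the perceptron.

```python
import numpy as np

# The smallest "deep" network: 2 inputs, 4 hidden units, 1 output,
# trained by backpropagation (chain-rule gradients) on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```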
Probability, Bayesian Reasoning, and Human vs AI Intelligence
Probability and Bayesian Inference ([61:37])
- Cardano’s early probability theory developed into subjective (Bayesian) reasoning: modeling belief updating as data arrives (see the sketch after this list).
- Griffiths: “Probability theory gives us a tool for talking about... why [AI] are still not as efficient as humans at being able to learn from the data that they get.” ([62:36])
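Bayesian updating in its simplest form (a sketch, not from the episode): a Beta prior over a coin's unknown bias, updated one flip at a time. The flip sequence is invented.

```python
# Beta-Bernoulli updating: with a Beta(a, b) belief about p(heads),
# seeing heads gives Beta(a+1, b) and tails gives Beta(a, b+1);
# the mean belief is a / (a + b).
a, b = 1, 1                      # uniform prior: any bias equally likely
for flip in "HHTHHHTH":          # invented data stream
    if flip == "H":
        a += 1
    else:
        b += 1
    print(f"after {flip}: mean belief p(heads) = {a / (a + b):.2f}")
```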
Peter Wason’s Card Experiment and Human Logic ([64:34])
- Human choices in logic puzzles, like Wason's selection task (choosing which cards to flip to test a rule such as “if a card has a vowel on one side, it has an even number on the other”), reflect probability-based rather than strictly logical reasoning, clarifying the distinction between how humans and formal systems “think”.
Human Exceptionalism, AI, and the Limits of Computation
Chomsky on Human Learning & "Divine" Abilities ([68:06])
- Chomsky marveled at the child’s capacity to construct grammar with little input—the “remarkable type of theory construction”.
- Cole Smead: “Doesn't that show you how incredibly unique the human is? ... a child doesn't have really much, if any, data ... and yet ... are able to do that.” ([68:55])
- Griffiths: “This is really the biggest gap that I think remains between AI and human minds, right. Which is that humans are able to make these inferences from relatively small amounts of data...” ([69:16])
- AI is “derivative”, dependent on its training data, whereas human thinking is generative, shaped by inductive biases from evolution, embodiment, and social learning.
Divinity, Plato, and the Soul ([74:00])
- The discussion touches on the Platonic notion of innate knowledge, Chomsky's "biolinguistics," and whether human reasoning is “divine” or simply a product of evolved biases.
The AI Singularity and Human Constraints ([77:42])
- Griffiths characterizes AI intelligence as qualitatively different—not just “smarter”, but fundamentally “alien” due to different constraints and inductive biases.
- Griffiths: “Intelligence is not a one-dimensional axis... humans are able to learn from small amounts of data because that's all they're going to get...” ([78:53])
The Role of Metacognition in the AI Era ([81:41])
- As cognition is increasingly automated, metacognitive skills—how we manage and direct thinking—become more essential.
- Griffiths: “That metacognitive skill ... is made even more important by the power of these kinds of AI systems.” ([82:18])
Notable Quotes & Memorable Moments
- “Leibniz was... trying to solve this problem of how to use math to understand the mind. What's interesting... is that he kind of failed...” —Tom Griffiths ([03:19])
- “If you said [a fish is a mammal], the words that you used for fish and mammal would tell you that they're not compatible with one another.” —Tom Griffiths ([08:07])
- “No one ever saw a thought or touched a feeling. And so that created this problem...” —Tom Griffiths ([15:28])
- “Turing was remarkably prescient...” —Tom Griffiths ([23:00])
- “Bruner called some disjunctive categories ‘intellectual nightmares’.” —Cole Smead ([27:53])
- “There's an effortlessness and a mystery to it [how children learn language] that the word divine captures well.” —AI Claude, paraphrased by Cole Smead ([76:00])
- “Humans are able to make these inferences from relatively small amounts of data... The rest of [what we know] needs to be what we call inductive bias.” —Tom Griffiths ([69:16])
Timestamps for Key Segments
- AI as Historical Continuum, Not Sudden Arrival – [02:01]
- Leibniz, Calculus, Math and the Mind – [02:49]
- Wilkins' Universal Language and the Limits of Formal Systems – [06:39]
- Boolean Logic, Jevons, and the Rise of Mathematical Psychology – [10:17]
- Boole's Family Legacy – [12:42]
- Behaviorism and Bruner's Perception Experiments – [14:08] / [16:37]
- Alan Turing, the Turing Machine, and Unsolvable Problems – [20:30]
- Von Neumann and Practical Computing – [24:17]
- Information Theory (Shannon) – [30:59]
- Miller's Magic Number Seven – [32:10]
- Newell, Simon, and Search Trees in Reasoning – [35:01]
- Chomsky, Universal Grammar, and Language Learning – [37:48]
- Induction, Probability, and Bayesian Reasoning – [41:59] / [61:37]
- Neural Networks (Perceptron, Hinton, Deep Learning Revolution) – [51:35] / [56:20]
- AI versus Human Intellect, Divine/Innate Intelligence – [68:06] / [76:00]
- AI Singularity and Human Constraints – [77:42]
- Metacognition in the Age of Cognitive Automation – [81:41]
Conclusion
Tom Griffiths offers a masterful, lucid narrative tracing the quest to mathematically formalize the mind, from early philosophy to deep learning. While technology has delivered astonishingly capable machines, the human mind remains uniquely, irreducibly generative, shaped by evolution, bodily experience, and the mysterious capacity to learn from very little data. Investors, thinkers, and the curious are reminded that the most valuable insights often come from the convergence of disciplines, and that true progress may depend not just on the power of computation, but on the art of critical, metacognitive thinking.
Where to find Tom Griffiths:
- Princeton University Webpage
- His lab’s X/Twitter handle and official publications are also available online. ([82:21])
Host Call to Action:
Buy Laws of Thought, review this podcast, and send book suggestions to podcast@smeadcap.com or @smeadcap on X/Twitter.
