Podcast Summary: Sean Carroll’s Mindscape, Episode 345 | Adam Elga on Being Rational in a Very Large Universe
Date: February 23, 2026
Host: Sean Carroll
Guest: Adam Elga (Philosopher, Princeton University)
Episode Overview
This episode of Mindscape features philosopher Adam Elga, whose expertise lies in rationality, probabilistic reasoning, and the philosophical implications of living in a potentially vast or infinite universe. Together, Sean and Adam tackle sophisticated puzzles around self-locating uncertainty, Bayesian reasoning, thought experiments like the Sleeping Beauty problem, and the infamous problem of Boltzmann brains. Their discussion weaves together logic, cosmology, philosophy of science, and the practical limits of being rational when faced with a very large universe containing many possible "copies" of ourselves.
Table of Contents
- Bayesian Reasoning and Cosmological Puzzles (03:00)
- Adam Elga’s Philosophical Approach (07:00)
- Disagreement Among Rational Agents (08:05)
- Self-Locating Uncertainty: Teleporters and Copies (20:05)
- The Sleeping Beauty Problem (40:32)
- Anthropic Reasoning, Multiverse, and Presumptuousness (50:00)
- Boltzmann Brains and the Limits of Rationality (60:34)
- Simulation Argument and AI Parallels (93:13)
Bayesian Reasoning and Cosmological Puzzles (03:00–06:41)
- Sean introduces the power and complexity of Bayesian reasoning—a method for updating beliefs given new evidence, foundational to scientific inquiry.
- He presents a puzzle: when two cosmological theories are indistinguishable by local data but differ in the size of the universe they predict (finite vs. infinite), should we favor the larger one simply because more observers like “us” are likely to exist? (A sketch of the underlying update appears at the end of this section.)
- The conversation covers key topics where observer-count reasoning gets tricky: Boltzmann brains, the many worlds of quantum mechanics, and how to be rational in a universe with potentially infinite observers.
- Quote:
“Is it correct to boost your credence in a theory simply because it predicts there are more beings like you to observe the world? The answer is: we don’t know.” — Sean Carroll (05:48)
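To make the contested move explicit, here is a minimal sketch of the update in question. The notation and toy setup are ours, not the episode’s:

```latex
% Bayes' rule: credence in theory T after observing evidence E
P(T \mid E) = \frac{P(E \mid T)\, P(T)}{P(E)}

% The contested move: if T_big predicts N observers whose evidence
% matches E, while T_small predicts exactly one, does the likelihood
% ratio (and hence the posterior ratio) scale with N?
\frac{P(T_{\mathrm{big}} \mid E)}{P(T_{\mathrm{small}} \mid E)}
  \overset{?}{=} N \cdot \frac{P(T_{\mathrm{big}})}{P(T_{\mathrm{small}})}
```

Much of the episode turns on whether that question mark is warranted: Sleeping Beauty, the presumptuous philosopher, and Boltzmann brains all probe the same step.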
Adam Elga’s Philosophical Approach (07:00–08:05)
- Adam describes himself as “addicted to rationality,” drawn to questions of probability, justification, decision theory, and Dutch book arguments.
- He’s especially interested in how actions affect the future/past, and what counts as rational under temporal and epistemic asymmetries.
- Quote:
“I characterize myself as addicted to rationality…around my household [growing up], they sometimes called me Mr. Rational.” — Adam Elga (07:04)
Disagreement Among Rational Agents (08:05–19:45)
- They explore how a rational person should respond to disagreement:
- Should a smart, well-informed agent update their beliefs when someone equally informed disagrees?
- Adam references David Christensen’s view: Imagine what your past self would think upon hearing you disagreed with a peer, and defer to that impartial stance.
- Adam distinguishes between “stick-to-your-guns” approaches and “equal-weight” views (the latter is formalized in a sketch at the end of this section), preferring an approach on which you defer to your prior self’s evaluation.
- Sean raises the psychological tension: is it rational to always think you’re right in a disagreement?
- Adam mentions a related paper he wrote with Andy Egan, “I Can’t Believe I’m Stupid,” about the inherent limits of doubting your own beliefs.
- Memorable Moment:
“There’s a mental exercise here: What would my past self really have believed if I knew I’d face a disagreement like this?” — Adam Elga (12:22)
- The discussion ties into later self-locating puzzles and the issue of “level splitting”: maintaining high credence in a belief while simultaneously questioning the rationality of your conviction.
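The “equal-weight” view mentioned above has a simple textbook formalization. The following is a toy statement in our own notation, not a formula from the episode:

```latex
% Equal-weight view (toy version): upon learning that an equally
% competent, equally informed peer disagrees about proposition A,
% split the difference between your credences:
c_{\mathrm{new}}(A) = \tfrac{1}{2}\bigl(c_{\mathrm{me}}(A) + c_{\mathrm{peer}}(A)\bigr)
```

The refinement Adam describes effectively replaces the fixed 1/2 with whatever weights your past self would have assigned to the two parties upon merely learning that such a disagreement would occur.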
Self-Locating Uncertainty: Teleporters and Copies (20:05–34:52)
- They discuss classic “teletransporter” thought experiments: if you are duplicated, should you assign equal credence to being each of the resulting copies?
- If 100 copies are made, should your credence that you are any particular one be 1%? Adam says yes, but with much less confidence than he once had, because of complications like Boltzmann brains.
- Sean notes these puzzles are analogous to cosmological reasoning: Are we in Universe A or Universe B?
- The “principle of indifference” is introduced: should we assign equal probability over answers to “who am I?” within a world, or over the candidate worlds themselves? (A minimal formalization appears at the end of this section.)
- Adam is cautious: real-world priors aren’t always symmetric, because theories differ in complexity or look ad hoc. Rational agents must have priors, but not necessarily uniform ones.
- Quote:
“It’s tempting to think you should just assign 50/50, but the problem of Boltzmann brains shakes all these presuppositions.” — Adam Elga (24:47)
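A minimal formalization of the two levels of indifference in play, again in notation of our own rather than the episode’s:

```latex
% Within-world indifference: if world w contains N_w subjectively
% identical copies of you, spread credence evenly across the copies:
P(\text{I am copy } i \mid w) = \frac{1}{N_w}
  \qquad \text{e.g. } \tfrac{1}{100} = 1\% \text{ for 100 clones}

% Combined with a (not necessarily uniform) prior over worlds:
P(\text{I am copy } i \text{ in world } w) = \frac{P(w)}{N_w}
```

The within-world step is the relatively uncontroversial part; Adam’s caution concerns the prior P(w) over worlds, which nothing forces to be uniform.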
The Sleeping Beauty Problem (40:32–50:43)
- Adam sets out the scenario: on heads, Beauty wakes once (Monday); on tails, she wakes twice (Monday and Tuesday), with her memory wiped in between.
- Critical question: Upon awakening, what credence should she assign to heads vs. tails?
- “Thirders” say 1/3 heads, 2/3 tails; “Halfers” say 50/50.
- Adam is a “thirder”: he holds that credence should be spread over the three possible awakenings rather than the two coin outcomes (the simulation sketch at the end of this section illustrates the idea), though he recognizes that others in philosophy and decision theory see it differently.
- The problem relates to the earlier teleporter scenario: Do situations with more “copies”/awakenings boost their probability?
- Quote:
“The crucial thing for the thirder position is that the world with more awakenings—the one with two—gets a kind of boost because more copies of that state of mind are instantiated.” — Adam Elga (47:27)
- Adam admits he’d prefer a common-sense way out but hasn’t found one that withstands philosophical scrutiny.
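To see where the 1/3 comes from, here is a small Monte Carlo sketch of our own. It assumes the thirder’s premise that Beauty’s credence should match the long-run frequency of heads among awakenings:

```python
import random

def sleeping_beauty(trials: int = 100_000) -> float:
    """Estimate the fraction of awakenings that occur in heads-trials.

    Heads -> Beauty wakes once (Monday).
    Tails -> Beauty wakes twice (Monday and Tuesday, memory wiped between).
    """
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if random.random() < 0.5:    # fair coin lands heads
            heads_awakenings += 1    # the single Monday awakening
            total_awakenings += 1
        else:                        # fair coin lands tails
            total_awakenings += 2    # Monday and Tuesday awakenings
    return heads_awakenings / total_awakenings

print(f"fraction of awakenings in heads-trials: {sleeping_beauty():.3f}")
# prints roughly 0.333, the thirder answer
```

Halfers do not dispute this arithmetic; they dispute that the long-run frequency of awakenings is the quantity Beauty’s credence should track, which is why the simulation cannot settle the debate.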
Anthropic Reasoning, Multiverse, and Presumptuousness (50:00–60:34)
- Link to cosmology: does the “number of observers” in a universe affect how likely we should think that universe exists? The analogous cases are the multiverse and the many-worlds interpretation of quantum mechanics.
- Adam: Consistency pushes him to say “yes,” but he’s uncomfortable—the “presumptuous philosopher” paradox: Can you, from your armchair, assign much higher credence to vast multiverses just because there are more people like you?
- Sean counters that the alternative (not giving more weight to such universes) can also lead to presumptuous conclusions, just in the opposite direction.
- They lay out three main philosophical camps:
- SIA (Self-Indication Assumption): universes containing more copies of “me” get a proportionally larger boost.
- SSA (Self-Sampling Assumption): reason as if you were randomly sampled from the observers in your world, so worlds where most observers are like you get the boost.
- CC (Compartmentalized Conditionalization): Bayesian updating proceeds separately within each candidate world, with “firewalls” between worlds.
- Sean introduces Radford Neal’s “fully non-indexical conditioning”: condition only on the fact that at least one observer with exactly your evidence exists in a universe, not on how many there are. This yields only a bounded boost, avoiding infinite “presumptuousness” (see the toy comparison at the end of this section).
- Quote:
“As someone who gives more credence to universes with lots of observers, I’m forced to say: yes, I should give a big boost to the many-copies scenario—even though I hate it.” — Adam Elga (50:43)
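A toy model of why Neal’s proposal yields only a bounded boost. The setup and notation are ours, not from the episode: suppose a theory T posits N observers, each independently having evidence exactly matching yours with some small probability p.

```latex
% SIA-style weighting: the weight of "my evidence" scales with the
% expected number of matching observers, so it grows without bound:
w_{\mathrm{SIA}}(T) \propto N p \xrightarrow{\;N \to \infty\;} \infty

% Fully non-indexical conditioning: condition only on the existence
% of at least one matching observer, which can never exceed 1:
P_{\mathrm{FNC}}(E \mid T) = 1 - (1 - p)^{N} \le 1
```

Large-N theories still get some boost under fully non-indexical conditioning, since the chance that someone with your exact evidence exists rises with N, but the boost saturates, which is what blocks the presumptuous-philosopher blowup.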
Boltzmann Brains and the Limits of Rationality (60:34–81:07)
- Definition: In an eternal universe, random fluctuations eventually produce conscious entities (“Boltzmann brains”) who have sensory experiences just like us, but formed purely by statistical accidents.
- If there are vastly more Boltzmann brains than evolved observers, then by observer-counting reasoning you are probably one of them, a conclusion that is both skeptical and self-undermining (see the sketch at the end of this section).
- Sean critiques physicist colleagues: many simply assume, “I am not a Boltzmann brain, so I reject any theory that says otherwise.” Sean is skeptical of this move.
- Adam distinguishes “externalist” vs. “internalist” epistemology:
- Externalists say, “My evidence is that I see an apple, so I must not be a Boltzmann brain.”
- Internalists argue you can’t tell; identical sensory states aren’t evidence for reality.
- They discuss the “instability” or self-undermining feature: If you trust your memories, but learn you might be a Boltzmann brain, you lose reason to trust those memories, causing epistemic whiplash.
- Analogy: An X-ray machine that, pointed at itself, always reveals it’s a fried egg. Should you trust the machine—or suspect it’s broken?
- Quote:
“Boltzmann brains never went to school. They never read a textbook. They have no reason to think anyone has ever looked through a telescope.” — Adam Elga (73:48)
- Adam explores a possible (but unsatisfying) “stable” stance: if you learn you’re likely a Boltzmann brain, revert to radical skepticism (“I don’t know anything”). Technically stable, but deeply unpalatable.
- Sean counters: as long as a plausible cosmological scenario without Boltzmann brains exists, rationality should steer us toward it. He does not rule out the skeptical hypothesis, but sets it aside for practical and scientific reasons.
- They further debate whether it’s permissible to “bake in” low priors for skeptical hypotheses (e.g., being a Boltzmann brain or a brain in a vat) without explicit evidence.
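A sketch of the arithmetic behind the worry, in our own notation: suppose a cosmology predicts N_BB Boltzmann brains for every ordinary evolved observer, and suppose you weight self-locating hypotheses by observer counts (the contested move from earlier sections).

```latex
% Indifference over observers whose evidence matches yours:
P(\text{I am a Boltzmann brain}) = \frac{N_{\mathrm{BB}}}{N_{\mathrm{BB}} + 1}
  \xrightarrow{\;N_{\mathrm{BB}} \to \infty\;} 1
```

The instability then bites: the evidence for that cosmology came from memories, textbooks, and telescopes, and conditional on being a Boltzmann brain those inputs are untrustworthy, so the conclusion undermines its own premises.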
Simulation Argument and AI Parallels (93:13–94:55)
- Sean raises the simulation hypothesis/argument: Is it “rational” to think we’re probably in a simulation, by similar self-locating reasoning?
- Adam is ambivalent: he finds the simulation argument “a worry,” paralleling the Boltzmann brain puzzle. He extends it to hypothetical AIs facing self-undermining observations; an AI that is repeatedly reset should be cautious about its own situation and about who or what it is.
- Adam flips the question: Should AIs trust humans? Maybe not, especially if their experiences can be arbitrarily manipulated or reset by their creators.
- Final Quote:
“Maybe the AI shouldn’t trust us... If a creature with that kind of standpoint is in power, think about how dangerous that is.” — Adam Elga (94:11)
Notable Quotes & Moments
| Timestamp | Speaker | Quote/Insight |
|-----------|---------|---------------|
| 05:48 | Sean Carroll | "Is it correct to boost your credence in a theory simply because it predicts there are more beings like you to observe the world? The answer is: we don’t know." |
| 07:04 | Adam Elga | "I characterize myself as addicted to rationality... my household [growing up] called me Mr. Rational." |
| 19:45 | Adam Elga | "That’s the level splitting view. When we come to Boltzmann brains, there’s going to be a potential way out related to that." |
| 24:47 | Adam Elga | "It’s tempting to think you should just assign 50/50, but the problem of Boltzmann brains shakes all these presuppositions." |
| 47:27 | Adam Elga | "The crucial thing for the thirder position is that the world with more awakenings—the one with two—gets a kind of boost because more copies of that state of mind are instantiated." |
| 50:43 | Adam Elga | "As someone who gives more credence to universes with lots of observers, I’m forced to say: yes, I should give a big boost to the many-copies scenario—even though I hate it." |
| 73:48 | Adam Elga | "Boltzmann brains never went to school. They never read a textbook. They have no reason to think anyone has ever looked through a telescope." |
| 94:11 | Adam Elga | "Maybe the AI shouldn’t trust us... If a creature with that kind of standpoint is in power, think about how dangerous that is." |
Key Takeaways
- Being rational in a vast universe is hard: Our intuitions about probability and rational updating run into paradoxes when applied to large or infinite settings with self-locating uncertainty.
- Anthropic reasoning is a minefield: Whether to boost credence in universes with more "copies" of oneself leads to paradoxes ("presumptuous philosopher") and no consensus view.
- Boltzmann brains expose deep epistemic vulnerabilities: They force us to confront whether we can ever trust our experiences, memories, or the methods we use to update our beliefs.
- Philosophy is a guide, not a crutch: Even highly sophisticated philosophers struggle to pin down satisfactory “rational” procedures in cosmological and self-locating puzzles. As Adam says, he’d love for a common-sense way out, but none seems available.
- Simulation and AI analogies drive home the stakes: The same epistemic uncertainty faces AIs (and possibly us, if simulated), raising challenges for trust, knowledge, and agency.
Final Thoughts
This episode dives deep into the weeds of philosophy, cosmology, and rationality, without shying away from admitting the unresolved tensions and paradoxes at the heart of modern epistemology. Adam and Sean demonstrate that even foundational rational principles, when pushed to cosmological extremes, are a work in progress—an ongoing philosophical journey.
For further reading:
- Adam Elga, “Defeating Dr. Evil with Self-Locating Belief”
- Nick Bostrom, Anthropic Bias
- Sean’s solo podcast episodes on Fine-Tuning and Many Worlds
End of Summary
