Planet Money: "Don't Hate the Replicator, Hate the Game"
Date: February 27, 2026
Hosts: Mary Childs, Alexi Horowitz-Ghazi
Theme: Investigating the Replication Crisis in Social Science through the “Replication Games”
Episode Overview
This Planet Money episode delves into the heart of the “replication crisis” — the troubling pattern where many scientific studies cannot be reproduced when their experiments or code are scrutinized. Mary and Alexi travel to Montreal to join economist Abel Brodeur at one of his “Replication Games,” a unique event rallying social scientists to audit and replicate published research papers. The episode explores how and why these problems persist, the efforts to confront them, and the potential of crowdsourced quality control to restore faith in published science.
Key Discussion Points & Insights
1. The Replication Games: An Unconventional Solution
- Introduction to Abel Brodeur (00:56–01:16):
- Abel Brodeur is an economics professor at the University of Ottawa known for organizing the Replication Games—large events where teams attempt to reproduce the results of published social science studies.
- At these games, dozens of social scientists split into teams to audit papers, checking code and results in a hackathon-style event (01:29–01:47).
- Why Replication is Necessary (01:47–02:13):
- As data and code have become easier to scrutinize, it has become clear that many published results don’t hold up—a phenomenon known as the “replication crisis.”
- The crisis spans disciplines, beginning in psychology and spreading to medicine and economics.
2. Roots of the Replication Crisis
- Abel’s Personal Experience (05:56–08:40):
- As a master’s student, Abel found no effect of smoking bans on smoking rates, contradicting prior research. He admits to manipulating the dataset until he got a statistically significant result—an “asterisk-worthy” outcome.
- Quote: "I was so happy I was in the library. I just yelled like, significant. I was so happy." — Abel Brodeur (07:29)
- Ultimately, he realized this tortured approach distorted the aim of science and decided to publish the more accurate, but less exciting, null result.
- Publishing Incentives and P-Hacking (08:45–09:40):
- Academic culture rewards statistically significant (p < 0.05), novel results, pushing researchers—often subconsciously—toward data manipulation known as p-hacking.
- Quote: "Actually when you look at it from the outside, it's like, this is crazy what you've done." — Abel Brodeur (09:23)
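To make the p-hacking mechanic concrete, here is a minimal simulation (not from the episode; all numbers are illustrative) of what happens when a researcher tries many analysis choices on data that contains no real effect: the chance that at least one specification comes out "significant" at p < 0.05 climbs rapidly.

```python
# Hypothetical simulation: why trying many specifications on pure
# noise reliably produces "significant" (p < 0.05) results.
import random
from statistics import NormalDist

random.seed(0)

def p_value(sample):
    """Two-sided p-value for H0: mean == 0, with known sd == 1."""
    n = len(sample)
    z = (sum(sample) / n) * n ** 0.5  # mean / (sd / sqrt(n)), sd = 1
    return 2 * (1 - NormalDist().cdf(abs(z)))

def p_hack(num_specs=20, n=100):
    """Run `num_specs` independent analyses of pure noise; return
    True if any one of them comes out 'significant' (p < 0.05)."""
    return any(
        p_value([random.gauss(0, 1) for _ in range(n)]) < 0.05
        for _ in range(num_specs)
    )

trials = 1000
hits = sum(p_hack() for _ in range(trials))
# With 20 tries at alpha = 0.05, theory predicts roughly
# 1 - 0.95**20, i.e. about 64% of "studies" find an effect in noise.
print(f"{hits / trials:.0%} of simulated studies found an effect")
```

The point of the sketch is that no single test is dishonest; the bias comes entirely from the freedom to keep trying specifications until one crosses the threshold.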
- Empirical Evidence & Journal Resistance (09:40–10:54):
- Abel and colleagues analyzed the distribution of "stars" (statistical significance markers) in academic papers, finding suspicious concentrations just beyond the publishable threshold.
- Their paper faced multiple journal rejections, reflecting publishers’ reluctance to confront these systemic issues.
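The bunching diagnostic described above can be sketched in a few lines. This is a toy illustration (the data and bin width are invented, not Brodeur's actual dataset or code): count reported test statistics just below versus just above the z ≈ 1.96 cutoff; under honest reporting the two bins should look similar, while a spike just above the line suggests p-hacking.

```python
# Toy version of the "bunching" check: compare counts of reported
# z-statistics just below vs just above the significance cutoff.
z_stats = [1.70, 1.88, 1.91, 1.97, 1.99, 2.01, 2.03, 2.05, 2.40, 3.10]

CUTOFF, WIDTH = 1.96, 0.15  # cutoff ~ p = 0.05; bin width is arbitrary
below = sum(CUTOFF - WIDTH <= z < CUTOFF for z in z_stats)
above = sum(CUTOFF <= z < CUTOFF + WIDTH for z in z_stats)
print(f"just below cutoff: {below}, just above: {above}")
# In this toy sample, 2 fall just below and 5 just above -- the kind
# of asymmetry the real analysis looks for across thousands of estimates.
```

At the scale of a single paper this proves nothing; the signal only emerges when the same asymmetry shows up across thousands of published estimates.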
3. The Birth and Growth of the Institute for Replication
- Making Replication Official (13:34–14:12):
- Researchers had been checking each other’s work only in one-off efforts; Abel realized that real change would require systematic, field-wide monitoring.
- He created the Institute for Replication, a quasi-official body that could request data and code with more authority—and scale up the auditing effort.
- Crowdsourcing Replication: The Games Begin (15:04–18:14):
- An unplanned overflow of participants at a replication workshop in Oslo led Abel to formalize the “Replication Games.” Surprisingly, teams found errors even at that first event, including duplicated data in a study of economic inequality.
4. Real-Time Replication in Montreal
- How a Replication Game Unfolds (19:54–20:30):
- The Games have no winners or prizes. Teams use the “replication package” (the original data and code) to attempt to reproduce published results within seven hours.
- Fieldwork Vignettes & Participant Perspectives
- Jolene Hunt (PhD student): Sees collaboration as a rare and valuable aspect of the event, breaking academic silos (20:48–21:39).
- Felix Fosu (postdoc): Prefers to find errors, emphasizing, “We need to know what works and what does not work.” (23:40–24:00)
- Replication in Practice (24:00–27:21):
- Phase One: Do the code and data produce the same results as published?
  - With the software open, Chishiya (researcher): "So if you check the numbers minus 18.11432, and I'm looking at the published version, it says minus 18.114 star, star, star." (25:57)
  - When the outcomes align, the team proceeds to “robustness checks.”
- Phase Two: Are the conclusions robust to changes in methods or data subsets?
  - Example: Omitting a single cartel changes the outcome of a paper on Mexican cartels, raising doubts about its entire thesis (29:22–29:39).
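The Phase Two cartel test is a leave-one-out robustness check. Here is a minimal sketch of the idea (not the paper's actual data or code; the group names and numbers are invented): re-estimate a simple pooled effect after dropping each group in turn, and flag any drop that makes the result collapse.

```python
# Hypothetical leave-one-out robustness check: does the estimated
# effect survive dropping any single group from the sample?
# Toy data: one group ("cartel_7") drives the entire average effect.
data = {f"cartel_{i}": [0.1, -0.2, 0.05, 0.0] for i in range(1, 7)}
data["cartel_7"] = [3.0, 2.8, 3.2, 2.9]  # the outlier group

def effect(groups):
    """Pooled mean effect across all observations in `groups`."""
    obs = [x for g in groups.values() for x in g]
    return sum(obs) / len(obs)

full = effect(data)
print(f"full-sample effect: {full:.3f}")

for name in data:
    subset = {k: v for k, v in data.items() if k != name}
    loo = effect(subset)
    flag = "  <-- result collapses" if abs(loo) < 0.5 * abs(full) else ""
    print(f"drop {name}: {loo:.3f}{flag}")
```

In this toy setup, dropping any of the first six groups barely moves the estimate, but dropping the outlier group wipes it out — the same pattern that raised doubts about the cartel paper.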
5. Outcomes and Reactions
- Presenting Findings (30:45–31:14):
- Most papers reproduced, but some teams found major issues; two uncovered significant problems—missing variables in one paper and non-robust findings in another.
- Handling Disagreement and Backlash
- Authors of challenged papers get advance notice and an opportunity to respond before findings go public (31:51–32:14).
- Sometimes replicators and original authors dispute what counts as a “real” finding or failure (33:00–34:25).
- Quote (Paolo Pinotti, original author): “It’s like doing a study on the effect of spreadsheets on productivity and then saying, oh, the results don’t hold up if you exclude Microsoft Excel.” (34:27)
- Abel’s Perspective on Success and Failure
- Quote: “In a world in which science works, I think this should have been picked up before it’s published, cited and disseminated. So I don’t think it’s a success.” — Abel Brodeur (35:26)
- Systemic Takeaway
- Most errors are not career-ending fraud but significant mistakes in coding, data handling, or interpretation.
- “The system is still broken, even after putting on more than 50 games and replicating about 300 papers.” — Mary Childs (36:37)
6. Cultural Impact & Incentive Change
- Behavioral Shift via Probability of Scrutiny (36:44–37:10):
- Research shows that it’s the chance of getting caught—not severity of punishment—that compels cleaner, more honest work. Brodeur’s games aim to increase perceived odds of detection, nudging the academic world toward greater integrity.
Notable Quotes & Memorable Moments
- “I was so happy I was in the library. I just yelled like, significant. I was so happy.” — Abel Brodeur (07:29)
- “Actually when you look at it from the outside, it’s like, this is crazy what you’ve done.” — Abel Brodeur (09:23)
- “We are trying to find something.” — Felix Fosu (23:16)
- “If you remove only one [cartel], then the result collapses.” — Felix Fosu (29:34)
- “In a world in which science works, I think this should have been picked up before it’s published, cited and disseminated. So I don’t think it’s a success.” — Abel Brodeur (35:26)
- “It’s like doing a study on the effect of spreadsheets on productivity and then saying, oh, the results don’t hold up if you exclude Microsoft Excel.” — Paolo Pinotti (34:27)
Important Timestamps & Segments
- 00:56–01:47: Introduction to Abel Brodeur and the Replication Games
- 05:56–08:40: Abel’s first encounter with the incentives problem and p-hacking
- 13:34–15:04: Creation of the Institute for Replication
- 16:26–18:14: The first Replication Game in Oslo, major early errors discovered
- 19:54–27:21: Montreal Replication Game: participant impressions and practical replication process
- 29:00–29:42: Cartel paper robustness test—key finding leads to a challenged result
- 31:51–34:25: Handling disputed findings between replicators and original authors
- 35:26–37:10: Abel on systemic failure, the ongoing necessity for scrutiny, and culture shift
Summary
In “Don’t Hate the Replicator, Hate the Game,” Planet Money chronicles how academic incentives and lax scrutiny have led to the replication crisis, with stories from inside the Replication Games—a hopeful, gamified approach to solving a systemic problem. Hosts Mary Childs and Alexi Horowitz-Ghazi capture the cautious optimism and the very real tensions as social scientists confront uncomfortable truths about their field. Abel Brodeur’s initiative brings transparency, a bit of fun, and the subtle pressure academics need to be more rigorous, showing that while science may never be perfectly clean, it’s a lot cleaner when everyone knows that anyone, anywhere, could walk in and check for dust.
