Freakonomics Radio Episode Summary: "Policymaking Is Not a Science — Yet (Update)"
Episode Information
- Title: Policymaking Is Not a Science — Yet (Update)
- Release Date: April 9, 2025
- Host: Stephen J. Dubner
- Network: Freakonomics Radio + Stitcher
- Description: An exploration of the challenges in translating scientific research into effective public policy, focusing on the scalability of interventions across various domains such as medicine, education, and economic policy.
Introduction
In this bonus episode of Freakonomics Radio, host Stephen Dubner revisits the theme "Policymaking Is Not a Science" with updated insights and data. Building upon a two-part series discussing "sludge"—the frictions that impede effective policy implementation—this episode delves into the complexities of scaling research-based interventions into real-world applications.
Key Contributors
- Stephen Dubner (Host)
- Dana Susskind (Physician Scientist at the University of Chicago): Specializes in cochlear implants and early childhood education.
- John List (Economist at the University of Chicago): Pioneer in field experiments and the credibility revolution in economics.
- Patti Chamberlain (Senior Research Scientist at the Oregon Social Learning Center): Developer of Treatment Foster Care Oregon.
- Lauren Suplee (Deputy Chief Operating Officer of Child Trends): Expert in implementation science.
Understanding the Problem: Scaling Scientific Interventions
Medical Adherence Challenges
- Dana Susskind discusses the remarkable success of cochlear implants in restoring hearing to profoundly deaf children. (02:12)
- "The cochlear implant is a remarkable piece of technology... but how often we fail to take advantage of these advances." (02:41)
- Despite technological advancements, adherence remains a significant issue:
- "About a third of all Americans have high blood pressure... Only 50% actually are controlled." (03:39)
- John List highlights human behavior as a critical barrier to adherence:
- "Prescription adherence is a very difficult nut to crack." (04:18)
Scaling Failures Across Sectors
- John List and Dana Susskind examine why interventions that work in controlled research settings often falter when scaled:
- "Solutions that look foolproof in the research stage are failing to scale up." (05:59)
Case Studies in Scaling Interventions
Parent Academy Initiative
- John List describes the Parent Academy program in Chicago Heights, which successfully improved children's cognitive and executive function skills within months. (10:17)
- Failure to Scale: When the program was introduced in London, parental uptake was minimal, and the program failed despite its initial success. (10:27)
Treatment Foster Care Oregon (TFCO)
- Patti Chamberlain explains the development and scaling of TFCO, which places individual children in family homes rather than group settings, resulting in better outcomes and lower costs. (18:44)
- Scaling Challenges: Initial attempts to implement TFCO across 15 sites faced systemic barriers due to conflicting policies across the child welfare, juvenile justice, and mental health systems. (20:24)
- Resolution through Fidelity Standards: TFCO overcame scaling issues by developing strict fidelity standards and training protocols, ensuring consistency across multiple sites. (42:17)
The Science of Scaling: Implementation Science
- Lauren Suplee introduces "implementation science" as a field dedicated to studying how programs are integrated into real-world settings and how implementation quality affects outcomes. (22:54)
- Definition: "It's the study of how programs get implemented into practice and how the quality of that implementation may affect how well that program works or doesn't work." (22:48)
- Challenges Identified:
- Voltage Drop: The reduction in program effectiveness when scaled up.
- "Voltage drop essentially means I found a really good result in my original research study. But then when they do it at scale, that voltage drop ends up being, for example, a tenth of the original result." (30:33)
- Fidelity: Maintaining the integrity of the original program during scaling.
- "Measures of fidelity is a really critical part of the implementation process." (41:21)
Three Buckets of Scaling Failures
John List categorizes scaling failures into three primary buckets:
- Lack of Evidence for Scaling
- "Only 8% of those programs were actually backed by research evidence." (27:34)
- False positives: Initial research findings may not hold without robust evidence.
- Wrong People Studied
- Studies may focus on populations that do not represent the broader target group.
- "These are studies that have a particular sample of people that show really large program effect sizes. But when you scale, that effect disappears." (28:20)
- Wrong Situation Used
- Environmental and contextual changes alter program effectiveness.
- "When you go from the original research to the scaled research program, we don't understand what properties of the situation or features of the environment will matter." (29:53)
Proposed Solutions for Effective Scaling
Rigorous Replication
- John List advocates for scaling programs only after multiple, well-powered, independent replications confirm the original findings:
- "We do not believe that we should scale a program until you're 95% certain the result is true." (34:52)
- Encouraging Replication: Rewarding scholars for replicating studies to ensure reliability.
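List's 95% threshold can be illustrated with a simple Bayesian sketch. The numbers below (the prior, the study's power, and the false-positive rate) are illustrative assumptions, not figures from the episode; they show why a single significant finding is rarely enough and why independent replications compound confidence quickly:

```python
# Illustrative sketch: how independent replications raise the probability
# that a program's effect is real. The prior, power, and alpha values are
# assumptions for illustration, not figures from the episode.

def posterior_effect_is_real(prior, power=0.8, alpha=0.05, n_replications=0):
    """Posterior P(effect is real) after n_replications independent
    statistically significant results, starting from a prior belief."""
    p_real, p_null = prior, 1 - prior
    for _ in range(n_replications):
        # A true effect reaches significance with probability `power`;
        # a null effect does so with probability `alpha` (a false positive).
        p_real *= power
        p_null *= alpha
    return p_real / (p_real + p_null)

# Suppose only ~10% of candidate programs have a real effect (cf. the
# episode's point that just 8% of programs were backed by evidence).
prior = 0.10
for n in range(4):
    print(f"{n} replications -> P(real) = {posterior_effect_is_real(prior, n_replications=n):.3f}")
```

Under these assumed numbers, one significant result leaves the posterior well below 95%, while two independent replications push it past that threshold, which is the arithmetic behind List's insistence on replication before scaling.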
Fidelity Standards
- Patti Chamberlain emphasizes the importance of maintaining fidelity through secure session uploads and rigorous coding:
- "If they're not meeting the fidelity standards, then we offer a fidelity recovery plan." (42:40)
Adaptability and Humility
- Researchers and policymakers must remain flexible and willing to adjust programs based on real-world feedback:
- "You shouldn't assume your original program was perfect; it will need adjustments." (43:12)
Quotes Highlighting Key Insights
- Dana Susskind (02:12): "Someone who is severely to completely profoundly deaf after implantation can have normal levels of hearing. And it is pretty phenomenal."
- John List (04:18): "Prescription adherence is a very difficult nut to crack."
- John List (07:32): "Now, my contribution in the credibility revolution was instead of working with secondary data, I actually went to the world and used the world as my lab and generated new data to test theories and estimate program effects."
- Patti Chamberlain (18:17): "We have to figure out how to use our own science to make better policies."
- John List (34:52): "We need to know what is the magic sauce."
- Dana Susskind (44:08): "Everybody's motivation at the end of the day is about trying to do good for the people they serve."
- John List (45:06): "So I do think inherently it is about people."
Conclusions and Forward Path
- Integration of Implementation Science: Essential for bridging the gap between research and policy by ensuring that programs can be effectively scaled without losing their intended impact.
- Cultural Shift in Academia and Policy: Encouraging replication and valuing fidelity over rapid implementation can lead to more reliable and effective policies.
- Collaborative Efforts: Researchers, policymakers, and practitioners must work together to understand and overcome the human-centric barriers to scaling.
- Vision for the Future: As John List puts it: "The world is imperfect because we haven't used science in policymaking. If we add science to it, we have a chance to make an imperfect world a little bit more perfect."
Additional Resources
For those interested in delving deeper into the research and methodologies discussed, Freakonomics Radio provides links to academic papers and further reading materials on Freakonomics.com. The full transcript of this episode is also available for comprehensive review.
Notable Audio Clips with Timestamps
- Dana Susskind on Cochlear Implants (02:12): "My job is to implant this incredible piece of technology which bypasses these defective hair cells..."
- John List on Scaling Challenges (10:17): "So if you want your program to work at higher levels, you have to figure out how to get the right people into the program."
- Patti Chamberlain on Implementation Barriers (20:33): "When we tried to implement, we ran into tremendous barriers because if we satisfied the policies and procedures of one system, we were at odds with the policies and procedures in the other system."
- Lauren Suplee on Implementation Science (22:54): "It's the study of how programs get implemented into practice and how the quality of that implementation may affect how well that program works or doesn't work."
Closing Remarks
This episode of Freakonomics Radio provides a comprehensive examination of why effective policymaking remains elusive despite robust scientific research. Through insightful discussions with leading experts like Dana Susskind, John List, and Patti Chamberlain, listeners gain a deeper understanding of the intricate challenges in scaling interventions and the critical role of implementation science in bridging the gap between research and real-world application.
