Choiceology with Katy Milkman
Episode: "The Algorithm Advantage"
Release Date: March 23, 2026
Host: Dr. Katy Milkman
Episode Overview
In "The Algorithm Advantage," host and behavioral scientist Dr. Katy Milkman explores why and when people trust algorithmic advice over human expertise. The episode traces the Boston Celtics' analytics-driven transformation, examines pioneering research on algorithm appreciation, and considers when humans should—and should not—rely on algorithms. Insights span high-stakes sports, medicine, everyday decision-making, and the profound implications of generative AI.
Key Discussion Points & Insights
Human vs. Algorithmic Advice
- Opening Scenario: Katy sets the stage with a relatable dilemma: who is more trusted for dinnertime cooking advice—an algorithm like ChatGPT or a foodie friend?
- Mixed opinions surface: some trust the speed and concrete guidance of ChatGPT, others the authenticity and tailored touch of a human friend.
- Quote (Dr. Katy Milkman, 01:06): “I find that recipe suggestions are usually pretty good. I guess the algorithm, because it's faster and you know, at that point, I just want dinner on the table.”
- Main Theme: This everyday conundrum introduces the episode’s focus—how we weigh automated versus human advice in various domains.
Analytics and the 2008 Boston Celtics: A Sports Revolution
The Celtics’ Dilemma and the Rise of Analytics
- Celtics’ Low Point (02:19–03:15): The storied franchise hit rock bottom in 2006, plagued by injuries and poor results.
- Traditional Scouting (03:15–04:22):
- Dean Oliver (ESPN data scientist) explains scouts’ reliance on “just watching” and gut feel.
- Quote (Dean Oliver, 04:04): “There are guys I can see for five minutes and I never have to see him again because I know right then... obviously not the way a lot of data goes.”
The Turn to Analytics
- Moneyball Influence (04:57–05:10): The Celtics, inspired by baseball’s Moneyball movement, bring in Mike Zarren—an analytics-driven decision-maker.
- Building a Predictive Model (05:32–05:54):
- Zarren’s draft models predict player performance years into the future, revealing undervalued talent invisible to scouts.
Key Trades and “The Big Three”
- Acquiring Garnett & Allen (06:10–07:14): The data identify veteran stars Kevin Garnett and Ray Allen as undervalued assets with hidden strengths (notably defense).
- Quote (Dean Oliver, 06:29): “Kevin Garnett ... was a genius on the defensive side of the ball. ... Even the old school scouts are supposed to see it. But it is not as prominent as offense.”
Triumph and Impact
- Season Success (07:30–13:31):
- Celtics post one of NBA’s best-ever records; playoff story builds tension and drama.
- Finals vs. Lakers: Analytics undergird team construction while LA relies on brand and star power.
- Championship Outcome (13:03–13:31): Celtics win Game 6 in a euphoric blowout; green and white confetti rains down as data-driven strategy delivers.
Legacy and Lessons
- Dean Oliver Perspective (15:20): “I don't think it's recognized as much as it should be for, for how it kind of changed the league. ... Certainly if they hadn't changed their path ... it would have been practically impossible to win a championship.”
- Modern NBA (16:00):
- Every team now has an analytics department. Models drive roster construction, game tactics (like more three-pointers), and player evaluation.
- Yet, skepticism persists—when algorithms make mistakes, trust can quickly erode.
Behavior Science Spotlight: Algorithm Appreciation
What Is "Algorithm Appreciation"? (18:19–18:55)
- Definition (Jennifer Logg): “People incorporate advice more from an algorithm than from people. ... we call [that] algorithm appreciation.”
Examples & Experiments
- Netflix & Streaming Recommendations (19:08): Algorithms like Netflix’s are familiar sources of advice; LLMs like ChatGPT represent a new frontier.
- Classic Study (19:56–20:55):
- Subjects predict song popularity, receive identical advice labeled as coming from either an "algorithm" or a "person".
- Finding: People update their beliefs more when told the advice comes from an algorithm.
- Quote (Dr. Katy Milkman, 20:55): “So the same advice labeled differently as coming from an algorithm versus a person basically produces different decisions.”
- Expert vs. Novice Dynamics (21:12–22:50):
- National security “experts” discount all advice (human or algorithmic), while non-experts trust algorithms more—and as a result, outperform experts on forecasting accuracy.
- Quote (Dr. Katy Milkman, 22:50): “The implications of that are a little scary, right? As two experts speaking to one another... maybe don't listen to us, but listen to the algorithms and you'll make better judgments.”
Drivers and Boundaries
- Numeracy Matters (23:48):
- Participants comfortable with numbers trust algorithmic advice more.
- No Age Effect: Familiarity with technology (proxied by age) doesn't explain algorithm trust.
- Transparency (“Unpacking the Black Box”, 25:15–26:53):
- Showing people how an algorithm works (“simple” vs. “complex”) doesn’t meaningfully change their level of trust.
- Support from Dan Goldstein's research: Complexity level doesn’t sway willingness to accept algorithmic advice.
When to Prefer Algorithmic Advice—and When to Beware
When Algorithms Excel
- Medicine Example (27:00–28:09): Algorithms outperformed pathologists in diagnosing cancer, spotting cues missed by humans.
- Broader Implication: Algorithms add value by processing vast data and highlighting overlooked patterns—sometimes even “updating textbooks.”
When Humans Should Be Cautious
- Limits and Risks (28:23–29:07):
- Algorithms are only as good as their training data—biased or incomplete input propagates errors and prejudices.
- Quote (Katy Milkman, 29:02): “So you could end up with a really low quality algorithm and trust too much in it.”
Practical Takeaways
- Synthesis with the “Wisdom of the Crowd” (29:16–29:45):
- Incorporate group and algorithmic input for best accuracy—GenAI is a valuable voice in a collective.
- Personal Practice (Jennifer Logg, 29:50):
- Scrutinize the data source behind any algorithm.
- Quote (Jennifer Logg, 29:50): “If I'm getting advice that seems to be coming from an algorithm, I think more questions come up for me. Is this data complete? ... The algorithm is going to magnify whatever patterns in the input data.”
Broader Reflections & Societal Implications
Scaling Trust and the Rise of Generative AI
- Growing Algorithm Adoption (31:08):
- The more an algorithm is used in a domain, the more trust (“algorithm appreciation”) grows in that context.
- Cited research (John Bogard, Suzanne Hsu): widespread adoption predicts rising algorithm appreciation.
Cautions: Bias & Deskilling
- Bias Risks:
- Algorithms can perpetuate racism, sexism, cognitive biases—especially LLMs (large language models).
- Quote (Katy Milkman, 31:37): “Biases like sexism and racism can be baked into algorithms if they're trained on flawed data. And like humans, ... LLMs exhibit many biases.”
- Deskilling Hazard:
- Over-reliance can erode human expertise—doctors, for example, may lose their edge if aided too exclusively by AI.
- Quote (Katy Milkman): “If we outsource learning and thinking to algorithms, that may be okay in some arenas but catastrophic in others.”
“Trust, But Calibrate”
- Final Takeaway (32:00):
- Appreciate and incorporate high-quality algorithmic advice—but remain vigilant, critical, and aware of underlying data and context.
- Quote (Katy Milkman): “We should lean into our algorithm appreciation, but with care and calibration... blind trust in any powerful tool can lead to missteps.”
Notable Quotes
- Katy Milkman (Opening, 01:06): “I find that recipe suggestions are usually pretty good. I guess the algorithm, because it's faster... I just want dinner on the table.”
- Dean Oliver (Celtics context, 04:04): “There are guys I can see for five minutes and I never have to see him again because I know right then and that's obviously not the way a lot of data goes.”
- Jennifer Logg (Algorithm Appreciation, 18:31): “People incorporate advice more from an algorithm than from people. ... We call that algorithm appreciation.”
- Dr. Katy Milkman (Impact, 22:50): “The implications of that are a little scary, right? ... maybe don't listen to us, but listen to the algorithms and you'll make better judgments.”
- Jennifer Logg (Data Skepticism, 29:50): “If I'm getting advice that seems to be coming from an algorithm, I think ... is this data complete? ... The algorithm is going to magnify whatever patterns in the input data.”
Key Timestamps
- 01:06: Everyday human vs. algorithm advice example
- 02:19 – 16:00: Deep dive into the Celtics’ analytics transformation
- 18:16 – 31:08: Jennifer Logg interview—algorithm appreciation research and implications
- 29:16: Practical advice for applying algorithmic guidance
- 31:08 – End: Societal consequences, bias, deskilling, calibrated trust
Episode Tone and Style
Engaging, clear, and anchored in behavioral science. Katy Milkman blends vivid storytelling (Celtics, medical anecdotes) with research insights. The episode balances warning and optimism, urging listeners to use algorithmic tools thoughtfully.
Useful for Listeners Who Missed the Episode
This episode reveals why we increasingly gravitate toward algorithmic advice—sometimes preferring it over human judgment—and when that tendency can help or backfire. Drawing lessons from sports, business, medicine, and AI, the show offers both inspiration and caution for those striving to make better choices in a data-driven world.
