Masters in Business: At The Money – Fan Favorite: Algorithmic Harm
Date: January 8, 2026
Host: Barry Ritholtz (Bloomberg)
Guest: Cass Sunstein, Professor at Harvard Law School and author of Algorithmic Harm: Protecting People in the Age of Artificial Intelligence
Episode Overview
This episode explores the pervasive influence of algorithms and artificial intelligence (AI) in modern society, focusing on the concept of "algorithmic harm." Host Barry Ritholtz speaks with Cass Sunstein about the ways algorithms shape prices, information feeds, and even our social perspectives—sometimes enhancing efficiency, but often leading to manipulation, discrimination, and societal division. The discussion draws on Sunstein's new book and offers insight into both the promise and peril of algorithm-driven markets and media.
Key Discussion Points & Insights
1. Defining Algorithmic Harm
- [01:35] Sunstein uses a Star Wars metaphor:
- Jedi algorithms recommend what people want at fair prices, without exploiting them.
- Sith algorithms exploit consumer ignorance or behavioral biases—e.g., targeting people who lack information or are unrealistically optimistic about a product's effectiveness.
- Quote:
"Exploitation of absence of information. That's algorithmic harm."
— Cass Sunstein [01:58]
2. Everyday Examples of Algorithmic Impact
- Obvious areas: Dynamic Uber pricing, personalized Amazon books, social media content curation, music recommendations on Pandora.
- Less obvious:
- Price differentiation based on economic status (wealthier people pay more).
- Cultural balkanization—algorithms continuously reinforce existing preferences (e.g., music taste), potentially narrowing worldviews and hindering discovery.
- Quote:
"We're going to get very balkanized culturally...that's culturally damaging and it's also damaging for the development of individual tastes and preferences."
— Cass Sunstein [05:12]
3. From Music to Media: Echo Chambers and Societal Division
- [06:22] Discussion expands to news consumption:
- Algorithms not only create but intensify echo chambers—media bubbles form on the left and right, fragmenting public discourse and making mutual understanding difficult.
- Quote:
"Algorithms can echo chamber you. An algorithm might say, you know, you're keenly interested in immigration and you have this point of view. So... we're going to funnel to you lots of information..."
— Cass Sunstein [06:48]
- Societal harm arises when citizens base their worldviews on algorithmically filtered, divergent realities.
- Quote:
"People will be living in algorithm-driven universes that are very separate from one another and they can end up not liking each other very much."
— Cass Sunstein [07:18]
4. Algorithmic Threats to Democracy
- [08:11] Algorithms and democracy:
- Algorithmic news feeds erode the shared facts and common reality that democratic self-government depends on.
- Geographically separate realities: residents of Los Angeles and Boise, Idaho, can end up seeing very different "realities" online.
5. Price vs. Quality Discrimination
- [08:41] Two types:
- Price discrimination: Charging consumers differently based on wealth/tastes—can enhance efficiency if buyers are informed.
- Quality discrimination: Offering different product versions; also defensible if consumers are informed.
- Harm emerges when algorithms knowingly exploit consumer ignorance or behavioral biases.
- Quote:
"If it's the case that for either pricing or for quality, the algorithm is aware of the fact that certain consumers are particularly likely not to have relevant information, then everything goes haywire."
— Cass Sunstein [10:14]
6. Crossing the Line into Exploitation
- [11:03] Behavioral targeting:
- Companies like Facebook can target consumers based on intricate data (likes, geography, credit, purchase history).
- The danger lies in using this power to exploit not just preferences but vulnerabilities—behavioral biases or lack of self-knowledge.
- Example: Marketing dubious pain relief products to vulnerable radio audiences.
- Quote:
"That's not going to make America great."
— Cass Sunstein [13:20]
7. AI as an Extension of Algorithmic Impact
- [13:32] The leap from algorithms to AI:
- Large language models (like ChatGPT) can learn detailed personal information from interactions.
- Need for robust privacy protections to prevent misuse.
- Generative AI increases both the benefits (personalization) and the risks (manipulation, scams).
- Quote:
"Generative AI can go well beyond the algorithms we've gotten familiar with, both to make the beauty of algorithmic engagement... and the ugliness of algorithms: here's how we can exploit you to get you to buy things."
— Cass Sunstein [14:51]
8. Market Efficiency vs. Price Gouging
- [15:30] Algorithms, surge pricing, and gouging:
- Algorithms may just reflect market dynamics (e.g., Uber during rain, shovels during a snowstorm).
- The line blurs when emotional pressure or vulnerability leads consumers to overpay.
- Quote:
"While it's morally abhorrent to many ... from the standpoint of standard economics, it's okay. Now if... people under short term pressure... are especially vulnerable ... there's a behavioral bias..."
— Cass Sunstein [16:33]
9. Regulation, Privacy, and Transparency
- [17:29] Comparing U.S. and Europe:
- Europe is generally ahead on privacy protections, but neither the U.S. nor Europe has solved the problem of algorithmic exploitation.
- Sunstein suggests regulation should focus on:
- Enhancing consumer protection (via informed choice and bias mitigation).
- Enforcing algorithmic transparency (public understanding of what algorithms are doing).
- Quote:
"Right to algorithmic transparency ... is a coming thing where we need to know what the algorithms are doing so it's public. What's Amazon's algorithm doing? That would be good to know."
— Cass Sunstein [19:32]
Memorable Quotes
- "Exploitation of absence of information. That's algorithmic harm." — Cass Sunstein [01:58]
- "We're going to get very balkanized culturally...that's culturally damaging and it's also damaging for the development of individual tastes and preferences." — Cass Sunstein [05:12]
- "Algorithms can echo chamber you...that might be a very good thing from the standpoint of the seller ... But from the standpoint of you, it's not so fantastic. And from the standpoint of our society, it's less than not so fantastic." — Cass Sunstein [06:48]
- "If it's the case that for either pricing or for quality, the algorithm is aware of ... certain consumers are particularly likely not to have relevant information, then everything goes haywire." — Cass Sunstein [10:14]
- "Generative AI can go well beyond the algorithms we've gotten familiar with ... the beauty of algorithmic engagement ... and the ugliness of algorithms, here's how we can exploit you." — Cass Sunstein [14:51]
- "Right to algorithmic transparency ... is a coming thing where we need to know what the algorithms are doing so it's public." — Cass Sunstein [19:32]
Noteworthy Timestamps
- Defining algorithmic harm: [01:35]
- Cultural balkanization and music tastes: [04:24]
- Media bubbles and democracy: [06:22] – [08:41]
- Price vs. quality discrimination: [08:41] – [11:03]
- Behavioral targeting and exploitation: [11:03] – [13:32]
- From algorithms to AI, privacy concerns: [13:32] – [15:30]
- Surge pricing vs. price gouging: [15:30] – [17:29]
- U.S. vs. Europe on privacy, transparency: [17:29] – [20:00]
Tone & Language
The discussion is candid and occasionally playful (e.g., Star Wars metaphors), but leans earnest and informative. Sunstein mixes economic analysis with evocations of popular culture, aiming to clarify complex concepts with relatable analogies and real-world examples.
Conclusion
Barry Ritholtz and Cass Sunstein offer a wide-ranging and accessible primer on how algorithms—soon supercharged by AI—shape contemporary consumer experiences, media diets, and even democratic processes. The episode serves as both a warning and a guide, urging listeners to demand transparency, be vigilant about exploitation, and take active steps to protect themselves in an increasingly algorithm-mediated world.
