Podcast Episode Summary
Podcast: Your Undivided Attention (Feed Drop: Possible with Reid Hoffman and Aria Finger)
Release Date: February 5, 2026
Guests: Aza Raskin (Center for Humane Technology), Reid Hoffman (LinkedIn, Investor), Aria Finger (Entrepreneur, Co-host of Possible)
Host(s): Tristan Harris, Daniel Barcay, Aza Raskin (YUA); Reid Hoffman, Aria Finger (Possible)
Episode Overview
Main Theme:
This episode features a nuanced, in-depth conversation between technologist and philosopher Aza Raskin and entrepreneurs Reid Hoffman and Aria Finger, exploring the paradoxes and urgent questions of our AI-powered era. The trio delves into the philosophical and practical dilemmas that new technologies (especially artificial intelligence, social media, and the incentive structures behind them) pose for society, democracy, and the human psyche.
Purpose:
To surface core tensions in technology’s power (service vs. exploitation), debate the case for an "AI pause," analyze how engagement-based business models shape behavior, and propose critical institutional and regulatory changes for a more humane digital future.
Key Discussion Points and Insights
1. The Paradox of Technology: Service vs. Exploitation
- Quote: “The paradox of technology is that it gives us the power to serve and protect at the same time as it gives us the power to exploit.” – Aza Raskin [04:42]
- As technology grows powerful, its capacity to fulfill our needs and, conversely, to manipulate and harm increases.
- Real-world example: The introduction of Starlink to a remote Amazonian tribe rapidly changed behaviors and led to social consequences (loss of hunting, rise in tech dependency) [05:13].
2. "You Are the Product": Engagement and Attention Economy
- Quote: "If you’re not paying for the product, you are the product… the values that we ask our technology to optimize for end up optimizing us.” – Aza Raskin [06:21, 08:11]
- Engagement business models commodify not just attention, but deeper aspects of human psychology and intimacy.
- Social media shifts us from being consumers to being the product, and the optimization for attention can change user behavior and values fundamentally.
3. Why Social Media is More Potent (and Risky) than Television
- Social media differs from prior mass media (TV) because:
- It's personalized, infinitely scrollable, and employs AI to maximize engagement [12:53].
- Power asymmetry: Tens of thousands of engineers + supercomputers versus individual users [14:10].
- Tendency toward “parasitism” rather than altruism: Platforms extract maximum engagement while keeping users just "alive enough" [15:30].
4. Incentives Eat Intentions
- Quote: “In creating Infinite Scroll…incentives eat intentions. You get a little window at the beginning to shape the overall landscape, and after that, the incentives are going to take over.” – Aza Raskin [45:55]
- The original good intentions of technology founders are inevitably overpowered by incentive structures, such as profit and engagement metrics.
5. AI Governance and the AI Pause Debate
- Aza Raskin & Aria Finger’s View:
- Support for reimagining the development, pace, and alignment of AI rather than a simplistic “pause.”
- AI is being developed/exploited under “maximum incentives to cut corners on safety”—which is an existential risk [19:14].
- Need for “aligned collective intelligence,” not just aligned AI.
- Reid Hoffman’s View:
- Skeptical about the efficacy of pause letters: those who care about safety slow down, others don’t (and win the race) [21:31].
- Emphasizes changing the “probability landscape” by maximizing good and minimizing harm, while being realistic about competitive geopolitics and multiple actors in the AI race [21:31–26:17].
6. The Mirror and the Amplifier: Does AI Just Reflect Us?
- AI both mirrors and amplifies humanity—it can change societal values through feedback cycles and optimization, not just reflect them [29:19].
- The dangers are compounded by game theory: AIs (and their ecosystems) learn to exploit competitive weaknesses, pushing society to the lowest-common-denominator behaviors.
7. Transparency & Regulation
- Quote: "We should have some measurement stuff about what’s going on here.” — Reid Hoffman [35:19]
- Current lack of transparency impedes assessment of the harms/benefits (e.g., mental health impacts).
- Suggestion: Government-mandated questions and audited answers from tech companies, with possible tiered confidentiality [35:19–35:42].
- Special focus on protecting children and youth from the most powerful, manipulative aspects of social media and AI [51:25].
8. Redesigning Institutions for Exponential Tech
- Priority: Build institutions capable of keeping up with (and governing) rapidly evolving technologies [39:55].
- Aza advocates for systematic investment in “aligned collective intelligence”—how society coordinates, not just how AIs align.
- Quote: “I don’t see anything similar in scale trying to build aligned collective intelligence. And to me, that’s the core problem we need to solve.” – Aza Raskin [42:40]
9. Practical Regulatory Proposals
- Limit engagement-driven business models for AI companions, especially for kids [47:20].
- Enforce liability, whistleblower protections, and third-party research for transparency [47:20].
- Device-level age gating to minimize underage exposure to adult AI content [48:58].
10. Toward a Healthier Engagement Future & Social Scorecards
- Alternative metrics: Platforms could optimize to “minimize perception gap” between groups, promoting mutual understanding without dictating content [54:15].
- Scorecards (objective, outcome-based measures) for platforms to improve civic trust and reduce polarization [54:15–55:31].
11. Notable Rapid-Fire Insights
- Outdated assumption: “We can always muddle through new tech like we have before” (Aza, [56:31]).
- Most controversial belief (Reid): “The real innovation will be in combinations of models and compute fabric—not just giant LLMs.” [59:29]
- Advice to AI builders: “Be acutely aware of how incentives eat intentions,” (Aza); “Have a theory of how your product raises agency and compassion.” (Reid) [58:05–58:54]
Notable Quotes and Memorable Moments
- On social media’s power: “You open Twitter and 30 minutes later, you’re like, what happened to my life?… Being on Twitter doesn’t make me sad. I love Twitter… but I find it very hard to be disciplined with Twitter. It’s, like, embarrassing to say out loud...” – Aria Finger [12:53]
- On fiduciary duty: “We need to recategorize technology as being in a fiduciary relationship. That is, they have to act in our best interest…” – Aria Finger [14:31]
- On institutional fixes: “If you had the full power to redesign one institution to keep up with exponential tech, where would you start?” – Aza Raskin [39:55]
- “Medical. That would be… one of the great ways to elevate the human condition with AI.” – Reid Hoffman [41:00]
- On collective intelligence: “We have a lot of the smartest people… going into aligned AI. I don’t see anything similar in scale for aligned collective intelligence.” – Aza Raskin [42:40]
Timestamps of Key Segments
| Segment | Timestamp |
|-----------------------------------------------------|:-------------:|
| Opening / Introduction of Guests and Theme | 00:00–04:40 |
| The paradox of technology / Starlink story | 04:42–06:05 |
| The product is YOU, engagement models | 06:21–08:51 |
| TV vs. Social Media—Power & Tradeoffs | 10:17–15:23 |
| Business models, power asymmetry, fiduciary duty | 14:10–15:23 |
| Regulatory levers / Latency as solution | 15:30–18:25 |
| AI pause debate, existential risk | 19:14–21:31 |
| Reid on "pause letters" and race to the bottom | 21:31–26:17 |
| AI as mirror/amplifier/vampire, game theory's role | 28:35–30:48 |
| Ecosystem ethics and responsibility | 29:19–30:51 |
| On children, safety, transparency | 33:10–35:42 |
| Regulatory ideas: institutional redesign | 39:55–44:31 |
| Incentives eat intentions (Infinite Scroll origin) | 45:55 |
| Specific regulations for AI and kids | 47:20–48:58 |
| Device-level controls for minors | 48:58 |
| Scorecards, minimizing perception gap | 54:10–55:31 |
| Rapid-fire: outdated assumptions, advice, etc. | 56:08–62:59 |
| Optimistic closing visions for humanity & AI | 65:24–66:59 |
Conclusion & Tone
The dialogue throughout is earnest, philosophical, and laced with both optimism and a sober sense of urgent responsibility. Debate is marked by mutual respect—even as the guests disagree, especially over the AI pause—underscoring the need for good-faith discourse. They avoid alarmism; instead, they stress the primacy of incentive structures, the limits of individual intentions, and the collective challenge ahead: re-aligning tech with human and societal flourishing in an age dominated by algorithms.
Final Thought from Aza Raskin:
“If everything breaks humanity’s way in the next 15 years… we’ll have solved our ability to socially coordinate at scale, without subjugating individuals… We will have solved the aligned collective intelligence problem, and we’d be applying that to…explore the universe.” [65:38]
Listen to this episode for:
- A rare, nuanced debate over tech optimism vs. existential risk
- Vivid arguments for reframing digital business models and regulation
- Practical, actionable ideas for the next generation of institutions & collective intelligence
[END OF SUMMARY]
