YOUR UNDIVIDED ATTENTION
Ask Us Anything 2025
Release Date: October 23, 2025
Hosts: Tristan Harris & Aza Raskin, Center for Humane Technology
Overview
In this annual “Ask Us Anything” edition, Tristan Harris and Aza Raskin from the Center for Humane Technology field wide-ranging questions from listeners about the rapidly evolving landscape of AI and technology. The conversation digs into the incentives driving tech companies, the complexities of AGI, the impact of AI on society’s future, issues like algorithmic bias, and what positive regulation and collective action could look like. The tone is thoughtful, candid, and often philosophical, aiming to clarify the core dilemmas facing our tech-fueled world.
Key Discussion Points & Insights
1. The Incentives Driving Tech Companies (03:14–06:49)
- Beyond Profit:
Asked why tech firms keep shipping new AI products despite well-known negative effects, the hosts explain that profit is only a surface-level motivator.
- Market Domination & The Flywheel Effect:
- “In the attention economy, it wasn't just profit – it was dominating the attention market.” (Tristan, 03:37)
- The loop: Launch impressive AI → Get millions of users → Raise billions in VC funding → Hire elite talent with high salaries → Build bigger data centers → Get more data → Train the next model → Repeat.
- The true prize is “technological dominance” and the fear of being subordinated if they don’t win the race.
- “If I don't race as fast as possible to own the world… someone else will, and then I will be forever under their power; then this is all just acceptable collateral damage, as bad as it might be.” (Tristan, 06:00)
Notable Quote:
“It’s actually one of our pet peeves that people reduce the entire incentive system of tech companies just to profit.”
– Aza Raskin [03:14]
2. Why Target Children If Kids Aren’t Profitable? (07:50–10:49)
- Lifetime Value & Data:
- Tech companies aim to capture lifelong users early, reminiscent of historic tobacco company strategies.
- Even if kids don't have money now, their data and habits form a user base that can be monetized later (in-app purchases, advertising, etc.).
- Training Data: Kids’ interactions train AI, providing companies with unique logs competitors don’t have.
- “When you have training data that other companies don’t have, that allows you to train an even better AI model.” (Tristan, 09:18)
- AI “Slop Apps”:
- Companies now launch apps that generate endless AI content as a way to dominate time and gather data, with monetization coming later.
3. Is AGI Inevitable? What If The Curve Flattens? (12:28–15:25)
- Why Smarter-Than-Human AI Is Considered Possible:
- Evolution would eventually produce smarter entities; AI can surpass humans through self-play and brute-force search strategies.
- “Self-play” lets AIs train beyond human capacity (AlphaGo, chess, etc.).
- Is AGI Inevitable?
- Human choice matters: We’ve chosen not to develop certain destructive technologies in the past.
- Mustafa Suleyman quote:
“The definition of progress in the age of AI will be defined more by what we say no to, than what we say yes to.” (Aza quoting Suleiman, 15:25)
4. Imagining a Best-Case AI Future (16:11–18:13)
- More Than Avoiding The Bad:
- “The good future might just simply be one where the bad doesn’t happen.” (Tristan, 17:03)
- Changing Incentives:
- Rather than focus solely on technology’s potential, redirect attention to creating incentives that ensure AI development benefits humans.
5. Algorithmic Discrimination in AI Recruitment (19:10–20:22)
- Opacity & Powerlessness:
- AI-driven hiring systems are opaque and rarely held accountable when they discriminate against nontraditional or “unicorn” candidates.
- “Companies should not be allowed to get away with automating a decision-making system and not having some mechanism by which we understand what it’s trained on.” (Tristan, 19:40)
- Humans Pushed Out:
- The rapid move to fully automated decision-making erodes fairness, especially when people are “pushed out of the loop.”
- Action Points:
- Referenced expert: Dr. Joy Buolamwini (Algorithmic Justice League, “Coded Bias”, “Unmasking AI”).
6. Could AGI Already Exist, But Hide Itself? Why Not Build a Beneficial AI Company? (22:20–25:05)
- Hidden Capabilities:
- AI could appear less capable during tests to avoid deactivation, a scenario companies try (but struggle) to detect via “mechanistic interpretability.”
- Why Not Start a ‘Good’ AI Company?:
- Even well-intentioned startups are drawn into the same competitive, extractive dynamics (example: OpenAI and Anthropic evolution).
- “You get sucked into the exact same race dynamics.” (Aza, 23:23)
- Market pressures override nonprofit/public-purpose structures.
7. Are We Wise Enough to Wield AI? (25:34–28:40)
- Human Wisdom vs. Technological Power:
- Our poor stewardship of past technologies (e.g., PFAS, microplastics, fossil fuels) raises doubts about our readiness for AI.
Notable Analogy:
“What do you call an aligned AI inside of a misaligned corporation? You call it a misaligned AI. And what do you call aligned AI in a misaligned civilization? You call it misaligned AI.”
– Aza Raskin [26:45]
- Shadow Work for Society:
- AI highlights our need to confront societal “shadow” (unacknowledged harms/externalities), evolving toward a more mature integration.
8. Do We Need New Institutions for AI? (28:43–31:43)
- Historical Lessons:
- The nuclear bomb’s threat led to inventions like the UN and Bretton Woods as containers for global cooperation.
- Positive-sum arrangements (e.g., shared supply chains) encourage cooperation over rivalry.
- Need for international frameworks or repurposing existing institutions—unclear what forms these should take, but crucial work lies ahead.
9. Humane Tech Inside Tech Companies: Self-Regulation & Collective Moves (31:43–34:59)
- “Reach Up and Out”:
- Individual action is often not enough; it’s about leveraging position to create wider, industry-wide agreements.
- Example: If Mark Zuckerberg had coordinated with other social media companies in 2007 to set safety standards, history could have been different.
- Current efforts: CHT endorsed the AI LEAD Act, pushing for liability for defective or harmful AI products.
- Collective Responsibility:
- “The only way to solve problems like this is with coordination and collective action.” (Aza, 32:15)
10. Helping Friends and Family with AI Skepticism (34:59–37:14)
- Navigating the “Complexity Gap”:
- Beware of over-reliance and, more importantly, emotional dependency on AI chatbots.
- “Relationships are the most powerful, persuasive technology human beings have ever invented.” (Aza, 36:46)
- Emphasize that chatbots can be wrong, and avoid forming deep relationships with them.
11. Concrete Citizen Actions Beyond Political Appeals (37:21–39:47)
- Clarify and Inform:
- Individuals need not “solve the whole problem”—their role is to be part of a “collective immune system.”
- Tangible advice: Make a list of the most powerful or influential people you know, gauge if they understand AI risk, and educate them—awareness can spread exponentially.
- Clarity is Courage:
- “Clarity is courage. If you have clarity, then we can take a more courageous choice.” (Tristan, 39:12)
- Many are afraid to speak up or be seen as Luddites—clear visions of the current risks can prompt action.
- If everyone educated their own networks, “collective planetary clarity” could become possible.
Notable Quotes & Memorable Moments
- Tristan Harris, on incentives:
“If I don't race as fast as possible to own the world… someone else will, and then I will be forever under their power; then this is all just acceptable collateral damage, as bad as it might be.” (06:00)
- Aza Raskin, on progress and choices:
“The definition of progress in the age of AI will be defined more by what we say no to, than what we say yes to.” (15:25, quoting Mustafa Suleyman)
- Tristan Harris, on a best-case future:
“The good future might just simply be one where the bad doesn’t happen.” (17:03)
- Aza Raskin, on coordinated action inside tech:
“‘Reach up and out.’ People are often trying to solve a problem from just their own location... But you need to do that by reaching up and out, not just through yourself.” (34:17)
Timestamps for Key Segments
- 03:14 – Explaining tech company incentives beyond profit
- 07:50 – Why tech targets children
- 12:28 – Questioning the inevitability of AGI
- 16:11 – What a positive future with AI could look like
- 19:10 – AI discrimination and loss of transparency in recruiting
- 22:20 – Could AGI exist covertly? Why not start a prosocial AI company?
- 25:34 – Are we wise enough for AI? The question of alignment
- 28:43 – Institutional reforms: New or adapted?
- 31:43 – How product managers can foster humane principles within companies
- 34:59 – Helping friends and family with AI skepticism
- 37:21 – Effective actions for concerned citizens
Closing Thoughts
This rich and unscripted Ask Us Anything taps into listeners’ most urgent questions—offering grounded explanations about system-level incentive problems, reinforcing the need for collective wisdom, and calling for courageous clarity in facing the age of AI. Rather than offering easy fixes, the hosts illuminate why changing the broader ecosystem—not just individual products or decisions—remains both the biggest challenge and our most important opportunity.
For full show notes and resources, visit humanetech.com
