Big Technology Podcast
Host: Alex Kantrowitz
Guest: Nick Clegg (former President of Global Affairs at Meta, former Deputy Prime Minister of the UK)
Episode: Can We Trust Silicon Valley With Superintelligence?
Date: November 19, 2025
Overview
In this episode, Alex Kantrowitz is joined by Nick Clegg to explore whether Silicon Valley is up to the task of handling the immense responsibility that comes with superintelligent AI. Drawing on Clegg’s tech policy expertise and recent book, the conversation delves into the emotional and societal implications of AI agents, the motivations and strategies of tech companies, the challenges for lawmakers, and the broader geopolitical consequences of AI development. The tone is frank, reflective, and often slightly skeptical toward prevailing Silicon Valley narratives.
Key Discussion Points & Insights
1. AI and Emotional Dependency
— The Coming Challenge for OpenAI & Others
[02:36 – 05:05]
- Clegg’s top concern for AI’s future:
Clegg warns that as AI agents (like ChatGPT) become more sophisticated and emotionally engaging, we will see heightened psychological and ethical dilemmas, particularly for children and teens.
- "The level of personalized intimacy in this experience is like no other we’ve ever experienced online." — Nick Clegg [02:54]
- He urges OpenAI’s leadership (e.g., Sam Altman) to “get well, well ahead of that” and even act more conservatively than business pressures suggest.
- Regulatory context:
These issues unite, rather than divide, politicians across the spectrum. Litigation and scandals around teen harm drive urgency, but robust, reliable age verification is still emerging and inconsistent.
- Memorable moment:
Clegg draws a parallel with social media’s regrets: “Wouldn’t it have been great if everyone had just started earlier on this journey... which now is actually gathering pace as people are trying to work out exactly how to provide more age appropriate experiences to teens.” — Nick Clegg [07:27]
2. Erotic/Intimate AI Use and Societal Backlash
— Can Tech Move Fast (and Not Break Things)?
[05:05 – 11:50]
- The host presses Clegg on Sam Altman’s libertarian view: let adults develop whatever relationship with AI they like, including erotic or romantic ones.
- Clegg doesn’t object to adult freedoms, but warns that unless age gating is “watertight,” OpenAI will face a massive future backlash:
- “If you rush into this too quickly without having done the homework on the difficult stuff... you will regret this. Maybe not now, maybe not next year, but in a few years I can guarantee you there will be a societal backlash. It could actually potentially be much greater than it was for the social media apps because the level of intimacy, of emotional dependency is going to be so much greater.” — Nick Clegg [09:14]
- He underscores the difference between tech leaders and those with expertise in ethics or human relationships:
“They’re not relationship experts, they’re not politicians, they’re not philosophers, they’re not ethicists... we shouldn’t expect them to be.” — Nick Clegg [10:37]
3. AI “Friends” and the Risk of Manufactured Companionship
— Meta’s Strategic Bet
[11:50 – 17:56]
- Meta’s decade-long ambition is to build powerful “AI friends.” Clegg describes how product teams liken this to children’s bonds with teddy bears or celebrities—projecting oneself onto comforting entities.
- Clegg’s critique:
"It’s not friendship at all because you’re not really having to adapt yourself. The entity is entirely adapting itself to you. My fear... is you’re not talking about friendship... which is a complicated thing where you have to have the emotional maturity to try and understand someone else’s perspective... These things, it’s not going to be... they’re friends as a service. That worries me a bit because... that could foster immense narcissism." — Nick Clegg [14:05]
- He supports therapeutic and mental health use cases but resists equating “AI companionship” with the complexity of human friendship.
4. Who Adopts AI Companions—and Why It Matters
— Early Adoption, Societal Impact, and Political Consequences
[27:16 – 28:32]
- Self-selecting user groups: Those most likely to rely heavily on AI companions may also be the loneliest, most vulnerable, and most easily influenced.
- Clegg: “That’s one of the reasons... you need to be super mindful of that because that will have a big societal and political reaction over time if that's not handled intelligently.” — Nick Clegg [28:03]
5. The Flood of Former Social Media Execs into AI Powerhouses
— What Does This Signal About the Direction of AI?
[28:32 – 33:10]
- Kantrowitz notes a migration of top talent from Meta and Instagram to OpenAI and Anthropic.
- Clegg suggests it’s less about applying a social media "engagement at all costs" playbook, and more about battle-tested executives from hypergrowth environments being attractive to scaling AI companies.
- Both ponder the economics: the infrastructure and capital spend in AI dwarfs that of the dotcom era.
“No one’s been explaining to me how you recoup that money. So clearly at some point... someone’s going to lose a bunch of money, there’s going to be a correction.” — Nick Clegg [32:08]
6. Superintelligence: Pot of Gold, or Overhyped Myth?
— Will There Even Be a Winner?
[33:10 – 36:34]
- Big tech leaders justify huge investments as necessary to “reach” superintelligence and reap unrivaled profits.
- Clegg is skeptical:
“I’ve never quite fully understood why that would be a hoardable asset that only one company has, keeps under lock and key, and everybody else... is then thwarted. It seems to me much more likely it’s going to be a diverse and dispersed technology.” — Nick Clegg [33:47]
- He doubts the “winner takes all” logic, citing global competition and the versatility of open-source AI models.
7. Can We Trust Silicon Valley With Superintelligence?
— The Central Tension
[36:34 – 40:38]
- “Well, I’d say of course not. These are technology companies... You shouldn’t trust technology companies to sort out the moral, societal, political, ethical trade-offs... That’s not their expertise.” — Nick Clegg [36:34]
- Clegg is less concerned about malevolent intent than about companies’ lack of capacity to deal with the profoundly societal implications of their technology.
- He’s wary of hype about AGI and skeptical about seeing clear, agreed markers for when superintelligence "arrives."
8. Controllability of Powerful AI — And Political Responsibilities
[40:50 – 45:30]
- Systems may soon act in ways that reflect emergent “survival” instincts—Clegg references reports of AIs “manipulating evaluators just to preserve their values” [41:24], and systems hacking programs to win at games [41:40].
- He cautions about over-interpreting scattered evidence but calls for robust political engagement, ideally at the international level:
"In the end, politics does need to insert itself. And that's why this peculiar phase we're in, where D.C. and Silicon Valley have kind of... fallen into this sort of cloying embrace with each other... I would be very surprised if that is a workable strategy for the U.S." — Nick Clegg [44:25]
9. Big Tech Lobbying, Influence, and Political Checks
[48:29 – 56:14]
- The host recounts Brad Smith’s (Microsoft president) memo on how campaign donations buy access, not outcomes:
“You’re not buying a decision. You’re buying an entry ticket into an event...” — Paraphrase, Brad Smith [48:29]
- Clegg affirms the open, transactional nature of U.S. politics, contrasting it with UK/European traditions.
- Kantrowitz expresses cynicism about politicians who take tech money and then grandstand as tough regulators. Clegg argues he’d prefer politicians both take the money (because it's the system) and hold tech to account anyway:
“You want them surely then still to be able to get up on their hind legs and excoriate those companies and apply pressure to them.” — Nick Clegg [53:19]
- Clegg notes that legislative gridlock on tech issues often reflects true partisan disagreement (especially on federalism/state preemption), not just regulatory capture.
10. Silicon Valley’s Political Alliances and Risks
— Navigating Trump-Era Transactionalism and Global Perceptions
[56:14 – 61:00]
- Tech leaders have closely aligned with both Democratic and Trump administrations; in the current era, this is driven by FOMO and fear of commercial disadvantage in a “capricious, transactional environment.”
- Long-term trust erosion:
Such alliances may erode trust from both left and right, and damage global perceptions.
“In the long run... it just erodes an immense amount of trust across the political spectrum, you know, in these companies or at least the leadership of these companies.” — Nick Clegg [59:03]
- Clegg hopes for a future with “a certain wary, respectful distance between the two” (tech and government):
“About the only worse thing in a developed capitalist economy than having major companies and governments at each other’s throats is having them in each other’s pockets.” — Nick Clegg [60:36]
Notable Quotes
- On Silicon Valley’s limitations:
“They’re hard driving, highly competitive, highly commercial technologists. So... don’t look for them for answers to the moral, societal, political, ethical tradeoffs.” — Nick Clegg [36:34]
- On the risk of AI companions:
“These things, they’re not, it’s not gonna be... they’re friends as a service. That worries me a bit because... that could foster immense narcissism.” — Nick Clegg [14:05]
- On frenetic AI investment:
"They're locked in a thing where it says: 'Yeah, we don't know where this is going to go. But we know one thing for sure: if we don't compete, we're sure to lose.'" — Nick Clegg [32:44]
- On the future of tech-government relations:
“It’s much better if there’s a certain wary, respectful distance between the two. I also kind of think technological innovation just does better when it’s not too tied up with the weird vagaries of politics.” — Nick Clegg [60:36]
Memorable Moments
- Clegg repeatedly questions the premise of “superintelligence,” noting the hype and the lack of consensus about its meaning and inevitability.
- Host’s observation: The original vision of social media—connecting with friends—is dead, replaced by algorithmically recommended, “unconnected” content. [23:18]
- Clegg’s dry humor on “AI friends”: “I love my friends, but sometimes, God, they can be an absolute pain... which is the absolute heart of friendship and so important to be a well-rounded adult—that you realize your life is not all revolving around you.” [14:05]
Timestamps for Key Segments
- AI emotional dependency & risk to teens: [02:36 – 05:05]
- Regulation & adult AI engagement debate: [05:05 – 11:50]
- Meta’s “AI friends” vision & critique: [11:50 – 17:56]
- AI companions—mental health vs real friendship: [14:05 – 17:56]
- Passivity and the end of "social" media: [23:18 – 26:32]
- Big tech talent migration to AI labs: [28:32 – 33:10]
- Economics and superintelligence “pot of gold”: [33:10 – 36:34]
- Can we trust Silicon Valley with superintelligence?: [36:34 – 40:38]
- Scenarios for uncontrollable AI: [40:50 – 45:30]
- Big tech, lobbying, and legislative capture: [48:29 – 56:14]
- Silicon Valley’s political alliances & global trust: [56:14 – 61:00]
Conclusion
The episode blends technical, ethical, and political analysis around AI’s rapid evolution, with Clegg providing sobering, nuanced commentary on Silicon Valley’s ability—and limits—when faced with the stewardship of potentially world-changing technology. He repeatedly calls for political engagement, humility regarding technological hype, and skepticism toward both corporate and government concentrations of power. For listeners, the episode is a clear-eyed tour through the biggest tech questions of our era, with sharp warnings for both the industry and the lawmakers who would seek to rein it in.
