FIREWALL with Bradley Tusk
Episode: Can You Be Good and Great?
Guest: Sebastian Mallaby
Date: April 2, 2026
Episode Overview
In this episode of Firewall, host Bradley Tusk sits down with renowned journalist and author Sebastian Mallaby to discuss the intersection of technology, politics, and the moral dilemmas at the highest levels of AI and tech entrepreneurship. Focusing on Mallaby's new book, The Infinity Machine, the conversation covers the precarious financial state of OpenAI, leadership and safety in the AI space, and the big question: Can you be both good and great in today's world of tech titans?
Key Discussion Points & Insights
1. OpenAI's Precarious Financials and Vision
- Altman's Pitch and Valuation Frenzy
- Mallaby argues Sam Altman is a "great pitch man...the distilled essence of Silicon Valley" (02:19), whose networking prowess has propelled OpenAI to jaw-dropping valuations, perhaps beyond its actual market capabilities.
- OpenAI is "over its skis" on both valuation (approaching $1 trillion) and monumental cash burn, with leaked numbers predicting "$660 billion of cash burn between now and 2030" (04:22).
- Revenue Challenges
- Only about "5% of [OpenAI's nearly 1 billion] users pay them any money" (06:22).
- Mallaby doubts the stickiness of OpenAI's products: "If you tried to charge them money, they would just go use Anthropic or Gemini or something. It's just not sticky" (06:34).
- Comparison to Other Tech Models
- Tusk and Mallaby debate the lack of trillion-dollar, subscription-based "freemium" precedents: "[Spotify is] not in the trillion dollar ballpark" (07:58).
- Valuation Games and Venture Capital Mindset
- Investors often use headline figures for hype, while actual buy-in (e.g., via warrants and SPVs) signals that even insiders don't fully believe the valuation (10:04).
- "It's a circular economy of nonsense," Tusk jokes about inflated, self-referential dealings between OpenAI and partners like Oracle (10:52).
2. The Energy and Political Tangle
- AI's Energy Appetite & Public Impact
- Tusk: "I don't know a single politician that says, I'm going to tell my voters...their energy bills went up 40% because Sam Altman [needs] to be a trillionaire" (15:30).
- Political backlash is brewing, with "39 states working on...legislation ranging from flat out moratoriums to prohibitions on passing along costs to utility ratepayers" (15:54).
- Safety and Regulation
- Both anticipate a bipartisan push for AI reform post-2027, with politicians acting in "self protection" (17:14).
- Mallaby draws parallels to the Industrial Revolution: "You can't sort of change the way people...work...because [of] a rival form of cognition in a machine, and expect zero political fallout." (17:37)
3. Anthropic, Claude, & the AI Arms Race
- Competition Heats Up
- Tusk observes: "It does feel like even since your op-ed came out, Anthropic...has really surged past OpenAI...when I talk to most people, they talk about Claude" (20:23).
- Mallaby sees the landscape as volatile: "Each lab seems to have a moment...It's not game over. They're all pretty close" (21:08).
4. Demis Hassabis and DeepMind: Leadership and Ethics in AI
- Profile of Visionary Leadership
- Mallaby: "He's the guy who really kicked all this off. In 2010 he founds DeepMind...explicitly to build artificial general intelligence" (21:34).
- Hassabis uniquely blends “serious scientist” and “business guy” skills (23:15).
- Ethical Commitments and Corporate Negotiations
- In selling DeepMind to Google, Hassabis insisted on safeguards: "You can't use this AI for military purposes," and demanded an "ethics oversight board" (25:05).
- Despite this, Hassabis found it "a massive fight to actually make any of that real" and even threatened to spin DeepMind back out of Google (26:01).
5. Limits of Individual Action & the Need for Collective Governance
- The Collective Action Problem
- "One lab cannot really push the government. It needs to be...government has to want to do some kind of safety and impose the rules" (27:21).
- Real progress likely “has to be multinational...US and China both involved" (30:45).
- Historical Parallel: Nuclear Non-Proliferation
- The conversation draws explicit nuclear parallels—how existential threats eventually led to global treaties: "In 1962 you had the Cuban Missile Crisis...then six years later, 1968, nuclear non-proliferation treaty" (31:17).
6. The Goodness vs. Greatness Dilemma
- Can You Be Both Good and Great?
- Tusk's core question: "Is it possible to be both? And...can you be truly great if you are not good?" (35:31)
- Mallaby: "There's a greatness...political leader, maybe a military leader...probably you can't be good. And this was the dominant definition of greatness until the 20th century...In the 20th century...that kind of greatness went out of style...Now...more Einstein, pure scientific invention. And we're happy to call those people great and those people can be good" (35:47).
- Demis Hassabis emerges as a case study: not just a technical visionary, but a leader "trying to have a coalesced team by projecting a set of values that people buy into" (41:45).
- DeepMind’s early hiring test: “Was that person polite and friendly to the receptionist? If they weren’t, forget it, they're not going to be hired. So they really filtered out the assholes" (43:13).
Memorable Quotes & Moments
- Mallaby on Silicon Valley Culture: "He [Altman] represents the kind of distilled essence of Silicon Valley, meaning he's incredibly good at the network..." (02:30)
- Mallaby on the Risk of AI Investment: "What we have to remember is there's a lot of stories in the valley of people who fake it till they make it...The difference this time is that you've got so much capital that needs to be raised, that this is an experiment...in how deep are the global capital markets" (12:35)
- Tusk on Political Realities for AI: "I don't know a single politician...that's going to tell my voters...it's okay [energy costs go up], because the most important thing is that Sam Altman become a trillionaire" (15:35)
- Mallaby on the Oppenheimer Syndrome: "The nuclear parallel, the Oppenheimer syndrome, is infused in the conversation in these [AI labs]" (35:03)
- Mallaby on Demis Hassabis' Ethos: "His mum...was quite religious because of all that and brought up her son to pray in the evenings. ...In the early days...a test they made for new hires was, was the person coming in...polite and friendly to the receptionist? If they weren't...forget it." (43:05)
Timestamps for Key Segments
- OpenAI Finances & Altman’s Leadership: 01:50–13:00
- Valuation and Investment Games: 09:13–13:45
- AI Infrastructure, Political Risk, and Energy: 14:37–19:30
- AI Safety and Regulation Prospects: 17:04–21:23
- Anthropic and AI Competition: 20:48–21:23
- Demis Hassabis/DeepMind Story: 21:23–28:32
- State vs Federal Regulation, International Coordination: 28:44–32:15
- Historical Parallels—Nuclear Age and Responsibility: 32:02–35:05
- Goodness vs. Greatness in Leadership: 35:05–44:03
Conclusion
Sebastian Mallaby's conversation with Bradley Tusk is a deep dive into the existential paradoxes facing AI leaders today, revealing tensions between ambition and ethics, between short-term valuation hype and long-term sustainability, and between individual charisma and collective responsibility. The episode is essential listening for anyone interested in how power, money, and morality interact in the age of artificial intelligence.
