Moonshots with Peter Diamandis – EP #208: "The AI Wealth Gap: Why 40x Deflation Changes Everything"
Guest Panel:
- Peter H. Diamandis (Host)
- Salim Ismail
- Dave Blundin
- Dr. Alex Wissner-Gross (“Alex”)
Release Date: November 17, 2025
Episode Focus:
Tracking the future of technology and its impact on humanity, with special attention to AI’s accelerating progress, the wealth gap it may widen, hyper-deflation in compute costs, and social, economic, and scientific implications.
Episode Overview
This episode dives into the unprecedented pace of technological change, especially in artificial intelligence, and grapples with its societal and economic impacts. The hosts tackle the realities of AI-driven deflation, the shifting power dynamics among hyperscalers (Anthropic vs. OpenAI/Google), the emergence of nation-state competitors, and the critical dangers and opportunities presented by automation, new energy infrastructure, and scientific revolutions.
Much of the episode orbits around a pressing concern: How do we ensure the benefits of abundance reach everyone amid rapid, deflationary automation, and not just an ever-more-concentrated elite? The hosts discuss possible solutions, including UBI, open models, and entrepreneurship, while exploring the technical breakthroughs (from model compression to neuroplasticity), global competition, regulatory pitfalls, and the next frontiers in science and health.
Key Discussion Highlights
1. The Global Anxiety: Cost of Living & Unemployment (00:00–01:00; 41:20–44:30)
- Peter Diamandis opens acknowledging the global mood: top concerns are cost of living, unemployment, and social inequity, casting doubt on the “abundance” narrative unless we bridge the reality gap.
- Quote:
“The number one concern globally is cost of living… unemployment... poverty and social inequities… We talk about a future of abundance… but this is the reality—what people are feeling.” – Peter (00:00–00:19)
- Dave Blundin illustrates with a real-world example: in countries like Iran, families spend a third of their income on smartphones/data (44:27), a wedge that will widen as AI layers atop current disparities.
- Salim Ismail advocates for Universal Basic Income (UBI) as the traditional job market becomes obsolete faster than predicted:
“This may be the thing that breaks the educational model… where all mechanisms for subsistence… are basically free… That’s the incredible opportunity.” (42:46)
2. AI Supremacy & Hyperscaler Strategies: Anthropic, OpenAI, Google (02:40–13:15)
- Enterprise LLM Market Shifts: Anthropic overtakes OpenAI in enterprise LLM API market.
Alex: “If code generation is the critical path to recursive self improvement, then Anthropic’s focused strategy could lead to superintelligence—unless some ‘special sauce’ (like vision or real-world grounding) is still missing.” (03:38–04:16)
- Business Model Divergence:
- Anthropic: Targeting enterprise-first, “friendly AI,” trusted with sensitive data—less headline-grabbing, more reliable (06:23, 07:25).
- OpenAI: Mass consumer play; burns capital aggressively with the intent to dominate through scale, Bezos/Amazon-style (10:55–11:30).
- Question of sustainability and competition as scale costs plummet; the “trillions” in projected revenue and spending are a historic anomaly (12:23).
- Alignment vs. Capabilities:
Alex: “Every alignment project almost inevitably ends up as a capabilities project... That’s the law of economic nature here.” (08:50–09:56)
3. Hyper Deflation: The 40x Cost Crash in AI (28:29–35:16)
- Massive cost reductions make trillion-parameter models accessible not just to hyperscalers, but even mid-scale enterprises—$4.6M to train (vs. hundreds of millions “before”); a rough compounding sketch follows at the end of this section.
- Alex:
“There’s 40x year over year hyper deflation in the cost of intelligence per… unit of intelligence… This is the nuclear core that’s going to drag down the cost of everything else.” (30:48–31:24)
- Historical Parallel:
- Salim draws a comparison to the lithium-ion battery revolution in EVs: “Always watch for those deflationary curves and go where the curve is pointing you.” (33:05)
- Macro Implication:
- This will ultimately drag down costs in healthcare, food, energy, etc., even if social distribution remains unresolved (35:16).
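A rough back-of-the-envelope sketch of how a 40x/year deflation rate compounds, tied to the episode’s numbers; the $200M baseline below is an illustrative stand-in for “hundreds of millions,” not a figure quoted on the show:

```python
# Toy compounding sketch of the "40x per year" cost deflation discussed above.
# The 40x rate and the ~$4.6M training figure come from the episode; the $200M
# baseline is an assumed stand-in for "hundreds of millions."

DEFLATION_PER_YEAR = 40.0

def cost_after(years: int, starting_cost: float) -> float:
    """Cost of a fixed amount of 'intelligence' after `years` of 40x/yr deflation."""
    return starting_cost / (DEFLATION_PER_YEAR ** years)

baseline = 200e6  # hypothetical frontier-scale training run, in dollars
for year in range(4):
    print(f"year {year}: ${cost_after(year, baseline):,.0f}")

# One year of 40x deflation takes $200M to $5M, the same order of magnitude
# as the ~$4.6M trillion-parameter training cost cited in the episode.
```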
4. Technical Frontiers: Model Compression, Memory, and Learning (17:57–27:39)
a. AI World Models & Holodeck Wars (13:34–17:57)
- Fei-Fei Li’s World Labs demo: photorealistic, traversable virtual worlds created using 3D Gaussian splats—lightweight, client-side reconstruction vs. pixel-for-pixel server-side equivalents (a toy compositing sketch follows this subsection).
- Alex: “We’re seeing the beginning of the holodeck wars… But the true addressable market is in synthetic data for robots, not just consumer fun.” (16:02–17:57)
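For intuition on why Gaussian splats keep reconstruction lightweight on the client (the scene ships as a compact list of splat parameters rather than server-rendered pixels), here is a toy 2D compositing sketch in NumPy. All splat values are hypothetical, and this is not World Labs’ actual pipeline:

```python
import numpy as np

# Toy 2D Gaussian-splat compositing sketch (illustrative only; real 3D Gaussian
# splatting also projects 3D covariances to screen space and sorts per tile).

H, W = 64, 64
# Each splat: center (x, y), isotropic std dev, opacity, RGB color (all made up).
splats = [
    dict(center=(20.0, 30.0), sigma=6.0, opacity=0.8, color=(1.0, 0.2, 0.2)),
    dict(center=(40.0, 35.0), sigma=9.0, opacity=0.6, color=(0.2, 0.4, 1.0)),
    dict(center=(32.0, 20.0), sigma=4.0, opacity=0.9, color=(0.2, 1.0, 0.3)),
]

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))  # how much light still passes through, per pixel

# Splats are assumed already sorted front-to-back (nearest first).
for s in splats:
    cx, cy = s["center"]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    alpha = s["opacity"] * np.exp(-0.5 * d2 / s["sigma"] ** 2)  # per-pixel weight
    image += (transmittance * alpha)[..., None] * np.array(s["color"])
    transmittance *= 1.0 - alpha  # light blocked by this splat

print("rendered image shape:", image.shape, "max pixel:", image.max().round(3))
```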
b. Forgetting Memorized Data Without Losing Reasoning (17:57–23:38)
- New methods to “prune” memorized data from massive models while preserving reasoning—enabling privacy and more efficient, general models (a generic unlearning sketch follows below).
- Alex:
“The holy grail is maybe a million-parameter model that’s generally intelligent. That would be an incredible outcome.” (21:09–21:23)
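The episode doesn’t detail the pruning method itself. A common shape for such “unlearning” recipes is gradient ascent on a small forget set plus a penalty for drifting from the original model on retained data; here is a minimal PyTorch sketch of that generic idea with a toy classifier and random data:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "unlearning" sketch (generic, not the specific work from the episode):
# raise the loss on a small "forget" set via gradient ascent while keeping the
# model's outputs close to a frozen reference on a "retain" set via a KL term.

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
reference = copy.deepcopy(model).eval()          # snapshot of the original model
for p in reference.parameters():
    p.requires_grad_(False)

forget_x, forget_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
retain_x = torch.randn(256, 16)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    # Gradient *ascent* on the forget examples: minimize the negative loss.
    forget_loss = -F.cross_entropy(model(forget_x), forget_y)
    # Stay close to the original model's predictions on retained data.
    retain_kl = F.kl_div(
        F.log_softmax(model(retain_x), dim=-1),
        F.softmax(reference(retain_x), dim=-1),
        reduction="batchmean",
    )
    (forget_loss + 5.0 * retain_kl).backward()
    opt.step()

print("forget-set loss (higher = more forgotten):",
      F.cross_entropy(model(forget_x), forget_y).item())
```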
c. Lifelong, Continual, Meta-Learning (Neuroplasticity) (23:38–27:39)
- Google’s “nested learning” paper introduces higher-order meta-learning architectures—a step towards continuous, human-like learning and adaptability (a generic inner/outer-loop sketch follows below).
- Alex: “If you compress knowledge about the world enough, you get a phase transition—and general intelligence just pops out.” (24:01–25:51)
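The summary doesn’t spell out the nested-learning architecture itself, so as a generic illustration of the inner-loop/outer-loop idea behind “learning to learn”, here is a minimal first-order (Reptile-style) meta-learning sketch in PyTorch with hypothetical regression tasks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic two-level "learning to learn" sketch: a fast inner loop adapts to each
# task, a slow outer loop updates the shared initialization. This is a plain
# first-order meta-learning illustration, not Google's nested-learning method.

torch.manual_seed(0)
meta_model = nn.Linear(8, 1)
meta_opt = torch.optim.SGD(meta_model.parameters(), lr=1e-2)

def sample_task():
    """Hypothetical task generator: random linear-regression problems."""
    w = torch.randn(8, 1)
    x = torch.randn(64, 8)
    return x, x @ w + 0.1 * torch.randn(64, 1)

for outer_step in range(100):
    x, y = sample_task()
    fast = nn.Linear(8, 1)
    fast.load_state_dict(meta_model.state_dict())      # start from shared init
    inner_opt = torch.optim.SGD(fast.parameters(), lr=1e-1)
    for _ in range(5):                                  # fast inner-loop adaptation
        inner_opt.zero_grad()
        F.mse_loss(fast(x[:32]), y[:32]).backward()
        inner_opt.step()
    # First-order outer update: nudge the shared init toward the adapted weights
    # (Reptile-style), so future tasks adapt in fewer steps.
    meta_opt.zero_grad()
    for shared, adapted in zip(meta_model.parameters(), fast.parameters()):
        shared.grad = shared.data - adapted.data
    meta_opt.step()

print("meta-trained init norm:", meta_model.weight.norm().item())
```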
5. Geo-political & Regulatory Frictions (35:47–41:13)
- Europe’s Dilemma:
Brussels is considering loosening the GDPR to enable AI innovation, as compliance costs stifle startups and slow time-to-market by up to a year.
- Peter:
“Venture funding in Europe dropped up to 30%… AI models [are] 6–12 months behind.” (37:35)
- Salim:
“The cost of compliance for GDPR has proven to be ridiculous… but they’re doing the best they can in a difficult environment.” (36:46)
- Urgency of Infrastructure:
Europe must make major energy and data center buildout decisions within five years (if not months!), or lose the race to the US/China. (39:53–40:28)
- Alex:
“It’s up to sovereign countries to define how much they want to participate in the superintelligence explosion.” (39:41)
6. The Coming AI Wealth Gap & Urgency of Uplift (41:20–47:55)
- Escalation of the “wealth gap” as AI/compute/capital advantages concentrate in a few regions and companies.
- Dave:
“All that money funnels out of the country and lands in Silicon Valley… You add AI as a layer… the gap is going to get really, really wide.” (44:27)
- Narrative Challenge:
- How do we maintain optimism and belief in a hopeful, compelling future as the transition creates real, near-term pain? (45:20–46:32)
- Salim:
“People are 10x more likely to listen to negative stories than positive stories… Therefore, you need 10 times more stories on the positive side.” (46:15)
7. Call to Action: The "Moonshot Summit" and Community-Building (46:52–51:28)
- The podcast team proposes a global “Moonshot Summit” to bring together entrepreneurs, engineers, and visionaries to work on uplifting humanity, defining benchmarks (cost of living, crime, healthcare) for AI-powered deflation to optimize against.
- Alex:
“It’s less about events or teams and more about rigorously defining benchmarks… then this 40x deflation can optimize towards.” (49:01)
- Salim:
“The only way you’re going to change the world is to have people shift their mindset, come listen, and actually activate and do something.” (48:35)
8. Energy and Compute: Data Center Buildout & Nuclear Power (52:29–59:44)
- Accelerating Demand:
- Multiple 1GW data centers coming online by 2026; pressure on US energy infrastructure is immense.
- Nuclear Renaissance:
- $80B partnership for a new fleet of AP1000 reactors—mature, reliable Gen 3+ fission—but even these are a decade out, far too slow for demand.
- Dave:
“The flow of capital will be $1.2T a year by 2030 just into data center construction and power. There’s nothing even close in the history of the world to that scale.” (56:21–58:10)
- Bureaucratic Caution:
- US construction timelines and regulatory hurdles are a critical bottleneck; even re-starting “offline” plants takes half a decade. (59:44–60:19)
9. Robotics, Swarms & Drones (61:30–64:49)
- 16,000 perfectly synchronized drones in China set a new world record—heralding a new age of AI-enabled, coordinated swarms for industrial and entertainment use, beyond “humanoid” robotics.
- Dave:
“The interactive, thousands-of-drone part is way underappreciated as a real-world thing to do right now.” (62:27–63:45)
- War implications: Drone swarming is already a survival necessity in Ukraine and is changing defense realities worldwide. (64:00–64:29)
10. Science & Health: AI as Scientist and Healer (80:47–89:14)
a. AI for Scientific Discovery (80:47–83:27)
- Sam Altman (OpenAI CEO):
“GPT-5 has glimmers of AI doing new science… there’s a chance GPT-6 will be a GPT-3-to-4 leap for science.” (80:47–81:23)
- Alex:
“Grand challenges in math, science, engineering, and medicine will start to fall to AI in the next three, max, years.” (81:26)
b. Chan-Zuckerberg & The Race to Cure Disease with AI (84:29–86:00)
- Massive investments (CZI, DeepMind, Anthropic) aim to cure all disease by 2030 by using generative AI for virtual cells, organs, and intervention mapping.
- Alex:
“It may be perversely easier to cure all diseases with AI than to cure them one by one.” (85:46)
c. GLP-1 Drugs, Healthspan, and Universal Basic Health (86:00–89:14)
- US government pricing cuts make “longevity” drugs more accessible. These drugs already cut repeat stroke risk in half.
- Alex:
“I think this is the beginning of universally basically abundant healthspan drugs.” (87:25)
- Caveat:
“GLP1s are not a panacea—if you lose weight, exercise is crucial to preserve muscle mass.” – Peter (88:15)
d. Edison’s “Cosmos”: The Scalable AI Scientist (89:14–93:18)
- Agentic AI completes 4–6 months of expert-level research in 12 hours, parsing 1,500 papers and 42,000 lines of code per experiment.
- Dave:
“Bombard it with the raw information… It’s a brand new way to do things and it could do anything.” (92:04)
11. Genetics: The Coming Era of Engineered Offspring (93:18–101:37)
- New companies (Preventive, Manhattan Genomics) aim to shift IVF from selection to alteration of embryos.
- Despite FDA blocks, a global race is underway, especially outside the US.
- Salim:
“Essentially the human being is now a software engineering problem… seems inevitable… The question is, what do you want to design for?” (94:53)
- Alex:
“If you look at [Gattaca] the right way, it’s arguably a more utopian future… We get space colonization and healthy babies.” (96:12)
- Cultural/Ethical Chasm: Hollywood’s dystopian storytelling on AI and genetics reinforces public fear rather than hope.
- Peter:
“We need more Star Trek in our lives.” (99:56–100:37)
Notable Quotes & Memorable Moments
- On AI cost collapse:
“This is Moore’s Law on crack… 40x hyper deflation… the nuclear core that’s going to pull down the cost of everything else.” — Alex (31:24)
“Just think… Imagine we had a factory making cars, 40x production year over year, everyone would go crazy. That’s what’s happening in AI.” — Dave (34:27)
- On the threat of global division:
“It’s up to individual sovereign countries to define how much they want to participate in the superintelligence explosion.” — Alex (39:41)
- On the moonshot call to action:
“Want to be a billionaire? Help a billion people.” — Peter (48:15)
“We need 10 times more stories on the positive side.” — Salim (46:15)
- On AI & the collapse of expertise:
“If GPT3 was the first glimmer of a Turing test, GPT5 is the first glimmer of AI doing new science; GPT6 could be the leap…” — Altman (80:47)
- On engineered babies:
“The human genome is software… breeding goes digital now.” — Salim (94:53)
“We’re 50 years from Asilomar, and there is no single federal statute banning germline editing. There will be a generational conversation.” — Alex (98:14)
Segment Timestamps
- 00:00–01:00 — Global anxiety: cost of living, jobs, inequity
- 02:40–13:15 — Hyperscalers: Anthropic overtakes OpenAI in enterprise LLMs
- 13:34–17:57 — Fei-Fei Li’s World Labs: photorealistic AI-simulated worlds
- 17:57–23:38 — AI “forgetting” data: model compression without intelligence loss
- 23:38–27:39 — Google’s “nested learning”: continual/lifelong meta-learning
- 28:29–35:16 — AI hyper-deflation: Kimi/MoonshotAI, training cost collapse
- 35:47–41:13 — Europe’s AI regulatory bind and the risk of falling behind
- 41:20–47:55 — Wealth gap, optimism, and the reality of AI’s impacts
- 47:55–51:28 — “Moonshot Summit” proposal: community and benchmarks
- 52:29–59:44 — Data center + energy boom: the nuclear bottleneck
- 61:30–64:49 — Robotics: swarms/drones, Ukraine and global implications
- 80:47–83:27 — AI’s breakthrough in science (Altman, Alex)
- 84:29–86:00 — AI-driven medicine: CZI & the 2030 “cure all disease” moonshot
- 86:00–89:14 — GLP1 drugs and Universal Basic Health
- 89:14–93:18 — Edison’s Cosmos: the scalable agentic AI scientist
- 93:18–101:37 — Embryo editing: ethics, global race, cultural fears
Closing Thoughts
- Dave: “The need for ethical, trustworthy thought leaders is backlogged so deep now. With AI, fusion, driverless cars and genetic engineering, we need new voices.” (99:20)
- Alex: “I spend substantially all my time on solving humanity’s hardest problems with AI. People should reach out if they want to collaborate.” (103:50)
- Salim: “We’ve open sourced techniques for overcoming corporate ‘immune systems’ to tech change. If you’re battling institutional resistance, ping me.” (79:48)
Peter closes with a call to optimism—and a rallying cry for community-driven moonshots.
Summary Takeaways
- AI’s 40x annual deflation in cost is now the engine dragging every industry’s costs down; denying it, whether intellectually, economically, or in regulation, comes at everyone’s peril.
- The “abundant” future is not yet evenly distributed; the coming years will test our social cohesion and whether optimism and opportunity can overcome fear and displacement.
- New frontiers (AI “brain” compression, world simulation, swarm robotics, and synthetic discovery) will reshape who holds power—provided energy, compute, and regulatory bottlenecks don’t stifle U.S. and European competitiveness.
- Calls to direct AI’s progress toward uplifting, measurable human outcomes are growing louder, with the podcast team themselves proposing to lead the charge.
- Whether in science or society, benchmarks, community organization, and forward-looking optimism may be the critical meta-technologies of the next decade.
“The world’s biggest problems are the world’s biggest business opportunities. Want to be a billionaire? Help a billion people.”
— Peter H. Diamandis (48:15)
For those interested in joining the proposed Moonshot Summit or for further discussions on these themes, email: moonshotsamandis.com.
End of Summary
