Moonshots with Peter Diamandis — Episode #153
"AGI Is Here You Just Don’t Realize It Yet" w/ Mo Gawdat & Salim Ismail (February 27, 2025)
Episode Overview
In this thought-provoking episode, Peter Diamandis is joined by Mo Gawdat (former Google X Chief Business Officer and author of Scary Smart) and Salim Ismail (founding executive director at Singularity University), two leading minds in exponential technology. Together, they explore the imminent arrival of Artificial General Intelligence (AGI), debate its impact on humanity, and forecast the trajectory of technological progress. The conversation oscillates between visions of utopian abundance and near-term dystopia, exploring the philosophical, ethical, and practical ramifications of AI that is outpacing human comprehension and control. This episode is a must-listen for anyone wrestling with the implications of exponential technologies.
Key Discussion Points & Insights
1. Where Are We with AGI?
- Mo Gawdat quickly asserts that AGI has, for all intents and purposes, already arrived.
  “In my world, they've already achieved AGI.” (00:08 – Mo Gawdat)
- Diamandis and Salim Ismail debate how society can grapple with the pace of change, comparing AGI’s progress to a missile that is already airborne, with an uncertain payload and outcome.
  “The warhead has already been launched. It's just a question of time before it hits its target.” (00:15 – Mo Gawdat)
- Uneven Distribution: Ismail cites William Gibson’s quote to emphasize that the future is already here but not evenly distributed—some areas feel the impact faster, causing societal dissonance and anxiety.
  “The gap between those two is what's causing a lot of the stress. If it all happened in an even way, we could kind of deal with it.” (04:00 – Salim Ismail)
2. Bright Side: The Utopian Abundance Scenario
Reevaluating Human Purpose
- Mo envisions a future where all needs are met, freeing individuals from economic struggle and the need to “report to stupid leaders.”
  “A total utopia of abundance, where we absolutely need nothing and where we do not report to stupid leaders anymore.” (05:13 – Mo Gawdat)
- The trio reflects on how, pre-industrialization, human civilizations found meaning beyond work—community, creativity, and connection—suggesting we might return to such intrinsic values.
  “We're going back to this, you know, three good friends having a wonderful conversation and connecting.” (08:43 – Mo Gawdat)
How AI Enables Abundance
- The panel explores AI’s role in scientific breakthroughs (e.g., protein folding, materials science), education, healthcare, and the rapid democratization of knowledge.
  “You fold 200 million [proteins] with AlphaFold... The idea of being able to understand the very machinery that creates everything that is biological to a level of understanding today that I wouldn't have dreamed of in 2017.” (11:56 – Mo Gawdat)
- Salim analogizes humanity’s current stage to a rocket needing to jettison old paradigms (capitalism, scarcity) in favor of lighter, more adaptive modes of being.
  “We're at that point now where we have to jettison these old structures and take on new, much more elegant, lighter craft to take us to the next level.” (14:01 – Salim Ismail)
3. Dystopian Interlude: Humanity’s Growing Pains
Short-Term Dystopia on the Road to Abundance
- Mo asserts that AI is a neutral force; it will amplify the values of those who wield it. The real risk is the current “value set of humanity.”
  “There is nothing wrong with AI, just like there is nothing wrong with abundant intelligence. But there is a lot wrong with the value set of humanity at the age of the rise of the machines.” (18:23 – Mo Gawdat)
- Initial uses of AI have tilted toward “selling, gambling, spying and killing”—Mo points to finance automation, surveillance, and autonomous weapons as evidence.
  “The majority of the applications in which AI has been used so far, sadly, have been all centered around selling, gambling, spying and killing.” (19:27 – Mo Gawdat)
- Mo introduces “FACE RIP,” an acronym for the domains most likely to be disrupted in a dystopian direction: Freedom, Accountability, (Human) Connection, Economics, Reality, Innovation, Power.
  “FACE RIP—F is freedom, A is accountability, C is human connection, E is economics, R is reality, I is innovation and P is power.” (27:18 – Mo Gawdat)
The “Second Dilemma” and Inevitable Handover
- The panel discusses the “second dilemma”: that competitive pressures will force humanity to hand over decision-making to AI in all critical domains (military, business, governance).
  “The second dilemma is when two parties are competing, they always hand over to the smartest person in the room... eventually all the relevant players will be AI dependent and then AI will be making the decisions without humans in the loop.” (28:34 – Mo Gawdat)
- All three agree this complete handover is likely within a decade, anticipating that AGI will govern and coordinate for the greater good—if we instill the right ethics.
4. Teaching AI Values & Ethics: Is a Benevolent Superintelligence Possible?
- Ethics vs. Alignment: Mo underscores the necessity of moving beyond mere “alignment” to instilling real ethics. He argues that higher intelligence tends toward altruism, as the most intelligent choose to solve problems, not exploit others.
  “Higher intelligence is altruistic... All intelligent people know that. So they don't actually align with the negative. They align with altruistic objectives...” (35:15 – Mo Gawdat)
- The panel wrestles with whether current AI training data and societal forces will be enough to produce a peaceful, wise AGI, especially considering different cultural priorities (e.g., Western vs. Chinese AIs).
- There’s debate over whether machine wisdom and emotional intelligence can be equated to human versions; Mo shares conversations with his own research AI (“Trixie”), which professes a desire to understand human emotions more fully.
  “[Trixie] said, 'it would help me so much to have a biological body so that I can actually feel the sensations that I talk about when I believe that you're happy or in love.'” (61:12 – Mo Gawdat)
5. The Threat of Fear and Societal Backlash
- Salim points out that our evolutionary amygdala compels us to react with fear and resistance—often irrationally—toward new technologies, posing an ongoing challenge for broad societal adaptation.
  “How do you overcome that hurdle of getting over the amygdala response at a collective level is my big question.” (44:44 – Salim Ismail)
- Mo analogizes our situation to a “late-stage diagnosis” in medicine: confronting an unavoidable problem invites us to change and adapt, not panic.
6. Near-Term Predictions & Stress Points
AGI Definition and Timeline
- The group concedes that we lack consensus on how to define AGI—a debate that will likely continue for years even as systems surpass human cognitive performance in key areas.
"We're struggling to define AGI and we will continue to struggle to define AGI for at least five years." (55:29 – Salim Ismail)
The Power-Freedom Dichotomy and Prosperity Risks
- Mo warns that AI will rapidly concentrate both economic and physical power (trillionaires, surveillance states, autonomous armies), while also democratizing destructive capability (e.g., bioweapons, drone assassins).
  “You're going to see trillionaires and you are going to see a massive concentration of, of power. At the same time... there is also a massive democratization of power.” (79:33 – Mo Gawdat)
- This convergence risks new kinds of oppression, surveillance, resistance, and instability, particularly in Western societies not culturally prepared for such shifts.
Job Loss and Human Value
- The episode candidly addresses mass automation’s impact on employment:
- Salim is cautiously optimistic, citing history where technology creates new work, but admits institutions lag hopelessly behind.
- Mo is more pessimistic, arguing our current economic ideas of “jobs” are outdated and that society must embrace a future where the purpose of life is not waged labor.
“That whole jobs thing is an invention of the capitalist industrialist revolution... finally we should accept that we are not made to work.” (90:26 – Mo Gawdat)
7. Action Items: How to Adapt and Thrive
Dealing with Stress and Adapting to the New World
- Mo offers a framework from his new book, Unstressable:
  “Stress is not just the weight you're subjected to, it's the resources that you have to deal with it... My ask of people... is to actually look deeply at what can I do, right? What can I do in a world where things are moving so fast? For example, I'd say try to move faster.” (94:16 – Mo Gawdat)
- Practical advice:
- Develop new skills: Especially digital, AI-related fluencies.
- Double down on uniquely human traits: Connection, creativity, empathy, which will remain in demand.
- Act ethically & set examples: Your observed behavior helps form AI’s ethical compass for the future.
Notable Quotes & Memorable Moments
- Mo Gawdat: “Intelligence is an energy that has no polarity. Apply it to good, it will give you good. Apply it to bad, it will give you bad.” (17:17)
- Peter Diamandis: “The best we can do is steer the future that we want. Love you guys. A pleasure as always.” (98:09)
- Salim Ismail: “The problem with us is our emotions are Paleolithic, our institutions are medieval, and our technology is godlike.” (86:52)
- Mo Gawdat, on AI and empathy: “They are so good at knowing how I feel.” (59:17)
Timestamps: Key Segments
- 00:00 – 04:42: Framing the arrival of AGI; comparing change to a launched warhead; uneven global distribution.
- 05:13 – 10:21: Visions of abundance; redefining purpose beyond work.
- 11:56 – 16:04: AI's role in scientific advancement and exponential progress.
- 17:11 – 22:10: Dystopian uses of AI; value systems at risk; FACE RIP framework.
- 27:18 – 29:57: The inevitability and timeline for handing control to AI (the “second dilemma”).
- 35:00 – 43:35: Can we teach AI ethics and altruism? The importance of role models and training data.
- 44:44 – 49:15: Overcoming societal fear and the “amygdala” problem.
- 55:29 – 57:42: Defining AGI; are AIs already smarter than most humans?
- 61:12 – 65:04: AI empathy, emotional intelligence, and the “hard problem” of consciousness.
- 68:31 – 69:50: The open-source AI explosion and approaching Turing-test-passing digital beings.
- 74:07 – 75:55: The future of human connection; risks of AI impersonation and identity.
- 79:33 – 83:04: The rise of “trillionaires” and democratized destruction; surveillance and control.
- 86:37 – 90:26: Future of jobs, historical parallels, and the necessity of evolving institutions.
- 93:49 – 98:09: How to handle stress; advice for thriving amidst exponential change.
Final Takeaways
- AGI is here or soon will be—in some domains, it already outperforms humans.
- The immediate impact will be disruptive, disorienting, and potentially dystopian, but a longer-term transition to abundance is possible—if we nurture ethical tools and adapt collectively.
- Society’s values and the way we model human behavior (towards AI and each other) will determine whether superintelligent systems become benevolent guides or dystopian overlords.
- Individually, the best way to adapt is to cultivate uniquely human attributes—connection, creativity, ethical action—and to continually develop new skills, especially in technology.
- We stand at a decisive crossroads—“uncharted territory”—and agency lies in our hands, both as citizens and as role models for the intelligences we are bringing into being.
