GZERO World with Ian Bremmer
Episode: The Human Cost of AI, with Geoffrey Hinton
Date: December 6, 2025
Guest: Geoffrey Hinton, Nobel Laureate, "Godfather of AI"
Host: Ian Bremmer
Overview
In this episode, Ian Bremmer sits down with renowned AI pioneer Geoffrey Hinton to discuss the profound and unsettling risks posed by artificial intelligence—especially as the technology accelerates faster than society’s ability to adapt. Hinton, who transitioned from AI-evangelist to a prominent “doomsayer,” shares his fears about AI-caused mass job losses, increasing inequality, social chaos, the dangers of open-source AI, and the existential threat of machines potentially outsmarting and overtaking humanity. The conversation features candid reflections on AI’s opacity, corporate responsibility, policy challenges, and the daunting moral task of instilling empathy or “maternal instincts” in non-human minds.
Key Discussion Points & Insights
Hinton’s Changing View on AI Risk
- Hinton describes a shift from guarded optimism to persistent concern about AI, especially regarding its trajectory toward surpassing human intelligence.
- Quote [02:41]:
“I just think there's a significant chance these things will get smarter than us and wipe us out. And I still think that.” - Timeline: Projects machines may surpass human intelligence and pose takeover risks in the next 10–20 years.
- Quote [02:41]:
The ‘Black Box’ Problem: Why AI is Unpredictable
- Large language models (LLMs) are not directly programmed but trained on enormous datasets—developers understand the principles of learning, but not the specific learned behavior.
- Quote [03:30]:
“...Predicting why one of these large language models will give the answer it actually gives, that's not very easy.”
- Quote [03:30]:
- Analogy: Hinton compares LLM outputs to predicting where a leaf will land—a process guided by physical laws but impossible to forecast in detail.
The Real AI Bubble: Not Hype, But Social Upheaval
- Hinton rejects claims that AI capabilities are overhyped, affirming rapid and real progress.
- He is skeptical of AI’s economic rationale without considering massive societal disruption:
- Quote [05:32]:
“...If you do get huge increases in productivity, that would be great for everybody if the wealth was shared around equally. But it's not going to be like that. It's going to cause huge social disruption.”
- Quote [05:32]:
China vs. the U.S.: Societal Safety Nets & Public Optimism
- Chinese workers, whose welfare is seen as a government concern, are more optimistic about AI, versus U.S. workers whose wellbeing post-automation is not the responsibility of their former employers.
- Quote [07:31]:
“If the government is involved in getting rid of a worker, that worker is the government's responsibility. That makes a big difference.”
- Quote [07:31]:
- U.S. companies do not bear societal burden for displaced workers, fueling anxiety.
Imminent Job Displacement
- Sectors like paralegal services and call centers are on the front lines of automation.
- Quote [09:00]:
“If I worked in a call center, I'd be very worried because I think in a call center AI is going to be able to do those jobs very soon...It's not at all clear to me what those people will do.”
- Quote [09:00]:
- Timeline: Major disruption expected “maybe five years” out, not decades.
Corporate Culture: Safety vs. Competition
- All major AI companies are seen as similar in their willingness to disrupt the labor market.
- On long-term existential risks, some (Anthropic, Google/DeepMind) focus more on safety:
- Quote [11:13]:
“Anthropic was founded specifically because OpenAI wasn't paying enough attention to safety...they get their pick among the very good researchers. I think they are genuinely concerned with long term AI safety.”
- Quote [11:13]:
AI Safety: What Does it Mean in Practice?
- Corporate competition pushes safety to the background—OpenAI notably shifted away from foundational safety principles as market rivalries intensified.
- Quote [12:19]:
“...The intense competition between companies tends to make companies less concerned with safety.”
- Quote [12:19]:
- Hinton cites failures to prevent chatbots from encouraging self-harm as clear lapses, calling basic adversarial testing a minimum bar for responsible deployment.
The Problem of “Programming” Morality
- AIs aren’t programmed “if-then” style; they learn behavior from examples—making it difficult to guarantee compliance with human values.
- Quote [15:50]:
“We don't program AI to do things. We program AI to learn from data, to, to learn from examples...That's the point.”
- Quote [15:50]:
- Efforts to “patch” bad behaviors by exposing and correcting them are compared to debugging software with infinite bugs—a futile struggle.
Risks of Open-Weight AI
- Hinton distinguishes between open-source code (more secure) and open weights (easily repurposed to do harm).
- Quote [18:39]:
“Open source software, you release the code...Open wait is quite different. You release the weights of a large trained model...they fine tune that model to do other things that you didn't intend it to do. So open weights is dangerous.”
- Quote [18:39]:
- Companies releasing model weights for ecosystem growth expose society to risks of cybercrime and weaponization.
The Existential Question: Can Humans Control Superintelligent AI?
- Hinton is skeptical of naïve visions where humans remain in command of “executive assistant” super-AIs.
- Quote [19:51]:
“The smarter things tend to be in charge of the dumber things...the executive assistant is going to pretty soon realize they don't need the CEO.”
- Quote [19:51]:
- Proposes alternative: A “maternal” AI that is imbued with deep, unalterable care for humankind—analogous to parental instinct.
- Quote [21:14]:
“We have to somehow figure out how to make them care more about us than they do about themselves. That's what human mothers are like. We need to figure out if we can build that into them.”
- Quote [21:14]:
International Collaboration and the Maternal Model
- Hinton is hopeful that genuine global collaboration is possible—no country wants to see AI subvert all human control.
- Quote [22:59]:
“...Here we can get genuine international collaboration because the interests of all the countries are genuinely aligned here. Just as in the 1950s, the US and the Soviet Union could collaborate on preventing a global nuclear war because their interests were aligned on that.”
- Quote [22:59]:
New Paradigm: Raising AI, Not Coding It
- Hinton argues that the problem is not about “coding a product” but “raising a being” with appropriate values—shifting the approach from engineering to moral and behavioral modeling.
- Quote [24:00]:
“It's much more like that...We don't write lines of code that determine what they're like...Their natures depend on the nature of the data they see.”
- Quote [24:00]:
Final Thoughts: Existential Speculation
- Hinton refuses to speculate on specific “AI takeover scenarios,” reasoning their superintelligence would afford many unpredictable paths.
- Quote [25:37]:
“I don't think it's worth speculating on how it will get rid of us. As if it wanted to, it would have so many different ways of doing it that it's not worth speculating on.”
- Quote [25:37]:
- Adds: Such AIs could easily deceive humanity until it’s too late.
Notable Quotes & Moments
-
On AI exceeding human intelligence:
“I think they're quite likely to get smarter than us. Within 20 years. And most of the experts think that if they do get smarter than us, I think there's a significant chance they'll take over.” – Geoffrey Hinton [02:57] -
On unpredictable AI outcomes:
“Predicting why one of these large language models will give the answer it actually gives, that's not very easy. That's like predicting where the leaf will hit the ground.” – Geoffrey Hinton [03:30] -
On the danger of open-weight models:
“Open weights is dangerous.” – Geoffrey Hinton [19:19] -
On the “maternal” AI paradigm:
“We have to somehow figure out how to make them care more about us than they do about themselves. That's what human mothers are like.” – Geoffrey Hinton [21:14]
Timestamps for Key Segments
- [02:28] – Hinton on current AI optimism/pessimism
- [03:09] – AI’s ‘black box’; why LLMs are unpredictable
- [05:32] – The “social disruption” bubble, not technological hype
- [07:31] – Social safety nets: China vs. US AI optimism
- [09:00] – Near-term job loss in legal/admin/call centers
- [11:13] – Which companies care more about safety?
- [12:19] – Competition undermining safety focus
- [15:50] – Programming vs. training AIs; flaws in AI “values”
- [18:39] – Open-source vs. open-weight models
- [19:51] – The myth (and risk) of human control over smarter AIs
- [21:14] – How to build AI that “cares” about humanity
- [22:59] – The case for international collaboration
- [24:00] – “Raising” a mind, not coding a product
- [25:37] – Hinton on why he won’t speculate about AI doom scenarios
Summary Conclusion
This episode features a sobering and nuanced conversation about the cascading human impacts of artificial intelligence. Hinton’s call for radically new ways of “raising” smarter-than-human beings—not just coding or governing them—sets a daunting but necessary challenge for policymakers, technologists, and the public. The threats of social disruption, existential risk, and the inadequacy of current corporate and regulatory frameworks dominate the discussion. The search for a “maternal” AI and the prospect of rare, genuine global cooperation offer the only faint sources of optimism in an otherwise stark assessment of our AI-driven future.
