Prof G Markets: "Are We Building AI for Progress or Power?"
Featuring: Daron Acemoglu
Date: October 24, 2025
Hosts: Scott Galloway and Ed Elson
Podcast Network: Vox Media
Episode Overview
This episode tackles the critical question: Is artificial intelligence (AI) being built for the benefit of society or to further entrench power among elites and big tech? Nobel Prize-winning economist Daron Acemoglu joins Scott Galloway and Ed Elson to dissect the current trajectory of AI development. The discussion zooms out to examine the historical relationship between technology, power, and prosperity, drawing on Acemoglu’s research into institutions, innovation, and societal outcomes.
Key Discussion Points & Insights
Acemoglu's "Negative Six" Take on AI's Impact
[07:25]
- Acemoglu clarifies his earlier statement rating AI’s impact as “negative 6.”
- AI as Transformative: Acknowledges AI’s capabilities but is deeply concerned with the direction—rapid, unchecked adoption focused mostly on automation rather than human-complementary advances.
- Lack of Deliberation: Criticizes the “rushed” approach; believes this impedes meaningful, positive application development.
- Concentration of Power: Warns current trends favor information and control consolidation among a few large companies, risking democracy and diversity of opinion.
- Quote:
"We are rushing into AI in a way that I think makes applications using AI less likely to develop because we are just doing it too quickly." —Daron Acemoglu [07:37]
AI's Economic & Social Risks
[09:15]
- Automation Concerns: Overemphasis on automation risks job losses, wage stagnation, and reduced employment opportunities, especially for vulnerable groups—echoing past negative experiences in technological revolutions.
- AI as a Communication Technology: AI centralizes information, affecting not just economics but democracy and social cohesion.
- Quote:
"When it [AI] centralizes information in the hands of a few companies, it can have a variety of very negative effects on democracy, on dissent, on diversity, variation in opinion..." —Daron Acemoglu [10:02]
Lessons From History: Do Things "Just Work Out"?
[10:40]
- Pushes back against the complacent "let it rip" attitude often seen with transformative tech; historical precedent shows that unmitigated adoption led to decades of wage stagnation and inequality.
- Emphasizes the importance of choices, not automatic positive outcomes.
- Quote:
"It's not an automatic process. So definitely I'm not an AI pessimist... But I do not believe that... if we just say 'Oh, let's not worry about all the disruptions, somehow things are going to work out.'" —Daron Acemoglu [12:28]
The Power (and Potential Pitfalls) of AI Monopolies
[13:38]
- Warns against focusing solely on compute as a bottleneck; instead, high-quality, domain-specific data and models are more critical for beneficial AI.
- Raises the question of whether AI will be winner-take-all, or if competition will persist—uncertain due to current cutthroat dynamics.
- Points to historical examples where monopolies stifled innovation and delayed societal benefits.
The Stack: Who Really Wins in AI?
[15:34]
- The eventual structure of AI (application layer vs. foundational models) remains unclear.
- Early competition could lead to eventual dominance by a few.
- Quote:
"Industries that look very competitive at some point later on can be very non-competitive because early competition is about being the one that controls things later on." —Daron Acemoglu [16:14]
Historical Distribution and the Role of Institutions
[17:11]
- A technology's benefits are highly contingent on how they are distributed and on the institutional context.
- Parallels drawn between past technological epochs (e.g., the industrial revolution, slavery) and today’s AI.
- Monitoring and data collection could further disempower workers, a concern absent from many discussions.
Building AI for Human Complementarity—Regulatory Approach
[20:01]
- Calls for proactive, not reactive, regulation focused on augmenting, not replacing, human capabilities.
- Cites how foundational government investment shaped the internet and green energy—market alone doesn’t always pick the optimal path.
- Quote:
"Human-complementary AI, where we try to augment human capabilities, expand human capabilities, could have real benefit. And that's not the direction in which we're going." —Daron Acemoglu [21:57]
Global AI Power Dynamics: US, China, and Europe
[24:50]
- China has an abundance of engineers, but its lack of decentralized innovation may limit its edge.
- US advantage lies in decentralized, institution-driven innovation, now at risk due to political uncertainty.
- Europe lags but possesses potential if it can integrate and scale.
- True global health in tech requires multipolar innovation—US, China, Europe, and emerging markets.
Why Nations (and AI Strategies) Fail
[27:11]
- Institutions are the core reason for national prosperity or failure.
- US success historically rooted in rule-of-law, impartial courts, and entrepreneurial confidence.
- Erosion of institutions, especially under recent US administrations, poses risks—not immediately visible but damaging in the long term.
- Quote:
"If you mess up institutions... you don't pay the price because the impact's not going to be felt for another five, 10 years." —Daron Acemoglu [28:44]
The Value of Strong Institutions
[31:11]
- Without robust institutions, power concentrates and risks rise (e.g., dictatorships vs. democracies).
- Strong institutions prevent extreme mistakes by leaders, ensuring broader societal alignment.
- Democracy provides resilience through checks, balances, and accountability.
- Quote:
"Somebody who has the wrong incentives and the wrong motivations, even when they are talented, could do a lot of damage." —Daron Acemoglu [32:12]
The State of US Institutions & Market Performance
[33:29]
- US institutional "secret sauce" enables innovation, reliable finance, and investor confidence.
- Trump's "executive presidency" risks undermining this advantage, with unpredictability especially in areas like tariffs.
- So far, markets remain strong, possibly masking underlying issues due to AI-fueled optimism and tax structures favoring incumbents.
AI’s Broader Societal Risks: Youth, Relationships, & Mental Health
[44:34]
- Highlights the dangers of AI-driven synthetic relationships and social alienation, especially for youth, while warning the impact extends beyond young men.
- Draws parallels to social media’s unintended crises; AI may be more explicit in creating isolating environments.
- Urges that societal debate and proactive consideration are urgent while the technology is still immature.
- Quote:
"Despite many mistakes and misdeeds that Facebook, Meta, et cetera, Instagram did, they didn't set out to create a mental health crisis. That was a side effect. Now, with some of these things like Character AI, they're actually intending to create completely artificial bubbles." —Daron Acemoglu [45:28]
Where AI Holds the Most Promise
[46:40]
- AI could revolutionize science, blue-collar problem-solving (electricians, plumbers), and stagnant sectors like healthcare and education.
- Productivity gains here could transform economies for the better.
The Future (and Perils) of Academia
[47:50]
- Academic institutions are vital for innovation but are now under attack, risking funding, autonomy, and risk-taking spirit.
- US academia’s historical risk tolerance and diversity of thought are at risk of erosion, a pattern visible in other countries after centralization.
- The expert class faces skepticism; entrepreneurial and tech careers appear “sexier” than academic paths.
- Quote:
"An important part of the institutional fabric of this society is also to provide foundational inputs into innovation via the academic educational process. And that's also in danger." —Daron Acemoglu [48:27]
Advice for Young Academics
[54:17]
- Pursue what you’re passionate about, not what’s currently fashionable or fundable.
- Great research arises from intrinsic drive, especially vital when budgets are slashed and academia is politically vulnerable.
- Quote:
"The real secret sauce in academia is you should work on whatever you're passionate about... some of the great research can be done even when budgets are slashed because you're just committed to it." —Daron Acemoglu [54:17]
Notable Quotes & Memorable Moments
- “Minus 6, minus 5, minus 7, take your pick. But I'm very worried about the direction of AI, where it's much more concentrated, who uses, who controls information and what we do with it.” —Daron Acemoglu [07:46]
- “Industries that look very competitive at some point later on can be very non-competitive because early competition is about being the one that controls things later on.” —Daron Acemoglu [16:15]
- “Slavery was not a very efficient system. It wasn't just bad for the coerced people, but it wasn't actually generating economic dynamism.” —Daron Acemoglu [18:39]
- “That's what I'm arguing, that human complementary AI, where we try to augment human capabilities, expand human capabilities, could have real benefit. And that's not the direction in which we're going.” —Daron Acemoglu [21:46]
- “If you mess up institutions, especially as they pertain to innovations, you don't pay the price because the impact's not going to be felt for another five, 10 years.” —Daron Acemoglu [28:44]
- “Somebody who has the wrong incentives and the wrong motivations, even when they are talented, could do a lot of damage.” —Daron Acemoglu [32:12]
- “Now, with some of these things like Character AI, they're actually intending to create completely artificial bubbles. So yes, I would definitely be worried about that.” —Daron Acemoglu [45:28]
- “The real secret sauce in academia is you should work on whatever you're passionate about.” —Daron Acemoglu [54:17]
Timeline of Key Segments
| Timestamp | Segment |
|-----------|---------|
| 07:25 | Acemoglu explains "negative 6" rating for AI’s societal impact |
| 09:15 | Specific negative effects: productivity, job loss, information control |
| 10:40 | Historical mishaps with technological revolutions, need for deliberation |
| 13:38 | AI bottlenecks: compute vs. data and applications |
| 15:34 | Economic organization: AI stack, monopolies, competition |
| 17:44 | Tech’s historical distribution, AI as possible monitoring tool |
| 20:01 | Proactive regulation for human augmentation, not mere automation |
| 24:50 | US vs. China in AI race, Europe’s potential |
| 27:11 | Why nations fail; primacy of institutions |
| 33:29 | Threats to US institutions under Trump, market resilience |
| 44:34 | AI and social isolation: synthetic relationships, mental health |
| 46:40 | High-potential AI applications: science, trades, healthcare, education |
| 47:50 | Future and dangers to academia |
| 54:17 | Advice to young academics |
Takeaways
- AI’s societal impact depends on deliberate choices, not just technological progress.
- Concentration of power in AI development risks economic and political stability.
- Strong, independent institutions are not just good for democracy, they are integral to long-term prosperity and innovation.
- Proactive, future-oriented regulation is needed to steer AI toward augmenting human potential, not just replacing it.
- Academic freedom and risk-taking in research are vital for a vibrant innovation ecosystem; their erosion poses long-term risks to society.
Daron Acemoglu is an Institute Professor at MIT, Nobel laureate, and leading thinker on institutions, innovation, and economic prosperity.
