Podcast Summary
The Tech Policy Press Podcast
Episode: Should AGI Really Be the Goal of Artificial Intelligence Research?
Date: March 9, 2025
Host: Justin Hendrix
Guests:
- Eryk Salvaggio (Rochester Institute of Technology, Tech Policy Press Fellow)
- Borhane Blili-Hamelin (Data Scientist, TD Bank; co-author of paper)
- Margaret Mitchell (Chief Ethics Scientist, Hugging Face, co-author of paper)
Overview: Main Theme
This episode features a deep and critical discussion on whether Artificial General Intelligence (AGI) should be the primary objective in artificial intelligence research. The guests are co-authors of a recent position paper titled, “Stop treating AGI as the North Star goal of AI research.” They critique the dominant AGI narrative and identify six major "traps" that arise from fixating on AGI as the ultimate target, arguing that it can hinder more meaningful, pro-social, and rigorous goal-setting in the field.
Key Discussion Points & Insights
1. What is AGI? Why is it the Wrong Goal? (00:11–10:53)
- Definition & Narrative Ambiguity:
AGI is generally described as highly autonomous systems that can outperform humans at most tasks. However, its definition remains vague and narrative-driven, serving the interests and agendas of those in power.
- “Part of the problem that we're getting at in the paper is that this term, this AGI term, doesn't have a very concrete meaning, but it does function as a narrative for people to prioritize their own interests.” — Margaret Mitchell [03:08]
- Utility of Vagueness:
The lack of clarity allows claims of AGI progress regardless of actual capabilities, often substituting “faith-based” technological optimism for real political deliberation.
- “By not defining what artificial general intelligence is in real terms, you get to do all kinds of things and pretend that it's AGI.” — Eryk Salvaggio [04:12]
- Masking Disagreement:
The obsession with AGI masks underlying political, ethical, and social disagreements about what AI should achieve, preventing necessary debate.
- “Why is AGI the wrong goal? ... What disagreements are we not having? And what questions about what matters and to whom are we jumping over?” — Borhane Blili-Hamelin [09:30]
2. Six Traps of AGI-Centric Research
a. Illusion of Consensus (10:53–16:16)
- The term AGI presumes everyone shares the same vision, drowning out alternative approaches and critical analyses.
- “By putting forward this concept of AGI, we're moving everyone down the same path. ... This illusion of consensus goes a little further than just contestations around the term AGI.” — Margaret Mitchell [10:53, 13:29]
- It reinforces a monolithic, often Silicon Valley-driven ideology, marginalizing pluralistic visions for technology.
- “It's a fantasy about the salvation of concentrated power ultimately... Democracy is contestation, and as soon as the contestation goes away, democracy goes away.” — Eryk Salvaggio [13:29–15:19]
b. Supercharging Bad Science (16:16–25:21)
- The AGI focus erodes scientific rigor—hypotheses become vague, exploratory research is confused with confirmatory research, and hype replaces evidence.
- “All of that sort of rigorous science is abandoned and justified by this belief that we're working towards something inherently good.” — Margaret Mitchell [16:39]
- AI research communities have a duty to distinguish hype from reality, but AGI narratives make this much harder.
- “This question of what is happening with bad science in the space of AI becomes really front of mind when you start with this question of the responsibility of the research community.” — Borhane Blili-Hamelin [18:26]
- The public is misled as “Does it work?” replaces the more scientific “Does it work for the reasons you thought?”
- “Whenever we get a new AI model... The answer is it works. And then there's all kinds of backward reasoning as to why it works...” — Eryk Salvaggio [23:35]
c. Presuming Value Neutrality (25:21–25:46)
- AGI distracts from the core reality that AI research is not just a technical endeavor; it is steeped in social, ethical, and political values.
d. The Goal Lottery Trap (25:46–26:33)
- The goals actually pursued in AGI research are often set by incentives, hype, and luck rather than by genuine scientific or societal merit.
e. Generality Debt (26:33–28:50)
- The term “general” is misused; AI models trained on diverse, unscreened data are labeled “general,” hiding the absence of real understanding of their behaviors.
- “You just call it general. So we have this concept put forward in AI AGI research of making systems that are general, which is really just putting a blanket over a lot of different things.” — Margaret Mitchell [26:33]
f. Normalized Exclusion (28:50–35:07)
- AGI-centric research excludes whole communities, disciplines, and perspectives from goal-setting and the design of AI systems.
- “It's not just people... entire disciplines of thinking that may have a different frame on artificial intelligence are pushed aside.” — Eryk Salvaggio [31:01]
- “If you're a believer, you can work on it, but if you're not a believer... then you're not really invited to participate.” — Margaret Mitchell [33:59]
3. Recommendations for Policymakers (35:07–41:24)
- Prioritize Pluralism & Inclusion:
Incorporate diverse voices in setting AI research goals. Value multiple objectives and challenge the assumption that AGI should be the overarching aim.
- Be Specific, Not Swept Up by Narratives:
Focus on concrete, well-specified goals and measurable outcomes for technology; question “shiny stories” and narratives from dominant industry players.
- “Ask yourself, what kind of consensus matters to you as a policymaker?... Not what are the stories that I can latch onto...” — Borhane Blili-Hamelin [36:16]
- Treat AGI as a Political Proposal:
Recognize that AGI, for now, is a political and organizational idea, not a technical reality.
- “AGI is literally not a technology at the moment. What AGI is—a political organization. It is a way of organizing society.” — Eryk Salvaggio [40:14]
- Demand Demonstrable Usefulness:
Evaluate claims of AI progress by asking what specific benefits are being offered and what evidence supports those claims.
- “Think about what should this technology be useful for specifically and for each of those things, what needs to be demonstrated…” — Margaret Mitchell [39:06]
Notable Quotes & Memorable Moments
- “This illusion of consensus... drowns out all of the other possible ways of contextualizing the AI problems…” — Margaret Mitchell [10:53]
- “Democracy is contestation, and as soon as the contestation goes away, democracy goes away. So if AGI is used to reach that goal, do we actually want that at all?” — Eryk Salvaggio [15:19]
- “AGI is literally not a technology at the moment. What AGI is—a political organization. It is a way of organizing society...” — Eryk Salvaggio [40:14]
- “People will declare that AGI has been achieved... there are going to be organizations in the foreseeably near future that say they've reached AGI and they're going to try and monetize that in various ways.” — Margaret Mitchell [39:06]
- “Ask yourself instead, what kind of consensus matters? When does consensus matter and how do I get there?” — Borhane Blili-Hamelin [36:16]
Timestamps for Key Segments
- 00:11–03:08 — Introduction & AGI’s prevalence in current narratives.
- 03:08–10:53 — Critique of AGI’s ambiguity and consequences for goal-setting.
- 10:53–16:16 — The illusion of consensus and risk for democracy.
- 16:16–25:21 — Supercharging bad science: vagueness, hype, and weakened research rigor.
- 25:21–28:50 — Value neutrality, the goal lottery, and generality debt explained.
- 28:50–35:07 — Normalized exclusion: Who decides on AI goals?
- 35:07–41:24 — Recommendations for policymakers & final thoughts.
- 41:24–41:59 — Closing remarks and gratitude.
Conclusion
This episode is a rigorous challenge to the current AI research culture’s fixation on AGI as the ultimate objective. The speakers argue that such focus is not only scientifically and ethically questionable but also dangerously exclusionary and politically fraught. They advocate instead for specificity, pluralism, and inclusion in setting AI research goals—reminding listeners, and especially policymakers, that AGI is not a predetermined destiny but an ideological construct up for critical scrutiny.
