Episode Overview
Podcast: 80,000 Hours Podcast
Episode Title: What the hell happened with AGI timelines in 2025?
Date: February 10, 2026
Hosts: Rob Wiblin, Luisa Rodriguez, 80,000 Hours Team
This episode dives into the dramatic swings in expert predictions about when Artificial General Intelligence (AGI) will arrive, particularly focusing on the hype and subsequent skepticism that defined AI discourse from late 2024 through 2025. Rob walks listeners through key technical advances, shifting industry and public sentiments, and the factors recalibrating AGI timelines. Using both inside information and public analyses, the episode tries to make sense of why expectations soared—and were then tempered—in a single year, offering a nuanced view on where things stand now and what to expect in the years ahead.
Key Discussion Points and Insights
1. The 2025 AGI Timeline Whiplash
- Initial Hype (Early 2025):
- OpenAI’s reasoning models (e.g., o1, o3) drove confidence in short timelines to AGI.
- Sam Altman: “We are now confident we know how to build AGI.” (c. Jan 2025)
- Demis Hassabis (DeepMind): AGI probably 3–5 years away.
- Dario Amodei (Anthropic): “A country of geniuses in a data center… quite likely in the next two to three years.”
- AI2027 Scenario: Popular notion that by 2027, AI R&D becomes fully automated, leading to an “intelligence explosion”.
- The Dramatic Pessimistic Shift (Late 2025):
- Forecast AGI arrival dates moved further into the future than they had been even before the latest technical breakthroughs.
- Even the bullish “AI2027” writers admitted true superhuman coding wasn’t on their immediate horizon.
Memorable quote:
“Short timelines to transformative artificial general intelligence swept the AI world. But then... sentiment swung all the way back in the other direction…” (00:10)
2. Technical Drivers of Sentiment Swings
a) Reasoning Models—Promise vs. Practicalities
- Why Optimism Rose:
- Reasoning models excelled in checkable domains like math, logic, and coding.
- Why Disillusionment Set In:
- Hoped-for generalization to “messier” tasks (e.g., event planning, flight booking) did not materialize.
- “They weren’t suddenly able to… organize an event that actually works.” (06:10)
- Expectation that skills would generalize, based on instruction-following successes, proved false.
b) The Limits of Scaling Thinking Time (“Inference Scaling”)
- Scaling “thinking time” (letting AIs process longer) yielded big initial performance gains.
- But most improvements traced to this “one-off” trick; further scaling is constrained by cost and hardware limits.
- “There aren’t enough computer chips in the world to give models ten or a hundred minutes to think.” (10:39)
- More than two-thirds of the improvement came from longer inference, not smarter underlying models.
c) Reinforcement Learning (RL) Roadblocks
- RL in complex domains requires huge amounts of compute for marginal improvements.
- “Toby Ord estimates that the compute efficiency of this kind of reinforcement learning might be literally 1,000,000th as high as it was back in the predict the next word era...” (16:55)
- Most computational effort spent on failed attempts; AIs get minimal feedback on what worked/failed.
- “An AI trying to suck intelligence through a tiny straw.” (19:01, paraphrasing)
- Squeezing further gains via RL is now greatly limited by chip supply and practical expense.
3. Why the Mood Turned: Real-World Impact & Human Limitations
a) The Demo-to-Reality Gap
- Growing mismatch: impressive AI demos ≠ major productivity impacts in real life.
- “AI models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate that long timelines people predict.” (25:10, quoting Dwarkesh Patel)
- Analysts leaned towards trusting practical evidence over technical demos.
b) Lack of Continual Learning
- Unlike humans, AIs lack continual learning: beyond a handful of in-context examples (“few-shot learning”), they don’t keep improving on the job and plateau quickly.
- No significant progress observed on fixing this fundamental limitation in 2025.
c) AI R&D Automation Is Harder Than Coding
- Even perfect AI coding assistants wouldn’t fully automate AI R&D; many tasks in labs aren’t software development.
- As AIs take on more work, remaining tasks become harder (“plucked the low-hanging fruit” effect).
- Even if 95% of code is AI-written, most of the “research speed” remains bottlenecked elsewhere.
4. Updated AGI Timelines—Numbers & Realities
- Metaculus (Prediction Market):
- January 2025: Forecasters predicted “strong AGI” in July 2031.
- Early 2026: Pushed out to November 2033 (a shift of roughly 2.3 years).
- “Not cataclysmic... but people definitely became—they started to think things are going to take a couple years longer...” (39:53)
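The size of that Metaculus shift can be sanity-checked with a quick date calculation (treating each forecast as the first of the stated month, which is an assumption for illustration):

```python
from datetime import date

old_forecast = date(2031, 7, 1)   # community forecast as of January 2025
new_forecast = date(2033, 11, 1)  # community forecast as of early 2026

# Difference in whole months, then converted to years
months = (new_forecast.year - old_forecast.year) * 12 + (new_forecast.month - old_forecast.month)
print(months, "months ≈", round(months / 12, 1), "years")  # → 28 months ≈ 2.3 years
```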
- Key inflection point:
- By 2032, most available compute, electricity, and staff will already be allocated to AI; further progress will be bottlenecked by resources, not just software breakthroughs.
- Models after that date may cost $1–10 trillion to train.
- “We would probably just be in for a bit of a slower grind.” (43:02)
5. Contrasting Industry Realities vs. Public Perceptions
- “Backlash” Narrative:
- In academic and media circles, a belief circulated that AI progress had “stopped” (especially after GPT-5) and that AI companies are economically unviable.
- Debunking the Narrative:
- Epoch Capabilities Index: Shows AI capabilities growing steadily—possibly even twice as fast after April 2024.
- “It finds that if anything, AI progress has sped up in recent years.” (46:11)
- Real-World Utility: Host’s daily use as copilot, searching answers, getting advice, etc.
- “I use AI for hours a day now as a copilot and thought partner...” (50:18)
- Frontier vs. Near-Frontier Costs: Only the absolute cutting edge is expensive; most users’ needs can be met cheaply.
- E.g., Gemini 3 Flash close to Gemini 3 Pro in capability, but at a quarter the price.
- Revenue Reality: Industry revenue grew much faster than the optimistic predictions.
- “In actual reality, it went up almost four- or fivefold, to $30 billion.” (54:38)
- Profitability: AI companies are profitably scaling, not selling at a loss like early Uber.
6. Where Does This Leave the Field—and Society?
- 2028: “I would be pretty shocked if we got fully automated AI research and development next year.” (1:00:16)
- 2029–2030: “It begins to feel plausible… can’t rule it out if current trends continue…”
- Longer takeoff possible: “But there’s also a very real chance that we’re in for a significantly longer and slower takeoff.”
- Key contextual quote:
- “The idea that in 10 years we’ll have this revolutionary technology, a machine that can do all intellectual work that humans can do… is still shocking and incredibly consequential.” (1:02:19)
- Preparation window: 10 years to AGI isn’t much, but it’s all the world may have.
Notable Quotes (with Timestamps & Attribution)
- “Short timelines to transformative artificial general intelligence swept the AI world. But then… sentiment swung all the way back in the other direction…” – Rob Wiblin, 00:10
- “A country of geniuses in a data center… we’re quite likely to get that in the next two to three years.” – Dario Amodei (Anthropic), 02:18
- “There aren’t enough computer chips in the world to give models ten or a hundred minutes to think…” – Rob Wiblin, 10:39
- “Toby Ord estimates that the compute efficiency of this kind of reinforcement learning might be literally 1,000,000th as high as it was back in the predict the next word era…” – Rob Wiblin, 16:55
- “An AI trying to suck intelligence through a tiny straw.” – Rob Wiblin (paraphrase), 19:01
- “AI models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate that long timelines people predict.” – Rob Wiblin, quoting Dwarkesh Patel, 25:10
- “We would probably just be in for a bit of a slower grind.” – Rob Wiblin, 43:02
- “It finds that if anything, AI progress has sped up in recent years.” – Rob Wiblin, on the Epoch Capabilities Index, 46:11
- “I use AI for hours a day now as a copilot and thought partner…” – Rob Wiblin, 50:18
- “In actual reality, it went up almost four- or fivefold, to $30 billion.” – Rob Wiblin, on industry revenue, 54:38
- “The idea that in 10 years we’ll have this revolutionary technology… is still shocking and incredibly consequential.” – Rob Wiblin, 1:02:19
Important Segment Timestamps
- 00:00–05:00: Recap of sentiment swings and how reasoning models led to AGI hype.
- 06:00–13:00: Technical optimism and disillusionment: RL, generalization, and thinking time.
- 14:00–21:00: Compute limitations and the “squeezing through a straw” analogy.
- 22:00–28:00: Why AIs fell short of real-world impact; human learning vs. AI learning.
- 29:00–36:00: Limits of AI automating R&D and the slowing of practical improvements.
- 38:00–44:00: Updated timelines—Metaculus, resource bottlenecks, slowing pace possible.
- 45:00–57:00: Critique of the “AI bust” narrative; evidence of progress and profitability.
- 58:00–End: Synthesis, future expectations, and the unsettling shortness of AGI timelines.
Summary Takeaways
- Dramatic changes in technical capability and expert sentiment made, then un-made, the case for short AGI timelines in 2025.
- Most gains in AI reasoning came from one-off tricks of scaling thinking time and heavy reinforcement learning in checkable domains, both of which are running into limits.
- There’s a growing gap between AI’s demo wow-factor and its transformative impact in the real world.
- Major prediction markets and AI thought leaders updated their earliest AGI arrival expectations by several years—progress is still rapid by historical standards, just not as “explosive” as the 2025 hype.
- Industry productivity, profitability, and adoption rebut the “AI bust” story circulating in some media and academic circles.
- The world may only have about 10 years to prepare for the massive impact of AGI—a timescale that, while longer than some feared, is still alarmingly short.
Final Note:
Despite wild swings in mood and forecasts, steady—if uneven—progress continues. While AGI is not just around the corner, neither is it a distant sci-fi fantasy. The run-up to 2032–2033 is now seen by experts as the most likely “make or break” window for the arrival of transformative artificial intelligence.
