Short Wave: "AI is great at predicting text. Can it guide robots?"
Host: Regina Barber (NPR)
Guest: Jeff Brumfiel (NPR Science Correspondent), with insights from Moojin Kim (Stanford), Chelsea Finn (Stanford), Ken Goldberg (UC Berkeley), Pulkit Agrawal (MIT), and Matthew Johnson-Roberson (Carnegie Mellon)
Date: February 11, 2026
Length: ~15 minutes
Episode Overview
This episode explores the intersection of artificial intelligence and robotics, focusing on whether the recent leaps in AI—especially language-based models—can truly revolutionize how real-world robots learn, act, and assist us. While AI chatbots excel at generating human-like text, translating those capabilities to physical robotic actions in a messy, unpredictable world is a very different challenge. Host Regina Barber and NPR science correspondent Jeff Brumfiel walk listeners through current breakthroughs, persistent obstacles, and expert perspectives, with a lively, curious tone and a bit of skepticism about hype versus reality.
Key Discussion Points & Insights
1. The Hype and Reality of AI-Powered Robotics
- Context: Recent events, like Tesla’s announcement of the Optimus humanoid robot and Google’s unveiling of an AI-driven robot, have hyped up expectations for intelligent, autonomous machines.
- [00:20] Regina Barber notes AI is "everywhere"—from email to cars, and now, robots at major tech events.
- Historical Letdowns: Despite decades of promises, robots have repeatedly failed to live up to the sci-fi vision.
- [01:33] "The robots... have always disappointed compared to the vision." — Jeff Brumfiel
- Central Question: Can the advances that made text-based AI so powerful finally make general-purpose robots possible, or are there unique obstacles?
2. AI Robotics at Stanford’s IRIS Lab: Teaching by Doing
- Lab Tour: Jeff visits Stanford's IRIS lab, where researcher Moojin Kim demonstrates a simplified robot (just mechanical arms) powered by an AI model called OpenVLA.
- [03:58] Jeff describes not a humanoid, but "a pair of mechanical arms with pinchers," emphasizing the practical, non-flashy nature of real research robots.
- Neural Network Approach: Instead of programming every little action, the robot learns "by example."
- [05:28] "Whatever task you wanna do, you just keep doing it over and over. Maybe like 50 times or 100 times." — Moojin Kim
- [05:40] Smiling robot anecdote: "It's exactly the same thing, except this robot's actually doing stuff." — Jeff Brumfiel
- Real-World Demo: The robotic arms are trained to pick out green trail mix pieces on command, exemplifying how AI can interpret text and translate it into action when properly trained (a minimal sketch of this learn-by-example approach follows below).
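To make the "learning by example" approach concrete, here is a minimal behavior-cloning sketch in the spirit of what Moojin Kim describes. It is not the IRIS lab's code or the OpenVLA model; the tiny network, the feature dimensions, and the demonstration format are illustrative assumptions.

```python
# Minimal behavior-cloning sketch (illustrative; not the OpenVLA model or IRIS lab code).
# Given ~50-100 recorded demonstrations of (image features, command features, arm action),
# fit a small policy network that imitates the demonstrated actions.
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    def __init__(self, img_dim=512, text_dim=128, action_dim=7):
        super().__init__()
        # A real vision-language-action model would use pretrained image and
        # language encoders; these dimensions are stand-in assumptions.
        self.net = nn.Sequential(
            nn.Linear(img_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g. 6 joint deltas plus a gripper command
        )

    def forward(self, img_feat, text_feat):
        return self.net(torch.cat([img_feat, text_feat], dim=-1))

def train_on_demos(policy, demos, epochs=10, lr=1e-3):
    """demos: iterable of (img_feat, text_feat, expert_action) tensors from teleoperation."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for img_feat, text_feat, expert_action in demos:
            loss = loss_fn(policy(img_feat, text_feat), expert_action)  # imitate the demo
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

The point mirrors the episode: nobody programs the pick-and-place moves explicitly; the network simply learns to imitate the 50-100 recorded demonstrations.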
3. The Dream of Generalist Robots
- Ultimate Goal:
- [06:25] "In the long term, we want to develop software that would allow the robots to operate intelligently in any situation." — Chelsea Finn
- Includes basic tasks like "making a sandwich, cleaning a kitchen, or restocking grocery store shelves."
- Current Achievements:
- Chelsea Finn’s spin-off shows a robot folding laundry—a mundane but surprisingly complicated affair.
- [07:38] Regina anthropomorphizes the robot as it folds: "It looks like it's like, oh, I just gotta fold another one."
- Limits: Although impressive in lab conditions, these robots still make mistakes, get stuck, and need human intervention.
- [08:00] "It might be able to fold laundry 90% of the time or 75% of the time, but the rest of the time, it's going to make a big mess..." — Jeff Brumfiel
4. Why Isn’t Robotics Evolving as Fast as ChatGPT?
- Big Data Bottleneck: AI chatbots are trained on vast internet datasets, but robots lack a similar “big dataset” for physical actions.
- [09:00] "There's no examples online of robot commands being generated in response to robot inputs." — Ken Goldberg
- [09:24] Goldberg quips, "At this current rate, we're going to take 100,000 years to get that much data."
- The Problem of Physical Data Collection:
- Training robots in the real world is time-consuming and logistically tough.
- Simulation as an Alternative:
- [09:48] "The power of simulation is that you can collect... very large amounts of data." — Pulkit Agrawal
- Example: 3 hours of simulation can generate about 100 days of "robot experience" (the quick arithmetic after this list unpacks that claim).
- Sim-to-Real Gap:
- [10:20] Simulations can teach basic locomotion, but tasks like picking up fragile objects are much harder to simulate correctly.
- [10:34] "Your robot will fling things across the room if it doesn't understand the weight and size..." — Jeff Brumfiel
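As a sanity check on those numbers, the arithmetic below works out the speedup over real time that the "3 hours of simulation, 100 days of experience" claim implies; the parallel-environment figures are illustrative assumptions, not values from the episode.

```python
# Back-of-the-envelope check of the "3 hours -> 100 days" simulation claim.
wall_clock_hours = 3
experience_hours = 100 * 24                      # 100 days of robot experience

speedup = experience_hours / wall_clock_hours
print(f"Required speedup over real time: {speedup:.0f}x")      # -> 800x

# One way to reach that (illustrative assumption): many simulated robots at once,
# each running somewhat faster than real time.
parallel_envs = 400       # hypothetical number of parallel simulated environments
per_env_speed = 2.0       # each assumed to run 2x faster than real time
print(f"Example configuration achieves: {parallel_envs * per_env_speed:.0f}x")
```

That cheap data is only useful if the simulated physics transfers to the real robot, which is exactly the sim-to-real gap described above.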
5. Deeper Challenges: Problem Framing and Learning Limits
- Predicting Text vs. Doing Things:
- [11:09] "The question is not do we have enough data? It is more, what is the framing of the problem?" — Matthew Johnson-Roberson
- Text prediction is sequential and well-bounded (choose the next word from a fixed vocabulary); robotics requires adapting to endlessly variable physical situations (the toy signatures after this list sketch the contrast).
- [11:37] "Next best word prediction works... It's not clear I can take 20 hours of GoPro footage and then produce anything sensible with respect to how a robot moves around." — Matthew Johnson-Roberson
- Teaching Robots to Teach Themselves:
- [12:06] "Or have the robots teach the robots." — Regina Barber
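To make the framing point concrete, the toy signatures below contrast the two problems; they are illustrative sketches only, not real model APIs mentioned in the episode.

```python
# Toy contrast of the two problem framings (illustrative only; not real model APIs).
from typing import List, Sequence

def predict_next_token(previous_tokens: Sequence[int]) -> int:
    """Language modeling: the output is one choice from a fixed, discrete
    vocabulary, and the text never pushes back on the model."""
    return 0  # placeholder

def predict_next_action(camera_pixels: Sequence[float],
                        joint_angles: Sequence[float],
                        command: str) -> List[float]:
    """Robot control: continuous sensor input, continuous motor output, and
    every action changes the physical world, which changes the next observation."""
    return [0.0] * 7  # placeholder: e.g. 6 joint velocities plus a gripper command
```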
6. Optimism, Cautiously Applied: Where AI Robotics is Already Working
- Narrow AI Applications:
- [12:24] AI image recognition is already helping in package sorting—robots pick objects efficiently with AI's help.
- [12:52] "It's working really well... and I think we're going to see a lot of that AI being used for parts of the robotic problem. You know, walking or vision or whatever. It's going to make big progress. It just may not arrive everywhere all at once." — Jeff Brumfiel
- Memorable Demonstration Closes the Episode:
- [13:20] "Usually that, that spot right there where it identifies the object and goes to it. That's the part where we hold our breath in." — Moojin Kim
- The robot successfully grabs the right trail mix—showcasing real incremental progress.
Notable Quotes & Memorable Moments
- On Disappointment:
- [01:33] Jeff Brumfiel: "The robots... have always disappointed compared to the vision."
- On Teaching Robots:
- [05:28] Moojin Kim: "Whatever task you wanna do, you just keep doing it over and over. Maybe like 50 times or 100 times."
- On Failing in Real-World Robotics:
- [08:00] Jeff Brumfiel: "...it might be able to fold laundry 90% of the time or 75% of the time, but the rest of the time, it's going to make a big mess that then a human has to get in there and clean up."
- On Data Limitations:
- [09:24] Ken Goldberg: "At this current rate, we're going to take 100,000 years to get that much data."
- On Simulation:
- [09:48] Pulkit Agrawal: "For example, in three hours, you know, worth of simulation, we can collect 100 days worth of data."
- On Framing the Challenge:
- [11:09] Matthew Johnson-Roberson: "The question is not do we have enough data? It is more, what is the framing of the problem?"
- On Incremental Progress:
- [13:42] Jeff Brumfiel: "Nobody really programmed the robot. Exactly. This is all neural network, learning how to move the claws and respond to the commands on its own. And to me, it's pretty wild that that works at all."
Important Timestamps
- 00:20: Introduction to the episode’s big question: Can AI move from chatbots to real-world robots?
- 03:58–05:35: Inside the Stanford IRIS lab; teaching a robot by demonstration.
- 06:25: Articulation of the dream: robots that understand plain instructions.
- 07:21: Demo: AI-powered robot folding laundry.
- 08:25: Ken Goldberg on realism versus hype.
- 09:48–10:34: Simulated training for robots—opportunities and limits.
- 11:09–11:51: Expert on the fundamental differences between text prediction and physical robotics.
- 12:24–13:14: Successful narrow applications; AI in industrial robots.
- 13:14–13:42: The trail mix test and the emotional tension of “will it work?”
Tone & Style
- Friendly, curious, and skeptical—humorous banter between Regina and Jeff keeps things light even as complex issues are discussed.
- [12:09] Regina Barber: "Okay, so Jeff, you've taken me from like optimist to pessimist. It's the, you know, the road I take every day."
- Experts balance caution with real enthusiasm about incremental progress (especially in narrowly defined tasks).
- Encouraging close to the episode focuses on progress and continued discovery.
Summary
The promise of AI in robotics is immense, but we're only beginning to bridge the gap from virtual text predictions to physical-world mastery. Successes are real, if limited: robots can now learn some tasks with less rigid programming, especially in controlled environments or using carefully designed simulations. Massive hurdles remain—especially the lack of vast, diverse real-world “training data” for robots, and the fundamental leap from predicting text to manipulating a chaotic, physical world. Still, researchers’ incremental advances hint at a future where AI-powered robots can increasingly assist in everyday life, if not quite with the flawless grace of science fiction.
