Transcript
Juana Summers (0:00)
AI used to be a thing of science fiction.
Martin Kaste (0:02)
I know I've made some very poor decisions recently.
Martin Kaste (0:10)
But I can give you my complete assurance that my work will be back to normal.
Juana Summers (0:16)
And the genre is full of superhuman AI machines that become so smart they turn against the humans that created them.
Narrator/Storyteller (0:23)
Skynet begins to learn at a geometric rate.
Keefe Roedersheimer (0:26)
It becomes self-aware at 2:14 a.m. Eastern Time, August 29th. In a panic, they try to pull the plug.
Juana Summers (0:35)
Skynet fights back.
Keefe Roedersheimer (0:36)
Yes, that's an AI that could get out of control. But if you really think about it, it's much worse than that.
Martin Kaste (0:41)
Much worse than Terminator?
Keefe Roedersheimer (0:42)
Much, much worse.
Juana Summers (0:43)
That's Keefe Roedersheimer talking to NPR's Martin Kaste. Back in 2011, he was a research fellow at what was then called the Singularity Institute for Artificial Intelligence. It's now the Machine Intelligence Research Institute, or MIRI. At the time, Roedersheimer was looking into the idea of a computer that was not only smart, but capable of improving itself.
Keefe Roedersheimer (1:05)
It's able to look at its own source code and say, ah, if I change this, I'm going to get smarter. And then by getting smarter, it sees new insights into how to get smarter. And then by having those insights into how to get smarter, it modifies its source code and gets smarter and gets new insights. And that creates an extraordinarily intelligent thing.
Juana Summers (1:22)
They called this the Singularity because that intelligence could grow so fast, our human minds might not be able to keep up. In 2011, that still seemed like a long, long way off. But in 2025, artificial intelligence is seeping into everyday life with ChatGPT and the like. Even proponents of AI, like developer Jonathan Liu, joke about the estimated probability of AI doom.
