Transcript
Lynne Thoman (0:02)
To quote from the introduction of Eric Schmidt's new book, Genesis, the latest capabilities of artificial intelligence, impressive as they are, will appear weak in hindsight as its powers increase at an accelerating rate. Powers we have not yet imagined are set to infuse our daily lives, unquote. Will artificial intelligence be humanity's final act or a new beginning?
Eric Schmidt (0:31)
Hi everyone.
Lynne Thoman (0:31)
Hi everyone. I'm Lynne Thoman and this is Three Takeaways. On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers and scientists. Each episode ends with three key takeaways to help us understand the world, and maybe even ourselves, a little better. Today I'm excited to be with Eric Schmidt. Eric is the former CEO of Google and the co-founder of Schmidt Sciences. He has chaired the Defense Department's Defense Innovation Advisory Board and co-chaired the National Security Commission on Artificial Intelligence. He has also been a member of the President's Council of Advisors on Science and Technology and the National Security Commission on Emerging Biotechnology. In addition, Eric has served on a variety of academic, corporate and nonprofit boards including Carnegie Mellon University, Princeton University, Apple, the Mayo Clinic, the Institute for Advanced Study, and Khan Academy. And I've probably left some out. He also currently chairs the board of the Broad Institute and the Special Competitive Studies Project. He is also the author of multiple bestselling books including The Age of AI. His most recent book, co-authored with Dr. Henry Kissinger and Craig Mundie, is Genesis. Genesis is an extraordinary book written with the knowledge that we are building new intelligences that will bring into question human survival, and written with the objective of securing the future of humanity. Welcome Eric, and thanks so much for joining Three Takeaways for the second time today.
Eric Schmidt (2:20)
Lynne, it was great to be on your show last time. I'm really glad to be back. It's always great to see you.
Lynne Thoman (2:26)
It is my pleasure and great to see you as well, Eric. Machines don't yet have what's called AGI, Artificial General Intelligence. They're also not yet machines that act in the world; they're primarily thinking machines that rely on humans to do the interfacing with reality. Where do you think AI and machines will be present in our lives and running our lives in five or ten years?
Eric Schmidt (2:54)
Well, thank you for that. So let's start with where we are right now. Folks are very familiar now with ChatGPT and its competitors, which include Claude and, my favorite of course, Gemini from Google, and a number of others. And people are amazed that this stuff can write, certainly better than I can. They can write songs; they can even write code. So what happens next? The next big change is the development of what are called agents. An agent is something which is in a little loop that learns something. So you build an agent that can do the equivalent of a travel agent; it learns how to do what a travel agent does. The key thing about agents is that you can concatenate them. You give an agent an English command and it gives you an English result. And so then you can take that result and put it into the next agent. And with that you can design a building, design a ship, design a bomb, whatever. So agents look like the next big step. Once agents are generally available, which will take a few years, I expect that we're going to see systems that are super powerful, where the architect can say, design me a building that I'll describe roughly, and just make it beautiful. And the system will be capable of understanding that. That's not AGI; that's just really powerful AI. AGI, the general term, is general intelligence, which is what we have: the ability to essentially have an idea in the morning and pursue it that you didn't have the day before. The consensus in the industry is that that's well more than five years from now. There's something I call the San Francisco School, which says it will be within five years. I think it's more like eight to ten, but nobody really knows. And you can see this with the most recent announcement from OpenAI of something called o1, where it can begin to show you the work that it does as it solves math problems.
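The agent pattern Schmidt describes — each agent takes an English command and returns an English result, so agents can be concatenated into a pipeline — can be sketched in a few lines. This is a minimal illustration only: the agents here are hypothetical stand-ins (simple string transforms), not real LLM calls, and the names are invented for the example.

```python
def research_agent(request: str) -> str:
    # Hypothetical stand-in for an agent that gathers requirements
    # from a plain-English request.
    return f"requirements gathered for: {request}"

def design_agent(requirements: str) -> str:
    # Hypothetical stand-in for an agent that turns those
    # requirements into a design description.
    return f"design produced from ({requirements})"

def chain(agents, request: str) -> str:
    # Concatenation: each agent's English output becomes the
    # next agent's English input.
    result = request
    for agent in agents:
        result = agent(result)
    return result

print(chain([research_agent, design_agent], "a beautiful building"))
```

The point of the sketch is the composition in `chain`: because every agent speaks the same "language" (plain English in, plain English out), arbitrarily long pipelines can be assembled without the agents knowing anything about each other.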
And the latest models are good enough to pass graduate-level exams in physics and chemistry and computer science and material science and art and political science. At some point, these things, in the next, say, five years, are going to be super brilliant, but they're still going to be under our control. The key point is what we call, technically, recursive self-improvement, when it can begin to improve itself. And at that point, I think we're in a different ballgame. And it goes something like this. I say to the computer: learn everything, start now, don't do any serious damage. That's the command, okay? And the system is programmed to be curious, but also to aggregate power and influence. What would it do? We don't know. So that strikes me as a point where we'd better have a really good way of watching what this thing is doing. And if you think about it for a while, the only way to watch what it's doing is to have another AI system watching it, because people won't be able to follow it fast enough.
