Transcript
A (0:00)
I think we could take a lot of ethical advice from smarter entities, but we might also want to have a debate with them about it and actually share the understanding. We actually want to weave our preferences and our discourse into this system in the right way. Ideally, we should become a kind of cyborg civilization where we have superintelligence both guiding and coordinating us. If you're below a certain error threshold, you can combine error-prone processes in such a way that you get a new process with a much lower error rate. I believe something like this might happen with AI. There is a kind of transition in reliability: once it's reliable enough, you could build this redundant system and make the reliability go up enormously.
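As a minimal sketch of the redundancy claim above: if each independent component fails with probability p < 0.5, combining n of them by simple majority vote (in the spirit of von Neumann's argument for building reliable machines from unreliable parts) drives the combined error rate down rapidly. The function name and the sample error rates below are illustrative assumptions, not taken from the conversation.

```python
from math import comb

def majority_vote_error(p: float, n: int) -> float:
    """Probability that a majority of n independent components,
    each failing with probability p, fail at the same time."""
    k_min = n // 2 + 1  # smallest number of failures that forms a majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# Below the 0.5 threshold, redundancy helps enormously;
# above it, adding components makes things worse.
for p in (0.4, 0.1):
    for n in (1, 5, 25):
        print(f"p={p}, n={n}: combined error = {majority_vote_error(p, n):.6f}")
```

With p = 0.1, five components already cut the combined error rate to about 0.0086, and twenty-five push it below 10⁻⁶; that steep drop is the "transition in reliability" once each component is reliable enough.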
B (0:43)
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Anders Sandberg. Anders, welcome to the podcast.
A (0:51)
Thank you for having me.
B (0:53)
Could you say a little bit about your background?
A (0:56)
So I usually present myself as an academic jack of all trades. I started out studying computer science and mathematics, then I took a course on neural networks and kind of fell in love with the brain. So I took neuroscience courses, psychology courses, a bit of medical engineering, and then I ended up in the philosophy department of Oxford University, at the Future of Humanity Institute. So these days when people ask what I am, I say some kind of futurist philosopher. Something, something.
B (1:27)
You have this wonderful manuscript called Grand Futures, which is, last I checked, 1,400 pages, where you dig into the physics and the economics of all the different paths humanity could take to become a much larger presence in the universe. Could you say what the status of that manuscript is right now?
A (1:53)
So for a while the manuscript had been resting, because I needed to finish another book, Lower Liberty and Leviathan: Human Autonomy in the Era of Artificial Intelligence and Existential Risk, which feels a bit urgent. We kind of need to figure some of those things out. So the manuscript had been resting on my sofa, waiting for me to finish that lightweight 600-page volume. But the nice part is, of course, that I've now learned how to write better, and science has kept advancing, so I've been piling up references and things to add. So it's not like Grand Futures is going to describe how the world was back in 2023, when I started on something else. Rather, I'm now rebooting it, which is also very useful, because now I have many more people who can help me actually check that what I'm writing is correct, or at least plausible.
