Transcript
A (0:00)
Sam Altman, even before founding OpenAI, said the development of superhuman machine intelligence is the greatest threat to the existence of humanity. Dario Amodei of Anthropic said there's a 25% chance of a catastrophic outcome, essentially humanity wiped out. Elon Musk has been extremely vocal for a decade now, talking about a 20% chance of annihilation. Yet despite all of these risks, AI companies have been fighting tooth and nail to just not be regulated at all. If we build systems that are smarter than us across the board and we cannot control them, we are screwed. There is no gain from getting to superintelligence. The only actor gaining is the superintelligence itself. The fundamental win condition is that there is deep buy-in: understanding how big the risks are, understanding that superintelligence under current conditions is a terrible idea for humanity. If enough people have this, we have won even without any specific law, because those people will make the right decisions collectively.
B (0:52)
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Andrea Miotti. Andrea, welcome to the podcast.
A (0:59)
Thank you for having me.
B (1:01)
Amazing. Tell us a bit about yourself.
A (1:03)
Yeah, absolutely. And thank you for having me again on the podcast. It was great to be here last time. So, I'm Andrea Miotti. I'm the founder and CEO of Control AI. Control AI is a nonprofit working in the UK, the US, and starting to work in a few other countries to prevent the most extreme risks from powerful AI systems.
B (1:23)
Okay, so I want to start by talking about the current moment we're in here. Could you tell us about the landscape, organizations and funding when it comes to influencing AI policy?
A (1:37)
Yeah, absolutely. So I think what we're seeing in the past year, and especially the past months, is a veritable flurry of lobbying from most of the AI companies. AI companies, as many listeners of the show might know, but it's always worth repeating, are focused on one clear goal: to develop superintelligence. That is, AI that can replace and outcompete all humans at all tasks. This is why top AI experts, Nobel Prize winners, top AI scientists, and, concerningly, even the CEOs of many of these AI companies warn that we risk human extinction from superintelligence. Yet despite all of these risks, risks that are becoming more and more well known among the public and openly acknowledged by many of these companies, AI companies have been fighting tooth and nail to just not be regulated at all. And in many ways, they're deploying a similar playbook to the one that was tried and tested by tobacco companies, the so-called tobacco lobbying playbook. In tobacco, what happened is that the fact that cigarettes and smoking could cause cancer was discovered, and well known, pretty early on inside many of these tobacco companies. But instead of stopping, or instead of working with governments to chart a different path, these companies started to sweep this under the rug and, especially, to run propaganda campaigns in public to deter and intimidate scientists who would reveal what tobacco was causing, namely cancer, among many smokers. And we are seeing the same with AI companies. On one hand, we have many ex-employees of AI companies who quit and, very bravely, in many cases losing millions or tens of millions of dollars, by now probably even more than that, speak out about the risks. We have even the CEOs of these companies speaking out about the risks, especially in the past.
Like, you know, I could find a quote from each one of them, but here are some of the most famous ones. Sam Altman, even before founding OpenAI, said the development of superhuman machine intelligence is the greatest threat to the existence of humanity. So, you know, these words are very clear: he knew what the company was going to build could end humanity. Other CEOs, like Dario Amodei of Anthropic, said there's a 25% chance of a catastrophic outcome for human civilization, essentially humanity wiped out. Elon Musk has been extremely vocal, very, very vocal, for a decade now, talking about a 20% chance of annihilation, maybe more. Now in recent interviews he's even saying that no matter what, once we get to AI smarter than humans, humanity will clearly not be in control anymore, so we can just sit and let them do what they want, and things like this. But at the same time, these companies are raising billions of dollars, tens of billions of dollars, and they are spending them to prevent any form of regulation. On one hand, you have this threat that affects all of humanity, which they are very well aware of. At the same time, they lobby to sweep this information under the rug, to silence whistleblowers from speaking out, and to make sure that no country regulates. And the best antidote to this is just speaking the truth. The more people learn about this risk, the more they are concerned, the more they realize the stakes that are here, and the more they want to act. And the less they know, the more the companies can keep doing what they're doing, which is to threaten all of humanity.
