Transcript
A (0:00)
Today I'm welcoming Mustafa Suleyman, the CEO of Microsoft AI, the founder of Inflection AI and the co-founder of DeepMind. And for the past few months he has been sounding an alarm about artificial intelligence, about the way some AI systems are being developed, and about why that particular trajectory has little to offer, perhaps, but woe and worry. Let's get started. Welcome, Mustafa. It's great to see you. It's been a long time.
B (0:26)
Yeah, it's been a while. Thanks for having me. I'm excited for this conversation.
A (0:30)
You and I have spent a lot of time thinking about some similar things and we agree on a lot of them. But that's really boring for all of those people who are listening. Let's maybe lay out where I think we agree and then we'll get to a sort of a knotty space. We are in this weird time. The world is changing because of technology. And many of the fictions that we've used to coordinate human behavior are under strain. By fictions, I mean the shared stories that allow us to cooperate, from money and nations to corporations and credentials and jobs. And the way we perceive the world is also changing. People have traditionally operated with a scarcity mindset: resources are limited, human intelligence is the bottleneck. But some of those assumptions no longer hold. Intelligence, mostly through AI, is becoming cheaper and more capable. You are part of that intelligence wave, that artificial intelligence wave, and you also believe the world is changing. You've called for a humanist superintelligence. You've warned about the risk, the trajectory that takes us to AI psychosis, if people believe AI is conscious when it's not. And I think we both agree that we need new operating principles for this new era. Let's get to that question of where it really gets interesting. You wrote this great essay back in the summer of 2025 about seemingly conscious AI, and you're worried that as AI becomes more capable, more autonomous, and more embedded in our daily lives, people will start projecting consciousness onto it. They'll fall in love with it. They'll believe it's God. They'll advocate for its rights. They'll take its very bad advice from time to time. And you think this is dangerous not just for individuals, but for society. So let's start there. Geoff Hinton, he is the godfather of deep learning, a man you know very well, he's a Nobel laureate. He said that AI is conscious and that there really is a there there. Why do you think Geoff is wrong?
B (2:29)
You know, I think Geoff's got to a stage in his career where he can play the founding father contrarian role in order to provoke an important public conversation. You know, obviously I massively admire and respect Geoff. I think he's incredible. We hired him as a contractor consultant at DeepMind back in 2011, along with his student at the time, Ilya Sutskever. So, absolute legend of the field. My take on this question is that it's going to be very hard for us to precisely say whether it is or whether it isn't conscious. And so we have to be very clear about the working definition that we're using for consciousness. And then we also have to be very clear about the mechanism inside these models that I think is quite fundamental to the definition. So first of all, the definition: many people intuitively think of this as self-awareness. Is the model able to describe its own experience in a persuasive way? And I don't think that is really a fundamental part of the right definition of consciousness. I think that's a bit of a misnomer. I think consciousness is inherently linked to the ability to suffer and to experience pain. And therefore I think that there's very good reason to believe that for a long time to come that will be contained to the human or the biological experience, let's say in general, because we have a reward system, a learning system which is inherently connected to the external world. And we, you know, learn likes and dislikes when our pain system is triggered. And that's basically how we form representations which we use for decision making, from fight or flight all the way through to our prefrontal cortex. So I think that's a very, very important distinction. And I think it helps to set us apart from the silicon-based learning systems that we have today.
