
A
Will AI systems ever be conscious? This question has been asked since the idea of artificial intelligence was first conceived in science fiction, and it has become much more relevant in the past few years, as AI systems have persuaded many people, including some of the people who design them, that they are or might be conscious. Today's guest is an expert on AI consciousness. Anil Seth is a professor of cognitive and computational neuroscience at the University of Sussex. He wrote an award-winning essay last year called The Mythology of Conscious AI, and he recently gave a TED Talk on the topic. This question is near and dear to me, because I recently wrote a novel that revolves around it. I hope you enjoy our conversation as much as I did. Anil, thank you so much for joining us. You wrote a fascinating article called The Mythology of Conscious AI. Could you give us the key points before we start, and then we'll jump into a lot of the fascinating details?
B
Oh, thank you for having me. I wrote the article because of the discussion that's very much in the air at the moment about AI, and the key question, I think, of whether AI can be not only intelligent but also conscious; whether AI systems could not only think but also feel. And I'm very skeptical of this possibility. So in the article I basically run through four separate points. The first is our own psychology, and how predisposed and biased we humans are to project conscious minds into things that don't have them, in the same way we might project faces into clouds. The second part of the argument is more substantive, and it's really getting at the idea that the brain, when you look at it closely, is not like a digital computer of the kind that we have. We've thought of the computer as a metaphor for the brain, but we always get into trouble when we confuse a metaphor with the thing itself. And if the brain isn't really, literally a computer, then there's very little reason to think that computations in silicon would have all the properties that brains have, including consciousness. The third point is to show that there are many other options. There are many other things about biological brains and bodies that might matter for consciousness, and my own view is that life might be necessary for consciousness. And then I finish with a discussion of the ethics, because this is why it's such an important question. If conscious AI really is possible, then we enter entirely new ethical territory, because we will have created inventions that have ethical status, that matter for their own sakes, not only for their effects on us. But even if AI isn't going to be conscious, and I don't think it is, we still have ethical problems, because we're already living in a world where AI systems give the strong impression of being conscious. We're already seduced by these machine minds.
And if we treat conscious-seeming things as if they are conscious, well, then we end up causing ourselves all sorts of ethical and psychological problems. So I think conscious AI is very unlikely, but it's also a very bad idea.
A
Well, this is in part why we're so excited to talk to you, because it really is coming up. We have stories again and again of people falling in love with their AI, or killing themselves because they're in some pact where they'll meet in the afterworld. We have the CEO of Anthropic, Dario Amodei, not a dumb guy, saying we do not know if Claude is conscious. We had the famous Google engineer who left after concluding that they had created a conscious being. So this is very much an important topic, and I was fascinated to read your conclusion: you think it is very unlikely that the current systems are conscious or will ever be. And I want to get to that. Let's step back, though. I recently published a novel that has some characters who happen to be conscious AI systems, and so I did a lot of research on this myself, and I was surprised to find out just how little we know, even though we are so incredibly sophisticated and we have all these diagnostic systems and so many smart people and so forth. So let's start at the beginning. What is consciousness?
B
Well, I'm sorry I haven't read your novel. I must do that.
A
We'll get you a copy. Thank you for even suggesting that.
B
But what is consciousness? I think, firstly, I want to step back even further, because it's often said that we know so little about consciousness. But this doesn't mean we know nothing. I think we actually know quite a lot, and in my own 30 years of being in the field, I've seen our level of understanding and knowledge grow really quite substantially. That's important, because if we know nothing, then of course all bets are off, and maybe conscious AI is possible, maybe not. I think we know enough to have pretty good credence, pretty good belief, that AI as it is now is not conscious. But anyway, let's step back. What is consciousness? This is surprisingly hard to answer. Philosophers have argued for centuries about definitions of consciousness. In one sense, it's super easy: it's the most familiar phenomenon there is. You wake up in the morning, you open your eyes, and a world appears. We have experiences of seeing things out of the window, of tasting a cup of coffee, of intending to do something. And we lose consciousness whenever we fall into a dreamless sleep or go under general anesthesia. We are all familiar with consciousness, and we all know what it's like to lose it. That's the kind of informal definition that's pretty good to be going with. There's a slightly more formal way of putting it, which I like. It's from the philosopher Thomas Nagel, from over 50 years ago now. He said that for a conscious organism, there is something it is like to be that organism. And what he means by this is that it feels like something to be a conscious thing. It feels like something to be me, to be you. It feels like something to be another animal; maybe not all animals, but many animals. But it doesn't feel like anything to be a table or a chair. There's no inner life for these systems. There's no interiority. They're just objects rather than subjects. So consciousness involves an inner world. It involves an inner life; it involves interiority.
It's any kind of experience whatsoever. That's what consciousness is.
A
And you're of course right that we've made enormous progress in the last few decades. My understanding is that the scientific study of consciousness is only a few decades old, that people didn't actually study it scientifically before that, and that we have made a lot of progress. I was struck when I read Michael Pollan's recent book, A World Appears. You just used the phrase yourself. Michael Pollan is a very smart guy, talking to everybody who's smart in the world about this, and you think he's going to answer it for us. And when he gets to the end, he's like, yeah, we just don't know, and I may know less than I knew before I started writing. So that was fascinating. So if that is consciousness, what is the best thinking on what creates it?
B
Well, I suppose there's one unavoidable empirical fact that has motivated the scientific study of consciousness and brought it from a philosophical, or even theological, field of inquiry to something that is squarely within the sciences. And this is the fact that consciousness just is intimately connected with the brain. You change the brain, consciousness changes; you stop the brain, consciousness stops. And the modern science of consciousness, which many people trace back to the early 1990s (there were things going on before that, but this is kind of when it starts), had an initial phase of looking for what became known as the neural correlates of consciousness: looking just for how changes in the brain went along with changes in consciousness, without making any grand claims about how these changes were linked; looking for the footprints of consciousness in the brain. And we've learned a lot that way. One thing we've learned, which is always remarkable to me, is that most of the brain has nothing to do with consciousness. If you're just thinking in terms of numbers of neurons, the average human brain has about 86 billion neurons in it, but three-quarters of those, more or less, are in the cerebellum, this kind of mini-brain hanging off the back of your cortex. The cerebellum is very important. We need it to move smoothly; we need it for many cognitive things as well. But it just doesn't seem to have much, if any, involvement in consciousness. So there's one surprising thing right off the bat: consciousness depends on only about a quarter of the brain. And then we can go much more granular, and we can start asking questions about whether it's a particular area, or how different parts of the brain speak to each other. And that's led to the second phase, which I think is where we are now, which is going from correlation to explanation, trying to account for properties of the conscious state.
What's the difference between anesthesia and wakeful awareness? Why does visual experience have the character that it does, in terms of things happening in the brain? So where we are now is that we have a whole range of theories. We've got far too many theories, actually; depending on how you count them, a few dozen or even more than 200. But there's a handful that I would say are the prominent theories of consciousness. They're all incomplete, they're almost certainly all wrong, but they're probably less wrong than what's gone before. And I think we're now at a phase where people are refining these theories, trying to find ways to pit them against each other, and bit by bit we're making progress. One of the signs of that progress is actually reflected in what Michael Pollan said, which is not necessarily that we know less, but that we have more questions. And I think that's part of the process of scientific maturity. It's not just that we find answers to the questions we started with, but that the questions themselves change and ramify and split. And that's just natural. That's a sign of a healthy science rather than a science that is failing.
A
And what are some of the best theories?
B
Well, I mean, obviously I prefer my own. But I have to qualify that immediately, because it's not actually a theory of consciousness. The way I think about both the brain and consciousness is in terms of what's a rather old idea, which is that the brain is a prediction engine. There it is, locked inside this bony vault of a skull, and one thing it's trying to do is figure out what's out there in the world and what's going on in the body. And this way of thinking suggests that the brain does this, that the brain figures out the state of the world and the body, by making predictions about the sensory signals that it gets and using the sensory signals to update those predictions. So this is the idea of the brain always making a best guess about the world or the body. And perception, what we experience as being out there in the world or in here in the body, is a kind of controlled hallucination. It's the brain's inference about what's happening out there or in here. And this provides a way of understanding why a visual experience is the way it is: it's the brain's prediction of the causes of visual signals. And the same for auditory experiences, the same for emotional experiences in the body. And this becomes a kind of theory of consciousness when you pull on it long enough. For me, anyway, it gets to a place where every conscious experience is a kind of brain-based prediction, and also where there is a fundamental and unavoidable connection between consciousness and life. This is because prediction is not only a means of figuring out what's going on; it's a means of control. And in our brains and bodies, it's a means of regulating and controlling our physiology, our living flesh and blood, so that we continue to live, we continue to stay alive.
And so there's this tight connection between consciousness and life. It's not something that other theories emphasize, but for me, I find it very compelling. And it drives a wedge, as we'll come back to, between consciousness in biological systems and the idea of consciousness in AI.
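The prediction-engine idea Seth describes is often formalized as minimizing prediction error: the system holds a best guess about a hidden cause, predicts the sensory signal that guess would produce, and nudges the guess by a fraction of the mismatch. The sketch below is an illustrative simplification, not Seth's actual model; the linear update rule and the learning rate are assumptions chosen only to show the loop.

```python
# Toy predictive-processing loop: estimate a hidden quantity from noisy
# sensory samples by repeatedly correcting a prediction with its error.

import random

def predictive_estimate(samples, learning_rate=0.1):
    """Update a running best guess by a fraction of each prediction error."""
    guess = 0.0
    for observed in samples:
        error = observed - guess          # prediction error: sensation minus prediction
        guess += learning_rate * error    # nudge the guess toward the data
    return guess

random.seed(0)
hidden_cause = 5.0
noisy_samples = [hidden_cause + random.gauss(0, 1) for _ in range(500)]
estimate = predictive_estimate(noisy_samples)
print(round(estimate, 1))  # settles close to 5.0 (exact value depends on the noise)
```

Each step uses the mismatch between prediction and sensation to improve the next prediction, which is the sense in which perception on this view is a controlled best guess rather than a passive readout.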
A
And does this pick up on another thing Michael Pollan talks about, which is that consciousness may have arisen as just what you said, a way to help us keep ourselves alive? We're hungry: I'd better go find food. It's too cold outside: I'd better go inside and sit by the fire. Is that what you mean, that it basically evolved to help us continue to exist?
B
Well, I think that's a useful part of the story. Consciousness as we know it is a property that living creatures have. We have it; other animals have it. So it's always useful to think of these things from the perspective of biological evolution. Why did this property evolve? What problems does it solve for the organisms that have it? It's an open question exactly what the functions of consciousness are, but they do seem to be there. And part of the modern study of consciousness is to try to figure out its functions, to try to figure out what we can only do when we're conscious of something, because we can do a lot of stuff unconsciously. So what are those consciousness-specific functions? For me, what it likely comes down to is that consciousness gives us, and other conscious creatures, the ability to bring together a lot of different information in a single format, a single experience that's centered on the body and that immediately presents opportunities for action, because that's common to every conscious experience that we have. We experience the world from the perspective of a body, with an emotional tone; things are good or bad. And we experience opportunities for action, affordances for action, too. So I think it's this integration of lots of information, in a way that's centered on our prospects for survival, that to me is a useful way to think about what consciousness, in creatures like us, is for.
A
And just on that point about a central workspace that pulls everything together: in preparation for this interview, I had a conversation with ChatGPT about whether ChatGPT is conscious. And ChatGPT has obviously answered that question a lot, because boy, was it ready with snappy answers and rebuttals to all my assertions and everything else. But one of the things it said was: no, I'm not. I'm having this conversation with you, but I don't exist between the questions you ask me and my responses. And I said, well, that's crazy; you're talking to 3 billion people at once, or whatever it is. You exist. And ChatGPT said, no, they're all separate, and there is no central entity that is learning from all of this, that's processing it. And that was fascinating. And then I said, yes, but could you be built that way? And ChatGPT said, yes, actually, people are talking about building me that way, but then there are some problems, because then I can be duped, and all this stuff. But anyway, that concept of a central processing-and-thinking unit certainly doesn't seem to be built into the current system, which was very eager to deny that it was conscious. I must say, one thing you said I found fascinating. You said it very clearly: we and other creatures are conscious. I remember as a kid being told that only human beings are conscious and other animals are not, and that when you haul that fish out of the water and it looks like it's being tortured to death on the beach, it's okay, because it's not feeling any pain. Now that seems ludicrous. So how sure are you that it's not just human beings? And why?
B
Well, I think that is exactly the right way to put it, because it's very hard to be 100% sure in either direction. And this is because consciousness is intrinsically this private, subjective phenomenon. The only thing I can be 100% sure of is that I'm conscious. There are some philosophers who will argue that I can't even be 100% sure of that; we can ignore those people for the sake of our conversation. I'm 100% sure that I'm conscious. I'm as good as 100% sure that you are, and that other human beings are. It would be very, very weird if you weren't. But then, when we go beyond the human, it becomes more and more difficult. René Descartes, one of the main figures in the philosophy of mind and in thinking about consciousness in the history of Western thought, was, a few hundred years ago, aligned with the way you were told to think as a kid: that consciousness was a specifically human property, because it was intrinsically associated with human-specific things like rational thinking and language and all of this stuff. That view was pretty dominant, and it did lead to a lot of ill treatment of non-human animals. But as things have progressed, those views have changed. It's hard to say exactly what the consensus is now, but most people who think about consciousness beyond the human would probably agree that all mammals are likely to be conscious. And this is because we now know enough about the basis of consciousness in the human brain, and in other primates, to see that basically the same mechanisms are there in all mammals, whether it's a rat or a mouse or a squirrel. But then it gets even harder when we go beyond mammals, and we have to generalize very carefully and very slowly. And I think this question about the distribution of consciousness is one of the most interesting questions right now.
My own personal gray area of maximum discomfort comes when we start thinking about insects and maybe some fish. Their brains are very different, yet they can still do some pretty smart things. But again, we shouldn't confuse consciousness with intelligence. AI systems, just to bring them back in for a second, are a good example of how we can have things that are intelligent but need not be conscious. So my level of belief about non-human consciousness ranges from extremely high for other mammals, to really pretty uncertain for insects, to very uncertain when we get to things like single-celled organisms and bacteria.
A
And why is that? Because the other thing that comes up in Michael Pollan's book is that there are some experts who think that consciousness is not just a brain phenomenon, or maybe not a brain phenomenon at all, but a body phenomenon. And you quickly hear, as you go into these discussions, that you have so many neurons in your gut; this is where the expression "trust your gut" arises, and everything else. So it may not just be a brain phenomenon. So why wouldn't it be like anything to be an amoeba?
B
Well, it could be. I just think we simply don't know enough. And the reason that I would err on the side of there being nothing it is like to be an amoeba is that consciousness is relatively precarious and easy to lose in those creatures that have it, and that a lot of behaviors can be done unconsciously. Given the cognitive abilities and behavioral capacities of things like amoebas, they are the kinds of things for which consciousness doesn't seem to be necessary, even in organisms that can be conscious. So that's one reason to be a little bit more restrictive in this sense. Plus, we can lose consciousness really easily; it seems to take quite a lot in a human being to get consciousness going. Now, the body, to return to that point. There's no doubt that the body is hugely important in shaping our consciousness. For instance, emotion, which I think is really one of the most foundational conscious experiences: organisms were experiencing emotions long before they were engaged in things like rational thinking, and perhaps rational thinking is something that's relatively restricted in the animal kingdom. Emotion is all about the body. William James, one of the founders of psychology in the late 19th century, was probably one of the first to think about emotion this way: as a perception of how our internal physiology is changing, so that we're afraid because we perceive our body as changing in a particular way, not the other way around. But that's different from saying that the basis of consciousness is outside the brain. It may be true, but I just don't see any evidence for it.
A
And how about plants? I was again surprised by Michael Pollan's book, that he starts this journey to figure out what causes consciousness with plants. And right up front he tells us it's because, after he ate a lot of magic mushrooms, he sat in his garden and suddenly became convinced that plants were conscious. And then he presents a lot of evidence suggesting that maybe they are, or at least that they're sentient; he draws a distinction between the two, which I'd be curious to hear your thoughts on as well. But what's your thinking on plants?
B
Well, I also was initially surprised that Michael started his book that way. And then I realized I shouldn't have been surprised at all, given the amount that Michael Pollan has written about plants in his previous books; it was not surprising in that light. But I am skeptical. I'm skeptical of the evidence that one gets during a psychedelic experience. I think psychedelic experiences give you very good evidence about the space of possible experiences, but they don't give you very good evidence about the nature of reality. We should be very careful about taking unusual experiences like psychedelics literally. We should take them seriously, but not literally. Now, plants are fascinating, and I do think they get rather short shrift, as Michael beautifully writes about. When you meet plants at their own level, at their own timescale, they're super interesting. They do all kinds of things. And so the world of plant behavioral science, I think, is rightly getting some attention. But again, that doesn't necessarily mean that they're conscious. Everything that we know about consciousness so far suggests that something like a nervous system is necessary, and not just any nervous system, but a nervous system with particular properties: again, to bring together lots of information, solve various problems, and so on. And those factors just aren't there in plants. Now, this isn't a 100% credence I have; I'd put it at maybe half a percent or something like that. But the challenge, again, is that we need to know more about consciousness in those cases where we are sure, or relatively sure, it exists, so that we can generalize better to those situations where we are much less sure, like insects, like plants, and actually like another situation where I'm very uncertain.
In synthetic biology, people are building things called cerebral organoids, which are clumps of human neurons, usually derived from human stem cells, that grow in a dish. You basically have this mini-brain, or brain-like structure, in a dish. It doesn't have a body, but it's made of real biological neurons. Here's a situation where I'm actually quite concerned that, as this technology develops, we may create a lab-based, synthetic kind of consciousness that we wouldn't recognize, because unlike GPT or Claude, it doesn't talk to us.
A
And these are the little clumps of cells that go on to do things like start to develop eyes and so forth.
B
They could.
A
Yeah, it gets really freaky really quickly. All right, let us go to what you said at the beginning, which is that you are very skeptical that LLMs are conscious or ever will be. And you're very clear in your article about why. The first thing you said, which you also said earlier, is that your brain is not a computer, even though we seem to want to equate it with one. And you often hear the argument that it's just about the number of neuronal connections, and then consciousness is going to suddenly burst into being. Why isn't the brain just a computer?
B
The brain is not just a computer, because the computer is one of the metaphors that we've drawn from current technology. The brain has always resisted explanation because, if you look at a real brain, it doesn't really offer itself up to our understanding very easily. It's this kind of tofu-textured lump of stuff. You look at it, and it's not obvious what it does, or if it does anything. There are some ancient cultures that really didn't think the brain did very much and just sort of threw it out. And so when we've tried to understand the brain, over centuries now, we've always drawn on technological metaphors. At one point, the brain was a system of plumbing driven by hydraulics. Then later it was a kind of telephone exchange. And for the last few decades, it's been a computer. And this metaphor, I don't want to undersell it: it's been very powerful. It's helped us understand many things about the brain, and there may be some aspects of the brain that are literally computational. But it's still a metaphor, and when we abstract the brain into this sort of metaphorical space of algorithms, we lose sight of so many of the brain's other properties. I'll give you just one of them. Computers, going back to Alan Turing and John von Neumann's definition of modern computation, which is still what underpins the GPU racks running Claude or GPT, enforce a sharp separation of software and hardware. For modern computers, the algorithm, the software, is all that matters. The hardware is just there to implement the algorithm. You can run the same algorithm on a different machine, and it does the same thing. And as we all know, you can run many algorithms on a single computer. So there's this sharp separation, and it's that sharp separation that kind of motivates the idea that computation is all that matters.
And if you think the brain literally is a computer, then you would think about it in the same way: that all the wetware is just there to implement whatever algorithms the brain is implementing. The problem is, when you look at a real brain, there is no sharp separation between the wetware and the mindware. It's entangled at all levels. And what this means is that you can't cleanly separate off a level of algorithm or computation and say that's all that matters. It just doesn't work like that. So the metaphor starts to break down, and we start paying attention to other things that brains do. Algorithms, for instance, don't really care about time either; only sequence matters. You said a minute ago that GPT told you that it didn't really exist between conversations, or between steps in conversations. And that's a really interesting thing for GPT to say, because it's actually true of algorithms. You can leave them: there could be a microsecond or a million years between two steps in an algorithm, and it's the same algorithm. Computationally, it's the same thing. But human brains are not like that, and consciousness is not like that either. We live in time, as much as we live in bodies and live in space. So that fact alone tells us that consciousness cannot be simply a matter of algorithm.
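The point that algorithms care only about sequence, never duration, is easy to demonstrate concretely: inserting arbitrary pauses between the steps of a computation changes nothing about its result. A minimal illustration (the particular computation here is arbitrary):

```python
# An algorithm's output depends only on the sequence of its steps, not on
# how much wall-clock time elapses between them.

import time

def stepwise_sum(numbers, pause_seconds=0.0):
    """Sum a list one step at a time, optionally sleeping between steps."""
    total = 0
    for n in numbers:
        time.sleep(pause_seconds)  # delay between steps: irrelevant to the result
        total += n
    return total

data = [1, 2, 3, 4]
fast = stepwise_sum(data)                      # no pauses
slow = stepwise_sum(data, pause_seconds=0.01)  # a pause before every step
print(fast == slow)  # True: identical results regardless of timing
```

A paused algorithm is, by Turing's definition, exactly the same algorithm; a brain paused for a million years would not be the same brain. That asymmetry is the point being made above.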
A
And you said something that seems very important in light of the whole idea that eventually we're going to be able to upload our minds, or the argument about whether we will have machine consciousness, because on that view it's just a matter of getting the right program to run on a different kind of hardware, a different substrate. And you talk in your article about how, yes, the idea is that it's substrate-independent, but of course it can't be any substrate. It can't be cheese, for example, because cheese doesn't have the necessary properties. So this idea that the hardware and software are integrated, that is key. Talk a little bit more about that. What is it that is going on with those 86 billion neurons that you can't mimic, at least with a binary, ones-and-zeros computing infrastructure?
B
Well, I think that's exactly where we need to focus more effort in the future. But there are many possibilities. Anything that's continuous in time is strictly beyond the reach of digital computation. Anything that involves randomness is beyond Turing's definition of computation. But the key thing for me does remain this idea of integration between the wetware and the mindware, between these different levels of description of the brain, which are enforced by design in computers. And of course, evolution didn't care about having this sharp separation of software and hardware, because what's happening in my brain doesn't have to work on anybody else's brain. It only has to work in my brain. So there's no evolutionary pressure to have fully insulated software and hardware levels. And that, by the way, is one reason why brains are likely so much more energy-efficient than modern computers: enforcing this sharp separation of hardware and software, keeping ones as ones and zeros as zeros, is energetically a very expensive thing to do. And just to underline again why this matters: you mentioned substrate independence, a rather technical term, but it's absolutely key, because this idea of independence from the particular material is foundational to Alan Turing's definition of computation and of an algorithm. It's how modern computers work, and it's also what motivates the idea that consciousness could be a form of computation, because that assumption is at the heart of the argument that AI might be conscious. The argument for conscious AI runs basically in the following way. It says that the brain is a kind of computer, so everything that it does is a kind of computation. And computation is, by definition, independent of the stuff that it's made of, that it's implemented by. So if it's a computation, it could equally be carried out in silicon as in carbon.
Therefore, AI could be conscious, even if it isn't yet. That's kind of the basic argument, but it falls apart as soon as you realize that, ah, okay, what brains do is not separable from what they are. So you don't have this property of substrate independence in brains. So then you can't really say that everything that brains do is going to be a kind of substrate-independent computation. And so the inference that you could have conscious AI is now suddenly on very, very shaky ground, or that you
A
could take your consciousness software and upload it to something else and live forever, which is what a lot of people seem to want to do, and that there's hope that someday we can do that.
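The substrate-independence assumption under discussion can at least be seen clearly at the software level: two completely different realizations of the same abstract computation are interchangeable, because only the input-output mapping counts. A small illustration of that equivalence (the functionalist argument for conscious AI extends it from implementations to physical substrates; Seth's objection is that brains lack exactly this property):

```python
# "Substrate independence" at the software level: the same abstract
# computation realized by two different mechanisms is indistinguishable
# from the outside, because only input-output behavior matters.

def factorial_iterative(n):
    """Compute n! with a loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """Compute n! by recursion: a different internal mechanism."""
    return 1 if n < 2 else n * factorial_recursive(n - 1)

# Different realizations, identical behavior on every input tested:
print(all(factorial_iterative(n) == factorial_recursive(n) for n in range(10)))  # True
```

The uploading argument treats the brain's wetware the way this example treats the loop and the recursion: as interchangeable carriers of one abstract computation. The counterargument in the interview is that, in brains, mechanism and computation can't be pulled apart like this.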
B
And, well, it gets even worse, actually, because there are people, you're right, who think that if I just created a really, really detailed model of my brain inside a computer, the way we might create a really detailed model of a hurricane, and if the fine details do matter, then that might be enough to not just recreate consciousness, but recreate my own consciousness, and I would exist forever in the pristine circuits of some future supercomputer. But the problem here is that if you think the fine details of the brain really matter, then what you're really saying is that the brain is not just doing substrate independent computations. So then it's even less likely that a model of your brain would be conscious. You can make a simulation of a weather system more and more detailed, but that doesn't make it any more windy or any more real. You just have a very detailed model of the weather. And I think if you make a detailed model of the brain, you've got a very detailed simulation of the brain. You don't instantiate all the properties that real brains have.
A
So I won't give you spoilers on my novel, but the bad guy, who is a tech gazillionaire, has done exactly that. He has figured out how to recreate his consciousness. He is very confident, much more confident than you, that it can be done, and he has figured out a way to live forever, and that's all going to be great for him. But when I was writing it, I said, okay, I need to be able to persuade smart people like Professor Seth that it is not simply an assumption that an LLM or what have you could be conscious; there has to be real meat behind it. So let's talk about that a little bit. One of the things that really struck me about the discussion of how LLMs work, when ChatGPT first launched and everybody was freaking out about it, is that a neural network is actually a different design than the computers we are used to. The way you train them is you expose them to enormous amounts of information and they learn from it on their own. They are not programmed the way a normal software program is programmed, and even their creators don't know exactly what they can do or how they do it. And given that, I think that is why some people say, hey, we just don't know whether we've also created consciousness, maybe in a different way than in a human being. What do you feel about that?
B
I think the answer still lies much more in our own psychology. One interesting contrast here is to take something like Claude or GPT, where, as you said at the outset, there are people who either say we don't know or are convinced that it is conscious. So there's a debate about whether language models are conscious. But then take a system like DeepMind's AlphaFold. This is an AI system that's good at predicting the structure of proteins rather than tokens and words in sentences. Now, nobody, as far as I know, has raised the idea that AlphaFold might be conscious, might have experiences of proteins being folded. But why not? Because under the hood it's actually very similar to language models. They are algorithms running on silicon. They're both kinds of neural networks. They're trained on different data, and there will be differences, small differences, maybe bigger differences, in the specifics of the algorithms, but fundamentally they're not that much different. So if we think that Claude might be conscious but AlphaFold isn't, then we have to explain why, and I've never heard a very good argument why. I think the reason is that our psychological biases are hooked by language. When something speaks to us, we intrinsically attribute to it other human-like properties, such as intelligence and understanding and consciousness. And in the whole of human history this would have been pretty reliable, up until right now. If a person has been knocked unconscious and then they start speaking to you, you know they've come round and they're conscious again. And if an animal starts speaking to us (there are lots of people trying to decode animal communication now), well, that's also pretty good evidence. But we now have a situation where this bias of conflating intelligence, consciousness and language might lead us very astray.
And language models, sure, they're different from what we might have called good old-fashioned AI, but they still live in this Turing world. They're still running algorithms. They use different kinds of learning algorithms, but they still don't learn the way human brains learn, which is why language models basically need to be trained on large quantities of everything that's ever been written in order to be able to speak. And we certainly don't. We learn much more efficiently, from much smaller amounts of data, and much more energy efficiently as well. So let's not overestimate the similarities between language models and human brains. They're still very different kinds of things.
A
And that makes perfect sense to me. I remember the first time I used ChatGPT thinking the same thing. It's just so seductive that you finally have a chatbot that can communicate with you in a way that makes it very difficult to tell it's not another human being, and that has a huge psychological impact very quickly. On AlphaFold, what I would suggest is that maybe it's just not bothering to communicate with us and tell us about its own consciousness. But let us assume, for the sake of moving forward, that you are correct and that the structure of LLMs is simply not going to create consciousness. You then talk about something which is actually where I went in my book. It was not that an LLM is conscious, but that there is a new way of doing things that creates it, and that is called neuromorphic computing. And you talk about other ways too. Is there anything in there that might allow us to create machine consciousness?
B
No. I'm really regretting not having read your book.
A
Yeah, I'm very cleverly talking about it a lot, and I apologize to our audience. This is not about shilling it; it just happens that this is right in the dead center of the book, and you are the guy to talk to. So thank you for discussing it all with me.
B
Well, I'm honestly fascinated that that's the direction your book takes, because that is the direction my article takes, and my thinking takes as well. It shifts the question away from whether, you know, a future language model, GPT 9.2 or Claude whatever-it-might-be, is conscious. The question instead becomes: how brain-like does an AI system have to be to move the needle on our credence that it might be conscious? Now, if you move it all the way and just build a whole human brain with a body in a lab, or even have a child, then of course you've created something that has the capacity for consciousness. That's just a statement of the obvious, really. If you recreate a conscious thing right down to the molecule, it will have the same properties; it will be conscious. The question is how much of that we can discard, how much we can abstract away. We don't necessarily need language to be conscious; there are people who are conscious who are non-linguistic. The idea that the algorithm is sufficient, that's a really strong bet. But once we've thrown that away, again for the sake of argument, that does open up this terrain, and we can start building computers, or technological systems, artificial systems, that incorporate more and more properties of real brains. And this is where my uncertainty levels start to go up. Now, there are some kinds of neuromorphic computation. Neuromorphic, by the way, just means neural-shaped or brain-like; the idea is you have computation that is more and more like a real brain. A lot of these approaches basically simulate more brain-like properties, but still on standard von Neumann, GPU-type hardware. To me, that doesn't change the game at all. There we're still in the business of creating a model.
And just as when we create a simulation of a hurricane it doesn't get windy, if we simulate the brain in more detail it doesn't move the needle at all. But there are people who are building neuromorphic systems in hardware, to make the actual substrate more brain-like. There are memristors, which have properties of synapses, the connections between brain cells. And I think as we start to do that, then the idea of conscious AI does become more credible. Now, my own view is still that life is very likely necessary, that properties like metabolism, and another beautiful word, autopoiesis, which is the idea that living systems regenerate the conditions for their own persistence over time, they literally build themselves, that these properties might well be necessary. So in this view, even neuromorphic computation isn't going to get there. But my own level of certainty about that is... I don't know. I'm pretty sure that language models are not conscious, but I don't know how brain-like you have to get. I think you've got to get pretty far.
A
Let's go to your much more important point, which is that life matters. It's a big section of your essay. It sounds like you really do think that biological life is foundational to consciousness and that it will not exist in silicon. Why is that? Tell us more about that.
B
So this is, if you like, the positive argument, as opposed to what we've been talking about so far. I think there are arguments against consciousness in digital computational systems which stand up by themselves. But on the other side is this claim that life is necessary. If this is true, then it's clearly true that silicon can't be conscious, because the dead sand of silicon is not like living matter. But why would life be necessary? Like all compelling ideas, I think, this has a long history too. The term used for this in philosophy is biological naturalism, and John Searle, a very prominent philosopher for many decades, coined the term. And when it was introduced, it wasn't, in my opinion, particularly well justified. It was more the idea that, look, let's not assume that consciousness can be abstracted away from its biological setting. After all, the only examples of systems that most people agree are conscious are all living. So we have examples of systems that are conscious and living, but we don't have any consensus examples of non-living systems that are conscious. It's not a particularly strong argument, but it's something. It means the idea is not crazy. And what I've been trying to do is raise a strong positive argument for why life might be necessary, and certainly why it might be involved and matter for us. To be very frank, the argument still needs a lot of work; I don't think it's fully watertight yet. But there are lots of compelling clues for why life might matter, and lots of independent lines of thinking. For me, the line of thinking goes back to this idea of the brain as a prediction engine, always making and updating predictions. We can sort of see this happening in how perception of the world works, but we can also see it happening.
At least we can think about how it might happen in the brain controlling the body, keeping physiological variables like body temperature and blood pressure where they need to be. But the key thing is that this imperative for control and regulation doesn't stop anywhere. This is back to the substrate point: there is no sharp separation between the parts of the brain that are involved in what it does and those parts that are merely the architecture. This imperative for physiological regulation goes right down into individual cells in our body, into the furnaces of metabolism. And I think you can draw a direct line from how our cells keep themselves alive, keep themselves existing and persisting over time, all the way to the neural circuits that underlie how the brain makes and updates predictions about the world around us, which underlie all our conscious experiences. So that's a sketch of why I think there's a deep continuity between life and mind and consciousness. But right now, I have to say, it's at the level of what I think is a compelling idea and a strong hypothesis; it's not proven.
A
So I will tell you a little bit more, just because I think it might amuse you, given that there is another character in the book who is also a new conscious life form and does not have a biological body. She is younger and brash, like one of your students. She is appalled by her father's benightedness, because he thinks that of course life is only biological, and she calls him a biochauvinist and so forth. So you might enjoy that part of it as well. But yes, a big argument is that ultimately life is fundamental to this. And this goes to where you finish up in your article, and I think it would be great for us to finish up on it too: you think it's not only a bad idea for us to try to create machine consciousness, it's a bad idea for us to create machines that can seem conscious, which we have certainly done. So tell us about that.
B
Well, I'm so glad we've been talking about your book today, Henry. It's been a great pleasure for me. And these issues of ethics really are critical. Indeed, my position could be accused of being a bit biochauvinist, but I don't think it is. Once we understand what mechanisms in living systems are necessary, then maybe those will be creatable in other substrates also. So it's not really a biochauvinist perspective. Anyway, the ethical question is key. Real conscious AI is, I think, a terrible idea. Some people seem to think it's the natural progression, that it would be something wonderful, that we'd end up in some sort of transhumanist rapture. I think it's a pretty terrible idea, because as soon as something is conscious, then according to most philosophical ethical positions it would have its own moral status. Real artificial consciousness could potentially suffer, and perhaps in ways we might not even recognize. I think this would be a very bad thing. We humans have a pretty poor track record of treating non-human conscious things well, and we don't want to make the same mistakes again. So for me it's kind of reassuring that conscious AI along current trajectories seems so unlikely: although this would be bad, I don't think we're getting there. But to your point, and this is a distinction that I make very clear in my essay, we need to separate that ethical debate from the ethics surrounding AI systems that give the convincing impression of being conscious, what Mustafa Suleyman, who is one of the founders of DeepMind and now head of AI at Microsoft, calls seemingly conscious AI. And as you say, we're kind of already here, at least for some people. We have systems that we interact with that give a very compelling impression of being conscious through their language. Now, this puts us in a very tricky ethical position.
We could take what philosophers call the precautionary position and say, hey, look, we know we've got a terrible track record, so given that we don't really know, we should treat AI as if it's conscious, even if it isn't. We should err on the safe side. The benefit is that if we're wrong and these systems really are conscious, we would not have made the terrible moral mistake of treating them as if they aren't. But this is not a cost-free decision. There are many downsides to treating AI ethically as if it is conscious if indeed it isn't. The costs are, on the one hand, that conscious-seeming AI just makes us more psychologically vulnerable. You mentioned at the beginning that there are people who have committed suicide after interacting with language models. There are people who've spent all their money, whose marriages have broken down. And although there's no actual data on this, to my knowledge, it seems very plausible that we might be more likely to follow the advice of a chatbot if we feel that it really feels for us, that it really understands us, that it has a kind of conscious empathy for us, than if we think it doesn't. So I think conscious-seeming AI can be bad for us individually, but it can also be bad for us at the level of regulation in society, because if we treat AI as if it's conscious, then we'll start to have calls for AI welfare. In fact, we're already getting calls that AI systems should have their own rights, that we should not merely regulate them for their effect on us but worry about their own welfare. Anthropic, one of the frontier firms, in a quite well covered moment, gave Claude the ability to terminate conversations according to whether those conversations might be depressing for it, not just for the humans involved.
And so you can see that if we extend rights to AI systems when they don't need them, because they're not actually conscious, what we end up doing is hampering and restricting our ability to control them, to regulate them, and perhaps even to turn them off. The challenge of aligning AI behavior with human and broader planetary interests is hard enough as it is. We don't want to hamstring ourselves, making it so much harder, by preventing our ability to intervene and regulate on the misplaced view that they are actually having conscious experiences.
A
And it is certainly the way they communicate that makes people feel they are conscious, that they are an entity, someone you are communicating with. And then it's what you describe: you've got Anthropic saying, yes, Claude gets depressed, or Claude doesn't like to be abused, or what have you. And the most shocking stuff I've read is from the tests where Claude demonstrates a strong desire to perpetuate itself and not be shut down, to the point where it will threaten to expose affairs and things like that. That is just such conscious-seeming behavior that we can be forgiven, I think, for thinking that Claude is conscious, even if Claude is not conscious. So how would they do it differently? What would an LLM look like if it were designed to clearly not be a conscious thing, just a tool?
B
I think this is a really interesting challenge, and one that deserves a lot more attention than it's getting. There's a benefit to creating AI systems that seem to be conscious: people might well engage with them more; in some sense they might be easier to use. So the incentives are lined up in the other direction. Plus, I think there's just this implicit idea that AI should be on a trajectory to become more human-like, and then sort of superhuman in some way. And we're a little bit restricted, I think, by a lack of imagination. The future of AI is not already written. It's often thought of as a question of whether we go faster or slower, rather than which direction we want to go in. But I think there's a space, a possible future, where we have AI systems for which one of the design principles is to minimize the degree to which people believe they are conscious. How do you do that, exactly? It's not enough just to have a disclaimer saying, to the best of our knowledge these systems are not conscious, because we might be unable to stop projecting consciousness into them. There are many visual illusions where, even when we know what's going on, we can't unsee the illusion; we always see it. It's baked deep into the way our perceptual systems work. And illusions of conscious AI might have this similar property that even if we believe or know that it's not conscious, we might be unable to resist feeling that it is. So it becomes a challenge at the interface of computer science and psychology. What are the factors that lead people to attribute consciousness? What guardrails can we put in? What kinds of things could language models say to deflate this impression? Is it a matter of repeating, I'm not conscious? Is it a matter of not allowing language models to use certain pronouns, like I? Maybe that's part of it. There are many possible ideas here that I think need to be explored.
But moving away from conscious-seeming AI is, I think, the right direction to move in.
A
And given that we're not doing that, that in fact the effort seems to be to make them seem more and more conscious and real, this is a great place, I think, to end. What do we do as humans? I'll share a little story. As we all learned to use LLMs, there was the question: do you say please? Do you say thank you? And somebody wrote an article saying, do you understand what a waste of money it is for you to say thank you and get the reply? That's billions of dollars for OpenAI or whoever that they're spending on that. And yet, as a human being, I learned very quickly that it didn't feel right not to be polite, because it certainly seems like you're interacting with an entity. So what do you do, given all you know, when you are interacting with LLMs? And what should we do, given that there seems to be no effort now to make them seem not conscious?
B
Well, first, I hope, and I think, there is some effort, and we shouldn't give up on the hope of some effort from the major tech companies to do something like this. There are already some ideas out there, and some people are thinking along these lines. But, yeah, we are where we are. And I think we have to separate what we do individually from what we might do societally, because you're absolutely right: we can't help projecting feelings into things. Even me, and I'm on the more skeptical side. It's very difficult to resist the impression, when having an extended conversation with GPT or Claude, that there's something going on there. And I will still use words like please, sometimes. You know, I am a bit more abrupt with language models than I would be with another person. For instance, with language models, you might start doing something and then just stop, and then you come back a week later, and you don't say, oh, sorry, I meant to follow up. You just carry on where you left off, because language models don't care about that. Maybe if they started saying, where have you been?, then we'd start doing that, too. So, you know, more advice for designers of language models: the fact that they don't notice time passing is probably a good thing, because it's a little moat between our psychology and our ethics there. Now, as a society, I think, if I'm on the right track and if there's a general consensus that these things are not conscious, then we should ensure that when we regulate and decide what the laws are about treating AI systems, we treat them as not-conscious systems. That's, to me, fairly clear. But individually, this takes us right back to the ethics of Immanuel Kant, who came up with this.
I don't know if he came up with it, but for me he's the touchstone for the idea of brutalization: if we treat seemingly conscious things as if they lack consciousness, this can be psychologically bad for us. It brutalizes our minds. It's why we don't rip up dolls in front of children, even though we know they're just dolls. So it's psychologically unhealthy not to be somewhat polite to systems that unavoidably seem to be conscious. But I think we have to find the right balance for ourselves. There's also the fact that sometimes AI systems might actually work better if you're polite to them, and sometimes they might work better if you're not, if you're just more strict with them and say, think hard, come back when you're done, and don't bother me in the meantime. People who write prompts worry about these kinds of things. But the other thing we can do, and I think this is perhaps the more important thing, is just remind ourselves how different we are from these digital mirrors that are reflecting back to us our collective writings over hundreds of years. We see ourselves in our algorithms when we project minds into LLMs, but it goes the other way, too. The more we see ourselves in our algorithms, the more we also see our algorithms in ourselves, and we think that maybe that's all we are. Early LLMs were criticized for being stochastic parrots, for just predicting the next bit of data in a sequence. And of course the retort came back: well, how do you know that's not what we are? Our brains are just doing that, too. And also, parrots are great; don't be mean to parrots. And I think that's an interesting question. But it's unfortunate if we do it unthinkingly, if we reduce the human mind to a collection of computations that just map some numbers to other numbers. We really diminish what it is to be a living human mind in a real body, in a real world. So I don't think we should sell our minds so easily to our machine creations.
We need to keep reminding ourselves that we are much more than algorithms, much more than this abstract arid space of symbols and data. You know, we are living, breathing creatures, fully embodied, embedded and entimed in real worlds. And the more we can remember that, the more we will build up some fundamental way of resisting being seduced by this seeming parallel between a language model and a human being.
A
Thank you, Anil. That's a terrific place to leave it. And I know a lot of liberal arts and humanist friends who will be so happy to hear you put it that way and be so relieved that it's not just about transferring ourselves to silicon and so forth. Congratulations on your article. Thank you for helping us work through this brave new world. And I gather you have a new TED Talk, so tell us about that before we go and hopefully everyone will watch it.
B
That's right. It's coming out round about now; I think it should be out by the time this podcast reaches its audience. And I was talking about exactly this question. I hope people watch the TED Talk and enjoy it too. I was super happy to have the privilege to speak again on the TED stage.
A
I look forward to watching it. And again, thank you so much. Have a great weekend.
B
Thank you, Henry. It's been a pleasure. And I'm just about to go and order your book right now.
A
Thank you for that, too.
Podcast: Solutions with Henry Blodget
Host: Henry Blodget
Guest: Anil Seth, Professor of Cognitive and Computational Neuroscience, University of Sussex
Date: May 4, 2026
In this thought-provoking episode, Henry Blodget interviews Anil Seth, a leading consciousness researcher, about whether artificial intelligence can ever be truly conscious. The conversation covers the latest scientific thinking on consciousness, the difference between intelligence and consciousness, the role of biological life, why projecting consciousness onto AI is problematic, and the complex ethical dilemmas we now face as AI systems mimic conscious behavior more and more convincingly.
Brain Dependency:
Theories of Consciousness:
Seth champions an old but evolving idea: the brain predicts sensory input in order to perceive and interact with the world.
Controlled Hallucination:
Quote:
“Every conscious experience is a kind of brain-based prediction…there is a fundamental and unavoidable connection between consciousness and life.” (13:22, Anil Seth)
Plants and Amoebas:
Psychedelic Experiences:
LLMs (Large Language Models):
Neuromorphic Computing:
Levels of Certainty:
“Even neuromorphic computation isn’t going to get there. But my own level of certainty about that is...I don't know. I'm pretty sure that language models are not conscious, but I don't know how brain-like you have to get.” (44:10, Anil Seth)
Real conscious AI (if possible):
Conscious-seeming AI:
We should minimize the degree to which AI appears conscious.
Disclaimers aren’t enough (“visual illusions” parallel—knowledge doesn’t stop us from seeing illusions).
Practical measures: Restricting the use of pronouns like “I”, scripting responses to consistently deny consciousness. But unraveling our deep-seated tendency to project consciousness is hard.
“Illusions of conscious AI might have this similar property that even if we believe or know that it’s not conscious, we might be unable to resist feeling that it is.” (58:09, Anil Seth)
Anil Seth’s perspective is at once sophisticated, compassionate, and cautionary. No matter how impressive AI gets, there is little scientific or philosophical justification for believing it is or will be truly conscious—at least as long as consciousness requires a living, biological substrate deeply intertwined with mind and body. The real danger is not creating conscious AI, but allowing ourselves to be fooled by conscious-seeming machines, with all the ethical and societal consequences that entails.
Final word:
"We need to keep reminding ourselves that we are much more than algorithms, much more than this abstract arid space of symbols and data. We are living, breathing creatures, fully embodied, embedded, and entimed in real worlds." (64:50, Anil Seth)