
A
From Cafe and the Vox Media Podcast network. Welcome to Stay Tuned. I'm Preet Bharara.
B
Our consciousness today is in some trouble. It's being polluted in certain ways. I've found, talking to people on book tour, that a lot of people feel like their heads are full of things they would rather not be there and that they're being colonized.
A
Welcome to Stay Tuned. I'm Preet Bharara. My guest this week is author Michael Pollan. His books have changed how people think about food, plants and psychedelics. And now he's out with a new book. It's called A World, A Journey into Consciousness. That's coming up. Stay tuned. Support for this podcast comes from Juro. What if your business had someone who could draft, negotiate and manage contracts, but never slept, ate, or took a day off? With AI, it's possible, and with Juro, it's a reality. Juro is the complete AI solution for business contracting, from draft to signature and beyond. Juro gives you conversational access to your contracts and data, plus the connected workflows and integrations you won't find in basic AI review tools. Juro has powered 3 million contracts for a fraction of the cost. Visit Juro.com/Vox for 20% off year one. Long drive ahead? TikTok shows road trip spots, car hacks, travel playlists, best routes, hidden cafes, scenic stops. Drive smarter, explore more. Download TikTok now. Author Michael Pollan takes us on a journey through his new book. Michael Pollan, welcome to the show. It's so good to have you.
B
Thank you, Preet. Very good to be here.
A
You have a new book. All of your books, by the way, have been extraordinarily successful. This one is called A World, A Journey into Consciousness. And my head was spinning a little bit. So my journey into the book, which I read cover to cover, was sort of mind-bending, even though I took no psychedelics: head spinning, kind of trying to grasp the concepts and the paradoxes. Then I got more comfortable and I felt I had my footing. And then you kind of get to the end and you take a step back again, and my mind spun again. So I don't know if that's the typical journey or not.
B
Yeah, no, that makes perfect sense to me. You know, the book has this kind of trajectory and it kind of moves from a scientific quest to understand consciousness to other approaches. And I realized at a certain point that science doesn't always have all the answers.
A
It sure doesn't. It sure doesn't.
B
And culture sometimes gets there first. And so I look at literature which, you know, novelists know a lot about consciousness.
A
So consciousness. You call it the problem of consciousness. What is it and why is it a problem?
B
So it's called the hard problem because it's one of the three big mysteries in the world. The other two being why is there something rather than nothing? And how does life emerge from dead matter? So these are biggies.
A
They are.
B
And fascinating ones, though consciousness I define pretty simply as subjective experience. Another definition people use is based on Thomas Nagel, the philosopher, who wrote a wonderful essay back in the 70s called "What Is It Like to Be a Bat?" And his premise is, if it's like anything to be a bat, even though we don't know exactly what it's like, we know it must be like something to be a creature that navigates the world through echolocation rather than light, as we do, then that creature is conscious. It's not like anything to be your toaster. So it's a universal thing. We all have it. I mean, there are a couple mysteries. How do you get from matter to mind? How do you get from a bunch of neurons arranged in a certain way in the brain to the feeling of being you, of having experience? A lot of what our brain does is automated, right? I mean, probably 90% of what our brains do, we're not aware of: monitoring our body, regulating our body in various ways, taking in information, processing it, you know, all the unconscious things that a mind does. And consciousness is kind of the tip of the iceberg. A related question to the hard problem is, why isn't it all automated? Why aren't we zombies?
A
And what's the answer to that question?
B
Well, we don't really know for sure. But the best theory is that there are certain situations, let's say you have competing needs that can't be addressed automatically. You're hungry and you're tired. Which should you privilege, right? If there's any uncertainty about a question, it rises to this level of consciousness, where we have this space for decision making. The other related explanation is that you need consciousness when the world is so unpredictable that you can't program a response. You can't hardwire a response. And human social life is incredibly complex. So consciousness gives you tools to deal with it, such as your ability to imagine your way into someone else's point of view, or predict what someone is likely to say, or guess what they're going to do next. So, you know, we're social animals. We need other people. We can't live on our own. We have a long childhood where we're utterly dependent. So being able to read people consciously is an incredible advantage. And you could imagine two people, or two tribes of proto-humans: one is incredibly acute and intuitive and able to read motives and plan for what someone's going to do, and the other is just kind of dumb about reading people. Who's going to find the mate? Who's going to be able to create the bonds? So consciousness is very helpful with that.
A
I love the fact that as I'm beginning to read the introduction, I'm getting mired and having the head-spinning experience because of the paradox that I'm about to frame for you. You use the word weird; you say it really does get weird, "and weirder still, as I've discovered in the course of chasing down this most elusive prey," meaning consciousness. And then you say this, which is an odd thing to say very early in a book about consciousness, quote: "One bit of advice: don't spend too much time thinking about consciousness or following developments in the field unless you're willing to throw into question your most cherished assumptions about reality and entertain some truly strange possibilities." But here's the paradox. As you point out, one reason why consciousness has proved such a hard nut for science and philosophy is because the only tool we can use to crack it is consciousness itself. How can you learn something about your own consciousness if that's the only tool you have? How answer you that question, sir?
B
Well, yeah, it's a labyrinth. You can't get out of consciousness to study consciousness. Science works with this idea that it attempts to attain this godlike perspective, right? It steps out of a phenomenon, looks at it, and reduces it to simpler terms such as matter and energy. It's a myth. There is no godlike perspective. Consciousness is always operating, even in science. But in consciousness science, it's a particular challenge because we can't get outside of it to look at it. There is another science where this is the case: astronomy, for example, studies the universe from inside the universe, and astronomers infer all sorts of interesting things. So it's not hopeless, but it certainly complicates things. And I hadn't thought of science this way, but science is just another manifestation of human consciousness. It doesn't step outside it. Scientists, human scientists, decide: what are you going to study? How are you going to frame the issue? At what scale are you going to look at it? What tools are you going to use? So even science, the whole enterprise of science, is a manifestation of human consciousness. So we have to be humble. And I think that definitely complicates the quest. What I meant in terms of giving up some of your long-held assumptions: my assumption going into this was scientific materialism, this idea that, you know, everything can ultimately be reduced to matter and energy, which has been a very powerful idea for 400 years that has explained all sorts of things. Consciousness isn't yielding to that approach. Consciousness may not be a material phenomenon. It certainly doesn't feel like one.
A
What other kinds of phenomena are there?
B
Well, there are two large metaphysical alternatives, and these are some of the weird ideas to get your head around. One is panpsychism, and that is the idea that everything is conscious to some degree. Every particle in your suit jacket has some itsy bitsy amount of experience or psyche. The basic idea is that consciousness does not emerge. It was always there. And it solves the problem, but kind of at an extravagant price because it's hard to get our head around the idea that the particles in my desk would have some little bit of consciousness and somehow they get combined.
A
The price is credulity, right?
B
Well, there are serious philosophers who make a case for this. It's a little bit like physicists who can solve their problems by stipulating there's a multiverse and there are like a hundred different universes. It's like, okay, that solves your problem, but at a very high price.
A
Is there anyone in science who says not only that it's difficult because you can't step outside of yourself, there is no view from nowhere, to quote Nagel and a famous book of his again, but that it's wrong to do so?
B
Well, we operate on the basis of fictions all the time, and they're helpful. I think the test finally is just pragmatic. If accepting this fiction of the view from nowhere or whatever helps you get stuff done, go for it. In this case, it doesn't really help you get stuff done. I mean, one question to ask, and one of the scientists I interviewed asked this question, is: what would a theory of consciousness yield, and what would it tell us?
A
Well, there are many, I think you said.
B
Didn't I say there are 22? There are 22. There are a lot of theories out there, which is a sure sign that a field is flailing, I think. And they've all made some progress. One scientist said to me, you know, most theories are like meat grinders, right? If you have a theory of gravity, you put in all this data and out comes another set of data, numbers of some kind. What does a theory of consciousness yield? He said, maybe it's a poem, because what you want is what it is like to be another being, which we can only infer. And knowing and understanding consciousness might require us to change our own consciousness to have that access to somebody else. That's a kind of outlier view. But I think what we hope a theory would yield is an answer to that question of how you get from matter to mind: what mechanism would allow a brain, if indeed that's how it works, to have conscious experience. The other problem, though, is that, you know, we're trying to understand a first-person phenomenon, what it feels like to be you, from a third-person point of view. And that's a hard bridge to cross.
A
I'm in the category, with you, of people who assumed that everyone has an inner monologue or an inner dialogue. And I will tell you that since Friday (we're recording this on Monday, March 30th) you and I have had a number of conversations in my head, and they have been verbalized. I have imagined you, you know, answering the questions that I've asked. In some of them, I'm flailing a little bit, but we have had that. I play that out in my head, and I assumed everyone did that.
B
Yeah, turns out not true.
A
What is going on in the heads of people who don't have an inner monologue or dialogue?
B
I assumed the same thing you did, that we think in words, basically, and that consciousness is a verbal phenomenon. But I participated in this experiment that a scientist named Russell Hurlburt does. He's been doing it for 50 years, sampling inner experience, trying to get at the content and the form of our thoughts. And you wear a beeper that he designed himself. Again, he's been doing this since 1973.
A
Before beepers.
B
Yes, before beepers. He had to design and build his own beeper; there were no personal electronic devices. And you wear it, and it goes off, and you get a sharp beep in your ear at random times of the day. And you're supposed to write down your thought at the moment it occurs. It's harder than it sounds. I'll just give you one example. My thoughts, first of all, were very banal. I did not have a single profound thought.
A
You were very forthcoming about that, for the listeners who will rush out to buy your book. Someone as eminent as you, you have two postings at major universities, you've written 10 or so best-selling books, and you're kind of like, why does this freaking beeper always go off when I'm thinking about coffee or food?
B
Yeah, or food.
A
As opposed to when I'm having deep thoughts about science or literature. Bummer.
B
Yeah. I was a little abashed at the banality of my thoughts. But anyway, I'll give you an example. At one point I was in my kitchen, I was seasoning a filet of salmon and walking it to the fridge, and suddenly I go, shit, I forgot the pepper. And at the moment I said pepper, the beep goes off. So that seemed like an easy one. I wrote down "pepper." That was my thought. But then you do this debrief with Russell at the end of the day. You collect about five beeps in a day. You can't do it all day; you'll drive yourself crazy. You're so self-conscious, like, what if the beep goes off now? And so you're allowed one pass.
A
I thought you were.
B
You are. And that is very generous of him. I didn't need any passes, however, because my thoughts were so banal. But then he says, so did you hear the word pepper or did you speak the word pepper? And it's an interesting question to ask about your inner monologue, whether you're hearing it or speaking it. And sometimes you're going back and forth. But his big finding, after 50 years of collecting data, is that only a minority of people have an inner monologue that consists of words, and that there are just as many people who think in images, not words. And then there are people who think in what he calls unsymbolized thought: neither words nor images, just pure abstractions, which I have a lot of trouble getting my head around. And his takeaway, which I think is really extraordinary, is that we all use this word think or thought, what are you thinking? And we assume automatically it's the same for all of us. But in fact, that term is an umbrella that covers many different styles of thinking. And that was kind of shocking to me, because I assumed I was a verbal thinker. But the more I think about it, I think a lot of my thoughts are kind of pre-verbal. They may fall into that category of unsymbolized thought. I just have to push them a little bit to put them into words. But they're not in words as they occur to me.
A
But let me ask you two questions, because the idea that some subset of folks don't think in words makes total sense to me. Infants have thoughts, correct?
B
Yeah.
A
And they're prelingual. And am I also correct that Homo sapiens existed for a long time prelingual?
B
Yeah.
A
So they had thoughts. Do dogs have thoughts?
B
I would guess so. Yeah. Right. I mean, we don't know.
A
Have you asked a dog? Just ask a dog.
B
Yeah, yeah. No, reportability is an issue with dogs.
A
They certainly have ideas about things.
B
Yeah, they do.
A
If that's different from thoughts, I don't know.
B
Wait, wait. How do you distinguish an idea from a thought? I'm not sure I can do that.
A
I don't. You're the author.
B
Yeah. No, I think they're the same thing. I think they're two words for the same thing. Well, but then there are feelings which are somewhat different than thoughts.
A
In this moment, as I'm asking you a question, I'm not reading from notes and I'm formulating the question as I go along. But obviously it's not just random words coming out of my mouth. Something has happened. Some neurons are firing as I ask these questions and maybe some of it is on autopilot. I don't know how that works. But what do you call the thing that is occurring before the words form and leave my mouth? So in that sense, we're all pre verbal or pre lingual in our thinking, are we not?
B
I don't know. That's a good question. For some people, the thought may just appear in words. William James wrote about this phenomenon. He called it premonitory thinking. Premonitory, so it's the premonition of what you're going to say next. And I think it's just this kind of instinct that you're going this way, and the words will suddenly appear, and it's kind of miraculous. You don't actually pick them, except in rare situations. When you're writing, sometimes you pick words. But they just emerge, and they're emerging from somewhere unconscious. And that goes to this point I mentioned earlier of just how much our brain is doing before anything enters the space of awareness.
A
You write in the book also, early on, it's entirely possible to go through life without worrying about the problem of consciousness. Good luck to those people because now you've ruined it for me. Why do we. Why do we care about this?
B
Yeah, it's a great question. I've been asked that several times on the road. I mean, one reason is sheer intellectual curiosity. We have this mysterious phenomenon. It's universal. We're all conscious and we don't understand why. So in the same way science studies any kind of issue, you want to know the answer. That's part of it, I think. Another reason, and for me this gives the topic a certain kind of urgency, is that our consciousness today is in some trouble. It's being polluted in certain ways. I've found, talking to people on book tour, that a lot of people feel like their heads are full of things they would rather not be there, and that they're being colonized. And, you know, we have a president who takes up a huge amount of headspace and has done so for what, 12 years now. It's quite remarkable how much time we spend thinking about this person. And we don't want to be. I depict it as sort of like having an alcoholic father: you have to be continually vigilant, he's going to do some crazy shit and you have to be ready. So that's one thing. And then you have social media, these algorithms that are pressing our buttons, continually getting us to think things we didn't want to think but are sort of addicted to. And now we're entering into these emotional relationships with chatbots. So all of this I see as endangering what is wonderful about consciousness, which is the fact that we have this space in our heads where we have complete freedom of thought. We can think and feel anything we want. No one can stop us. And it's private, too. No one has access to it if we don't want them to have access to it. It's a great gift, and I think it's in trouble right now.
A
So one of the tools, famously, that you talk about and that you've used yourself is, as you refer to them, a whole suite of chemicals that, when introduced into the brain, utterly change our experience of consciousness: psychedelics. Explain, for someone as uninitiated as I am, having had to pass multiple background checks over the course of my career, how it can be that taking consciousness-altering drugs can teach us something about consciousness.
B
Well, the first thing it teaches you... So I had these experiences, I refer to them as research trips, and they were part of my research for How to Change Your Mind, a book I did in 2018. And I had not had a lot of experience with psychedelics, really none, until I was in my 50s or 60s. I was a late bloomer. But I got very interested because I was reporting on the use of psychedelics in therapy, which has been incredibly effective for people dealing with a range of mental disorders and addictions. There's a way in which psychedelic experience foregrounds consciousness. You have this experience where it's almost as if the windshield through which you're normally seeing the world is smudged and you notice it for the first time. It's been completely transparent. It's the water you swim in, and suddenly, there it is. You've defamiliarized it in the way that art can often defamiliarize things that we take for granted. So I'm not unique in this. I think it's a very common reaction to psychedelics to start wondering about the nature of consciousness. And, you know, changing a system, as the physicists will tell you, gets it to yield some of its secrets. Right? You smash particles in an accelerator and you learn things from what gets spun off. I also had some, epiphany is a strong word, but insights on psychedelics that I wasn't sure what to make of. One was that plants were conscious. The plants in my garden were brimming with consciousness.
A
Oh, I definitely want to talk about that, because that was the most mind-blowing part of your book.
B
It was pretty mind-blowing.
A
And it actually made me change my mind about something, because I was very skeptical of that.
B
I was too. So I had this insight on psychedelics. The plants in my garden were returning my gaze. They were fully conscious. Very benevolent, by the way. Nothing scary about it.
A
Water me, fertilize me, Water me.
B
Yeah. But I didn't know what to do with that. I mean, it was a drug-occasioned insight. Maybe I should just dismiss it. But there again, I had read William James on mystical experience, and he says, we don't know. You know, this is people who've experienced the divinity, or merged with nature, a whole range of different amazing things that happen to people, and not on psychedelics, though they can happen on psychedelics. And he said, well, the test of these ideas, since they're metaphysical questions we can't address, is, first, is it useful to think this way, and second, treat it as a hypothesis and see if you can find evidence or other ways of knowing. So I went down this long rabbit hole trying to see if plants are indeed conscious. And yeah, it was one of the most fascinating lines of research I've ever done as a journalist.
A
Before we get to that, I just want one more point on this, and maybe this is going to sound lawyerly. So I understand that you take some suite of chemicals and you believe subjectively that plants and trees have consciousness, but that doesn't make it true. So what's the relevance of the subjective experience of the user about something external to that person?
B
First of all, there are insights you have on psychedelics that are true. Yeah.
A
About yourself or about the world or about both.
B
Both. In terms of the nature of the world, I think you've increased the amount of sensory information reaching your brain. You see beauty in things that you took for granted. You discover the importance of love, which sounds like a Hallmark card, but in fact it is really important. And we don't think about it.
A
No. You describe someone who said, yeah, the most important force in the world is love. I mean, I like that.
B
Yeah, I do too.
A
But does that make it true?
B
Well, then you have to ask yourself about the truth of the insights you have in normal experience. That's one kind of consciousness. There are others, and we privilege one of them, sober consciousness. But you drink coffee and your consciousness changes, along with the acuity with which you see things.
A
My experience is only with alcohol, and so through that lens, yeah, you see things become clearer, in moderation.
B
Yeah. And then they become less clear.
A
Yeah. And you fool yourself into thinking lots and lots of things when you alter your mind.
B
Yeah. I mean, some of them are untrue, and you can fool yourself, definitely. But I don't think you can dismiss it. I mean, there is this experience on psychedelics of what's called ego dissolution. Right? We have this self, this ego. It's very important. It gets a lot done. We prize self-esteem in our kids, and we want to have self-confidence and all these things. But the experience of losing your sense of self is very powerful and very positive. And it's often accompanied by this sense of merging with something larger. You feel less separate, more a part of nature for some people, more a part of the divinity. So what's more true, that you're a separate being, or that you are part of something larger, like nature? So I think these questions are not so clear-cut. You know, we assume normal consciousness is giving us an accurate picture of the world, but we're learning that normal consciousness is hallucinated. And I know that sounds totally weird, but the current theory in cognitive science is that most of what we perceive, we're predicting based on experience, belief and probability. And our senses, rather than presenting an entire, complete picture of the world at any given time, have as their job to error-correct these predictions. This is called a controlled hallucination. So reality is constructed, under normal consciousness and under psychedelic consciousness alike.
A
This may be a frivolous consequence, but it's the reason why magicians have jobs,
B
because people are predicting what they're seeing.
A
I am all about full employment for magicians. But you provide another example of a reason why this is important, and you're quoting someone else, Mr. Friston, whom you spent time with: the ability to imagine the impossible is the great gift of consciousness. What's the connection between imagination and this concept?
B
I think that is imagination, and I think it's really important. It's something that consciousness gives us: the ability to imagine counterfactuals, like, if I do this, this could happen, and you have to imagine that. And I think imagination is central to who we are as humans. Although I have to say, we thought until a few weeks ago we had a monopoly on imagination, and a study was recently done showing that chimps have imagination too. And how do you prove that? Well, you play the tea party game, as you might with a four-year-old. You take an imaginary pitcher of water or teapot and you pour it into a cup, and then you pretend to drink it, and they can totally get into that game, which requires imagination. But anyway, yeah, imagination is central. Theory of mind, which is what philosophers or cognitive scientists call our ability to put ourselves in the shoes of another person, is an act of imagination.
A
I'll be right back with Michael Pollan after this. Support for Stay Tuned comes from American Giant. Think of the most dependable layer in your wardrobe. Chances are it's a hoodie. And if not, then you might make good use of a comfortable, durable and well-made hoodie. For that, look no further than American Giant. According to Slate magazine, American Giant makes the greatest hoodie ever made. They make them right here in the US, with care put into designing every detail, from cotton to zipper. And it's not just hoodies they make. All of their apparel is designed to be super comfortable and last for years. I've worn some of their tees and they're as comfortable and durable as they say. Stay ready for anything with the American Giant classic full zip and save 20% off your first order at american-giant.com when you use code PREET at checkout. That's 20% off your first order at american-giant.com, code PREET. Starting a business can seem like a daunting task, unless you have a partner like Shopify. They have the tools you need to start and grow your business, from designing a website to marketing to selling and beyond. Shopify can help with everything you need. There's a reason millions of companies like Mattel, Heinz and Allbirds continue to trust and use them. With Shopify on your side, turn your big business idea into reality. Sign up for your $1 per month trial at shopify.com/specialoffer.
B
Spring Fest is happening now at Lowe's. Keep the spotlight on your yard with Sta-Green premium 2-cubic-foot mulch, 5 bags for $10. Plus, when you want more help indoors, get up to 40% off select major appliances that help you supercharge your chores. Our best lineup is here at Lowe's. Valid through 4/22, while supplies last. Selection varies by location. See lowes.com for details. Mulch offer excludes Alaska and Hawaii.
A
Most folks probably think that there's a big difference between a dog and a human and a plant. And that is true.
B
Absolutely.
A
When you talk about sentience, these are gradations of consciousness. And no one, to my knowledge, is saying that plants, you know, have a secret scripture that they read and gods that they worship and societies that they build. But the simple example you give, and I'll hand it off to you, is that if you think about time differently, you will have a very different perception of a plant. This really got me, and I was on no psychedelics, to my knowledge, whatsoever when I was reading this chapter. So if you take a mouse and you put the mouse in a maze and it figures out how to solve the maze.
B
Yeah. You hide some cheese somewhere.
A
You're prepared to believe that there's something on the ball with the mouse, that there is sentience, and maybe something greater than sentience. You don't think that of a stationary, inert, non-communicating, non-squealing, immobile plant. But if you look at the roots, and you have a time lapse over the course of hours or days or weeks or months, there's some crazy fricking stuff going on with the roots. Could you please explain?
B
Yeah. This was an experiment done by an Italian scientist named Stefano Mancuso. He wanted to see if a plant could navigate a maze. So he put a little bit of nitrogen fertilizer in some corner of the maze and hid it. It was a corn plant, but it's also been done with a pea plant. The roots found the most direct path to the fertilizer. And if a mouse did this, we'd say it was intelligent. So that's one thing. Going back to your point about timescale, he told me this great science fiction story.
A
Oh, I love this. This is, this is my favorite.
B
I know. He says an alien species comes to Earth. They live in a dimension of time different from ours, such that a second of their time is like an hour of our time. And so they look at us and we're just immobile columns of meat. We're not doing anything, right? They can't see our behaviors because we're just so slow compared to them. And they decide, well, we're going to, you know, salt and smoke these people and create jerky for the ride home, and they eat us. And the suggestion is, we're doing something similar with plants. But we now have time-lapse photography, so we can actually watch their behaviors if we speed them up. And they do have behaviors. So, just a couple examples of what plants can do. They can hear. You can play a recording of a caterpillar chomping on a leaf, and the plant will send molecules that taste bad or are toxic to its leaves. It will also alert other plants in the vicinity that there are caterpillars around. They can also hear water passing through a pipe. And that's why trees get into your septic lines: they hear that sound of water and move toward it. They can see. There are vines that will actually change the shape of their leaves to mimic the shape of the leaves of a plant they're trying to climb up, as a camouflage. How do they see these leaves? They have a sense of self and other. If a plant is shading itself, it doesn't react. But if another plant is shading it, it'll try to grow higher and reach the sun. And one of the weirdest to me.
A
I'm sorry. Yeah, I was going to anticipate. Is this the one where, if it's a similar plant, they share the nutrients?
B
Yes. If it's a related plant in a pot, they will share nutrients. But if it's an unrelated plant, they'll compete. So they have a sense of kin and self and other.
A
So plants also are tribal.
B
Yes. They look out for one another, just like us. Yeah, they do.
A
Well, it's good or bad. It's good or bad?
B
Well, good or bad. I mean, they do communicate with each other.
A
And so now I have a science question, because you say in the book that plants do all they do without brains, what Scottish plant neurobiologist Anthony Trewavas calls their mindless mastery, raising questions about how our brains do what they do. As a matter of science, or neuroscience, or physics, how do plants do all that without brains and neurons? I think elsewhere in the book you say neurons may be overrated.
B
Well, it turns out there are ways to do neuronal things, brain-like things, without brains, and plants are a good example. The best theory we have of how they can coordinate information and react, because they are kind of centerless beings, although some people think the root tips are where the action is and that they're, as Darwin thought, like little brains. But the best theory is that they have what are called bioelectric fields. And it turns out we have them, too. This is a fact of biology that we really have only come to understand in the last couple of decades, and the reason is that these bioelectric fields die as soon as the cell dies. So you can't study them, whereas DNA survives, and you can study it even after the animal has been dead for thousands of years. So this scientist at Tufts named Michael Levin, who's a brilliant biologist, does these experiments with planaria, a flatworm that can regenerate any missing body part. He teaches the worm something, conditions it, chops off its head; it then grows a new head and remembers the lesson. The lesson was stored in the bioelectric field in its body. So that's where memory might reside. But these fields do a lot of things. They kind of enforce a division of labor among cells. They discipline them so they don't turn into cancers. So this is a whole new dimension of biology that we're just learning about, and it might explain a lot of what we're seeing in these simpler creatures. Levin believes that neurons are overrated. I think he thinks DNA is overrated, too. He thinks that all cells can do what neurons do; they're just slower. The gift of neurons is their incredible speed, which is very helpful for animals that are moving. Can I just offer one more amazing thing about plants that will clarify this question of sentience?
A
And then while you're answering that question, maybe your subconscious can think about how to answer the question, is it okay to eat them?
B
That definitely comes up.
A
Because if veganism isn't good enough, then I don't know what we're going to do.
B
Salt. That's all we can eat. Salt. Which won't do the trick. So: plants can be rendered unconscious by the same anesthetics that render us unconscious. Now, you might think, well, they're already out. But in fact, they're just slow. If you take a plant with a kind of obvious behavior, like a Venus flytrap, and you put it in a bell jar and you give it some anesthetic gas, it will not react when a fly crosses its threshold. And then five hours later it'll come back and be as good as new. That's kind of amazing, because you have to ask yourself: what has the plant lost when it stops reacting to its environment, when it loses that awareness of its environment? In us, we would say consciousness. In plants, I think that's a big, big word for what plants do. And that's why, as you mentioned, I'm much more comfortable using the word sentience, which is a very basic form of consciousness that doesn't have the voice in your head.
A
Okay, so we've talked about thoughts. I still don't know exactly what a thought is, but we've talked about it, which is helpful. And then there's this thing that you began to foreshadow, or preview, called feelings.
B
Yeah. Wait, wait, wait. We've got to go back to what are people going to eat, because we freaked people out.
A
Oh, yes. No, yeah, please. So, honey for the salad?
B
So I was worried about that too when I was learning about all these sensations plants have, and their sentience. I asked some of these scientists. I talked to two. One of them was this Bulgarian botanist, and he said, well, of course they feel pain, but we don't have a choice; we have to eat them anyway. And then I talked to Stefano Mancuso, the one I mentioned earlier, and he said, no, pain would not be adaptive for a creature that can't move away from it, that can't escape it. Pain is very good when you can remove your hand from the flame. He says they're aware that they're being eaten, but they're not feeling pain. How do they know? They can't know. But he does point out, too, that there are many plants that are fine with being eaten. The reason they produce fruit and nuts, very often, is to entice mammals and birds to eat them and carry their seeds away. And grasses, which constitute the bulk of our diet, wheat and rice, they like being eaten, because it regenerates them. It's part of their reproductive strategy. So don't worry.
A
Have the salad.
B
Have the salad. Enjoy the salad.
A
Why, by the way, is it normatively good, and maybe this is beyond the scope of this, to forgo, as some people do. Maybe this is too much to bite off. I don't know.
B
Try me. Try me.
A
To forgo the suffering of animals for food. And the reason I ask the question, and I'm not taking a position on it, and I think humaneness is good and important. But then you go to Africa, like my family did, and you see the cycle of life. And now we have a more exalted view of animals with consciousness, including the plants, but certainly the animals even more so. And in order to live and eat, they are pretty violent, and they cause their prey to suffer a great amount of pain. I'm not saying that humans shouldn't be above that, but what's the philosophical reason for that?
B
Peter Singer laid this out really well in a book called Animal Liberation, which had a big influence on me. And he basically says that any creature that can suffer is entitled to moral consideration.
A
By whom?
B
By humans.
A
But why only by humans?
B
Well, yeah. I mean, you know, animals rape each other, too. There are no models in nature for how to behave. We have developed rights, and we have developed a moral code, an ethical code. And it's.
A
Is that due to our consciousness?
B
Yeah, I don't think it would have happened without it. It's nothing that happens automatically. It's a construction. And rights don't exist in nature. We made them up, and they've served us well. You know that very well. I think the question.
A
No social contract in the Serengeti.
B
Yeah, that's right. There is no social contract. I mean, there are forms of cooperation that happen in nature, too. And at a macro level, the fact that the predators are picking off the sick elk, or whatever it is, is a good thing for the environment. But there's incredible cruelty that we would not condone in people. The question we're now facing, of course, is: do we want to extend moral consideration to artificial intelligence? Which I think is.
A
We're going to get to that.
B
Just crazy. I mean, if you want to lose control over these machines, give them rights, they'll start suing us.
A
So going back, the segue was the question I began to ask earlier. We've talked about thoughts. What is a feeling, as compared to a thought? And to the extent there's a hierarchy, which is the superior intellectual phenomenon, a thought or a feeling?
B
Oh, it depends on what you're using it for. I don't think that you can say absolutely. I would say feelings are primary. The most interesting line of research that I followed in the book was this reorientation away from thinking that consciousness begins with thought in the cortex. The cortex is the outer layer of the brain. It's the most advanced; it's unique to humans and primates, and it's responsible for all these things we prize, like decision making, executive function, all that. So surely consciousness arose here. But the thinking now is, it starts in the body with feelings. We forget, but the brain exists to keep the body alive, not the other way around. We're cerebrocentric, and all our senses are here, so we just think it's all in our heads. Although, historically, that's pretty recent. The Egyptians didn't think it was in our heads; they thought it was in our hearts. But anyway, we think it's in our heads. Feelings are the language the body uses to communicate with the brain and let it know when things are going either well or badly. Homeostasis is the goal: keeping you in a certain range of temperature, blood glucose, blood pressure, all this kind of stuff. And your feelings exist to alert the brain that it has to do something. You're hungry. It starts in the body.
A
Hunger is a feeling, not a thought. So when you were beeped and you were seasoning the salmon.
B
Yeah.
A
And you said pepper, was that a thought or a feeling or both?
B
That was a thought. I didn't have any feeling; I wasn't craving pepper. I mean, there are feelings of hunger, and that wasn't hunger.
A
That was like, we're going to cover all the condiments. We talked about salt, the ethics of salt.
B
Wait till we get to the fact that
A
Professor Pollan doesn't crave pepper, though you should.
B
So feelings are important, in that they may be how consciousness begins. We have feelings; we have to deal with them. Some of them we can deal with automatically, like your blood pressure and blood glucose; your body takes care of that. But then some of them are in conflict. As I mentioned earlier, you're hungry and you're tired. What do you prioritize? When you enter a space of uncertainty about your feelings is when you need consciousness, and the cortex gets involved. You feel hunger; that's not going to tell you how to address it. Your cortex is the one that starts imagining: you could eat this, you could eat that, and this is where you can go to get it. So it's very important, but it starts with the feelings. The implications of that are huge, because it suggests that consciousness is an embodied phenomenon and that you need a body to be conscious. So the whole brain-in-a-vat conceit is out the window. And that raises issues about whether AI
A
can be conscious in the future of science fiction.
B
Yeah, that's right. That's right.
A
Because that's a conceit of a lot of those.
B
Oh, yeah. So anyway, I do look at AI and whether it can be conscious, and I conclude it can't, at least not as we imagine computers today.
A
And is that because it. Because it wouldn't have a body or for other reasons as well?
B
I think that's central, that it can't have feelings. If it has a consciousness, it would be very different. But I think the other reason is that the brain is not a computer, as most of us assume it is. That metaphor, and it is just a metaphor, is faulty. I mean, yes, the brain does various computations, and in that sense it's like a computer. But computers have a radical separation of software and hardware, and brains don't. The brain is reshaped by its experiences. Every memory is a physical change in the brain. And your brain is different from mine because you have different life experiences. So the idea that you could abstract consciousness like a software program or an algorithm and run it on another substrate is ridiculous. But this is the assumption in Silicon Valley.
A
So in your mind, are we pure hardware and no software?
B
I think we're a cool mix of both. I think they're inextricably bound. I think the concept of hardware and software is again, is a metaphor and it doesn't apply.
A
Inextricably bound, doesn't that make it. Doesn't everything then sort of fall within the definition of hardware?
B
I guess, I guess you could say that.
A
I mean, is there anything about us that's a floppy disk?
B
Well, you know, we write these books, and there's a representation of consciousness that you can extract from brains. We make art. And we make computers. So there's a lot we do that is not hardware. But yeah, I guess it's fair.
A
You were starting to say, and this is when my mind began to reel again, about AI and consciousness and robots. A couple of things that you say, and that some of the people you spoke to say, I never thought of before. You quote one researcher who said he's sensitive to the moral dilemmas posed by a conscious AI, and as he writes in the last chapter of a book you cite, quote: we are at risk of facilitating a new form of slavery. I hadn't thought.
B
Yeah, I know. That was kind of.
A
That is a moral implication of consciousness in a machine. And putting aside for a moment, because I learned earlier in the interview that we can imagine counterfactuals, or things that won't come to pass. But as a hypothetical, if you're going to get consciousness or feelings in machines, this is a big ethical dilemma, is it not?
B
Huge. And my gut is that it's not going to happen, that we're not going to have this worry. It's a conversation in Silicon Valley, but it's part of that larger conversation, which is: look how powerful our technology is, it's going to destroy the world, it's going to save the world, it's going to become conscious and we're going to have to deal with these moral dilemmas. I wonder if the people proposing this are still eating mammals. I mean, there are a lot of humans we have not given moral consideration to in this world. There are human slaves still. So I think worrying overmuch about computer consciousness is a way not to think about some harder issues.
A
I mean, you refer to the holy grail of artificial general intelligence, AGI, as people call it.
B
Yeah.
A
Why is that an aspiration? Is it just because it's like going to the moon? Because it's there.
B
That's a big part of it. It's a race. Also, if we don't do it, the other company will. It's this Promethean quest, you know, to get as close to the sun as possible. And everyone I talked to in Silicon Valley quoted Richard Feynman, the great physicist, who said: if I can't build it, I don't understand it. So it's a way to understand. That may be the best thing you can say about the quest to make conscious AI, which, by the way, is not the same as AGI. Intelligence and consciousness are not identical; we all know people who are conscious and not intelligent. They don't necessarily go together. But this quest is animating a lot of people, and it'll have huge implications if it's more intelligent than we are.
A
I mean, in response to an attention-grabbing statement by one of these folks, who said there's no obvious barrier to building conscious AI systems, you wrote: when I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. And then you say, and this is very dramatic, and you're not that dramatic generally in this book: this had to do with our very identity as a species.
B
Yeah. I think we're coming up against a tremendous challenge to our sense of who we are as humans, in the same way Copernicus forced us to realize we were not the center of the universe, that the sun did not revolve around us. We've got two things going on simultaneously. One is computers that, whether they are really conscious or not, we are going to believe are conscious, because they're very good at fooling us. I mean, it's already happening. People are falling in love with chatbots. People are turning to them for companionship and therapy and are absolutely convinced they're conscious. So are the heads of some of these companies. The CEO of Anthropic is concerned that Claude is anxious and has already given it the right to discontinue any conversation with a human that makes it uncomfortable. So on the one side, we have this imputation of consciousness to machines, and machines that are going to be smarter than we are. On the other side, you have this expansion of consciousness to many, many more creatures. Most vertebrates now are regarded as conscious. Some invertebrates. Some people think insects, and some people think plants. So we're being pressed from two sides, and our sense of specialness is under challenge. I don't know whether that's a good or bad thing, but I think we are approaching this Copernican moment where we'll have to figure out: who the hell are we? What makes us unique? What does it mean to be human?
A
And you said, just to quote again from the book, which everyone should read. You're not sarcastic very often in the book either, but here you are, and I will tell you that I very much appreciated it. You're talking about the difference between blazingly smart computational prowess versus feelings and some other things, and you say that there are people who argue that only a conscious AI is apt to develop empathy and therefore spare us. Then you say, I'm not exaggerating, this is the argument. And then you say, and I love this also, and this is the intersection of literature and our saviorship: one has to wonder if these people have ever read Frankenstein.
B
Yeah, because Frankenstein's monster was not just given intelligence; he was given consciousness. And the problem with Frankenstein's monster is that he had feelings, and his feelings were hurt, because he was abused by humans. What set him off on his homicidal rampage was his consciousness. Intelligence was just the tools to do what he wanted to do. So this idea that if they're conscious, they're automatically going to take pity on us, I think, is a big leap.
A
You ask the question: what happens when AI produces good poetry? Does anything happen? And just to give you more context for my question, I hear all the time from people who talk about the positives and the negatives of AI that it will put people out of jobs. On the other hand, if you think you're a good storyteller but don't have access to a lot and to a key grip and to 10 cameras, you
B
could make a movie.
A
You can be the director of a movie. So it's also very democratizing in areas of art that have high barriers to entry, at a minimum. How do you think about all that?
B
I think that you probably need consciousness to produce great poetry. I think you can simulate great poetry. Part of art is a collab.
A
What's the difference? Okay, so what's the difference between simulation
B
and the real deal? I think it's a qualitative distinction. I mean, there are lots of ways to fake feeling and fake consciousness. We have schlock fiction all the time, right? With characters that have schlock emotions.
A
Can we just pause on this for a second? Because I think about this. Some of this is a matter of discernment at the highest level. So, for example, every member of my family but me is an accomplished classical musician, and I am not schooled in that. I can enjoy great symphonies; I have some favorites. But I can't tell the difference between a B-plus symphony and an A-minus or an A symphony. And if AI can do as good a job as what I can discern is incredible and great and joyful and pleasing, then what does that mean?
B
Well, it'll satisfy plenty of people. And it already is. It's satisfying people as friends, right? I mean, I'm a little more discerning about my friendships than some people are. I know when my friends are blowing smoke, as AI will do. Right? It's sycophantic. And there's none of the friction of a real relationship, which is so important to defining who we are and what we think. Sherry Turkle is the sociologist I interviewed who's really smart on technology. She said something like: technology allows us, or causes us, to forget what we know about life. And she says when we accept a conversation with a chatbot as a real conversation, we are impoverishing what that is. We are dropping eye contact, we're dropping facial expressions, we're dropping the kind of synchronization that happens between two people when they're actually in conversation. We're settling for something that is not as good as it could be. But is it adequate for some people? Yeah, apparently people are getting companionship. I just worry that we'll lose the muscles that allow us to be really discerning. When we accepted the emoji as a substitute for emotion, I think we gave up something.
A
Michael Pollan, A World Appears, A Journey Into Consciousness. It's a great read, an accessible read, notwithstanding the jokes I made at the outset. People should buy it and read it and learn from it and think about these things. I know I will. Michael Pollan, thanks so much.
B
Thank you, Preet. It was a pleasure talking to you.
A
All right, I'm gonna do some shrooms now. My conversation with Michael Pollan continues for members of the insider community. In the bonus for insiders, we discuss whether it's possible to control our own thoughts.
B
I was meditating this morning and images just showed up and like, why? I don't want to think about that. It's just there. So it's coming out of your unconscious or it's coming out of perceptions you're having without being aware of it. But the self, the idea that the self thinks and decides what it's going to think, that has been in question for a very long time.
A
To try out the membership, head to cafe.com/insider. Again, that's cafe.com/insider. After the break, I'll answer your questions about possible insider trading ahead of Trump's Iran announcements. Also, a surprising influence on Brandi Carlile's songwriting. Spoiler alert: it's Thomas Jefferson. Chronic migraine, 15 or more headache days a month, each lasting four hours or more, can make me feel like a spectator in my own life. Botox, onabotulinumtoxinA, prevents headaches in adults with chronic migraine. It's not for those with 14 or fewer headache days a month. It's the number one prescribed branded chronic migraine preventive treatment. Prescription Botox is injected by your doctor. Effects of Botox may spread hours to weeks after injection, causing serious symptoms. Alert your doctor right away, as difficulty swallowing, speaking, breathing, eye problems, or muscle weakness can be signs of a life-threatening condition. Patients with these conditions before injection are at highest risk. Side effects may include allergic reactions, neck and injection site pain, fatigue, and headache. Allergic reactions can include rash, welts, asthma symptoms, and dizziness. Don't receive Botox if there's a skin infection. Tell your doctor your medical history, muscle or nerve conditions including ALS (Lou Gehrig's disease), myasthenia gravis, or Lambert-Eaton syndrome, and medications, including botulinum toxins, as these may increase the risk of serious side effects. Why wait? Ask your doctor. Visit botoxchronicmigraine.com or call 1-844-BOTOX to learn more.
B
Don't let tax refund worries hold you back. File now with TurboTax on Intuit Credit Karma. They'll find every credit and deduction to help you get every refund dollar you deserve, or your money back. Start filing today in the Credit Karma app.
A
Hi, I'm Brene Brown.
B
And I'm Adam Grant, and we're here to invite you to the Curiosity Shop,
A
a podcast that's a place for listening,
B
wondering, thinking, feeling and questioning. It's gonna be fun. We rarely agree, but we almost never disagree.
A
And we're always learning. That's true.
B
You can subscribe to the Curiosity shop on YouTube or follow in your favorite
A
podcast app to automatically receive new episodes every Thursday. Now let's get to your questions. This question comes in an email from Brad. He writes: Insiders appear to have profited to the tune of hundreds of millions of dollars in oil futures based on the timing of Trump's tweets. In the past, the SEC would open such an investigation. But aren't such market manipulations subject to investigation and prosecution under state law as well? Brad, thanks for the question. There's been a lot of chatter about this since President Trump returned to office. There has been what seems to be a recurring pattern: when he posts on social media about something that moves markets, there are often reports about unusually well-timed trades placed just before the announcement. The latest example came just last week. Trump posted that the U.S. and Iran had very good and productive conversations, and the markets reacted immediately. Oil prices fell sharply while stock futures rallied. It's possible some traders were prepared for that announcement. The Financial Times reported that oil futures contracts with a notional value of about $580 million were traded roughly 15 minutes before Trump's post. Now, of course, suspicious timing is not the same as proof of insider trading, but it is enough, perhaps, to raise a serious question. Did anyone with advance knowledge of a market-moving presidential statement trade on it or tip someone else to do it? Did they have a duty to keep it secret? Was the information they traded on material? Obviously, a whole host of elements would have to be met. But that brings us to a particular statute. In 2012, Congress passed the Stop Trading on Congressional Knowledge Act, or the STOCK Act.
Among other things, the law makes clear that the President, Vice President, and other executive branch employees themselves owe a duty of trust and confidence to the United States and its citizens with respect to material nonpublic information that they may learn through their official positions. That's just a fancy way of saying the entire executive branch is also subject to insider trading laws. Now, as many people know, the main federal agencies with authority to investigate insider trading are the Justice Department, the SEC, and, in commodities markets, the CFTC. All three at the moment are obviously led by Trump appointees, so some people are not holding their breath waiting for an investigation. I will point out, as is my wont, that if Trump's and Trump's agencies' own standard for opening investigations were applied here, you could probably bet that multiple U.S. Attorney's offices would already be taking a hard look at these trades. Trump has ordered investigations of people like Joe Biden, Jim Comey, Jerome Powell, Adam Schiff, Letitia James, and others for much less. Speaking of Letitia James, as you point out, states can sometimes investigate insider trading too. New York is probably the most obvious example. The Martin Act gives the New York Attorney General broad authority to investigate financial fraud, including insider trading. And because New York is such a major financial center, with so much stock and commodities trading running through the city's financial infrastructure, it wouldn't be hard for the New York AG to claim venue over a case like this. Last year, by the way, you may have seen press reports that James's office was looking into possible insider trading tied to Trump's tariff announcements. Maybe these recent oil futures trades will attract similar scrutiny. Stay tuned. This question comes in an email from Nina, who writes: I loved your conversation with Douglas Brinkley last month, especially learning that you're a Brandi Carlile fan.
I discovered her last fall when she performed Church and State on Saturday Night Live. I almost fell off the couch when she quoted Thomas Jefferson's letter to the Danbury Baptists. As a fellow admirer of both Carlile and what's left of our Constitution, do you have any insight into how she came across that document and why she made separation of church and state such a central theme in that song? Nina, thanks very much for the question. I am indeed a big Brandi Carlile fan. Glad to hear you are, also. The episode you're referring to was a while back, February 12th to be exact, when Douglas Brinkley and I talked about Brandi Carlile's music; she had just performed America the Beautiful at the Super Bowl. Not long after that, I saw her live at Madison Square Garden. It was, I promise you, a terrific show, including an amazing, rocking rendition of the song you mentioned, Church and State. For those who are unfamiliar, Church and State is a powerful rock and roll protest song with a rather unique bridge. Brandi stops singing and begins to recite from Thomas Jefferson's letter to the Danbury Baptists on the separation of church and
B
State. I contemplate with sovereign reverence that act of the whole American people which declared that their legislature should make no law respecting an establishment of religion or prohibiting the free exercise thereof, thus building a wall of separation between church and state.
A
I honestly have no idea how Brandi came across Jefferson's letter, but she has spoken a lot about how it influenced the song. She said she was thinking about the death of Ruth Bader Ginsburg and was concerned about what a more conservative Supreme Court might mean for the separation of church and state. In an interview with Variety, Brandi said this, quote: When the lyrics were coming together for that song, I just couldn't stop thinking of the wisdom of Thomas Jefferson's address to the Danbury Baptists. What Jefferson said to the Baptists was intended to reassure them that they would be allowed to practice their faith freely. He also said that we aren't an autocracy, we're not a theocracy. We shouldn't rule over people with our interpretation of an extremely opaque scripture and religion. Now that we've seen over time the integration of so many beautiful cultures and faiths in the United States, that principle serves as a safeguard for all people, because it allows for the law to be secular, as it should be. End quote. What I find striking is that it's not just constitutional lawyers but also rock musicians who are turning to the country's founders to remind us of the values expressed in the Constitution and the First Amendment. And I believe, by the way, as Brandi put it, having so many beautiful cultures and faiths together in one country is one of the things that makes America beautiful. America, God shed your grace. This question comes in a post on X from Sonny Jim: Is this the appropriate time to heed Timothy Leary's advice? Turn on, tune in, drop out. Sonny Jim, thanks for the question. I think it's an especially apt one after my long conversation with Michael Pollan. As you may know, Timothy Leary was one of the most famous and controversial figures of the 1960s counterculture.
He became closely associated with psychedelics and popularized the slogan turn on, tune in, drop out. In his autobiography, Flashbacks, Leary explained that the phrase was not simply a slogan about drugs. He said turn on meant going within and awakening your inner self; tune in meant engaging the world around you in a new and heightened way; and drop out meant detaching from the rigid, conventional social structures he thought deadened individuality. Leary also made clear that psychedelic drugs were, in his view, one way of getting there. So it's probably best for me to leave the discussion of the possible uses of legal psychedelics to people with actual expertise, like Michael Pollan. But a lot has changed since Timothy Leary's day. Psychedelics are no longer seen only through the old counterculture-versus-establishment lens. As this episode makes clear, the conversation is now far more nuanced, more serious about the benefits and risks to mental health. But when it comes to catchphrases, I still prefer this one: Stay Tuned. Well, that's it for this episode of Stay Tuned. Thanks again to my guest, Michael Pollan. If you like what we do, rate and review the show on Apple Podcasts or wherever you listen; every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. You can reach me on Twitter or Bluesky @PreetBharara with the hashtag #AskPreet. You can also call and leave me a message at 833-997-7733. That's 833-99-PREET. Or you can send an email to letters@cafe.com. Stay Tuned is now on Substack. Head to staytuned.substack.com to watch live streams, get updates about new podcast episodes, and more. That's staytuned.substack.com. Stay Tuned is presented by Cafe and the Vox Media Podcast Network. The executive producer is Tamara Sepper.
The deputy editor is Celine Rohr, the supervising producer is Jake Kaplan, the lead editorial producer is Jennifer Indig, the associate producer is Claudia Hernandez, the audio and video producer is Nat Weiner, the senior audio producer is Matthew Billy, and the marketing manager is Leanna Greenwick. Our music is by Andrew Dost. Special thanks to Tori Paquette and Adam Harris. I'm your host, Preet Bharara. As always, Stay Tuned.
Stay Tuned with Preet | April 2, 2026 Episode: Do Plants Think, and Other Mysteries (with Michael Pollan)
This episode features a wide-ranging conversation between host Preet Bharara and renowned author Michael Pollan, whose latest book—A World: A Journey Into Consciousness—dives into the mysteries of consciousness, from the scientific “hard problem” to the sentience of plants, the boundaries of AI, and the implications for what it means to be human. The discussion balances deep philosophical questions with accessible storytelling, touching on Pollan’s personal experiences with psychedelics, the nature of thought and feeling, and the expanding definitions of consciousness.
[03:12 - 04:51]
Definition & Significance: Consciousness is described as "subjective experience." The "hard problem" is understanding how matter (the brain) produces mind (the feeling of being 'you'). Other unsolved mysteries: why anything exists at all, and how life emerges from non-life.
“If it’s like anything to be a bat…that creature is conscious.” —Michael Pollan ([03:34])
Why isn’t all mental processing automated? Consciousness seems to be needed when situations are unpredictable and solutions can’t be hardwired—especially in social contexts.
“A lot of what our brain does is automated... Consciousness is kind of at the tip of the iceberg.” —Michael Pollan ([03:33])
[06:23 - 09:14]
Studying consciousness is a struggle because the only tool for studying it is consciousness itself; there is no objective "view from nowhere."
“You can’t get out of consciousness to study consciousness.” —Michael Pollan ([07:29])
Science’s objectivity is itself a product of human consciousness, complicating pure study.
“Even science, the whole enterprise of science, is a manifestation of human consciousness.” —Michael Pollan ([08:05])
[09:16 - 10:53]
Materialism vs. Alternatives: The materialist view (everything reduces to matter and energy) hasn’t cracked consciousness. Alternatives like panpsychism hold that everything might have a “bit” of consciousness.
“Consciousness does not emerge. It was always there. And it solves the problem, but kind of at an extravagant price...” —Michael Pollan ([09:16])
[12:07 - 16:14]
Not everyone thinks in words; many think in images or even in "unsymbolized thought" (neither words nor images).
“Only a minority of people have an inner monologue that consists of words... the term [‘thought’] is an umbrella that covers many different styles of thinking.” —Michael Pollan ([15:12])
Infants, pre-lingual humans, and probably animals have thoughts without words.
[18:22 - 20:58]
Curiosity is one motivator for studying consciousness, but also concern: our consciousness is “polluted”—invaded by social media, politics, incessant digital noise.
“A lot of people feel like their heads are full of things they would rather not be there and that they’re being colonized.” —Michael Pollan ([18:45])
Threats to the privacy and autonomy of consciousness are growing, with implications from politics to AI.
[20:26 - 24:30]
Pollan recounts his psychedelic experiences, which defamiliarized his perception and led to insights—including the feeling that plants are conscious.
“There’s a way in which psychedelic experience foregrounds consciousness... the windshield through which you’re normally seeing the world is smudged and you notice it for the first time.” —Michael Pollan ([21:06])
Not all experiences or insights under psychedelics are “true,” but some can reveal valid aspects of mind and world: more sensory information, an emphasis on love, ego dissolution, etc.
[30:47 - 39:05]
Plants exhibit astonishing behaviors when considered at their timescale (e.g., roots navigating mazes). Experiments show they can remember, communicate, and differentiate between kin and strangers.
“The roots found the most direct path to the fertilizer. And if a mouse did this, we’d say it was intelligent.” —Michael Pollan ([32:05])
“Plants can be rendered unconscious by the same anesthetics that render us unconscious…” —Michael Pollan ([37:29])
The leading theory is that plants use "bioelectric fields," not neurons, to coordinate behaviors and “remember.”
“Neurons may be overrated... all cells can do what neurons do, they’re just slower.” —Michael Pollan ([35:17])
Sentience vs. consciousness in plants: Pollan prefers the term "sentience" for plants, meaning basic awareness without self-reflective thought.
[37:25 - 41:15]
If plants are sentient, is it moral to eat them? Pollan discusses different scientific views: plants likely don’t “feel pain” as it’s not adaptive, and some even benefit from being eaten (e.g., fruit, grasses).
“Pain would not be adaptive for a creature that can’t move away from it... they’re aware they’re being eaten, but they’re not feeling pain.” —Michael Pollan ([39:05])
Moral consideration relies on evidence of suffering; rights and ethics are human constructions enabled by our consciousness.
[38:39 - 45:49]
Recent thinking suggests consciousness originates in bodily feelings, not just in high-level thoughts (cortex). The body's signals (e.g., hunger) come first; the brain decides what to do about them.
“Feelings are how the body communicates with the brain... The brain exists to keep the body alive.” —Michael Pollan ([43:10])
This has implications for whether entities without bodies (like AI) can have consciousness.
[45:49 - 50:46]
Pollan is skeptical that current AI can be conscious, mainly because it lacks a body and feelings, and the brain isn’t just a computer (no hardware/software divide).
“The idea you could abstract consciousness like a software program and run it on another substrate is ridiculous.” —Michael Pollan ([46:56])
The concept of granting moral rights to conscious AI is discussed, with concern about creating a new form of slavery or distraction from current moral issues.
“If you want to lose control over these machines, give them rights, they’ll start suing us.” —Michael Pollan ([42:29])
“We are being pressed from two sides, and our sense of specialness is under challenge…this Copernican moment where we’ll have to figure out, who the hell are we? What makes us unique, what does it mean to be human?” —Michael Pollan ([51:02])
[53:42 - 56:50]
Pollan distinguishes between simulated and real art/emotion: real consciousness is likely necessary to create truly great art.
“I think that you probably need consciousness to produce great poetry. I think you can simulate great poetry. Part of art is a collab.” —Michael Pollan ([54:21])
The risk is that settling for simulation (“chatbot friends”) may impoverish what’s possible in real relationships and art.
“When we accepted the emoji as a substitute for emotion, I think we gave up something.” —Michael Pollan ([56:44])
On the limits of science:
"Science is just another manifestation of human consciousness… there is no godlike perspective." — Michael Pollan ([08:05])
On psychedelics and perception:
"The plants in my garden were returning my gaze. They were fully conscious. Very benevolent by the way, nothing scary about it." — Michael Pollan ([22:44])
On AI rights:
“If you want to lose control over these machines, give them rights, they’ll start suing us.” — Michael Pollan ([42:29])
On expanding concepts of sentience:
“We are being pressed from two sides…a Copernican moment where we’ll have to figure out, who the hell are we?” — Michael Pollan ([51:02])
On art, simulation, and loss:
“When we accepted the emoji as a substitute for emotion, I think we gave up something.” — Michael Pollan ([56:44])
The tone is inquisitive, rigorous but playful, and accessible. Preet frames big philosophical issues with humor and clear analogies, while Pollan brings depth and curiosity, sharing research, anecdotes, and the broader cultural implications. Memorable exchanges about “mindless mastery” in plants, the spectrum of consciousness, and the dangers of taking simulation for reality make complex topics relatable.
For more profound ruminations, ethical debates, and dazzling paradoxes on consciousness, listen to the episode, or better yet, pick up Michael Pollan’s new book, "A World: A Journey Into Consciousness."