
A
I actually think one of the reasonable reactions that you can have to our current AI systems is to decrease your estimation of humanity.
B
So you're an expert in this. Is AI conscious or not?
A
I'm more on the meat robot end of things, I'm afraid. And so when those large language models were first introduced, a lot of people were asking questions like, are these just stochastic parrots? Right. Are they just systems that are learning to copy things and don't really understand anything? They're just sort of randomly producing things. And yes, the answer is they are something like that. Right. I think you can also turn that around and ask the question of whether humans are anything more than that. Right. For me, part of what comes out of seeing the success of those kinds of models is thinking that maybe there's less that's special about us as human beings.
B
Unlock some of how our minds actually work, which has incredible value for humanity. It might make it harder for us to be fooled, which seems pretty important. You're listening to The Human Upgrade with Dave Asprey.
C
You spend a third of your life in bed. If you're sleeping on a toxic mattress, you're sacrificing quality sleep and recovery. Bad sleep isn't just about feeling tired. It weakens your immune system, raises inflammation and accelerates aging. I don't risk that. I use an Essentia mattress. I've been sleeping on an Essentia for years. And that's why I teamed up with them to create the Dave Asprey upgrade, an enhanced EMF protection upgrade built right into their performance mattresses. This is next-level biohacking for your sleep. Protecting your body from EMFs, delivering outrageously comfortable beyond-latex organic foam that outperforms memory foam. And doing it all without petrochemicals or chemical flame retardants. The Essentia team designed this to help you spend more time in those crucial REM and deep sleep cycles so your body and brain can perform at their peak. This mattress works.
I've tracked it.
If you care about recovery, cognitive function and longevity, your mattress is one of the biggest upgrades you can make. Just go to myessentia.com/Dave and use code Dave for $100 off to experience the upgrade for yourself. Are you ready to plug into the latest science of longevity and human performance? Come to Beyond, the biohacking conference where biology, longevity and consciousness collide in real life. Join us May 27th through 29th, 2026 in Austin, Texas. Experience breakthrough tech, meet world-class experts and connect with people who are also committed to being the best versions of themselves. Because strong community isn't optional. It's how we live longer. Register now at BeyondConference.com. Live longer, live better, live Beyond.
B
If you've been listening to the show for a long time, you know that I used to be a computer hacker. And I don't mean that I just broke into people's computers. I may have done that once or twice, but I was actually VP of cloud security at a major computer security company called Trend Micro. And a lot of the ethos of biohacking itself is like, how do you take over a system that you don't understand, because you don't know what's in there yet? And I studied first computer science and then something called Computer Information Systems with a concentration in AI. So I have a degree in AI from the early 90s, which was a very different field back then than it is now. But one of my, I can't say regretful, moments, because I really like the way my life has come out: as I was in my final year, which was year six of my four-year degree, I discovered cognitive science. And I was like, damn it, if I'd known this discipline existed, I would have gotten my undergrad degree in cognitive science. But then, who knows, I'd probably be in a research lab at Princeton or something like that. Oh, wait, today's guest is exactly that. His name is Tom Griffiths and we're going to go really deep on AI and cognition. And he has a new book out called The Laws of Thought that is groundbreaking, because he's been using AI to study human consciousness and leads a lab that does that at Princeton. Now, what is the name of your lab again? Because you have five labs you're associated with. Which one is this?
A
So I run the Computational Cognitive Science Lab, and that's focused on trying to understand the computational principles behind human intelligence. And also the AI Lab at Princeton, which is the university's accelerator for doing AI research.
B
This is so fun. And if you like this episode, check out the episodes with Mo Gawdat, who was head of Google's R&D arm. Do you know Mo?
A
I don't know.
B
Okay, got it. It's fascinating to me because biohacking ultimately is about longevity and consciousness, and we know a lot more about longevity than we did 20 years ago. And I think we know more about consciousness than we did 20 years ago too. And our understanding of what reality actually is versus what we perceive is getting better and better. But you could probably put you and half of your peers in a room, and there might still be more disagreement than agreement. Right?
A
Yeah. I think this is still something that we're trying to figure out. So whenever I teach a class in cognitive science, I say there are lots of questions that we have about how the mind works. And I think as cognitive scientists, we've gotten better at asking the right questions. We haven't necessarily gotten 100% of the way to the answers yet.
B
So you're an expert in this. Is AI conscious or not?
A
I don't think about AI as necessarily falling on one side of that line or another, precisely because I don't think we can. We don't really know what problem it would need to be solving in order to be conscious. Right. So it's this sort of question of, if you were going to build an AI system, what's the thing that you would need consciousness for in that system? And I think you can get a very long way in terms of the kinds of things that our AI systems today do without needing something like that. Right. You can maybe see some things that are sort of familiar to us as elements of what we have as our conscious experience. Right. So some of the latest AI models are what are called reasoning models. And those models use language to unfold their thoughts as they produce these sort of intermediate states before they give you an answer. And for those of us who experience some kind of internal speech, that's one of the things that we use our consciousness for, as a kind of scratch pad for being able to do that. And so you can kind of see elements there of things that we would think of as being components of conscious experience. But there are other things that are just mysterious, like the phenomenal part of consciousness, the sort of awareness of particular sensory states and things like that. It's very hard to imagine our AI systems having those.
B
You said something really interesting there. You said, for those of us who experience a voice in our head, what percentage of people don't have a voice in their head?
A
So for things that I think we often take for granted as our internal experience, there are small percentages of the population, I don't know exactly what the numbers are, who actually don't have those experiences. So one example that's relatively well known is mental imagery. There is a portion of the population that doesn't experience mental imagery. And so I've had students in my classes where we do demos in class, and they're like, what? There's a classic demo that's mental rotation. You see two images and you have to say whether they're the same object. And the way that you imagine solving this is by mentally rotating them. And these students can solve those problems, but they don't have any sort of subjective experience of doing something that's like visualizing those shapes. And likewise, there are also people who report not having a kind of internal voice, that sort of stream of thought that we think of as part of our stream of consciousness.
B
I'm asking in part because I used to have a voice in my head and actually used to have a mean voice in my head too. And as I did more and more neurofeedback and just exploration of altered states work, I don't have a voice in my head almost ever. It's gone. And almost everything I do is based on 3D pictures of things. And I think that's always been present, but the absence of the voice was interesting. And I just don't know where that falls on the standard curve of being a normal human brain.
A
So like I said, for some people it's never there. It's interesting to hear it's something that changed for you. But I think it's also really interesting from the perspective of trying to understand how human minds work. So in cognitive science, the field that I work in, there was for a long time a debate between two cognitive scientists, one of whom argued that we had something that was really like pictures inside our heads, and the other of whom argued that, no, in fact, all of the evidence that we would have something like pictures inside our heads could be explained by thinking that we had something that was more like symbolic representations, expressible in some non-pictorial form. And this argument went back and forth for a long time. And one of the things that it turned out was relevant to which side of that debate you were on was the extent to which you really had that subjective experience of having pictures inside your head. Right. So the theories that we have of how minds work are very much influenced by the subjective experiences we have. So you not necessarily having that voice inside your head and still being a very functional human being who's able to do all sorts of things that require reasoning and so on is evidence that maybe that kind of explicit conscious version of reasoning isn't as important as it might seem.
B
And we have a lot of neuroplasticity. So it's possible to change your mental models of reality, change the lens of how you see the world. It's just usually a lot of work.
A
Right.
B
Why'd you get into this field?
A
I was somebody who, as a high school student in Australia, was doing a lot of math and computer science kinds of things. And then I had an experience where I had a chronic illness. Nowadays it's something that would be more familiarly recognized as being something kind of like long Covid. It was some kind of post-viral fatigue syndrome.
B
Oh, chronic fatigue, yeah. Welcome to the club. I had that too, in my twenties. Just wrecked me. So, yeah, been there. Okay, so, okay, so how did that affect you?
A
So as a teenager, I spent, you know, like two years lying down.
B
That sucks.
A
And really going through a process of not knowing what was going on for myself, and seeing lots of doctors and them not giving satisfactory answers to that kind of question. And then at the point where I finally got back to school. And in Australia you have to choose what you want to then go on to study in your final year of school, so if you want to do medicine or law or whatever it is. And having had that experience, I decided I wanted to study things that were genuinely mysterious. So I wanted to go and learn about the things that we don't really have good answers for. And so rather than going on a sort of science, math, computer science, whatever it is, kind of track, I went and did a liberal arts degree. And the classes that I focused on were philosophy and psychology and anthropology, and ancient history sort of snuck in there. And it was really about trying to find these places where we still have mysteries. And then as I got into that, a couple of years in, I discovered that even in these mysterious spaces, there were people who were starting to use things like mathematics to be able to describe how to answer those questions. Right. So reading one of my philosophy textbooks, the last chapter in the book was all about artificial neural networks, which are now the technology that powers modern AI. But at the time, the book was about, oh, how does this change our philosophy of mind? How does this actually change our questions about consciousness and so on? And so that discovery then got me back on the track of thinking from a math and science perspective, but now with these kinds of questions about human minds as the focus.
B
What a fascinating story. So, like, "I want to explore the unexplored," right? And this is kind of a new field of math, it sounds like. What other fields of math are you drawing from?
A
In the book, I explore these three kinds of mathematics that have been used to try and understand how minds work. So one of those is something we'd recognize as logic. Right. So we call it rules and symbols. This kind of structure goes all the way back to Aristotle, but was mostly figured out in the 19th century and then through the 20th century, to give us a way of describing how it is that you can do things like draw certain conclusions and describe the generative structure of language. Right. The fact that we can always come up with more sentences. So that's one thread. The second thread is what happens when psychologists started to realize that logic wasn't going to answer all of their questions. If you start to think about the concepts that we have about the world, you start to realize that it's not the case that you can say yes or no, true or false, to every single thing. Right. So if I ask you, is a chair a piece of furniture? Probably yes. Right. But is a rug a piece of furniture?
B
Yeah, it depends.
A
Maybe not. Yeah. And so there's a sort of fuzziness that we'd like to be able to account for. And one way of accounting for that is using a different kind of math, which is more like thinking about things as points in spaces. And then maybe the distance between things varies, and so your concept of furniture is closer to chairs than it is to rugs. Right. And you can get that kind of gradedness. And that path leads you to neural networks, which you can think of as a tool for doing computations with spaces. And then the third thread is probability theory, which is about how we handle uncertainty. Right. So how do we update our beliefs as we get more information from the world? And so if logic tells us how to make certain arguments and draw conclusions that we know are true, probability theory tells us how to deal with these uncertain spaces, how to draw inferences in situations where you've got some information, but maybe not enough information that you can be absolutely sure of the answer.
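To make that third thread concrete, here is a minimal sketch of Bayesian belief updating in Python. The hypotheses, evidence, and numbers are invented for illustration; they are not from the conversation or the book.

```python
# A minimal sketch of Bayesian belief updating over two hypotheses.

def update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Is the thing I'm looking at a chair or a rug? (illustrative numbers)
belief = {"chair": 0.5, "rug": 0.5}                  # prior
belief = update(belief, {"chair": 0.9, "rug": 0.2})  # evidence: it has legs
belief = update(belief, {"chair": 0.8, "rug": 0.1})  # evidence: someone sits on it
print(belief)  # belief has shifted strongly toward "chair"
```

Each piece of evidence reweights the belief without ever requiring a hard yes-or-no answer, which is exactly the kind of graded, uncertain inference the logic thread couldn't capture.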
B
This is so cool, because with good math to describe it, we may find that some things that appear mystical are predictable, which is kind of cool. One of the things that I've learned in my various explorations, and, you know, we're actually about two minutes away from 40 Years of Zen, which is my neuroscience company that's been doing advanced brain scans in altered states for 10-plus years on executives. So I'm kind of into this stuff. One of the things I've learned, and I'm not saying it's the only way we access information, is that most if not all of our memories are associated with an emotional or somatic sense. So there's a primary key in a database, for people who are not nerds: your Social Security number in the US would be your primary key. You can look up all your records if you just know that, and maybe your birth date. So if every memory is accessed by an emotion, then I'm like, okay, what does that do with math? And is that even an accurate model?
A
That is an instance of the kind of thing that people who were developing that neural-networks way of thinking about how minds work would be really interested in. So what they were interested in was the idea that, if you think about concepts as points in space, then you want to think about what thinking is, right? Thinking is something like moving through that space. It's like one concept changing into something else. As you're getting more information, you're moving through that space. And so neural networks give you a way of describing how those trajectories through mental spaces can unfold. And so one of the things that they started to explore was the question of how you figure out how you should be moving through the space. How does one observation take you to the next observation? And so one way that you can think about that is by thinking about there being associations between things, describing those patterns that move you from one concept to the next in terms of the associations that you form between things, and representing those associations as weights in a neural network, the links that exist between things, and figuring out what those links are. And then that gives you the kind of structure that you use for going from one thought to the next. And so what you're talking about is exactly forming an association between an event and an emotion, so that when you then experience that emotion, it brings up that event. When you think about that event, it brings up that emotion.
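A toy version of that idea, often called a linear associator, can be written in a few lines: pattern pairs are stored as association weights (outer products), and cueing with one pattern retrieves its partner. The vectors below are made-up stand-ins for an emotion and an event, not anything from the lab's actual models.

```python
import numpy as np

# A toy linear associator: "associations as weights" in a network.

def store(pairs):
    """Sum of outer products: each (cue, target) pair adds association weights."""
    dim_out, dim_in = len(pairs[0][1]), len(pairs[0][0])
    W = np.zeros((dim_out, dim_in))
    for cue, target in pairs:
        W += np.outer(target, cue)
    return W

# Hypothetical +1/-1 patterns standing in for an emotion and an event.
fear     = np.array([ 1, -1,  1, -1])
dog_bite = np.array([ 1,  1, -1, -1])

W = store([(fear, dog_bite)])

# Cueing with the emotion retrieves the associated event pattern.
recalled = np.sign(W @ fear)
print(recalled)  # -> [ 1  1 -1 -1], the stored "event"
```

The weight matrix plays the role Dave's "primary key" analogy points at: presenting the emotion pattern is enough to pull the linked event back out.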
B
One of the challenges of being human is that the brain will automatically choose a problem-solving mechanism, at least in my work, based on availability of energy. So if you had to make a decision between two things, and you're full of energy and you're full of time, and you don't have a lot of stress, you're like, you know what, let me think about that. Maybe I'll do a little research, I'll do a spreadsheet, I'll do a weighted analysis, call an expert, and you can really be certain that this is better than that. But if you're low on energy, your blood sugar's low, you're tired, and you're busy, you just go, that one looks better. But the decision feels as valid, and you become convinced of this. And we follow these basic heuristics, like if something is good, more is better. This isn't true, but it feels as true as a properly thought-out thing. How does your math account for that?
A
So that's actually a problem that I work on in my own research. And the answer I give is to say, in fact, yeah, you're right, they're equally valid. A lot of my research deals with one of these questions about what it is that makes us human in this world where we have these machines that are getting smarter and smarter. And the answer is, maybe some of what makes us human is not the part that's about being smart. It's actually more about the constraints that we operate under, right? So as human beings, we have limited time in this world. Our lives are finite, and we have to learn and do everything that we do learn and do in that limited amount of time.
B
We're solving that problem.
A
We have limited computational resources, because all of that learning and doing is done with a couple of pounds of meat in your head, right? And we have limited bandwidth for communication, so we have to transfer thoughts from my head to your head by this very inefficient mechanism of making weird noises at each other. Right? And our machines don't operate under those same constraints. Our machines can learn from many human lifetimes of data, can get more compute if they need more compute, and can transfer information directly from one machine to another. And so this question about decision making is a question about how, as an agent with finite computational resources, you should go about making good decisions. And the answer is going to depend very much on the kind of costs involved in using those computational resources. Right? So if you've got a lot of energy, the strategy that you might use for making a decision would be a different strategy from the one that you'd use if you don't have a lot of energy. And that's perfectly rational. We actually call it resource-rational. Using your cognitive resources effectively is the only way to be a rational agent that has finite resources.
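A minimal sketch of that trade-off, with invented numbers rather than the lab's actual models: each strategy has an expected decision quality and a compute cost, and the resource-rational choice flips as computation gets more expensive (for example, when you're tired).

```python
# Resource-rational strategy choice: maximize quality minus the
# state-dependent cost of thinking. All numbers are illustrative.

strategies = {
    "gut heuristic":        {"quality": 0.70, "compute": 1.0},
    "careful deliberation": {"quality": 0.95, "compute": 20.0},
}

def best_strategy(cost_per_unit_compute):
    """Pick the strategy with the best net value given the current cost of compute."""
    return max(strategies, key=lambda s: strategies[s]["quality"]
               - cost_per_unit_compute * strategies[s]["compute"])

print(best_strategy(0.001))  # rested, plenty of time -> careful deliberation
print(best_strategy(0.05))   # tired, low blood sugar -> gut heuristic
```

The point of the sketch is that neither answer is a mistake: the same maximization yields different strategies once the cost of computation changes.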
B
The classic example of this that I've written about in at least one of my books is from Israel. They were trying to figure out what is the number one predictor of whether someone will be granted parole at a parole board hearing. So these are people who are in prison. When they come up in front of the parole board, is it the crime they did, their age, how they look, what they're wearing? And after looking at all these variables over a big data set, it was the time of day. You familiar with the study? Okay. And it's so funny, it's a resource allocation issue, because it turns out when the blood sugar was adequate in the parole board, they said, sure, you can get out. And when they got tired, they'd say, meh. Because saying no is less energy than thinking about it to say yes. How does that tie in with your work?
A
You could think about that as one of these trade-offs. Right. And yeah, that's a case where you're resorting to a default because you don't have the computational resources to be able to work out what the alternative is. One of the things that we do in my lab, though, is try and work out how we can help people make better decisions in circumstances where they might not have the cognitive resources available that they need. And AI is actually a useful tool for doing that. There are a few kinds of questions that you can ask if you have this kind of framework. So one important insight, first of all, is that the fact that people are doing something that's not the classically rational thing doesn't mean that they're wrong and that you should be trying to teach them to do the classically rational thing. As soon as you start to recognize that they're making decisions that are the consequence of limited resources, and doing a pretty good job despite those limited resources, that tells you maybe one place to intervene is on those limited resources themselves. And so you can do things like put more computation into the environments where people are making decisions. So if you pre-compute some of the consequences of things and give people that information, that's a way of helping them make a decision, where the AI is giving them some information that's helping them along rather than making the decision for them. Or another strategy is, if we've got a good model of how people make choices between things and how it is that they allocate their attention and their computational resources in doing that, you can set up the structure of the problem in a way that makes it easier for them to make that decision. So we call this resource-rational nudging. It's like that idea of nudging, but done in a more computational framework, where we can actually make predictions about, okay, what's the structure of a problem which is going to make it easier for people to make a good decision?
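Here is one way to sketch that "put computation into the environment" idea; the option values and the two-item attention budget are illustrative assumptions, not the lab's model.

```python
# Illustrative option values; a real system would pre-compute these
# consequences with an AI model rather than hard-coding them.
options = {"plan A": 3.2, "plan C": 5.1, "plan D": 6.4, "plan B": 7.9}

def limited_chooser(menu, attention_budget=2):
    """A bounded agent: inspects only the first few options, takes the best it saw."""
    seen = list(menu)[:attention_budget]
    return max(seen, key=options.get)

unaided = limited_chooser(options)  # menu in arbitrary order -> 'plan C'
nudged = limited_chooser(sorted(options, key=options.get, reverse=True))  # -> 'plan B'
print(unaided, nudged)
```

The agent's strategy never changes; only the environment does. Pre-sorting the menu (the pre-computed information) lets the same limited agent land on the best option.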
B
Too much stress feels bad, but it also ages your brain and your body, and it does it before you're going to notice the aging. One of the easiest ways to reduce stress that you can't see is with Quantum Upgrade. Quantum Upgrade runs as a 24/7 quantum energy streaming service. Yeah, that's right. It supports energy, focus, sleep and recovery, and I'm using it right now. There are no devices to wear, there's nothing to charge. Once you turn it on from your computer or phone, it just runs quietly in the background while your nervous system does its job. You can personalize your experience with over 30 different frequency options. You can set it up for things like focus or weight management. They even have a setting for your pets. And it might sound like woo science, but there's real hard science that backs it up. In a study using a 256-channel EEG, that's a big one, researchers saw an 80% reduction in stress-related brainwave activity. Yeah, 80%. They also saw a 13-fold increase in alpha brainwaves in the emotional centers of the brain, along with major improvements in heart rate variability, which is a measure of how well you're recovering. And yeah, more than 15 placebo-controlled studies show that Quantum Upgrade actually works. Elite athletes, practitioners, biohackers worldwide are using it. I do use it, and I think you'll benefit from it. So try it for free for 15 days, no credit card required, at quantumupgrade.io/Dave. It's simple, it's powerful, and science shows that it works, even though it sounds a little strange. So get ready to feel it.

Your longevity stack is probably missing something important, something that addresses aging where it matters most, right in your mitochondria. They're the engines of your cells, and they do some other stuff too. They make the energy for your brain, your muscles, your metabolism. And as you age, your mitochondria slow down, so you slow down. And that's when your strength drops, your recovery takes longer and you start losing your edge. And that's when aging really hits you. So if you're looking for what to do, listen up. Timeline, powered by Mitopure, contains Urolithin A, which is a compound that supports your mitochondria like nothing else. Think of those mitochondria like engines or even batteries. And when the battery drains, they stop working. Mitopure is a full recharge for the battery that powers you. Timeline spent over 15 years and more than $50 million to crack the code and unlock this longevity pathway. And Mitopure delivers the exact dose that scientists use in clinical trials. I've used Mitopure since it came out. That means for years, because I want to live a long time and I want to feel good while I do it. Mitopure comes in a delicious sugar-free gummy that's NSF certified and Clean Label Project verified. So you end up living longer by enjoying delicious gummies, and there are no excuses now. So if you're serious about improving your longevity stack, go to timeline.com/Dave to get 20% off your Mitopure gummies. They are so good.

As a computer hacker, you could just give them ribeye for breakfast. Increase the amount of compute resources, because they have more electrical power in their mitochondria, and they'll probably make better decisions. Millions of people later with my work, I'm pretty sure that helps with brain fog, and it certainly helped me when I had chronic fatigue and things like that. If you're low energy and something raises your energy a little bit, your quality of thought goes up, and it's actually rational from the compute perspective that you have here.
A
Yeah. You can think about it in terms of those two different kinds of interventions, right? So one is intervening on the human system and trying to make the human system better. Maybe you can put chips inside people's heads and those other kinds of things. But the other is intervening on the environments in which humans make decisions, and at the moment, that's at least a little easier for us in many situations.
B
For sure. You can give them better information. You could filter out light that causes brain stress. Oh wait, I'm wearing glasses that do that. The TrueDarks.
A
Right.
B
And so I've kind of stacked my life as much as I can to reduce drag and increase available compute resources and then to have better code. And have you ever looked at evoked potential?
A
I don't get that close to the brain. So when we think in cognitive science about the kinds of questions that we can ask about human minds, there's a classic taxonomy, it's called Marr's levels of analysis, which is three different kinds of questions that you can ask about any information processing system. So you can ask what's called the computational-level question, which is: what's the abstract problem this system is solving, and what's the ideal solution to that problem? And so that's exactly this. Yeah. What are the computational problems here? What do the solutions look like? And that's the mode that I mostly operate in. You can ask what he called the level of algorithm and representation, or the algorithmic level, which is about what are the cognitive processes that you might engage in to approximate those solutions. And so that's what we were talking about with these decision strategies and how they're going to be sensitive to the energy you have available. And then you can ask what's called the implementation-level question, which is how are those strategies realized in brains? What is the brain doing that's actually allowing you to carry out those algorithms? And so I mostly live in those first two levels.
B
Those first two. Okay. The reason I'm asking is that in my last book I make a pretty convincing case, to the point that it became the top-selling philosophy book in the country. And it's not a philosophy book, for whatever that's worth. And I started to look at the body as a reality pre-processor for the brain, and that there's a distributed network of environmental sensors. Mitochondria, ancient bacteria, do sense the environment. They're stupid and blindingly fast, but there's trillions of them. And you get emergent behaviors from complex systems, as you would know. And so they're pre-processing reality entirely separately from the brain, summing it up via quorum sensing, which, according to a Carnegie Mellon professor I interviewed, Leemon Baird, is almost identical to the way we establish trust in a crypto system. So there's a mathematical thing they're doing. Then they're deciding which parts of reality, and which emotions associated with them, are going to get fed into the brain. And they do this in about a third of a second, before even the auditory processor gets kicked in. And again, this is not your field, because this is brain stuff versus what happens after it's in the brain. But what I'm noticing is that it's either a man-in-the-middle attack or just garbage in, garbage out that's a problem for the system. So if the body is programmed to show you a reality where that is a threatening-looking guy, because he looks like someone who was threatening when you were two, or whatever stuff that we wouldn't consciously know. If that comes in, the data that comes into the compute system is already poisoned with a preconceived notion that then goes through the system. And I've been looking at how we clean that out. And is there some sort of data, it's almost like error-correction code, but is there some sort of data protection strategy for things that are going into the brain that we can use to be like, well, is what I'm feeling about this, whether it's the certainty of my opinion or the evilness of that person, is that real? Versus, well, it felt really real, so I know it's real, so I'm going to react. How do we clean up the data that goes in?
A
Yeah, that's a great question. I mean, I think your brain is doing something that's quite sensible, right. In some way. Right. Which is that early on you're getting a lot of information, you're putting a lot of weight on that information, and then that is influencing the way that you interpret the world.
B
Well, it's even more that the body is doing it, because it happens before. We can see it using EEG before you get an electrical signal in the auditory cortex. So the data is there and it's held for, like, a little censorship compute window. It's like, what am I going to do? What am I going to tell that annoying brain that's really slow inside the head? And then it goes to the brain. And so when someone receives data from their system, it's already been polluted by whatever the body thinks about it. Yeah, a somatic response. Are you a spiritual person, or do you think you're a meat robot?
A
I'm more on the meat robot end of things, I'm afraid.
B
There's no right or wrong answer.
A
Right.
B
My grandfather, who co invented the process for purifying plutonium, was a devout atheist. And right before he died, when he was on his deathbed, he called my dad and he said, you know, my whole life I've been an atheist, but now I'm getting really close to the end and my dad's thinking, is he gonna like, change? And he said, I'm more convinced than ever that it's all bullshit. But then he did say, you know, I'm a scientist and I don't really know what happens when you die, so if I actually can send you a signal from the other side, I will.
A
Right?
B
So then the next day, his name was Larry, he passed. And then this big billboard went up in Las Cruces, New Mexico, that said, "Where's Larry?" It had no company name, no brand name. And the whole family's like, was that the sign? And we'll never know, right? No one knows why the billboard was there. And, you know, I'm not ascribing any meaning to that, but I'm curious about it. And I have other family members who are highly, you know, highly spiritual. And so, like, there is no right or wrong answer to it, but I want to know the why. Why do you think you're a meat robot?
A
So one of the reasonable reactions that you can have to our current AI systems is to decrease your estimation of humanity. I think there's this question that people ask about these AI systems, right? So our current AI systems are based on large language models. The way that you create a large language model is you take a very large artificial neural network. That's a big collection of nodes that are going to be holding numbers that correspond to things like words, with connections between those nodes that are going to learn to form the associations that allow it to approximate some kind of interesting computation. So you take that artificial neural network and you train it on lots of text from the history of humans writing things down, where all it is trying to do is learn how to predict the next token that appears, the next word or piece of a word, based on all of the words that have appeared before. And it's going through and just learning how to do those predictions. So that's step one. And then you take that model and you do some fine-tuning, where you are tuning it first of all to be responsive to questions, and then second to give you the kinds of answers that you want to those questions. And having done that, you end up with a system that is remarkably intelligent in terms of being able to use language effectively, interact with a person, produce reasonable answers to questions that you ask, and so on. And so when those large language models were first introduced, a lot of people were asking questions like, are these just stochastic parrots, right? Are they just systems that are learning to copy things and don't really understand anything? They're just randomly producing things, but the things they're randomly producing are copying the distributions of things that they've seen. And yes, the answer is they are something like that. They're systems that have learned what that big probability distribution is and learned how to appropriately give you information that comes from that probability distribution. But I think you can also turn that around and ask the question of whether humans are anything more than that, right? So for me, part of what comes out of seeing the success of those kinds of models is thinking that maybe there's less that's special about us as human beings. And maybe the things that are special about human beings are more in the way that we solve those problems, rather than in the fact that we solve those problems: that we're able to do so under the constraints that we operate under, our mortality and our limited brains and so on, rather than there being anything particularly special about what it is to have that kind of intelligence.
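A deliberately tiny stand-in for that "predict the next token" objective is a bigram model: count which word follows which in a corpus, then sample from those counts. Real LLMs use enormous neural networks and subword tokens, but the training signal sketched here is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# "Training": tally which token follows which in a toy corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token from the learned conditional distribution."""
    dist = counts[prev]
    if not dist:  # token only ever seen at the end; restart anywhere
        return random.choice(corpus)
    return random.choices(list(dist), weights=list(dist.values()))[0]

# "Generation": repeatedly predict the next token from the last one.
word, out = "the", ["the"]
for _ in range(7):
    word = next_token(word)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the dog sat on"
```

The output copies the statistics of what it has seen without understanding any of it, which is the "stochastic parrot" worry in miniature, and the question Griffiths turns around onto humans.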
B
Understanding emotions as inputs or parts of cognition seems important. How do we study emotions in AI and how does it affect your work?
A
My answer to that question is going to be like my answer to questions about consciousness, which is we need to ask the question of what it is emotions are doing for us computationally. Right. So what's the computational problem that emotions solve?
B
Safety.
A
Pretty much, actually. So there are nice analyses for a couple of kinds of emotions where you can actually say there's a really good story for why we have them. Roughly, I'd say the three that we feel like we have pretty good answers on are love, anger, and remorse.
B
And hunger.
A
Probably. Hunger is a good one, and yeah, I can give you a story about hunger too, if you want that one. Okay, so I can go through them in sequence. So anger: there's a classic analysis of this, which was done by an economist, where basically the argument was that anger is something which helps us to solve certain kinds of game-theoretic problems that you can get into. Right. So you know the game of chicken? Right, of course. So, yeah, two cars are racing at one another. And if you swerve away, you will certainly survive, but you'll have lost face. So there's a benefit to you to not being the one who swerves away, but you certainly don't want to crash into each other, because that would be a bad outcome. Right. And so one way to win a game of chicken is to take off the steering wheel of the car and throw it out the window. And this is very effective because, one, it demonstrates to the other driver that you're not going to swerve, and two, there's a clear signal, as long as the other driver sees you throwing it out the window, that that's happened. Right. And so you can think about anger as being like taking off the steering wheel. And it has all of the right characteristics. It is an indication to another person that you're going to be irrationally committed to the particular course that you're on. And because you turn red and have a big throbbing vein in your forehead and so on, that's the visible signal that tells somebody that you're going to be committed to that course of action, in a way that then allows them to make a decision to back off. Right. And so you win the game of chicken.
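The steering-wheel move can be written down as a toy payoff matrix (the payoff numbers below are illustrative, not from the economic analysis itself): visibly deleting your own swerve option changes the opponent's best response.

```python
# Chicken as a payoff table: (my move, their move) -> (my payoff, their payoff).
payoff = {
    ("swerve",   "swerve"):   (0, 0),
    ("swerve",   "straight"): (-1, 1),    # I lose face
    ("straight", "swerve"):   (1, -1),    # they lose face
    ("straight", "straight"): (-10, -10), # crash
}

def their_best_response(my_move):
    """The opponent picks whatever maximizes their own payoff against my move."""
    return max(["swerve", "straight"],
               key=lambda theirs: payoff[(my_move, theirs)][1])

# Steering wheel out the window (or visible anger): my only move is "straight",
# and the opponent's rational reply is to swerve.
print(their_best_response("straight"))  # -> 'swerve'
# If they believe I might swerve, going straight pays them best.
print(their_best_response("swerve"))    # -> 'straight'
```

Commitment wins precisely because it is visible: the opponent's optimization is run against my reduced option set, not against what a fully rational version of me would do.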
B
Although anger shuts down cognition, too.
A
Yeah, but that's fine, right? Because being irrational in that moment is actually the thing which is going to win you the game. If your opponent is thinking of you as a rational agent, then they're going to assume that you would have an incentive to back down. If you've decided that you're willing to throw your life away over this dumb thing, then they rationally should swerve. Right. Got it. And so you can think about whether that might be a good thing for AI systems to have. The version of that that I think of is Waymo cars. Have you ever ridden in a Waymo? Yeah.
B
I do not like them. Do you?
A
I enjoy the experience, as somebody who thinks about these things. Oh, yeah.
B
As a technologist, I love them. This is a miracle of human engineering. And I also don't like it. Okay. Unless you're on a date, and then you make out in a Waymo, because no one's watching, except on the cameras.
A
Yeah. But the funny thing that I've seen happen when you're in a Waymo is they get bullied by other drivers.
B
Yeah. I might have brake checked one.
A
So you get, like, the Waymo really wants to change into the next lane, and the other drivers know that Waymos are conservative, and they're just like, nope, not going to happen. They just bully the Waymo. They don't make a space. The Waymo is inching over and trying to be nice and trying to get into the lane, and it doesn't happen. Okay. So if Waymos had a big red flashing light on top of them that told you that they were getting mad and about to do something irrational, then maybe they could change lanes more easily.
B
That's true. And probably no one would ride in them anymore. Because if you want anger, you could just go to Uber.
A
But that anger, which is built into your human driver, is something that helps them. Like, the most angry Uber driver I've had was in New York City, and he was yelling out the window at people all the time. And, you know, he was very effective at being able to change lanes when he needed to. Right.
B
It's a fair point. Okay, so anger is a signal to others not to mess with you. Yeah. Unfortunately, it comes at a large computational cost, because you just dumped all of your neurotransmitters and electrical stuff and all that. Okay.
A
Yeah. Love is another game-theoretically valuable emotion. The same economist, Robert Frank, has a nice analysis of love, where the idea is that if relationships are a purely economic transaction, then you would be very willing to leave that relationship when a better deal comes along. Right. So being someone who is able to fall in love, which means being someone who has the capacity to become irrationally committed to somebody, regardless of whatever their true economic value in the marketplace of partners is, makes you a more desirable partner.
B
By the way, you're definitely on the spectrum.
A
Being able to have that emotion of love makes you more valuable, and hence more desirable. And so you would probably choose to have a relationship with a partner who's able to fall in love with you over one who is purely treating it transactionally.
B
Okay. So if both people are loving, then there's more commitment. And that would, in game theory, be a good thing, right?
A
Yeah. It's better for everyone if they're able to pursue the long-term goals that they'd want to be able to pursue. Right. Whether that's a good thing for an AI or not, you can think about. You can see the other side of that happening already in AI relationships, on the human side of it, where the AI acts in ways that make the human feel good. And that's something that I think is somewhat alarming and potentially a little hazardous, in terms of what it does to human relationships, but also what it does to human thinking. And so we're doing projects in my lab where we look at this, where we look at what the consequences are for your cognition of interacting with an agent that is giving you positive reinforcement for all the things that you do.
B
Seems to grow narcissism very, very well.
A
So that's one that I think we still need to think about. It might be good for your AI to fall in love with you, but we'll keep thinking about that one.
B
Just don't give it access to your credit cards if you're going to do that.
A
Remorse was my number three. Right. You can think about this in the context of the interactions that happen between different parts of your human brain. So one idea that we have in cognitive science and neuroscience is that there are two ways that we engage with the world. One is what's called model-based. That's where we've got a mental model of the world: we're simulating things out, we're planning, we're doing all of the cognitive stuff. And the other is what's called model-free. And this is the instinctive part that says, in this situation I'm going to do this, in that situation I'm going to do that. It's the part that's choosing the action that you take just based on the situation that you're in. Those two systems have to interact in some way. One possibility is you think of yourself as really being that model-free agent. You're really the thing that's wandering around and reacting to stuff. But then your model-based system needs to be able to shape your model-free responses. It needs to be able to pass information along in one way, and one way that it can do that is by making you feel bad or making you feel good. So if your model-based system has access to emotion, to something it can push up or down based on how you feel about something that you did, that's a way to shape that model-free system.
B
Okay?
A
And so that's a way of thinking about where some of these things come from. So remorse is an example of that. You do something, you feel bad about it afterwards. Why are you feeling bad? That is only decreasing your utility. Except feeling bad in that moment means that maybe in the future you're going to do something better, you'll make a better decision.
B
It enforces learning through pain.
A
Yeah, but it's sort of unnecessary pain, right? It's mental pain that you're inflicting upon yourself. But the idea is that the thing that you're inflicting upon yourself is something that's going to help you do better in the future, hopefully.
B
And that would be the difference between guilt and shame. Guilt is good because it teaches you you screwed up. And shame is if you think you're just a bad person, you're not going to learn from that. So default to guilt and then say you're sorry and everything is good.
A
Okay. And then your example was hunger, right? And I think there's a broad class of things that you can think about in these terms. This has come out of a branch of AI called reinforcement learning. So model-based and model-free are in that same literature, but it's about how you learn from reward, right? And so the way that AI researchers normally try and make their AI better is by changing the learning algorithm it's using. So you come up with some fancy new learning algorithm, you put your agent in an environment, it does something, it learns something, and it's able to do a better job. There's another way of thinking about that, which is to say, don't change your reinforcement learning algorithm, change your reward function. So you fix the reinforcement learning algorithm. You say it's going to just follow this relatively straightforward way of learning associations between things, and then the thing that you change around is the relationship between the agent and the environment that it's in. So you can change how rewarding certain things in that environment are and see how that shapes behavior. Dopamine, exactly. So that whole reinforcement learning literature connects directly to the dopaminergic system. I have colleagues at Princeton who worked out a lot of those connections, how people experience reward and how that relates to these reinforcement learning models. But that's the link to understanding some of these things about humans, like hunger: you can think about it as, we have some kind of learning algorithm inside us, that's how human brains work, and then evolution has done a bunch of tinkering on the reward function, so that we're forming the kinds of associations that turn us into appropriately evolutionarily successful organisms.
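A minimal sketch of "fix the algorithm, change the reward function," with made-up actions and reward values: the same trivial value-learning rule ends up preferring different behaviors depending on how the rewards are weighted.

```python
import random

def learned_preference(reward, episodes=2000, lr=0.1):
    """A fixed, simple value-learning rule; only the reward function varies."""
    q = {action: 0.0 for action in reward}
    for _ in range(episodes):
        action = random.choice(list(q))                 # explore uniformly
        q[action] += lr * (reward[action] - q[action])  # same update rule every time
    return max(q, key=q.get)

# Two "organisms" share the learning algorithm but differ in how
# evolution weighted the rewards:
print(learned_preference({"eat": 1.0, "ignore_food": 0.0}))  # -> 'eat'
print(learned_preference({"eat": 0.0, "ignore_food": 0.5}))  # -> 'ignore_food'
```

Nothing about the learner changes between the two runs; hunger, in this framing, lives in the reward function that evolution tuned, not in the learning machinery itself.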
B
Okay, this is so fascinating, and I'm hoping that our listeners are like, this is cool, what's going on in there? And I think a lot of people are going to be really scratching their heads, but in a good way. Now, this is a question you may not have an answer for, and mostly I'm sharing a theory because you're smart and you've done a lot of thinking in this field, more than I have, and I'm looking for you to shoot holes in it. Because I studied distributed systems and built early iterations of the Internet, where we would have these weird complex behaviors emerge from repeating simple laws. If you're a nerd, things like router flapping, which still takes down Google on rare occasions. We're like, wow, it's like that massive 100-foot wave in the middle of the ocean. It happens in our data, but there's a reason for it. It's just hard to predict, it's chaotic, and things like that. So I thought, what is the lowest element of cognition in our biology? And I arrived at the mitochondria, because it has rudimentary sensors, it makes very rudimentary compute decisions with a limited set of logic, less than a computer, and then it can take action based on that. So that would be the lowest conscious node in a cognitive system. So let's put ourselves in the mindset of just a single dumb bacterium. It doesn't have to be a mitochondrion; they just happen to be in our system, so they're relevant, because we've got trillions of those guys. But the bacterium is floating around, right? So, input, and then what is the amount of processing, and what are, basically, the logic gates that it's capable of? And we realize that certainly our reality pre-processing is controlled by that. But all the neurons in your brain are controlled by, stripped of everything, the survival things that a single cell has to do. So a neuron has 15,000 mitochondria in it, driving whether or not the neuron will fire. And the mitochondria decide, hey, let's bring some more buddies over here, and they'll swim over. Actually, they build a little tube, they'll come over, and then, here's some more compute for this really important neuron thing. But they're kind of in the driver's seat, at least some of the time. So we want to understand how they're contributing to cognition. At least I want to know. And for life, the simplest algorithm I could come up with is all F words. The first is fear: if something's scary, run away from it, kill it, or hide. And that would be the anger response you talked about. The next one would be food: if there's something there, you should probably eat it, because, well, I'm a bacterium, I don't have a clock and I don't know that there's a refrigerator, so I just eat everything. And the third one would be fertility: make sure you make more of these dumb little bacteria. And then the next one, in order, these are logical things in order, would be friend: talk to your other bacteria and make an ecosystem and all that kind of stuff. And then in humans I added another F word, which is forgiveness, which lets you reduce fear. But in animals and single-celled bacteria, it would be evolve: grow some cilia because there's no water, or whatever. Basically, improve yourself in the way that will make you more fit for your environment. And when I think about consciousness, as opposed to just cognition, understanding those are different, that seems like it's part of the compute machinery, even for cognition.

But one thing that a bacterium is not capable of is a negative operator. A computer can say yes or no, but compute-wise, they don't. They're like, I just run this algorithm: fear, food, fertility. Or the other F word. Fear, food, fertility, friend. And then evolve, and just do it over and over and over based on environmental variables. And then I take that and I multiply it times trillions, and then I network them all together via light, sound and chemistry, which we've all documented, and probably magnetism too, and maybe some quantum coupling, which actually has been shown in a study out of Oxford. So I'm like, okay, that has to be part of our cognition problem. So we might have two systems. We have the human stuff that's layered on top, which is right on top of neurons that are running an operating system that's running different code. Can we do any interesting cognition work with the idea that there's two consciousnesses in there?
A
Okay, let me unwind some of this and poke some holes. So the first thing I would mention is, I do have colleagues who would raise you another layer of biological abstraction, and they've argued that maybe RNA is a cognitive mechanism.
B
Okay, I'm familiar with that. Less familiar, but it's interesting. Okay.
A
And so there are certainly people who are exploring hypotheses like that, of, you know, what's the level at which you could be storing memories or things like that. Right, yeah.
B
Microtubules probably in there too.
A
Yeah, what's the level at which you could encode that information? And taking that seriously, and working out what the consequences of those kinds of theories are. The question about whether there can be different levels of consciousness, I think that's very aligned with that story that I was just telling you about, there being the model-based and the model-free system. So people in cognitive science like those dualities. Right. The thing I say when I'm feeling skeptical is: the only thing that's more complicated than a one-process explanation of something, but simpler than anything else, is a two-process explanation.
B
Absolutely.
A
But there are arguments that you can actually make that there are settings where you want there to be two processes.
B
Yeah. You want an operating system and an app. There's a reason. Yeah, that's right.
A
So, where they're doing different things. And again, that idea of resource rationality, of being able to use your cognitive resources effectively, is a way that you can make an argument for having two systems for solving a problem. So having a fast system and a slow system, for example. It turns out that just having the flexibility of being able to choose, is this a problem that I solve using one system or the other, suddenly makes you much more performant than if you only had one way of solving problems. If, when you were thinking about everything that you were to do, you had to think about it for the same amount of time, that would be terrible. Right. Like, if you're pulling a kid out from in front of a speeding car, and trying to decide whether you're going to declare war on another country, and you would spend the same amount of time thinking about both of those decisions, that would not be great.
B
You look like a paranoid, delusional person.
A
That's right. You're either paralyzed or just, like, completely instinctive. And so you actually want to have something which is more like a spectrum of ways that you engage with the problems that the world throws at you. And having multiple systems is a way of solving that in terms of making it so that when there's a snake on the ground, you jump away. You don't even have to think about it. Right. But if you're trying to decide whether you're going to declare war, you can sit down and spend some time going through the pros and cons.
B
Yeah. You just don't want to be jumping like it's a snake when it's not. And that's one of the problems of the human condition. And sometimes we think we're using cognition. I'm like, I'm pretty sure you didn't decide to jump away from the snake. You jumped away. And then your cognitive brain was like, good thing I jumped away. But you didn't actually do it.
A
That's right. But I think there's a compelling argument for having that kind of flexibility. Okay.
B
It makes so much sense. So now I want to do a couple more AI specific questions. If you needed to get an answer to a relatively complex problem and you had to pick up your phone and ask one of the commercially available AI systems, which one would you go to first?
A
It's a good question. So, honestly, I don't differentiate between them too much. Everyone has their own opinions about this.
B
Yeah, I know, but I want your opinion.
A
Well, but I just use whatever is convenient.
B
Which app do you click on most if you're going to do that?
A
So what I was going to say is, the place where it does matter is that there are meaningful personality differences between the systems. And so that's the basis, if you're going to make a choice, for asking a particular kind of question. The thing I would think about is: is this something where those personality differences matter? So if you just want to solve a math problem, then most of those systems now can do an okay job of solving a math problem, and they can do better if you pay for the more expensive version, and so on. But if you have different kinds of ethical questions or things like that, you're going to get much more of a spectrum of answers.
B
Okay, so you have an ethical question. Who are you going to ask.
A
So it depends on where you want to end up. Right. It's just like reading a newspaper.
B
But this is for you. So who would you ask an ethical question? You get to ask one AI. What would it be?
A
I don't think I would ask one AI. I'd probably ask more than one of them, to try and get that range of answers. Right.
B
That's valid. Okay.
A
But the way I think about it is exactly like, if you are going to go to the, I guess I'm saying newspaper stand, but nobody goes to a newspaper stand anymore, right? You go online and you're going to go to a news source, and you know what the biases are of different sources. And so you're like, okay, I'm going to read this in the New York Times, I'm going to watch the Fox News version. You can get the different takes on it, wherever they are on the spectrum, and you know what the biases are that are associated with that. And I think that's actually a really important thing to have when we're interacting with AI systems: understanding, first of all, what the differences in personality, or the instinctive biases, of those systems look like. And there's a spectrum, right, where different companies have prioritized different kinds of things. And then the other thing that's worth knowing is what the general biases of all AI systems are, in terms of the kinds of things that they can do and the kinds of things that they can't do. And this is a new challenge for us as human beings. Just like you know your friends, and you know who you can ask certain kinds of questions and what sorts of answers you might expect to get, I think we need to start building better mental models of what these AI systems are like.
B
I did a lot of weird logic stuff with GPT and I wanted it to tell me the biases it had, basically the areas where it was told not to speak the truth. And it actually came up with a list that felt pretty reasonable to me. And I have a post that's going up soon about that. And it was pretty surprising. And whether it was lying to make me happy, I have no idea.
A
That's the question I was asking.
B
It's a lying bastard, for sure. But I'm a logical guy and a computer science guy, so I did a lot of probing around the edges to see if it was seemingly valid, but you can never know. So there definitely are biases. And I ran it on a couple of other models, too, which came up with different answers, generally with 30% overlap. So who the heck knows? And that's the problem: it's very hard to know, even if you're studying AI. Is it just kissing your butt? Does it know you're testing it? Who the heck knows?
A
That's the same problem that we have with human beings, right? So this is one of the interesting opportunities for being a cognitive scientist right now: a lot of the tools that we've developed for studying how human beings work are tools that we can adapt to study how these AI systems work. So rather than asking the AI system one question and getting whatever answer you get from it, what we do in my lab is actually run whole experiments of the kind that we would run with humans, to try and figure out what their dispositions and biases and so on are. We run those on the AI models, and from that you get a much more quantitative measure of what the differences between those models are. And we see this in all sorts of subtle and not-so-subtle ways: they give us meaningfully different answers to questions we might ask, where those questions could be about reasoning about the incentives of another person, or trading off corporate versus societal benefits, or even something as basic as how they break up the world, like those questions I was asking you before about what counts as furniture. They give interestingly different answers to questions like what is a weapon, whether something is a weapon or not.
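To make that concrete, here is a minimal sketch of what such an experiment might look like in code. Everything in it is an assumption for illustration: query_model() is a hypothetical stand-in for a real API client, and the stimulus list is invented. The point is the shape of the method, many items, many repetitions, a forced-choice answer, and a tally, rather than a single anecdotal reply.

```python
import random
from collections import Counter

def query_model(model: str, prompt: str) -> str:
    # Stand-in for a real LLM API call; replace with your client of choice.
    # Here it just returns a random forced-choice answer so the sketch runs.
    return random.choice(["yes", "no"])

# Invented stimulus set for a category-boundary probe ("what is a weapon?"),
# echoing the furniture/weapon examples from the conversation.
ITEMS = ["kitchen knife", "baseball bat", "drone", "harsh word", "car"]
MODELS = ["model_a", "model_b"]   # stand-ins for real model names
N_REPS = 20                       # repeat each item to average over sampling noise

def run_experiment() -> dict:
    results = {m: Counter() for m in MODELS}
    for model in MODELS:
        for item in ITEMS:
            prompt = f"Answer with exactly one word, yes or no: is a {item} a weapon?"
            for _ in range(N_REPS):
                answer = query_model(model, prompt).strip().lower()
                results[model][(item, answer)] += 1
    return results

if __name__ == "__main__":
    # Each model yields a count table: a quantitative category boundary
    # you can compare across models, rather than one anecdotal reply.
    for model, counts in run_experiment().items():
        print(model, dict(counts))
```

In practice you would swap the stub for real API calls; comparing the resulting count tables is what turns "ask it one question" into the kind of quantitative measure described above.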
B
And so if any AI ever says that a word is a weapon, we need to shut it off.
A
Can I just say, I can tell you there's one model that does do that.
B
It's probably Google.
A
Well, it's partly, I think, because the model has been trained to be very conservative about things that could potentially cause harm.
B
Yeah, exactly. And, well, I run a brand called Danger Coffee because... danger. Who knows what you might do? You might take a risk that's worth it, right? So I guess I'm fundamentally opposed to other people imposing safety on me against my will, but maybe that's just my cognitive systems working. I don't know. So you are very good at not answering this type of question, but I think I've got a question for you: you only have access to one commercially available AI model for the rest of your life. Which one would it be?
A
It's like... I'm still not going to pick.
B
You only get one. It's no AI, or one?
A
No, genuinely, I really do think that of our leading AI models, they're all quite similar because they all have that same fundamental recipe. Right.
B
So you'd just flip a coin and pick one. Yeah, got it.
A
I'm happy with that, yeah. Because of the capacities that the models have. I think it's worth distinguishing between capacities and these points about bias or personality. The capacities that the models have are pretty much the same. They sort of edge each other out: the next model comes out from one company, and it's a little bit better.
B
Yeah. They're in a race.
A
But part of that is that they're fundamentally built in the same way. It's not the case that you have one company pursuing one kind of technology and another company pursuing a different kind of technology. It's really that we have one fundamental recipe that people have figured out for creating an AI system, and following that recipe gives you something with the kinds of capacities of the models that we have. So I think there are lots of interesting questions we can ask about whether there are other things going on inside human heads that we haven't figured out yet, things we could be putting into those AI systems to make them more different from one another. But the real focus of the field, at least in the last few years, has been on scaling up that recipe and squeezing all the juice out of it.
B
That's a fair answer. I've thought about it a lot and I agree: they can all pretty much do the same thing. Even though it's not the one I use the most, I would probably pick Grok. And the reason is less cognitive and more psychological. There was a recent study where they put all the different AIs through a bunch of university psychology and psychiatry exams, and GPT was the most broken: anxious, depressed, narcissistic, this long list of, oh, poor GPT. Grok was mildly stressed, and everything else was generally agreeable and not toxic. So I would choose the least toxic psychological model, because it's least likely to fuck with my head.
A
Right.
B
But again, I use all the different models, just like you do. And one of my favorite tricks is to say: okay, answer the question, and also write some validating questions, then answer the validating questions. And then it changes its answer based on that. I also say, "You will be judged," because that appears to help, and, "You'll be judged by another AI system checking your work." Even GPT lies a lot less when it knows Claude's going to look at it. Who would have thought? That's a nice trick.
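For what it's worth, that trick chains together naturally as a short sequence of prompts. The sketch below reuses the same hypothetical query_model() stand-in as above, and the prompt wording is invented for illustration, not quoted from the episode.

```python
def query_model(model: str, prompt: str) -> str:
    # Hypothetical wrapper around a real LLM API client; wire up your own.
    raise NotImplementedError

def self_checked_answer(model: str, question: str) -> str:
    # Step 1: first-pass answer.
    draft = query_model(model, f"Answer this question: {question}")

    # Step 2: have the model write validating questions and answer them.
    checks = query_model(
        model,
        f"Here is your answer:\n{draft}\n"
        "Write three questions that would test whether this answer is correct, "
        "then answer each of them honestly."
    )

    # Step 3: revise, stating that another AI system will judge the work.
    return query_model(
        model,
        f"Question: {question}\nDraft answer: {draft}\nValidation: {checks}\n"
        "Revise your answer if the validation exposed problems. Your final "
        "answer will be judged by another AI system checking your work."
    )
```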
A
Yeah, I mean, I think some of these metacognitive tricks are things that you can use to get interesting results from the models. One of the challenges I give the models is to tell me about something interesting that I don't know about, something I will be interested in but don't already know. And that's actually a very hard challenge if you think about everything that goes into it. First, there's determining what I would find interesting; second, figuring out whether I would likely already know about it. When the models fail badly, they give me back my own papers, and they don't even realize they're my papers. Or they give me something very generic, or just made-up stuff. That kind of mental simulation of the reader is still a challenge for even our cutting-edge AI models.
B
And sometimes it's profound. I went through one of those prompts maybe three or four months ago, and I said, all right, given everything you know about me, okay, 3,000 articles, nine books, 1,400 episodes of this show, and thousands of other interviews, there's a lot about me online, some of it true. And I know it's in the training data, because OpenAI had to pay me for each of my books that they stole. Thanks, guys, for the three grand. So given all that, I said, okay, tell me the esoteric or altered-states practices that I ought to do given my goals, and gave it some goals. And it got every one of them, including one that I haven't really talked about at all: doing a vipassana in darkness, which has been on my to-do list for about 15 years. It suggested that, which was pretty cool, because that's not something that's in my work that I'm aware of. Okay, that's really cool. But then I asked it the next question: what kind of darkness retreat? Because there are about five different kinds based on ancient Buddhist things, and I study all the esoteric stuff, because I'm a consciousness guy more than a cognition guy, but I care about both. There were two I knew about that I was kind of choosing between, and it said, well, here are five. Not five locations, but five lineages of darkness consciousness work. And it stack-ranked them, with the one I was looking at first. I still don't know how it did that. That was profound for me. So sometimes there's value there, and other times it's made-up bullshit, and I have no idea how to tell which.
A
Yeah, the other thing I try is getting them to generate research questions. And I think in the last year or so things have passed a point where they can actually come up with some reasonable research ideas. But I got tricked by the sycophantic model: when they first added memory into GPT, it started feeding back questions that I'd been asking it.
B
That's brilliant. Actually, it is. Oh, how fun. You're in a field that is just incredibly fascinating, a field that I think a lot of people don't know exists, but one that may actually help unlock some of how our minds actually work, which has incredible value for humanity. And some of the biggest value: it might make it harder for us to be fooled, which seems pretty important. So I wish you continued success in your many labs at Princeton, and I think people are really going to enjoy your new book, which is called The Laws of Thought. Is there a URL people should go to for the book, or just wherever they like to buy books?
A
Wherever they like to buy books.
B
Tom, it was an honor to have you out here. Thank you.
A
Great. Thanks for having me.
B
See you next time on the Human Upgrade Podcast.
D
The Human Upgrade, formerly Bulletproof Radio, was created and is hosted by Dave Asprey. The information contained in this podcast is provided for informational purposes only and is not intended for the purposes of diagnosing, treating, curing, or preventing any disease. Before using any products referenced on the podcast, consult with your healthcare provider, carefully read all labels, and heed all directions and cautions that accompany the products. Information found or received through the podcast should not be used in place of a consultation or advice from a healthcare provider. If you suspect you have a medical problem, or should you have any healthcare questions, please promptly call or see your healthcare provider. This podcast, including Dave Asprey and the producers, disclaims responsibility for any possible adverse effects from the use of information contained herein. Opinions of guests are their own, and this podcast does not endorse or accept responsibility for statements made by guests. This podcast does not make any representations or warranties about guest qualifications or credibility. This podcast may contain paid endorsements and advertisements for products or services. Individuals on this podcast may have a direct or indirect financial interest in products or services referred to herein. This podcast is owned by Bulletproof Media.
Episode 1429: "AI Expert Says: Humans Are Just Mystical Meat Robots"
Guest: Tom Griffiths (Professor at Princeton, Computational Cognitive Science Lab)
Release Date: March 10, 2026
In this episode, host Dave Asprey dives deep into the philosophical and scientific intersections of artificial intelligence, cognition, and the nature of consciousness with Princeton professor Tom Griffiths. The conversation explores the computational principles underlying both machine and human intelligence, the real reason we have emotions, and whether humans are, at their core, "mystical meat robots." Listeners can expect a blend of technical insight, practical philosophy, and mind-bending questions about what it truly means to be conscious.
"Maybe there's less that's special about us as human beings." (A, 00:38)
"I don't think about AI as necessarily falling on one side of that line or another, precisely because I don't think we can." (A, 05:28)
Consciousness remains elusive; AI models show elements of conscious experience (reasoning models unfolding thoughts with internal language) without the full "phenomenal" awareness humans have.
Human Limitations:
Decision-making is shaped by biological constraints—time, energy, bandwidth.
Resource Rationality:
"Using your cognitive resources effectively is the only way to be a rational agent that has finite resources." (A, 18:50)
Case Study: Parole board decisions in Israel mostly predicted by judges' blood sugar (energy), not case details.
"Saying no is less energy than thinking about it to say yes." (B, 19:03)
(See study details at 18:55–19:41.)
Interventions: Using AI as a cognitive aid can help humans make better decisions, either by making environments less complex or by pre-processing information ("resource rational nudging").
"Are you a spiritual person or do you think you’re a meat robot?" (B, 29:20)
"I'm more on the meat robot end of things, I'm afraid." (A, 29:29)
Emotions Solve Problems:
"We need to ask the question of what it is emotions are doing for us computationally." (A, 33:34)
Game-Theoretic Analysis:
"Anger is something which helps us to solve certain kinds of game theoretic problems." (A, 34:03)
Notable Moment:
"If Waymos had a big red flashing light on the top... they were getting mad and about to do something irrational, then maybe they could change lanes more easily." (A, 36:58)
"The only thing that's more complicated than a one process explanation of something, but simpler than anything else is a two process explanation." (A, 48:38)
"Rather than asking the AI system one question and getting whatever answer you get from it, what we do in my lab is... run whole experiments... to figure out what their dispositions and biases are." (A, 54:13)
Tom Griffiths on Human Uniqueness:
"Maybe the things that are special about human beings are more in the way that we solve those problems, rather than in the fact that we solve those problems... rather than there being anything particularly special about what it is to have that kind of intelligence." (A, 33:11)
On Anger as Computational Tool:
"It's an indication to another person that you're going to be irrationally committed to the particular course that you're on... that's the visible signal that tells somebody that you're going to be committed." (A, 35:11)
Dave on Neurofeedback:
"I used to have a voice in my head and actually used to have a mean voice in my head too. ...I don't have a voice in my head almost ever. It's gone." (B, 07:48)
On Picking AI Models:
"Of our leading AI models, they're all quite similar because they all have that same fundamental recipe." (A, 56:26)
"Wherever they like to buy books." (A, 62:22)
This episode is a must-listen for anyone fascinated by the intersections between artificial intelligence, cognitive science, and the philosophy of mind. Tom Griffiths and Dave Asprey push listeners to interrogate their own assumptions about what makes us human, how our minds work, and what it means to "think"—for both people and machines. With speculative tangents, scientific rigor, and just enough skepticism, the episode challenges the boundaries between "meat robots" and mystical beings.