Transcript
A (0:00)
They say if you want to go fast, go alone. But if you want to go far, go together. At Amica Insurance, we're built for our customers and prioritize your needs. At Amica, empathy is our best policy. Visit amica.com and get a quote. Today you can go to kitted.shop and use the code SMART50 at checkout, and you will get half off a set of thinking superpowers in a box. If you want to know more about what I'm talking about, check it out in the middle of the show. Welcome to the You Are Not So Smart podcast, episode 327.
B (1:22)
When you have an intuition about what's right or wrong, you are going from a psychological "is," your own feeling, to at least a tentative moral conclusion. So anytime we're using our minds to try to figure out what's going on with the world, we have to acknowledge the possibility that we could be looking at things in a distorted kind of way. We have the incredible power to help people or hurt people on a massive scale from a distance. And that is the scariest thing about where we are in our cultural and technological history. And it's also the greatest opportunity, right? And it's not just an opportunity for presidents and people with their fingers on literal and metaphorical buttons. It's ordinary people as well. The positive power that ordinary people have is stunning and something that we don't fully appreciate.
A (2:29)
That was the voice of Dr. Joshua Greene, an experimental psychologist, a neuroscientist, and a philosopher. He teaches at Harvard, where he runs a lab. He wrote the book titled Moral Tribes and has been involved in all manner of incredible research into how brains generate minds that experience, behave, and make judgments and decisions influenced by a phenomenon we commonly refer to as morality. Morality? Yeah, you can study that scientifically. I met Josh Greene earlier this year at a conference on pedagogical innovation. And that innovation was all related to things like arguing and debating and having different opinions and attitudes and political ideologies and all the rest. This conference was held at the Lidowitz Center for Enlightened Disagreement at Northwestern University. The Center for Enlightened Disagreement. Super cool. The directors, Eli Finkel and Nour Kteily, gathered a bunch of people like Josh and myself together to get the center going and help plan out what they're going to be doing there. We all gave lectures and demonstrations and traded ideas about how to improve discourse and reduce polarization and head off AI-powered bots and, you know, save the world by creating better systems for argumentation and conversation and then teaching people how to apply those things. We will talk all about that in many more upcoming episodes, and we'll have other guests come on the show who are involved in all of that to talk about what they're doing. In this episode, that guest is Josh Greene, and he is our guest for several reasons, all of which relate back to the trolley problem. And they relate back to it because Greene is actively attempting to generate solutions to the trolley problem. And he's doing that using neuroscience to make sense of cognitive psychology, which in turn makes sense of philosophical quandaries. And here's the thing: I think he's actually onto something, because there's evidence that he's getting results. 
And I think he's onto something in several ways. I mean, he may have solved the trolley problem, and we will get into all of that in just a second. But first, we should probably briefly explain what the trolley problem is. And even if you're familiar with this old philosophical thought experiment and moral dilemma, you're going to learn something new, most likely. Here we go. Okay, here's a very simple version of the trolley problem. There are five people tied to a trolley track or a train track or something like that, and a runaway trolley or train is headed straight for them. However, you are standing in front of a lever that could divert its course onto a separate track where one person is tied down. You have no other options available, just the lever. And the question here, the moral dilemma, is: do you choose to do nothing, which will definitely lead to five people dying, or do you choose to pull the lever, which will definitely prevent those people from dying, but also definitely kill the one person on the other track? That's the original trolley problem, which, shockingly, isn't all that old, philosophically speaking. It came from an academic paper written in 1967 by the British philosopher Philippa Foot, who in that paper was exploring the ethical issues surrounding abortion. That's not to say the overarching idea here isn't very, very old. It's something in philosophy they call the principle of double effect, the proposition that an action causing both good and bad outcomes can be morally permissible if the bad effect is not intended, only foreseen, and the good effect outweighs it. But then in the 1970s and 1980s, the trolley problem gained fame and went pre-Internet viral, thanks to another philosopher, Judith Jarvis Thomson, who wrote some papers imagining two other variations. What if that one person on the other side track is your child or your mother or husband or wife? 
And what if there isn't a lever but a footbridge over the track, and a very overweight man is standing at the railing just above the track, far enough ahead of the trolley that if you were to push him over, he would land just right and thus stop it from killing the five people down the line, but kill him? Would you push him over? I mean, it would save five people's lives. Take a second and notice the way you feel in that second scenario versus the first. It's different. It's different for most people. In the first scenario, you save five people by pulling a lever, and that will kill one person. In the other, you save five people by physically pushing a single person to their death, and it feels different. Would you save five people by sacrificing one person? It seems like they're the same, but they're not. The answer changes depending on whether you actively, physically do the sacrificing, and furthermore, whether the one person is already part of a scenario you've become part of, or they become part of it through your actions. When researchers have put these questions in front of people, they have found that, yeah, people do answer them differently depending on whether they're pulling a switch or pushing a person. And for the most part, people are hesitant to push the person while not so hesitant to pull the lever. But those answers can change. People can become a lot more okay with pushing the person onto the tracks if their brains have been damaged in certain regions. They also become much more likely to be okay with it if they are what we used to refer to as psychopaths, if they have a certain antisocial personality disorder that reduces their sense of empathy and increases a sort of utilitarian reasoning. 
And perhaps most surprisingly, people's answers differ depending on whether or not they're skilled meditators. People who are very skilled at meditation and have been practicing it for a long time are much more okay with pushing the man off the bridge and onto the tracks to save the five people, even though it will kill the man. Thanks to all this research, a number of insights started to bubble up and make their way into psychology and neuroscience. And those insights pretty much amounted to: oh, wait, it seems as though morality, moral judgments, moral decisions, these aren't just affected by the wording of the question, by context, by framing. They seem to be affected by whatever is going on in our brains when we think about that stuff. They're deeply affected by the conditions of our brains, down to whether they're damaged or have been shaped by intense training or experience. And that suggests that morality is something that can definitely be studied scientifically, that morality as a religious idea or a philosophical conundrum is in essence something biological, shaped by evolution. Morality is something that brains generate, which means we can research it like anything else brains do. And that's where our guest enters the story, because Josh Greene did that research and found out some very, very fascinating things. And he wrote a book about all of that titled Moral Tribes. But that was a while back. We're going to talk about that a little bit in this episode, but what we're going to focus on is what he did afterward, because he went way further with this whole moral psychology and neuroscience project. Oh, I think he may have figured out the whole source of the trolley problem stuff. He and his lab at Harvard created two ways to apply all of this to make the world a much better place. 
One involves helping people, encouraging them, nudging them to give to charities that are way more effective than the charities they would prefer to give their money to, while also allowing them to give money to those charities. It's super fascinating. You can get right in on this right now. You can give to the charities you care about and the charities that are extremely effective at the same time by going to GiveDirectly.org/smart. And if you do it through that URL, which is through this podcast, then his organization will match your donation. This is all part of something called Pods Fight Poverty, which I'm very happy to be a part of, along with all these other podcasts, Ologies and The Happiness Lab and all these things. You will hear all about that in the interview. And the other thing he's doing, the other thing that we'll talk about in the interview, is that through his lab, his team, his research, and the research of others, he has created a very effective game called Tango, which you can play through an app or a website. You and another person who is politically different from you will be able to bypass polarization and arguing and debate that goes nowhere, and instead encourage each other to cooperate, avoid political extremism, and get along in a way that might save democracy, save the world. We'll discuss that. We'll discuss everything that I've just mentioned, plus all sorts of other stuff related to the brain and the trolley problem and more, all after this commercial break.
