Transcript
Sujana (0:02)
When you're met with a situation where you have to, like, choose between one horrible conclusion and another horrible conclusion, is that a sign that we're just totally on the wrong track?
Andreas Mogensen (0:13)
I'm inclined to think it's not an indication that we're on the wrong track; it's an indication that we're doing philosophy.
Sujana (0:18)
Okay, yeah.
Andreas Mogensen (0:19)
Because I think the core of philosophy consists of puzzles and problems that arise when a number of things that all individually seem extremely plausible turn out to yield absurd results. There's this quote from Bertrand Russell that I think I'm probably going to butcher, but it's like the job of philosophy is to start with something so obvious it doesn't need saying, and to end up with something so incredible that no one could believe it, and these, like, deep conflicts amongst principles that otherwise strike us as compelling, that's a sign you're doing philosophy.
Sujana (0:50)
So we're doing something right? Maybe.
Andreas Mogensen (0:52)
Possibly, yes.
Raph (0:56)
Hey listeners, Raph here. Today I have the pleasure of introducing you to a new interviewer on the show, Sujana Qureshi. Sujana has been a researcher and writer at 80,000 Hours for a while now, recently writing articles on the challenge of using AI to quickly enhance societal-level decision making, and on reasons AGI might still be decades away. Back in the day, she studied mathematics and philosophy at Oxford University, which is just as well, because for her podcasting debut she has opted to tackle some pretty challenging philosophy. In today's conversation, she speaks with Andreas Mogensen, a senior researcher at Oxford University focused on moral and political philosophy.

We've had a lot of episodes over the years covering the possibility that AIs could in the future become conscious and deserve moral consideration because there's something that it's like to be them. Among others, we've had Kyle Fish, the first-ever model welfare officer at Anthropic, on the show a couple of months ago, and there's a detailed treatment back in 2023 in episode 146, Robert Long on why large language models like GPT probably aren't conscious. But everyone's kind of heard that idea by now. People might disagree about whether AIs are going to have subjective feelings in practice, but sure: if AIs can suffer, it would be better if we didn't make them suffer. That makes sense to most people, I think.

So what is a philosopher to do, or a podcaster for that matter, if we want to say something new and important about whether AIs might deserve moral consideration in future? Well, Andreas has a bunch of new and serious arguments that AIs could start to deserve moral consideration for their own sake, even if they're not conscious and there's nothing it's like to be them. These are arguments that we should seriously care about whether you are free and whether you get what you want, even if you were to experience nothing at all, or if what you did experience was always neither positive nor negative.
It's fair to say that that would be big, and indeed highly inconvenient, if true. And that is why Andreas is researching it: if we're all completely fixated on subjective experience, but that's not the only way that beings could matter, then we're vulnerable to a catastrophic moral error. In the final third of Sujana's conversation with Andreas, they also turn to new philosophical arguments regarding whether we should weigh suffering much more highly than wellbeing, and whether human extinction might actually be a good thing rather than a bad one. I know the more complex, cutting-edge philosophy episodes like this are often subscriber favorites. And if you do enjoy it, you might also like my interview with Andreas, episode 137 from back in 2022, about whether effective altruism is attractive or not to non-consequentialists such as Andreas himself. Without further ado, I bring you Sujana Qureshi and Andreas Mogensen.
