Transcript
A (0:00)
Humans are pretty bad at understanding minds that are different from us. We're bad at caring about them. We're especially bad at doing that when there's a lot of money to be made by not caring. We're making this new kind of mind. There are dangers all around, and obviously one of the important questions is: can these minds suffer, and how are we supposed to share the world with them? It just seems really likely that that has to be part of the playbook. The future is going to get more confusing and more emotional. A lot of what we want to do is stay sane. In the next 10 years, there will be a lot of alpha in not losing your grip.
B (0:42)
Today I'm speaking with Robert Long. Rob's the founder of Eleos AI, a research nonprofit working on understanding and addressing the potential well-being and moral patienthood of AI systems. I should also flag that I have a conflict of interest here: Rob is a very good friend, and I'm also on the board of his nonprofit, Eleos. I'm fairly confident that I would have had Rob on even if those things weren't true, and I have in fact had him on before, but it's worth flagging. Thank you for coming on the podcast, Rob.
A (1:15)
Yeah, thanks for having me back. I'm super excited to be here.
B (1:19)
Okay. I want to start by asking you: a reason I'm interested in the topic of digital sentience, and that I think a lot of our listeners are interested in it, and kind of the framing of 80,000 Hours' problem profile on digital sentience, all has to do with the fact that we may be on track to create AI systems that are both conscious or sentient, feeling things, having experiences, and also deeply enmeshed in our economy. We already use them loads for work and just for entertainment. And maybe at some point we will realize that we've created these beings that we exploit and that are having a really bad time. A kind of classic analogy that I find very disturbing is factory farming. So I'm interested: how much do you worry about the AI systems that we're building today becoming like factory farming?
A (2:21)
Yeah, that's a great question. I definitely worry about it. Interestingly, my thinking on this has evolved in the past few years: it used to be, maybe like for you, the primary way I thought about the problem and what we're trying to prevent. And I should say I do think it could happen, and it's definitely something worth preventing. Maybe before I say what is limiting about the factory farming analogy, I'll just quickly say what's really useful about it.

So I think what's useful is this: as we're building potentially a new kind of mind, let's notice the following facts. Humans are pretty bad at understanding minds that are different from us. We're bad at caring about them. We're especially bad at doing that when there's a lot of money to be made by not caring, and things can get locked in or set on a bad trajectory. That happened with factory farming, arguably. I think if you'd asked people 100 years ago, "Would you like to have chicken that is raised like this?", people would have said, no, we're going to make that illegal. But we kind of walked into it, economic forces led us there, and now it's a lot harder to roll back. Something like that could happen with AI, and I think people are right to be very concerned about that.

But, and I think this is a good jumping-off point for a lot of issues about AI welfare, I do think there are some specific aspects of potential AI minds that break the analogy, because of ways they can just be different from animals, and ways our relationship with them would be different from our relationship with animals. So I can say a few of those.
