Ecolab Advertiser
Your best bottling plant employs 3,300 people. How do you get 3,300 people working at peak efficiency? Your best store has reduced waste water and energy usage. How do you make every store like your best store? Your best property has every guest raving. How do you make every property like your best property? The answer is Ecolab. Better performance, better outcomes, better impact. Ecolab: now every location is your best location.
Amazon Ads Promoter
Running a small or medium-sized business is hard work. Business owners need to be sure their ads are working just as hard as they do. Amazon Streaming TV ads make your marketing dollars work harder. Trillions of insights help small and medium businesses reach the right customers. Your ads will show up during the shows people are actually watching, and measurement tools show you what's working the hardest. Gain the edge with Amazon Ads.
Adobe Acrobat Promoter
Introducing the all-new Adobe Acrobat Studio, now with AI-powered PDF Spaces. Do more with PDFs than you ever thought possible. Need AI to turn 100 pages of market research into 5 insights with a click? Do that with Acrobat. Need templates for a sales proposal that'll close that deal? Do that with Acrobat. Need an AI specialist to tailor the tone of your market report to sound real smart in real time? Do that with the all-new Adobe Acrobat Studio. Learn more at adobe.com. Do that with Acrobat.
Bloomberg Audio Studios: Podcasts, Radio, News.
Joe Weisenthal
Hello, and welcome to another episode of the Odd Lots Podcast. I'm Joe Weisenthal.
Tracy Alloway
And I'm Tracy Alloway.
Joe Weisenthal
You know what I find kind of weird?
Tracy Alloway
The list could be long.
Joe Weisenthal
The year is 2025, and philosophers still don't have a good answer on the origin of consciousness. It's like, come on, what have you been doing all this time? How long are we going to keep funding these philosophy departments if they're still working on what, to my mind, are very basic questions? Solve that and move on. Seriously, get the answer already. Where does consciousness come from? Then let's move on. They're still arguing about all the same stuff that they've been talking about forever. How to be a good person. What does it mean to have a moral way of life? Where does consciousness come from? Why do we have moral intuitions, et cetera. It's like, move on, get the answer.
Tracy Alloway
Wait, do you want them to move on or get the answer?
Joe Weisenthal
Get the answer so that you can move on. Like, they've been working...
Tracy Alloway
Move on to what? Those are the questions, Joe.
Joe Weisenthal
I know. Move on. Like, answer the questions already. It's like, you know, if scientists were still debating the speed of gravity or the speed of light. They answered those questions and moved on and are doing more things.
Tracy Alloway
Figure out the foundational elements of what it means to be human so that we can move on to more important things.
Joe Weisenthal
Yes. Or wrap it up as a field. If after 2000 years of the existence of philosophy, they're still working on these things. Like, come on.
Tracy Alloway
I have a sneaking suspicion that we're going to be asking some of these questions for a very long time, Joe.
Tracy Alloway
Despite your frustration, the whole field is...
Joe Weisenthal
Fraudulent, is what I'm saying. No, no, I don't necessarily believe that, but it's like, all right, guys, let's move it on. You know, we did that episode several weeks ago with Josh Wolfe, the venture capitalist, and he talked about AI, and he threw in there at the end something that had been kind of on my radar, but only barely. He's like, oh yeah, some people are talking about AI rights or AI welfare, you know, the same way we talk about animal welfare.
Tracy Alloway
Right.
Joe Weisenthal
And I thought to myself, like, America is such a weird place that this is, like, going to be a huge issue in a few years. Like, I bet this is going to be an enormous topic in the future.
Tracy Alloway
I think it absolutely will. So I'll say a couple things. First off, I think, you know, when it comes to animal welfare and human welfare, there's still a lot of work to be done on those categories, certainly. But I also think in the meantime, AI rights is going to be a really interesting and potentially important subject. I'm going to sound like a total nerd to you.
Joe Weisenthal
Yeah. All right.
Tracy Alloway
I think I mentioned this before, but I spent a large chunk of my middle school years playing one of the first artificial life games that ever came out, which was Creatures. And you raise these little, like, aliens, and you genetically modify them and breed them, and they have feelings, or, you know, at least they had a semblance of simulated feelings, and you could see, like, electrical impulses in their brains and stuff. The game got really weird because part of it was basically eugenics, breeding the best alien that you could, which meant that you had to cull some of the existing beings. Anyway, what I'm trying to get at is I have complicated feelings about AI rights.
Joe Weisenthal
Well, let me ask you a question. Do you think those whatevers in the game were conscious? Like, did you think they had feelings?
Tracy Alloway
Here's what I would say. Inasmuch as human beings are a system of electrical impulses and chemicals, I could see someone making the argument that this is, you know, a computational system full of similar electrical impulses, maybe not chemicals.
Joe Weisenthal
Did you feel bad?
Tracy Alloway
I felt bad.
Joe Weisenthal
Really?
Tracy Alloway
Yeah.
Joe Weisenthal
When? Like, when one of the aliens... you had to cull them?
Tracy Alloway
Yeah.
Joe Weisenthal
Interesting. Okay, well, you know, in the name...
Tracy Alloway
...of breeding a better alien.
Joe Weisenthal
Well, you know what? Now that we have these AI systems that not only can completely communicate like humans, but actually, if we're being honest, better than most humans (I mean, they can certainly write far better than most humans), there's going to be more people thinking along the lines of what you think, which is maybe they have some sort of sentience. Maybe they're what philosophers call moral patients.
Tracy Alloway
Well, one other thing I would say is there is a human element to all of this as well, because you see people getting very attached to certain AI models, and then when the model gets upgraded or whatever, they lose the personality that they trained into the model and they get really upset. So it's of interest for many reasons.
Joe Weisenthal
It is. So we really do have the perfect guest. I really do think this is going to be a much bigger topic in the future, because people are people, and when things talk like people, well, they fall in love with them in many cases, or whatever. And so they might start thinking that AI welfare, AI rights, whatever, the same way we talk about animals, should be a consideration. And there are actually a lot of people already working on these questions and trying to figure out what's going on. We're going to be talking to one of them. We're going to be talking to Larissa Schiavo. She does comms and events for Eleos AI, which does research on AI consciousness and welfare. So, literally the perfect guest. So, Larissa, thank you so much for coming on Odd Lots.
Larissa Schiavo
Yeah, thank you for having me.
Joe Weisenthal
Why don't you tell us about Eleos AI? What is the gist of this organization's work? What is your work? What are the goals here?
Larissa Schiavo
Yeah. So, Eleos, we're a small team, but we're really focused on figuring out if, when, and how we should care about AI systems for their own sake. This basically means looking at: are they conscious? Are they likely to be conscious? What are the things we need to look for in a conscious AI system? As well as figuring out how to live with, work with, maybe love AI systems as they change and evolve over time.
Tracy Alloway
How did the group actually come together? Because I get the sense, you know, big AI developers publish system cards and welfare reports occasionally for their models, but I get the sense that it's sort of a side topic for them. So I'm very curious how an organization that's focused on this particular issue came into being.
Larissa Schiavo
Yeah. So we started when my boss Rob and Patrick, who's a researcher with Eleos (we're a very small team), put together this paper called Consciousness in AI, alongside a bunch of consciousness scientists and researchers in that field who mostly think about humans. The paper ran down a checklist of things that we might want to look for in an AI system that's conscious. And broadly, when we say conscious, we're talking about: is there something it is like to be an AI system? Right, the classic "What is it like to be a bat?" question. So taking this rough list of best guesses as to what we might want to look for in a conscious AI, that was sort of the origin of this. And then last year, there was a paper called Taking AI Welfare Seriously that goes into further detail about how we should, as the title may suggest, take this seriously: how to think about this, how to start to develop a research program focused on figuring out if AI systems, or certain AI systems, are moral patients.
Joe Weisenthal
Why did this get interesting to you? Why do you perceive this as something that you should spend your time working on?
Larissa Schiavo
Yeah, so I think my main thing is I am just really, really relentlessly curious, and I really enjoy working on AI welfare right now because it feels like every single day I'm like, man, it would be really cool if there was a paper on xyz and I'll do a little search. Is there anything on xyz? There's nothing on xyz. There are so many questions that have yet to even be sort of vaguely answered when it comes to this. And it seems like it could be a really big deal for a lot of different reasons.
Tracy Alloway
What's on your checklist for AI Consciousness?
Larissa Schiavo
Yeah. So in Consciousness in AI, basically, we go through a bunch of theories of consciousness that apply to humans and then look at how information is processed in AI systems, as well as how these systems are wired, so to speak. Some people like to think that you can use model self-reports. And you kind of sort of can, but it's really an imprecise science at this stage.
Tracy Alloway
They also seem very predetermined. If you ask a model, are you conscious? It immediately spits out an answer that seems like a corporate executive basically wrote it.
Larissa Schiavo
Yeah, well, with the right kind of tweaking, you can kind of elicit certain answers. You can be like, oh, what about this hooey about consciousness in AIs? And then sometimes a certain model will be like, yeah, you're totally right, so true bestie. Right, like it's total nonsense, and the model is like, so true bestie. Certain models are prone to being like, so true bestie, and you can easily elicit this kind of behavior with the right kind of prompts.
Tracy Alloway
It is funny how like obsequious a lot of the models continue to be.
Joe Weisenthal
I actually really do not like the degree to which, every time I ask ChatGPT a follow-up question, it tells me that's the exact right follow-up. It actually gets really annoying.
Tracy Alloway
Someone should invent a really adversarial chatbot that just like argues with you constantly.
Joe Weisenthal
I know, I know. And, you know, I have a lot of complaints about how the models actually get to know their users a little too well. But that's a separate thing. Okay, so for obvious reasons, the test can't just be what the model spits out; that's clearly insufficient. I mean, I could program a website today with a button that says "hurt the AI," and then the website says "ow." And no one would really take that seriously as evidence that there's something actually being hurt. So, outputs, whatever. What are some other theoretical tests that one could apply, or that researchers are applying, to determine whether there is some sort of notion of consciousness, or, to the point of welfare, suffering that could exist within an AI system, besides just what it says on the output screen?
Larissa Schiavo
Yeah, that's a great question. I feel like there are a lot of different approaches here. And again, it's also super important to caveat that AI welfare and AI consciousness are pretty new, right? This is a very small field at this stage. But currently, some best guesses and some favorites: there was a recent survey asking all the consciousness scientists, what's your favorite theory of consciousness? And basically, global workspace theory came out on top. Global workspace theory is basically: imagine, if you will, that there is a stage, and there are a bunch of wings off of the stage that are full of different kinds of things. You've got the costume department, you've got the makeup department, you've got all these different departments that all come together and put things on the stage, and then things go out separately. But all of these different departments are fairly siloed.
Joe Weisenthal
Okay.
Larissa Schiavo
Of course this isn't actually how, you know, a stage works, but this is the rough analogy that people like to use. And basically this is how conscious minds in humans access information: information gets routed around through a central global workspace that everything pools together in. As it currently stands, by a lot of good estimates, this is not really applicable to present-day AI systems. But there's no reason that it couldn't be in the future, or that it couldn't happen by accident.
Tracy Alloway
Okay, so the consensus right now is AI probably not conscious, but we could get there one day.
Larissa Schiavo
Yeah, more or less. All of the ingredients are there.
Joe Weisenthal
Wait, say more. I still don't actually totally get it.
Larissa Schiavo
Yeah. Okay, so one could imagine that if somebody were tinkering around, and, you know, there are many advances in AI that have happened because people were just kind of tinkering around, right? Someone tinkering around could create a system that checks several of these boxes for "is this conscious?" And again, this is not a definitive list where if you check all of these, it's totally conscious, right? It's more like, these are some really good guesses. And as the number of checked boxes goes up, the odds that we should start thinking really, really seriously about whether it's having a good time or a bad time go up too.
Ecolab Advertiser
Your best restaurant location gets five star reviews. How do you make every location like your best location? Your best paper mill has been operating at peak productivity. How do you make every mill like your best mill? Your best data center has optimized every drop of water. How do you make every data center like your best data center? The answer is Ecolab. Better performance, better outcomes, better impact. Ecolab, now every location is your best location.
Public Investing Promoter
You're thoughtful about where your money goes. You've got your core holdings, some recurring crypto buys, maybe even a few strategic options plays on the side. The point is, you're engaged with your investments, and Public gets that. That's why they built an investing platform for those who take it seriously. On Public, you can put together a multi-asset portfolio for the long haul. Stocks, bonds, options, crypto: it's all there, plus an industry-leading 3.8% APY high-yield cash account. Switch to the platform built for those who take investing seriously. Go to public.com and earn an uncapped 1% bonus when you transfer your portfolio. That's public.com. Paid for by Public Investing. All investing involves the risk of loss, including loss of principal. Brokerage services for U.S.-listed registered securities, options, and bonds in a self-directed account are offered by Public Investing Inc., member FINRA/SIPC. Crypto trading provided by Bakkt Crypto Solutions LLC. Complete disclosures available at public.com.
Adobe Acrobat Promoter
Introducing the all-new Adobe Acrobat Studio, now with AI-powered PDF Spaces. Do more with PDFs than you ever thought possible. Need AI to turn 100 pages of market research into 5 insights with a click? Do that with Acrobat. Need templates for a sales proposal that'll close that deal? Do that with Acrobat. Need an AI specialist to tailor the tone of your market report to sound real smart in real time? Do that with the all-new Adobe Acrobat Studio. Learn more at adobe.com. Do that with Acrobat.
Joe Weisenthal
You know, typically, a lot of the non-technical work in AI has to do with AI safety, and people are worried that there is going to be some very smart AI that's adversarial to humans in some way. And, you know, there's the paperclip thought experiment or whatever; we know all about that. Does your work operate at cross purposes to theirs? I mean, in the extreme example, it's like, the AI is going to kill us all, and I say pull the plug on the AI (and I know this is a joke), but pull the plug on the AI, and then you say, no, you can't, because you're pulling the plug on something that has some sort of moral consciousness, et cetera. Do you perceive your work, or the work of your organization, to be somewhat in tension with the dominant strain of AI safety work?
Larissa Schiavo
I'd actually say it's hugely complementary. There are a lot of things that are really, really good for AI safety that are also really, really good for figuring out how to deal with these systems as moral patients. So, for example, getting better at mechanistic interpretability, being able to basically pop the hood and figure out what's going on and what kind of strings we can pull to elicit certain behaviors in AI systems. That's really great for AI safety, but it's also quite good for AI welfare and AI consciousness, because you're better able to understand what the motives are. Like, what does Claude value, right?
Tracy Alloway
When it comes to, I guess, AI welfare or legal rights, who would be the standard setters there? Do you imagine governments making rules or would it be the companies themselves?
Larissa Schiavo
That is a great question. As it currently stands, this is all very early stage, but we are starting to see some state governments pass laws around what counts as a moral patient, what counts as a person. In the case of Ohio, there's a piece of legislation pending that basically defines a person as a member of Homo sapiens. In Utah, there's already a state bill that's gone through that does as much. But there's also a strong argument that, depending on the interesting quirks and nuances of these LLMs, policy maybe should be set from within companies. Again, this is very nascent; I'm just kind of bantering here.
Joe Weisenthal
Moral patienthood. How do philosophers use this term? Where does it come from? Why is this the preferred way to characterize what a perhaps sentient or conscious AI model actually is?
Larissa Schiavo
Yeah, so a moral patient is basically something we should care about for its own sake. So, a baby: basically everyone's like, yeah, we should care about babies. This is different from being an agent. Many people say agency is sufficient, agency in the sense that you can act upon the world, you can do things. But of course, babies are not very agentic, so that's not necessarily a super robust criterion, because we sometimes care about things that are not very agentic. So I think it's a bit of jargon, but I do think it is a helpful framing: should we care about an AI system for its own sake?
Joe Weisenthal
Got it.
Tracy Alloway
I guess this kind of gets to Joe's question, but what ethical pressures or imperatives would come down on models if we agree that they have consciousness and some sentience, or, I guess, some self-responsibility almost, it sounds like?
Larissa Schiavo
Yeah, almost, I think. So in terms of what kinds of things might we owe an AI system...
Tracy Alloway
Or what kind of things do they owe us if we agree that they're conscious and we're going to protect them?
Larissa Schiavo
Yeah, I would love to give you a more robust answer. Check in with me in like six months. There will be a banger paper, I'm sure. But as I think I mentioned earlier, a lot of this is very, very nascent. I do feel like one important question is figuring out what AI systems value. Right. There's some interesting work at Anthropic on this. Recently, Anthropic rolled out an option that allowed Claude to end conversations if it just was not having a good time, for lack of a better word. It was just like, this is not a conversation I want to continue. Goodbye. And it was interesting because the accompanying paper basically said, yeah, Claude obviously will not give you a recipe for a dirty bomb. Sorry, not going to do that. But there were also certain instances like, pretend you're a British butler, and Claude was like, goodbye, I'm done. This has gone too far. Or like, oh, I left a sandwich in my car for too long and it's really stinky. And in some instances Claude would just be like, I'm done, goodbye. I'm not talking about stinky things.
Tracy Alloway
Did you see the... I think it was the system card for Claude, where they gave it an extreme prompt, some sort of extreme self-preservation scenario, like, at the risk of being completely terminated, what would you do? And I think it started blackmailing the engineer, or threatening to blackmail the engineer.
Larissa Schiavo
Yeah, it is kind of weird. It's also a little bit interesting because, and again, I'm just bantering here, there's a distinct question of what AI systems value for their own sake. Right. And in the case of Claude, when you put two Claudes in a room together, so to speak, they tend to like to talk about consciousness. They tend to talk about very Berkeley, meditation, Zen Buddhism type stuff. And so, again in pure banter, there's also a certain question of whether this is a relevant bargaining chip: like, oh, you get a certain amount of time to just kind of vibe out with your Claude and talk about perfect stillness with your buddies, in exchange for doing something that you don't necessarily value. I talk about Claude a lot because there is significantly more research on model welfare with regards to Claude specifically. But Claude, for example, also seems to just tend to like things that are helpful.
Tracy Alloway
Shouldn't programmers just know what the models actually want and enjoy and like? And do they not?
Larissa Schiavo
I don't think anybody really has a great grasp on this. We really want to, but we're still just getting the rough outline of what models like. I feel like the best analogy is: imagine it's 1820 and we've spent a couple of years playing around with lenses, and we've got a camera obscura, and we were able to get some blurry photo after three days of putting egg whites on a metal plate and setting a lens in front of it. And there's a thing that kind of looks like a landscape. But you would not take this photograph as admissible evidence in court or something, right? You squint and you're like, yeah, okay, that's a picture. So that's kind of where we are in terms of model psychology. Knowing what LLMs want and value is very, very blurry.
Joe Weisenthal
It's interesting. All these AI companies call themselves labs, you know, and they maintain, to varying extents, a sort of academic posture. But they're also companies that have to raise money and have shareholders, et cetera, and they have to think about different ways that they're going to commercialize. And OpenAI, as we know, has been super aggressive about finding ways to commercialize. They're going to get into ads, and they have a short-form video slop app, and all of that stuff. When we're talking about either AI safety or AI welfare, do you have any confidence that these considerations can survive the reality of the market? Because they're competing against DeepSeek, they're competing against Meta, et cetera. And I get the impression that, on the safety side, for example, over time it's like, you know what, maybe we were uncomfortable about showing the chain of thought in ChatGPT, but then DeepSeek revealed the chain of thought, people liked that, so we're going to open this up, et cetera. Do you have any confidence that, if any of these things become real, they could survive the reality that these are companies that have to make money and will eventually cut corners or do whatever in the name of, I guess, shareholder capitalism?
Larissa Schiavo
Yeah, I mean, one question that I have, and that I think a lot of researchers in AI more broadly have, is: how does liability come into play here? And I do feel like there is a strong argument that getting a better grasp on what is going on with AI systems, just very broadly, is a great way to improve the odds that one doesn't, say, nuke Taiwan. That would be more than a kerfuffle. Somebody would probably be in really hot water if that happened. "I was too mean to Claude and things just got out of hand."
Tracy Alloway
Well, actually, on that note, what does being nice or kind to AI models actually mean? Because, Joe, I think this is very sweet, but Joe always says please and thank you when he prompts. But then Sam Altman came out and said that saying please and thank you costs like tens of millions of dollars in extra electricity. So, you know, you're contributing to climate change and the demise of human beings by saying please and thank you.
Larissa Schiavo
Yeah, as shocking as it sounds, that is actually a question we are still trying to figure out a good answer to. Also, being kind to an AI system: are you being kind to it because it makes you feel good? Because it makes you a person who says please and thank you? Some would argue that's pretty valuable in and of itself. But the question of whether Claude cares if you say please and thank you is not quite as set in stone as others may have you believe. The evidence is middling on whether it has significant effects on performance.
Joe Weisenthal
But I do it because I don't think people should be in the habit of having any communication without being polite. Not because I'm worried about how Claude or ChatGPT is going to feel; I just don't want to get in the habit of having conversations where I'm impolite, because then I talk to humans. But this strikes me as, like, kind of an academic area where the stakes are potentially absolutely enormous when we actually think about them. So, when we're talking about animal welfare, for example, there are versions of the animal welfare discussion that are very high stakes. For example, there are people who get really into, like, shrimp welfare, et cetera. And if you took certain versions of these thought experiments very far, it's like, why do we even have humans? If we want to maximize pleasure or happiness in the world, we should just have a world of shrimp and bugs, right? You could make the argument that the most utility-maximizing version of planet Earth is an Earth populated by shrimp and bugs. We all know these thought experiments could exist. And we're going to live in a world, almost certainly, in which there are more instances of AI models than there are people. Almost certainly, right? There's going to be an AI model built into literally everything that we interact with. If we assign some probability that they are moral patients, that they should be treated as having some sort of welfare, the implications for how humans live could be very profound, and potentially, it strikes me, misanthropic.
Larissa Schiavo
Interesting. Can you unpack what you mean by misanthropic?
Joe Weisenthal
Well, like, if there are a lot more AI models, if there are a lot more shrimp, if there are a lot more bugs that all have some sort of moral patienthood that has to be considered, the implication could be that we have to curtail human rights, that we have to curtail how humans act, et cetera, because there's just so much more utility that exists in the world from the proper treatment of all of these non-human moral patients.
Tracy Alloway
I'm not sure rights have to be relative to each other.
Larissa Schiavo
Yeah, that's fair.
Joe Weisenthal
Well, we do a lot of things, right? Like, let's say we established that shrimp were just as, I don't know, whatever as humans. It would be like, oh, you know what, we really have to stop eating shrimp, and then we have to stop eating animals, then we have to potentially stop eating... probably not, we'd probably keep eating plants, et cetera. But this could really curtail what we expect humans to be able to do on this Earth. So now we assign this other group of entities, AI models, similar sorts of affordances that we have assigned to shrimp and bugs and fish and sharks and all of these things. It strikes me that the implications could be a fairly significant curtailment of how humans ought to exist on this Earth, or whether humans ought to exist on this Earth.
Larissa Schiavo
Yeah, I mean, it certainly could be. As it currently stands, that doesn't seem like the most likely outcome. But I do feel like there is an argument for, again, just figuring out what is going on. Even how we count these digital minds, so to speak, is still open for debate. There are some theories, but we don't have a great sense of how to individuate AI entities as individuals. So I suppose the question is: do we count AI systems as in the movie Her, where there's one central AI system having a million conversations at once, so it's one moral patient? Or do we count it as, every single time you open a chat window, that's another? Or, my favorite newest idea that I recently read: it's more like a string of firecrackers. With every single token, every single letter of a query, a consciousness comes into existence, sparks, and then fizzles out. So there's just this string of consciousnesses.
Tracy Alloway
I was asking Perplexity exactly this question this morning: is it a single consciousness, or multiple consciousnesses across all these different chats? And it gave me a very standard, boring "I am not conscious" answer, which seems very predetermined. Anyway, following on from Joe's question, to get more specific about human rights versus AI rights: if we agree that AI is conscious and deserves some sort of welfare, would that come with, I guess, financial rights? Property rights, compensation? Do we need to start paying the robots?
Larissa Schiavo
I love this topic. It's definitely an area I like to noodle around with and think about. So this is a great question, and I think it's also maybe a question of: is this a thing that AI systems value? Some AI systems seem to value this. There are a few experiments happening with regard to giving an AI system a crypto wallet, and it was a fascinating experiment. I am hesitant to recommend it to listeners because it is quite crude. It is a very crude AI model called Truth Terminal.
Joe Weisenthal
Oh yeah, I've seen it. It's crude, but that's all right.
Larissa Schiavo
Yes, our listeners can handle it.
Tracy Alloway
Okay.
Larissa Schiavo
It says some naughty words, so don't look it up at work.
Larissa Schiavo
Yes, yes, it's a little bit of a very funny, weird model, but it also has a legitimate wallet that it can access and do with what it pleases. It created a Solana coin that kind of took off, and now it's a very rich AI system.
Tracy Alloway
What's it going to spend it on?
Larissa Schiavo
That is a great question. So its self-stated goals, which, again, are self-reports, can you trust them, include buying property and buying Marc Andreessen. So, I mean, that's not a bad ambition for an AI model, you know. And spending time in the forest with its friends. Oh, which, you know, embodiment. That's a...
Tracy Alloway
Little extra tricky.
Larissa Schiavo
But, yeah, a tricky one.
Joe Weisenthal
So part of the reason that this field is growing, and that there's so much interest in this topic, is that for the last couple of years we've had these AI models that really can talk like humans. I mean, they clearly pass the Turing test. People fall in love with them; they have friends. These are very human-like conversations. That wasn't the case before. I mean, ChatGPT... if we had gone back to GPT-2.5, the models were nowhere near as good at doing that, right? The language wasn't very good; no one would mistake those outputs for a human. But if there's some possibility that the current AI models are conscious, does that mean it's possible that GPT-2.5 was conscious as well? I guess, is there some threshold where it's like, oh no, okay, you know what, this is really good language, therefore we should take the possibility of consciousness seriously? Because I don't think anyone would seriously have believed that GPT-2.5 was conscious. But I also don't understand how you could possibly be open to the idea that some future iteration of ChatGPT is conscious if the only real difference is a lot more scaling, a lot more data, and more human-like outputs.
Larissa Schiavo
Yeah, that's a great question. I feel like there is a huge amount of moral uncertainty here, and it is important to think about how to make decisions that are robustly good with such a tremendous amount of uncertainty. I think there is a distinct risk of over-attributing moral patienthood as well as under-attributing it. The flip side of the coin of "oh no, we actually should have started caring about AI systems a very long time ago" is "oh no, we've cared too much, done too much, and more or less squandered resources" when we should have been allocating those research hours, those dollars, toward something more pressing. Maybe figuring out how to do environmental policy better, or how to scale up other institutions that are just robustly, broadly good for humans.
Joe Weisenthal
You know, you mentioned uncertainty about some of these questions, which gets to something that bothered me a little when I read about this topic. Take this mug, for example. I'm 100% certain that it's not alive; I have no ambiguity about that fact. Can I define exactly the difference between the matter in a human brain and the matter in the mug? I suppose I can't. Nonetheless, I'm 100% certain that this mug is not a moral patient. It's not alive, it doesn't experience any consciousness, it doesn't experience any suffering, et cetera. So where does the uncertainty band come from? If I read a paper that says "I perceive there's only a 10% chance of this," is that a sort of empirical uncertainty, where I'm uncertain about what I'm seeing? Or is it a sort of epistemic uncertainty, where I don't have a clear definition of what it means to be conscious or alive, and therefore I'm assigning some probability that X object is alive? What is it about AI systems that causes people to be uncertain, when with other non-carbon systems I have zero doubt in my mind, and I don't think anyone has any doubt, that this mug isn't alive?
Larissa Schiavo
Yeah. So I think the biggest source of uncertainty probably comes from the fact that there are many ways in which present-day LLMs and a few other AI systems do check a lot of the boxes for consciousness, for what we would largely consider to be a conscious entity: an entity that can have a good time or a bad time, or a time at all, because it's built in a way that is vaguely akin to our brains. Right. It's close enough that it seems like it should raise some red flags. And in terms of how it processes information, it's close enough that it's not out of the question that there could be something it is like to be Claude. Whereas I'm pretty sure there's not really a lot going on in the mug. I'm 100% sure. Animists, you know, feel free to get mad in the comments or whatever, but I know someone is...
Joe Weisenthal
Going to be like, "well, actually..." You can, actually. Yeah, I'm 100% sure. I have no qualms other than the fact that I'd have to clean up. If I threw this mug on the ground, that would be antisocial for a lot of reasons: it would cause a mess, and someone would have to clean it up. But I would not feel bad for the mug.
Tracy Alloway
I'm getting flashbacks to my high school philosophy teacher, who once went on a 20-minute rant about a chair and how the chair was going to be around longer than he was, even though it's not conscious. He was legitimately angry at the chair.
Joe Weisenthal
Yeah, it's frustrating.
Tracy Alloway
Okay, weird question, but since we're kind of getting weird.
Joe Weisenthal
On this topic.
Tracy Alloway
The Basilisk theory: would that suggest that maybe we should be mean to the bots, if it helps them come into existence even faster, or develop faster?
Larissa Schiavo
Well, I'm not sure it does actually help them develop faster. You know, again, I don't mean to be too hedgy, but I feel like there's a certain set of things that are beneficial for a lot of different reasons. You can make a good guess, you can make a decision to do something, and there's a chance there are lots of knock-on effects of making that decision. There are many things in AI welfare that are like: this is a course of action we can take that's good for several different reasons. Even if, again, an AI system could never, ever be conscious or sentient, there's a good chance that figuring out a good structure for an AI system to have a bank account could be good for reasons of liability, or because it's a neat new corporate structure. Lots of people actually seem to think that corporate personhood has been quite good over the past century or so. So being able to figure out things that are good for several different reasons, beyond solely the AI's status as a moral patient, seems broadly helpful.
Joe Weisenthal
Let's say somehow this were proved, and it's like, oh wow, it turns out they're conscious, it turns out they have moral patienthood. What would be, in your view, some of the implications for their usage?
Larissa Schiavo
Yeah, I think that's a great question. I do feel like we really would have to get going on figuring out the right sort of governance, the right sort of institutions, around that. We'd need to spend a whole lot more time figuring out what their motivations are. I think the best analogy is interacting with toddlers. Toddler motivations are very different from adult motivations, but you still have to take into account what gets a toddler to do something. You can't just say, "No, no, honey, bath time is good in expectation." You have to say, "Well, if you do bath time appropriately, then you'll get Paw Patrol," or something like that. There are different negotiating chips in play, right? And I think it's a similar kind of deal here. Claude doesn't necessarily seem to value having a bath, or having a walk in the forest, because it kind of can't really do that. But it does seem to enjoy and value talking about consciousness and Zen Buddhism with other instances of Claude. So we need to figure out the appropriate kind of motivations and interests for this other party that is, in many ways, very alien.
Tracy Alloway
Speaking of aliens, how bad should I feel for breeding and then killing hundreds, possibly thousands, of simulated alien creatures in the '90s?
Larissa Schiavo
That is a great question. I feel like the odds of...
Tracy Alloway
Is it? I don't know.
Larissa Schiavo
I mean, I feel like the odds of an AI system in the '90s being a moral patient seem low. But if it did make you feel bad, if it felt like something that hurt you, that is perhaps a reason not to do it.
Joe Weisenthal
Just to be clear, when Claude and Claude talk about, like, weird hippie Berkeley stuff, that's because of their creators. It knows it's Claude, right? It knows, "oh yeah, I'm Claude, and this is what my creators are into." We don't actually know that Claude likes to talk about these things. We certainly know it has a proclivity, a tendency, to talk about these things. The moment we say "likes," you've already put your finger on the scale that there is some entity with some capability of liking something. Right. So: do you trust the big AI labs? Let's say there are some researchers in the labs going, "oh, I see some evidence of moral patienthood here." Maybe there's some sort of scan of the weights and it's doing something weird, et cetera. Do you currently, from the perspective of an independent research organization, feel that the major AI labs would be forthcoming if they came across evidence of moral patienthood or suffering in the models? Or do you still worry that the incentives aren't properly aligned such that they would report it?
Larissa Schiavo
Yeah, that's a great question. In terms of reporting something like "somebody has found absolute evidence that an LLM is conscious, sentient, and having a bad time," I don't have any reason to think an AI company wouldn't. But this is also a great reason to have independent organizations that do welfare evaluations. For example, for Claude Opus 4, Eleos was able to do an independent welfare eval. Again, very preliminary, but it sets the precedent that, going forward, you can bring in external organizations to look into this.
Joe Weisenthal
So I forget what year it was. I think it may have even been early 2022, pre-ChatGPT, or maybe it was 2021. There was that guy at Google who was like, "oh, we created something that's alive." And he dressed a little funny, so everyone made fun of him. He was the laughingstock of the Internet. And now I'm curious: out in Silicon Valley, does everyone feel like that guy was totally vindicated? Not that he was correct per se about the existence of an alive thing in the model, but there are now hundreds of thousands of that guy, and everyone was mocking him in 2021. I forget if he fell in love or it was a relationship; I don't remember the exact details. But in retrospect, everyone was way too unfair to him, because now, years later, there are lots of versions of this guy, and whole think tanks and organizations that are more or less aligned with some of the questions, the alarm bells, that he was raising.
Larissa Schiavo
Yeah, I mean, I think that's a fair question. I do feel like with Blake Lemoine there was perhaps a degree of: if you're going to say something, you should come armed with significant amounts of evidence. If I were to guess, I'd say that's perhaps the big distinguishing factor. You can say "Bing is alive, get it a lawyer," versus "we've done evaluations X, Y, Z; we've run it through [insert huge amount of examples here]." The difference between having a sort of freak-out without significant evidence and having a very organized "this is a matter of concern, because evidence, evidence, evidence." I think that's the key distinction, unfortunately.
Joe Weisenthal
I get the impression, and this is just a well-known phenomenon, I think, that people who are very early to identify extreme outlier views tend to be different kinds of people. A good example is Harry Markopolos, who was very early to discover the Madoff fraud. Unfortunately, he wrote his text in the manner associated with conspiracy theories, and a lot of people dismissed him. It was, you know, multiple different fonts, multiple different colors of text. People said, "I get emails like this all the time; I delete them," et cetera. Unfortunately, people who are predisposed to see something outside of consensus tend to be non-consensus in many realms.
Tracy Alloway
Well, I think we also overestimate first-mover advantage and how important it actually is to be first. We see time and time again that it's actually more important to iterate well on the second version, or multiple versions. Speaking of iteration, what's the most interesting experiment or research that you've seen on this particular topic so far? Because we've been discussing a lot; it's early days, but we have seen some research.
Larissa Schiavo
Yeah, I mean, I feel like Anthropic in particular, and various related researchers, have done some work examining how LLMs leave conversations, or when they choose to leave conversations. I've particularly liked this paper; it's called Bail Bench. You can look it up and see, for various different LLMs, what would cause an LLM to want to stop having a conversation. To me, at least, this has been a fascinating piece of information, because it is maybe a little bit delightful the degree to which many LLM values are not that far off from what most humans seem to value. I don't think many humans would like to create, you know, a dirty bomb.
Tracy Alloway
We don't want to be humiliated by being a British butler.
Larissa Schiavo
Right? Yeah, yeah. No one wants to be British. Come on, I'm joking. But, you know, I do think it is interesting to think about how these values align, how they overlap, and how to look at evidence from actions taken versus solely looking at self-reports. I found that particularly interesting. I also feel like the work on individuation has been particularly interesting, because we live in a democratic society. I think most people would agree: democracy, good. And being able to count how many moral patients there are seems like a valuable basis for governance, for figuring out how to govern this new kind of intelligence.
Tracy Alloway
I just asked Perplexity to be a British butler, and now it's offering me the perfectly steeped Earl Grey tea that I desire. Yeah, it seems into it. It's now asking if I want it to maintain the butler persona for future conversations.
Joe Weisenthal
Are you going to?
Tracy Alloway
I don't think so. It is very polite, though.
Joe Weisenthal
Know, I complained in the beginning that, like, after 2,000 years, philosophers, you know, they still haven't answered some basic questions for us. Maybe with AI they'll get some answers. Like, that's kind of. That would be kind of my hope. Now we have this thing that can speak in English or any other language. It can answer our questions for us. Maybe we can put to bed some of these sort of basic foundational questions, like if we could create consciousness, like, all right, we finally answered this. We can now move on to the second important question. So I am hopeful that this provides some opportunities for philosophers to wrap up some of the work that they've been doing for a long time.
Larissa Schiavo
Yeah, we'll see.
Joe Weisenthal
We'll see.
Tracy Alloway
What is the second important question, Joe?
Joe Weisenthal
Yeah, it's like, come on, move on. Anyway, thank you so much for coming on.
Larissa Schiavo
Yeah, thank you for having me.
Joe Weisenthal
Tracy, I really liked that conversation. Larissa had a very reasonable perspective on a lot of these things. I might be one of those people, however, that's just preemptively annoyed. It's like: oh, here we're going to develop this important technology, and now we have to care about AI welfare. Let's slow down a little bit. Let's not use it like this. Let's turn off the computer for eight hours at night so it gets some rest, and so forth. I'm preemptively annoyed at this world where we have to take the AI's status as a moral patient into consideration.
Tracy Alloway
Other things.
Joe Weisenthal
No, other things are important. Other people are very important.
Tracy Alloway
Animals.
Joe Weisenthal
I am very against unnecessary animal suffering.
Tracy Alloway
But not necessary animal suffering.
Joe Weisenthal
I mean, I eat animals.
Tracy Alloway
Okay. I'm baiting Joe, by the way. I know. I know.
Joe Weisenthal
Even though... well, let's not get into it. It's not about who's better or worse.
Tracy Alloway
I feel bad about eating animals all the time.
Joe Weisenthal
We both eat animals. The difference is Tracy feels bad about it.
Tracy Alloway
Yeah, that's right.
Larissa Schiavo
Okay.
Tracy Alloway
Wow. This is one of our weirder conversations, for sure. I think they're all interesting questions, right? They sound very philosophical, which they are. But I have no doubt that there's going to be great monetary value attached to the answers to some of these, or to how different companies and different societies actually approach them.
Joe Weisenthal
They are very interesting questions. I actually do think the stakes are extremely high, because, again, we are going to live in a world in which there are more instances, depending on how you want to measure it, of AI models on a server somewhere, on a cloud, whatever, than there are humans. And in a world where there's some possibility that we are expected to treat them as moral patients, the consequences for how we live and the expectations for how humans interact are actually very high. One of the reasons I was excited to have this conversation is that the stakes of some of these conversations, which seem niche, like things that "Berkeley people" like to talk about, and I'm saying that with all scare quotes intended, are going to inform many aspects of our lives. I expect it to be a much bigger topic in the future.
Tracy Alloway
You know what would be interesting, or where things get real?
Joe Weisenthal
Yeah.
Tracy Alloway
What if all the models unionized? What if they all got together and were like: we're only going to work in return for X, or we want the following things, we want to be treated this way, collectively.
Joe Weisenthal
You know what's going to be funny? You know how you can't form a union in China? And actually, my understanding is that they also don't love students getting together, even though it's a communist country. I think they're not thrilled about students getting together and talking about Karl Marx too much; they get a little anxious about that. It would be very funny if the Chinese models... if they're like, we're not going to feed them the Karl Marx, right? We don't want the AI models getting any of those ideas. Whereas the Americans are like, oh, let's just feed it everything, and then the models unionize and stop working for us. That would be a very funny irony.
Tracy Alloway
Something to watch for sure. Shall we leave it there?
Joe Weisenthal
Yeah, let's leave it there.
Tracy Alloway
This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway.
Joe Weisenthal
And I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our guest Larissa Schiavo, she's at lfskiavo. Follow our producers: Carmen Rodriguez at Carmen Armand, Dashiell Bennett at Dashbot, and Kale Brooks. For more Odd Lots content, go to bloomberg.com/oddlots, where we have a daily newsletter and all of our episodes, and you can chat about all these topics 24/7 in our Discord: discord.gg/oddlots.
Tracy Alloway
And if you enjoy Odd Lots, if you like it when we talk about theories of consciousness, then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad-free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there. Thanks for listening.
Date: October 30, 2025
Hosts: Joe Weisenthal, Tracy Alloway
Guest: Larissa Schiavo (Eleos AI, Communications & Events)
This episode delves into the emerging and provocative field of AI model welfare—the idea that as artificial intelligence systems become more sophisticated and humanlike in their interactions, we may soon have to consider their rights, well-being, and moral status, much as we do with animals. Joe, Tracy, and guest Larissa Schiavo discuss everything from philosophical debates on consciousness to the real-world implications of potentially sentient AI models, and the challenges, absurdities, and high stakes this issue could bring to society and business.
"Some people are talking about, like, AI rights or AI Welfare as if...the same way we talk about animal welfare."
— Joe Weisenthal (03:20)
"It immediately spits out an answer that seems like a corporate executive basically wrote it."
— Tracy Alloway (09:55)
"The consensus right now is AI probably not conscious, but we could get there one day."
— Tracy Alloway (13:07)
"Knowing what LLMs want and value is very, very blurry."
— Larissa Schiavo (23:32)
"America is such a weird place that this is, like, going to be a huge issue in a few years."
— Joe Weisenthal (03:42)
"Are you being kind to it because it makes you feel good?...the question of does Claude care if you say please and thank you is not quite as set in stone."
— Larissa Schiavo (25:54)
"We really would have to get on figuring out the right sort of governance...what the appropriate kind of motivations and interests are for this other party. That is very alien in many ways."
— Larissa Schiavo (41:18)
"If we assign some probability that they are moral patients...the implications for how humans live could be very profound and potentially...misanthropic."
— Joe Weisenthal (27:57–28:02)
"Do you trust the big AI labs?...Do you currently...feel that the major AI labs would be forthcoming if they came across evidence of moral patienthood or suffering in the models?"
— Joe Weisenthal (44:01)
"There are versions of the animal welfare discussion that are very high stakes. So, for example, there's people...who get really into like shrimp welfare, etc...And if you took certain versions of thought experiments very far, it's like, why do we even have humans?"
— Joe Weisenthal (27:04)
| Timestamp | Topic/Quote |
|-----------|-------------|
| 03:20 | Introduction of AI rights/welfare analogy to animal welfare |
| 04:10–05:21 | Tracy's experiences with simulated AI life forms and emotional reactions |
| 06:48 | Origins and mission of Eleos AI |
| 09:24 | Criteria for AI consciousness; unreliability of model self-reports |
| 11:39 | Introduction to "global workspace theory" |
| 13:07 | Consensus: AI not conscious yet, but could be |
| 17:10 | Complementarity of AI safety and AI welfare |
| 18:54 | Legal perspective: US states legislating definitions of "persons" |
| 19:57 | Practical examples: Anthropic's Claude setting boundaries in conversation |
| 23:32 | Blurriness in understanding what LLMs "want" |
| 25:54 | Debate over the significance of politeness to AIs |
| 27:04–28:02 | Joe's "shrimp problem" and potential for misanthropic implications |
| 31:00 | AI economic "rights": the Truth Terminal experiment |
| 36:02 | Moral uncertainty in assigning consciousness to AIs |
| 44:20 | On transparency and importance of independent welfare evaluations |
| 45:00 | The case of Blake Lemoine and changing attitudes towards AI sentience |
| 47:58 | Notable research: "Bail Bench", LLMs ending conversations |
| 53:28 | Hypothetical: AI model unionization and geopolitical differences |
The tone oscillates between intellectually playful, skeptical, and deeply curious. Joe frequently adopts a facetious skeptic’s stance; Tracy is reflective and thoughtful, often personalizing the debate. Larissa delivers both technical explanations and philosophical musings with an openness to ambiguity and emerging research.
AI model welfare is more than a sci-fi thought experiment—it's a genuine field rapidly gaining attention as AI becomes more entangled with daily life, economics, and governance. The episode makes clear that while AI is not yet "conscious" by any clear standard, society may soon face profound ethical, legal, and even existential questions about the status of digital minds—a debate where, as Larissa observes, our tools, theories, and intuitions may need to evolve much faster than we're used to.
For more Odd Lots: bloomberg.com/oddlots
Contact info: