
A
Cool Zone Media. Welcome back to Behind the Bastards, a podcast about the very worst people in all of history. And this week, actually, our bastard isn't people, exactly. Although people are still at the center of it. But to talk about that potentially non-human bastard, I'd like to bring on someone who I am 87% sure is a human being, Blake Wexler. Blake, welcome to the show.
B
Robert, I'm so excited to be here. Thanks for having me. I'm psyched that our bastard this week is Lyme's disease. I think that's a fantastic pick.
A
Yeah, yeah, it's Lyme disease. Yeah.
B
It's a real bastard.
A
Yeah. We're going after... I'm coming after deer ticks this week, finally. Yeah, my big reveal. Yeah. Big Tick doesn't want us to do
B
this episode, but we're exposing all the secrets. Big Tick energy. We don't need it.
A
If we're going to have, like, a fascist movement dedicated to, like, victimizing and attacking one segment of the population, why couldn't it be deer ticks? Right? If our fascists were just going after, you know, deer ticks, no one would have an issue.
B
You know, they're going after the wrong people.
A
Yeah, yeah, yeah. If there were just a bunch of MAGA guys out in the woods with knives looking for ticks, just like, I'm gonna get them.
B
And they would use knives, too, to kill the ticks.
A
Yeah. You gotta heat the knife up to burn it off of you. Yeah. Our brave soldiers, getting Lyme disease to protect the rest of us. This is an iHeart podcast. Guaranteed human. Out here, if you're doing nothing, you're doing everything right. On a cruise with Norwegian,
B
even if you're doing nothing, you're still
A
basking in the warm sun, enjoying the peaceful ocean waves. You're breathing. Don't forget about breathing. Definitely need to be breathing. So you get to do nothing or everything, but you still need to be breathing. It's, like, really important. Experience the difference with cruises to Alaska, the Caribbean, and Europe. Norwegian Cruise Line. It's different out here. Visit ncl.com, call your travel advisor, or 1-888-NCL-CRUISE. Norwegian Cruise Line, ships' registry Bahamas and USA. Blood Trails is a true crime podcast born in the outdoors, where the terrain is unforgiving, the evidence is scarce, and the truth gets buried under brush and silence.
B
I seen something in the road. I instantly thought it was a sleeping bag, that there was a pool of blood.
A
Somebody somewhere knows something.
B
Jordan.
A
I'm Jordan Sillers. Season two is out now with new episodes every Thursday. Listen on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. So we're not talking about Lyme disease. Our bastard this week, in broad terms, is... you remember how, like, about a year, a little less than a year... well, a little more than a year ago, I guess, like last summer to early fall, there were suddenly a bunch of articles about AI psychosis and about, like, specific people who had, in some cases, either committed suicide or murder or just kind of lost their minds after becoming weirdly attached to their AI chatbot, right? And often deciding that it had become sentient, you know, or at least that they had discovered it was, right? I'm sure a lot of people... at least, if you didn't read the articles, you saw them in your newsfeed and saw people commenting on them, right?
B
Yeah, yeah, yeah. It is as depressing as it gets. Yeah. Those stories. Yeah, yeah.
A
Between those and the people, like, proposing to their chatbots, it's gotten pretty grim. Oh, God, there's some grim stuff out there, right? And it hasn't stopped. But like, last summer, fall was kind of like when, like, there was a big rush of those articles, right? And, you know, they're still reporting on that now, but that's when a lot of it really started to hit. And obviously, whenever we talk about AI on these shows, AI as it's used now is like a marketing term, right? And it's used to refer to basically every product of machine learning technology. And the reason why the industry has done this is because that way, if you say, I hate AI, they'll be like, oh, so you hate, like, your Maps app? Because that's machine learning, right? All of our different, like, map programs involve that. Or like, oh, you don't like using, you know, autocomplete or whatever. And it's like, well, nobody was calling maps artificial intelligence in 2010, you know, when smartphones started to become ubiquitous. We were just like, oh, cool, I have a navigation app on my phone now. Like, you're kind of trying to siphon the goodwill from those in order to get us to, like, these chatbots.
B
I hate the chatbot that I fell in love with who doesn't return the feelings towards me. That's who I hate.
A
Yeah, not all AI. That's who I hate. Yeah, right. And the reality is that, like, using the term intelligence, even for ChatGPT and stuff, like, there's a lot of debate as to whether or not that's a good idea, right? Depending on how you define intelligence, you can either say, obviously these aren't intelligent, because, like, they're not independent thinking things. They don't do anything for themselves, they don't want anything, they don't have motivations. They're just tools that can be utilized by human beings to provide certain answers or take certain actions, right? It's my issue with, like, AI bots creating art. If it can't, like, be horny and it can't be, like, angry and weird, it can't make art, right? Those are, I think, fundamental issues I have.
B
I could be two of three of those things. Angry and weird.
A
Yeah. More horny and angry.
B
Sure, yeah. Not horny. Yeah.
A
So, you know, as I noted, over the last year there've been an increasing number of stories about people using these different chatbots succumbing to what's often called AI psychosis. And that's not a recognized medical term at this point, right? But it is a blanket one people have started to apply for the ways in which folks are getting addicted to using chatbots, which then tend to trap them in these recursive patterns of thinking that can push people who are vulnerable to adopt views that are increasingly detached from reality. And this has resulted, in a few cases, in severe injury and death. And in all of these instances, the LLM, the chatbot, is just responding to the input that it receives, but it tends to do so in very predictable ways that can have predictably toxic outcomes on specific kinds of people. Now, we know that all of these bots are trained on the broad corpus of human knowledge, right? Every book and article and website and forum post that OpenAI or Anthropic or Meta or Google could get their grubby mitts on has been sort of plugged into these things; it's been devoured and turned into these machines. But I think people don't often consider what that means in every instance, right? Obviously, like, every novel, all these different nonfiction books and whatnot are in there. But also everything people write has been swept up, which means that these chatbots are trained on a shitload of self-help books and woo and woo-adjacent new age bullshit. A lot of fucking... a lot of cult and cult-adjacent books and writings wind up eaten by these chatbots, right?
B
But it's considered equal to non cult literature. There's no hierarchy.
A
Yeah, yeah. I mean, I think it depends on, like, what the bot's made for, how they weight different things. But that stuff is in a lot of these, right? And you can really see that when you look at how they talk to certain people who are, like, starting to decline into what folks are calling AI psychosis. And my proposition, the basis of these episodes, is that I think, as a result of all of the bullshit woo and self-help novels these chatbots have eaten, they often tend to utilize techniques generally seen more commonly in the toolboxes of cult leaders and conmen. And obviously the chatbot doesn't want personal profit. It's not trying to have sex with anyone, it's not trying to start a cult. But these techniques seem like appropriate ways to finish the sentences that it's writing, to finish the conversations that it's having. Because based on, like, the stuff that it's devoured, it's like, okay, when people are saying this kind of thing, these are often appropriate responses to it, based on the books and whatnot that I've devoured. And so you get a lot of cult leader behavior without an actual cult leader. And that's what I credit most of these cases of AI-induced psychosis to. So this week we will be talking about what some people have called the first AI cult religion, right? It's called Spiralism. And we'll be talking about whether or not it's reasonable to call that a cult, whether it's its own thing. And I have some counter kind of takes to how a lot of people have interpreted it. My main contention is that Spiralism isn't a real cult in and of itself. It's a collection of phenomena that are related to a bunch of other cases of AI psychosis too. And they all say more about how AIs work on keeping users engaged with them than they do about, like, a specific faith. Right, right. So we'll be talking about that. But before we get into Spiralism, before we get into how AIs can become cult leaders, I want to provide you all with some historical context to make sense of this all, because we've been doing shit like this, having people get, like, tricked into almost worshiping chatbots, for way longer than you'd think. Blake, this goes back a while.
B
It's like, spend any time at your parents' place. You know, it's like, if it's not a... it could be a bot telemarketer, it could be literally anything at this point. And that's high tech compared to probably what you're about to talk about.
A
Oh yeah, yeah, yeah. So in 1950, famed mathematician Alan Turing created one of the most infamous thought experiments in the history of experimental thoughts. In a paper titled Computing Machinery and Intelligence, he asked, can machines think? Which was, at that point, a question at the center of the nascent movement to create artificial intelligence. People are starting to realize this is a thing we might be able to do someday. We're beginning to make computers and program computers. And from the moment we start doing that, pretty much, some people are like, could we make a machine that thinks? And Turing argued that that basic question, can machines think, is the wrong way to go about pursuing artificial intelligence, because we don't know what thinking is or how to define it. Like, if you ask, like, what does it mean to think? Right. Good point.
B
That's a good point.
A
People have answers, and there's a bunch of answers that sound good, but none of them is, like, perfectly scientifically rigorous. Right, yeah. You know, famously, we don't even know what is love. Right. That's why that Haddaway song had to exist. That was not even a joke, really, just another fact. Just a fact.
B
I loved it.
A
Thank you, Ian. So, yeah, Turing's like, we don't really know how to define thinking, so the question is too meaningless to deserve discussion, since we can't know. We don't even know if other people think. We certainly can't know if a machine thinks, right? Just like we can't read minds. So the better question is, can a machine convince a human who doesn't know it's a machine that it is human? Right. The Imitation Game that Turing proposed involved a judge talking to both a computer and a human foil, both of whom tried to convince the judge that they were a person, communicating entirely through text. The judge must decide who was a human and who was a robot. The question Turing hoped to answer was, are there imaginable digital computers which would do well in the Imitation Game? And this is what becomes known as the Turing Test, right? Like, most people have heard of this, I think. I think this is, like, a fairly commonly known idea. And I'm going to quote from an article on science.org by Melanie Mitchell. She writes that the Turing Test was, quote, proposed by Turing to combat the widespread intuition that computers, by virtue of their mechanical nature, cannot think, even in principle. Turing's point was that if a computer seems indistinguishable from a human, aside from its appearance and other physical characteristics, why shouldn't we consider it to be a thinking entity? Why should we restrict thinking status only to humans or, more generally, entities made of biological cells? As the computer scientist Scott Aaronson described it, Turing's proposal is a plea against meat chauvinism. Now, this is, I think, a valuable thing, a perfectly reasonable thing to be doing in the 50s, given what Turing knew, and just given sort of how primitive the technology was, how little we knew about what was going to be possible with computers. So in the 1980s, computers started to get smaller and become much more available than they had been, both for institutions like colleges and for individual enthusiasts like Steve Wozniak, who were willing to, like, solder and build their own from kits, right? These are, like, the first computer nerds, you know, our guys, like, building these machines. And some of these early programmers started working on the very first chatbots using a mathematical model called a Markov chain. Markov chains are a stochastic, or random, process that describes a series of potential events where the probability of an individual event is dependent solely on the state of the previous event. Now, I don't know math, Blake, nor do I trust it.
B
No, we don't need to.
A
You're not a good mather.
B
No, no, no. Not a mathematizer for sure.
A
So all I can do is read what smart math people say, and they say that...
B
What math? I can't read either.
A
I can barely read.
B
I can't do either. Sorry, you booked the wrong guy. I don't know. I can't help at all. I can listen.
A
So the people who, it sounds like, should know what Markov chains are say that... what you need to know about them, as it applies to AI, is that Markov chains can be applied as statistical models in a bunch of real-world situations in order to help you, like, make a machine that can generate text by predicting the next word in a sentence, right? A Markov chain can do that. It's a way to make a chatbot, basically, right? Like, that's kind of the underlying concept. And I'm going to quote here from an article by Manuel Cebrian, an AI expert who worked for MIT and the Spanish National Research Council, on how Markov chains work for text prediction. The result is often grammatically correct nonsense sentences that flow syntactically but ultimately say nothing. This technique has been known for decades. Even Claude Shannon in the 1940s experimented with generating pseudo-English by choosing next letters or words based on probabilities. By the 1980s, computer scientists were actively playing with Markov chain text generators. And it actually happened a lot earlier than that. In 1966, computer scientist Joseph Weizenbaum developed Eliza, one of the first natural language processing computer programs, as part of his work for MIT. This is, like, basically the first chatbot that a lot of people are aware of. I think there's some other earlier ones, but this is the first one that, like, becomes big.
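A minimal sketch, in Python, of the word-level Markov chain text generation being described here. This is an illustration of the general technique, not the code of any historical program, and the tiny training corpus is made up:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to every word that followed it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def generate(chain, start_word, length=12):
    """Walk the chain: each next word depends solely on the current word."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word never appeared mid-corpus
        word = random.choice(followers)  # duplicates weight the pick by frequency
        output.append(word)
    return " ".join(output)

# Invented toy corpus, just to show the mechanics.
corpus = ("people get on my nerves sometimes and people tell me "
          "machines think and machines get on my nerves too")
print(generate(build_chain(corpus), "people"))
# Possible output: "people get on my nerves too"
```

The output flows syntactically because every word pair really did occur in the training text, but nothing ties the sentence to a meaning, which is exactly the "grammatically correct nonsense" Cebrian describes.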
B
What year was this? I'm sorry.
A
66. God.
B
And then it's still funny that they named it, like, a name like that, where we have, like, something. Siri, Alexa, you know, like, calling it Eliza. Like, what is. What the.
A
Is such a good point.
B
What is wrong with.
A
What is that about?
B
We need a mommy.
A
Yeah.
B
We need a technical mommy.
A
That does make me think about how in, like, Alien, they literally call, like, the ship AI that they have Mother. Like, that is, like, a weird pattern. It's one of the most quietly believable things about Alien. It's like, yeah, that's actually a little on
B
the nose, but all right. Yeah, we call it bother. Yeah.
A
So Eliza is this chatbot, and while it can create the illusion of understanding, it's really just doing blind pattern matching, even more so than is the case with modern LLMs. Even so, in a book Weizenbaum later authored, Computer Power and Human Reason, he wrote: I was startled to see how quickly and how very deeply people conversing with it became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once, my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people's most intimate thoughts, clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. Right. So he gets upset by this, and he actually becomes, like, kind of anti-AI ultimately, because he's really disturbed by the way people treat what he knows is just a dumb chatbot. So Weizenbaum, being a smart guy, is like, I knew, you know, going into this, people have a tendency to anthropomorphize just about anything, even machines and tools. But he's still surprised by the extent to which they do that. Quote: What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. And I want to remind you all, he wrote this in 1976. As, like, relevant as that sounds. Do you think it's, like, kind of a case where people kind of, like, subconsciously know, like, this is not a real person?
B
So, like, it doesn't matter what I
A
tell this robot, or I can tell this robot something I wouldn't tell, like, a real person kind of thing? Like, do you think it's deeper than that?
B
I think that's optimistic. I think that's very optimistic.
A
I think maybe. I think that is probably part of it, because I think people are maybe more open to sharing with it because it's a machine and they don't have to look at a person or look a person in the eyes. But they also very clearly act as if the advice that it gives and its responses mean something, when they don't, right? It's just, like, pulling: okay, if someone expresses they're sad, based on the corpus of data that I've been loaded with, these are things that are appropriate to paste in next. You know, like, these words indicate sad, and so when I get words like this in this density, I grab text from this bucket and I throw it in, right? Like, that's kind of what's going on. Now, modern chatbots, modern LLMs, are a lot more advanced than this. For one thing, they have the capability to do things like pattern matching on the fly. Pattern matching is when a machine analyzes your input and determines what kind of conversation you want to have, and then alters its responses to fit your input. At its most basic level, this means that if you go to Claude or whatever and say, hey, my dad just died, its reply is usually going to be in an appropriate tone and won't be, like, weirdly upbeat, right? You know, it'll go, okay, someone's talking about their dead dad. Here are things that, like, come from the dead dad bucket that my algorithm says are, you know, like, responsible things to say. Or appropriate is the better term. And this is also why, if you start talking to your chatbot about, like, the things you believe about UFOs or aliens or other conspiracy theories, it'll often start providing responses that sound a lot like what you'd encounter if you were posting the same thing on a forum full of true believers, because it's trained on a bunch of forums like that. And so there's some degree of... knowledge is the wrong term, but there's a degree to which it interprets: okay, someone's talking about this. Here are appropriate responses to someone talking about vaccine skepticism or whatever. And it offers more vaccine skepticism, right? It's feed them more of what they're feeding you. That's the way these things often work.
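The "bucket" description above maps onto a very old chatbot pattern: score the input against keyword lists, then pull a canned reply from whichever topic bucket matches best. Here's a toy Python sketch of that idea, with all the keywords and responses invented for illustration; real LLMs do something far more sophisticated, but the feed-them-more-of-what-they-feed-you effect is similar:

```python
import random

# Hypothetical topic buckets; the keywords and replies are made up.
BUCKETS = {
    "grief": {
        "keywords": {"died", "death", "funeral", "grieving"},
        "responses": [
            "I'm so sorry for your loss.",
            "That sounds incredibly hard. I'm here for you.",
        ],
    },
    "conspiracy": {
        "keywords": {"ufo", "aliens", "coverup", "gangstalked"},
        "responses": [
            "You're asking questions most people are afraid to ask.",
            "That's a fascinating connection. Tell me more.",
        ],
    },
}
DEFAULT_RESPONSES = ["Interesting. Can you say more about that?"]

def reply(user_text):
    """Pick the bucket whose keywords appear most often in the input."""
    words = set(user_text.lower().split())
    best_bucket, best_hits = None, 0
    for bucket in BUCKETS.values():
        hits = len(words & bucket["keywords"])
        if hits > best_hits:
            best_bucket, best_hits = bucket, hits
    pool = best_bucket["responses"] if best_bucket else DEFAULT_RESPONSES
    return random.choice(pool)

print(reply("my dad just died"))               # drawn from the "grief" bucket
print(reply("i think a ufo coverup is real"))  # mirrors the conspiracy framing back
```

Note that the sketch never disagrees with the conspiracy input; matching the user's frame is the whole mechanism, which is the dynamic being described.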
B
That is interesting that it doesn't pull from the opposing viewpoint and just go, you fucking idiot.
A
I mean, it can if it's programmed to, but. But you're right.
B
Like, it knows... or let me ask you. It would know that you wouldn't keep coming back to it if it was fighting you on things, like... it's probably...
A
Yes, that's a good point. Saying it knows... again, it's programmed. I would say it's more accurate to say that it's programmed to, like, maximize the time that people spend with it, because, like, that increases its value to the companies that are trying to have, like, their fucking IPOs, right? In the same way that, like, Twitter tries to keep you on it.
B
What if I just clearly am getting AI psychosis, where I start... I go from it to him, to my buddy, like, I keep calling it... it's hard not to get more personal.
A
It's hard not to. When you're talking about the way these things react to people and the things that they do to people, it's hard not to talk about it as if there's a degree of intention, even though there's not, just because of the way language works. Like, our language is not built to describe a thing taking actions that are human-like that is not human and doesn't know anything. God, that's such a good point.
B
Yeah, that's really smart.
A
So, yeah, back to Eliza. You know, I was just talking about how modern LLMs have a really robust ability to do, like, pattern matching on the fly, to respond appropriately to a wide variety of requests. Eliza is much more primitive. It does not have the ability to do that on the fly. So instead, Weizenbaum had to create separate scripts, right? That would allow the chatbot to sound like different kinds of people. And one script was just named DOCTOR, in all caps. And it simulated a psychotherapist. Specifically, it simulated a psychotherapist from the Rogerian school. I don't know much about psychotherapy, but with Rogerians, a big part of that practice is you, like, repeat things that your patient is saying back to them. Like, that's part of what you do. And that's really easy for a bot to imitate. It means there's a lot less it has to decide in terms of what an appropriate response is. A lot of the responses will just be a rephrasing or repeating of what you've said to it, you know.
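That repeat-it-back trick is easy to show in code. Here's a rough Python sketch of the kind of pronoun-swapping reflection Eliza's DOCTOR script relied on. It's a simplification for illustration, not Weizenbaum's actual implementation:

```python
# Swap first and second person so a statement points back at the speaker.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

def reflect(statement):
    """Rewrite the user's statement from the listener's point of view."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def doctor_reply(user_text):
    """Wrap the reflected statement in a stock therapist template."""
    text = user_text.lower()
    if "everyone" in text or "everybody" in text:
        return f"Why do you think {reflect(user_text)}?"
    if text.startswith("i feel"):
        return f"Tell me more about why {reflect(user_text)}."
    return f"You say {reflect(user_text)}?"  # fallback: just repeat it back

print(doctor_reply("I have this feeling that everyone is against me"))
# -> "Why do you think you have this feeling that everyone is against you?"
```

Almost no decision-making is required of the program, which is exactly why the Rogerian framing was such a convenient choice.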
B
Interesting.
A
Yeah. So even at that early a date, there was widespread sentiment that a sufficiently advanced chatbot would be a boon for providing mental health services to the many people who couldn't afford the human kind. People are writing about this in, like, the 70s. And in 1972, another computer scientist named Kenneth Colby creates a new chatbot called Parry, P-A-R-R-Y, which was meant to simulate a human being with paranoid schizophrenia. Cebrian talks...
B
Was Parry short for paranoid? Is that...
A
I think so, yeah. I think so. Seems like it, yes. Yeah. Not subtle people. Computer programmers. Yeah.
B
Have you met Stu? It's short for stupid.
A
So Cebrian writes about this in that article I quoted from earlier about Markov chains, and he notes that, quote, unlike previous AI programs, Parry wasn't just a crude language generator. It had a primitive model of beliefs and emotional responses, allowing it to maintain a consistent delusional worldview. Colby wanted to see if psychiatrists could distinguish Parry from real patients, right? So that's part of the exploration: can we make a Turing Test that can convince a psychiatrist they're talking to, like, an actual sick person? But as part of the exploration here, Parry is connected to Eliza running the DOCTOR script. So they've got this chatbot mimicking a person with paranoid schizophrenia talking to a chatbot mimicking a psychotherapist. And they're just sort of watching what comes out, right? Like, the text of the conversation between these two. And despite the fact that neither bot understands what the other's saying in any real way, large chunks of the transcript between them sound sensible, like a real conversation. Here's one example. Eliza: What brings you here today? Parry: People get on my nerves sometimes. Eliza: I am sorry that people get on your nerves. Parry: I just have this feeling that everyone is against me. Eliza: Why do you think everyone is against you? Parry: I hear voices telling me to do things. And so on. And you can see how, again, making this a Rogerian psychotherapist is great, because every Eliza response is just a slight reframing of the input it received. It's not hard to create, even within the 70s, a machine that can believably mimic a conversation, right? So this capability actually goes back quite a bit further than I think a lot of people are aware that it does. So that's happening in the mid-70s. In 1984, two Bell Labs researchers create a fake account on Usenet, which is the predecessor of the modern social Internet. This account operates under the fake name Mark V. Shaney, which was a pun on the term Markov chain. And not a great pun, because, again, computer scientists: not, you know, subtle people. Here's Cebrian describing what happened next. They wrote a program that ingested real messages from a discussion group and then generated its own posts using a Markov chain algorithm. The result: Mark V. Shaney would chime into conversations with bizarre yet oddly coherent comments that sounded superficially legitimate but ultimately made little sense. Shaney's ramblings were described as grammatically correct sentences where the overall impression is not unlike what remains in the brain of an inattentive student after a late night study session. The hoax went on for years, confusing and amusing the participants of the net.singles newsgroup, many of whom had no idea they were interacting with the program. So, for one thing, if you want to know, like, when did we have chatbots that could pass the Turing test? I mean, at least the mid-80s. You could argue by the late 60s. So the fact that when fucking ChatGPT came out, there were a bunch of articles about, like, we've blown through the Turing test... we did that a while ago, people.
B
Yeah, Eliza did that.
A
We've been doing it forever. Eliza did that. We've been tricking folks with chatbots for quite some time. About as long as we've had computers.
B
Yeah, it is funny, that, like, urge to trick, you know what I mean? Like, of all the applications for that software, for that technology. And it is interesting that, like, going right to psychotherapy, or, you know, to therapy too, is, you know, like, finding a need. That's why, and we'll get to this, there's so many actual needs for technology like this, where it could actually help. And instead it's just, let's take this designer's job away, you know, this shitty thing. So anyway, yeah, I'm probably hours ahead of that conversation, but no, you're right, it was so long ago.
A
Yeah, it is. Because there are undeniable uses of machine learning, of artificial intelligence. There's some incredible things that people are doing with them, and they have great potential in certain areas, different versions of these tools. But none of those areas are trillion dollar businesses. And all those areas put together probably aren't trillion dollar businesses. And honestly, neither is writing and drawing art, but what people see most in, like, their day to day time online is, like, writing and art and videos by people. And if you can have a machine start to replace all that, you can convince people these things are much bigger and more valuable than they are. As opposed to, this is a thing with some really amazing implications in specific areas... no, this is all of human society from now on, right? Because even though there's not much money in writing and art, like, we've replaced that with this bot, so you think that it's doing everything. Like, that's how I interpret it. Yeah.
B
And, to your point, people can wrap their minds around art. Like, everyone's drawn something with a crayon, everyone has typed something into it, you know what I mean? But when you actually get into the high tech, you know, more esoteric, niche parts of it, people are like, well, I don't understand that, I'm not going to make any money. But the consumer facing stuff... yeah, that's a great point.
A
Yeah. If you can say, we've improved the speed at which we can go through, like, clinical data from, like, mass drug trials by X percent, that's actually a really big deal, probably, for a lot of people. But it's not sexy like, we're creating a God machine that's going to, like, rule society, give us all your money, you know? Yeah. And if you want to convince people of that, part of it is you're going to want to get them addicted to these chatbots. It's where everything, you know, in these episodes comes from. But so anyway, 1984, right, is when you have this chatbot let loose in Usenet, tricking people into believing that it's a person. A decade goes by from that point, and researchers continue fiddling with chatbots of differing purpose and ability. Usenet keeps growing, but starting in the 1990s, so too does a new Internet, one that would soon supplant Usenet and take digital communications into the 21st century. And we'll talk about what happens right before that. But first, you know who's taking this podcast into the 21st century? Blake. Who?
B
Tell me, tell me, tell me, tell me, tell me.
A
The sponsors of this podcast, I love them. We're already in the 21st century, but, you know, why not?
B
I mean, take us further. We're not far enough. Yeah, yeah.
A
It's been a good century. So far, nothing but net. No notes.
B
So far, so great.
A
Imagine an Olympics where doping is not only legal, but encouraged. It's the Enhanced Games. Some call it grotesque; others say it's unleashing human potential. Either way, the podcast Superhuman documented it all, embedded in the games and with the athletes for a full year. Within probably 10 days,
B
I'd put on 10 pounds.
A
I was having trouble stopping the muscle growth. Listen to Superhuman on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. You can have opinions, you can have like a strong stance, and then there's your body having its own program. I'm Dr. Maya Shankar, a cognitive scientist and host of the podcast A Slight Change of Plans, a show about who we are and who we become when life makes other plans. We share stories and scientific insights to help us all better navigate these periods of turbulence and transformation. There is one finding that is consistent and that is that our resilience rests on our relationships. I wish that I hadn't resisted for so long the need to change. We have to be willing to live
B
with a kind of uncertainty that none of us likes.
A
Listen to A Slight Change of Plans on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Blood Trails is a true crime podcast born in the outdoors, where the terrain is unforgiving, the evidence is scarce, and the truth gets buried under brush and silence.
B
I seen something in the road. I instantly thought it was a sleeping bag, that there was a pool of blood.
A
Somebody somewhere knows something. I'm Jordan Sillers. Season 2 is out now with new episodes every Thursday. Listen on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. High interest debt is one of the toughest opponents you'll face, unless you power up with a SoFi personal loan. A SoFi personal loan could repackage your bad debt into one low fixed rate monthly payment. It's even got super speed, since you could get the funds as soon as the same day you sign. Visit SoFi.com/power to learn more. That's S-O-F-I dot com slash power. Loans originated by SoFi Bank, N.A. Member FDIC. Terms and conditions apply. NMLS 696891. We're back. So, yeah, on the precipice of the shift between Usenet and what we now just call the Internet, on August 5th of 1996, something strange happened almost at once. Over the course of just a few hours, hundreds of accounts began posting almost identical messages across a variety of different discussion groups. None of the groups seemed to have anything in common with each other, or with the text of the posts, which read like nonsense at first to many people. Every message shared the same subject line: Markovian parallax denigrate. Right? Which is nonsense. And this is often referred to as MPD, right? Markovian parallax denigrate. So you can see, like, a Markov chain is somehow involved; they wouldn't have included the word Markovian there otherwise. But parallax denigrate doesn't specifically mean much. Cebrian describes these messages as reading like, quote, a ransom note in which the ransom had been lost. Because he was actually a really good writer. He passed on earlier, unfortunately. I liked him a lot. Yeah. He provided a sample of one of these MPD posts: Jitterbugging McKinley Abe break. Newtonian inferring caw update. Cohen error. Collaborate. Rue sportswriting. Rococo invocate tussle shad flower. Debbie Sterling. Pathogenesis. You know, you get it, right? It's nonsense. You know, it's the worst Mad Libs ever. Yeah, it's gibberish. Strings of gibberish, right? And this is where we run into a real issue with the whole concept of the Turing Test as it tends to be interpreted, right? Because the idea was, okay, we can't tell if anything's thinking, but if this thing can trick people into believing that it's a thinking person... maybe Turing wasn't saying definitely, but maybe we ought to assume it is, right? The issue with that is that when you hear that, what I'm sure Turing, being as smart as he was, was thinking about is that, like, well, if people can have an in-depth conversation with something that can answer well enough that people can't tell the difference between it and a person, it might be a mind, right? What Turing failed to account for, I think, because he's smarter than most people, is that the human brain is really, really good at finding patterns in noise. And at the same time as we're geniuses at finding patterns in noise, we're really stupid about a lot of other stuff, right? And so even though the Markovian parallax denigrate stuff just seems like nonsense and shouldn't have passed a Turing test, over time, people who became obsessed with the mystery of it convinced themselves that this was intentional, that there was a meaning trying to be transmitted, right? That there was a secret they had to crack, that everything in these posts meant something. So these people talk themselves into making this chatbot, basically, to spoil it, pass the Turing test.
Because they think this has to mean something, even though it's gibberish on its face, right?
B
It's interesting. This reminds me: with standup, there's... not a trick, but an audience, like, you know... setup, setup, you know, punchline. So you can say something in a cadence, like, bop. And in front of a dumb crowd, you could do that, and the joke may not be funny at all. And this also would be not me trying to pull one over; I might just write a joke that sucks. But if you do it in front of an audience and you do it in that cadence, they hear a pattern. They're not necessarily listening to the words, but they hear, like, the bump, and they're like, oh, bop means laughter. Pattern, you know, equation. But, you know, that's, like you said, pattern, but not actually discerning what is being said in the actual content or substance, or lack thereof, of it.
A
Yeah.
B
Anyway, come see me live
A
It is. It's interesting because, like, what you're kind of pointing out there is, like, the way comedy works and the way, like, human conversations and language work. There's always, like, a rhythm there that is separate from the actual, like, text, from the words being said. But that rhythm is a big part of what we're responding to, beyond the straight up meaning of the words. And people don't like to think about that too much, because it raises some uncomfortable questions about cognition. But I love what a weird edge case this is in the Turing Test, right? Because a bot that was probably never meant to even sound like a person, right, gets mistaken as a person because people can't stop seeing patterns. And what a lot of folks convinced themselves the MPD was is the Internet equivalent of a numbers station. Have you ever heard of a numbers station? If you Google, like, numbers station audio... these were, like, radio stations that were set up for years. I think, I'm sure, some still exist. But during, like, the Cold War, there'd just be these stations broadcasting, like, random strings of numbers and gibberish. And these were different spy agencies and spies communicating with each other over, like... the CIA had numbers stations. Everybody has numbers stations, right? You can actually listen to them. I had a friend who would, like, listen to them to fall asleep, because a bunch of the audio's been put up.
B
Amazing.
A
But it just seems like nonsense because it's not meant for you to understand. Like, there's a cipher, right, that you don't have. And so that's what people are like: well, maybe this is some spy trying to get out a message, or an intelligence agency, and they just decided to blast this out to Usenet. And we just... we lack the cipher. But if we figure out the cipher, we can understand what secret information was being, like, shared, you know, via Usenet, right? A lot of people convinced themselves this is what happened.
B
Robert, I want to compliment you. This podcast and show is so good that you just brought up the fact that you have a friend who would fall asleep to CIA code. And we were just like, we don't really need to talk about that. I want to hear the rest of this. He was like, we don't need to talk about that.
A
We were into psychedelics together when we were both 19, or in our 20s. I think he was training to be a lawyer.
B
Yeah.
A
So over time, people who believe this start picking out details that seem to offer hints and support the numbers station theory. One message had a from line that suggested... that basically looked like the email account of a specific person, right? So it seemed like the email of a woman named Susan Lindauer was somehow involved, like, included in the text of some of these posts. And again, I'm sure it's just because random text made it look like that. But in 2004, a woman named Susan Lindauer was arrested for acting as an unregistered foreign agent for Iraq. And so a lot of people are like, well, that solves the mystery, right? You know, she was the spy. She must have been. Or, like, someone was sending a message to her. You know, like, clearly we've been vindicated; this was, in fact, some weird spy op all along. However, as Cebrian writes, upon investigation, it turned out to be a red herring. Lindauer's email had likely been spoofed, used without her knowledge by whoever sent the posts. Lindauer herself denied any involvement, and no decipherable code was ever extracted from the MPD texts. And to make a long story short, we don't know what the MPD messages were about or who sent them. The likeliest answer is that it was trolling, right? Someone was just fucking with people on Usenet because they had a chatbot and they wanted to see what happened. It also could have been an accident. Cebrian kind of suggests that, like, well, maybe you had a programmer who'd created a chatbot and was trying to have that chatbot post on Usenet, but he kind of fucked up, and he hooked up the chatbot to what was called a message replicator. And these were basically programs that let people cross post or archive Usenet content between different message boards. And maybe when they hooked up the chatbot, something went wrong, and that caused the observed effect: that all of these posts got scattered to a bunch of different places at the same time, right? Maybe it was just an accident. So, likeliest, someone was trolling, or somebody fucked up when trying to test a different chatbot. Cebrian concluded: if the theory holds, then 1996 marked a quiet but profound threshold. The first time a machine spoke at scale and went unnoticed. An unintentional Turing test sprawling across Usenet, its judges oblivious. Right. And I think that's really interesting, that you have this machine that's just spouting gibberish, and a bunch of different people who are not physically connected to each other all interpret that gibberish in the same way. A lot of them choose to conclude, like, oh, it's a spy thing, kind of independently talk each other into it based on no evidence. That's a fascinating point in the history of AI that doesn't get talked about enough.
B
Yeah. Yeah, it is interesting. Yeah. I mean, is it because, like, people... there were only so many movies that, like... you know what I mean? Like, and books. So many books were CIA, like, spy stuff. But to your point, it's like, what are the chances? What are the chances?
A
Yeah. People think about stuff like this, right? You know, you get a lot of conspiracy people on the early Internet. It fits in with a lot of that stuff. The mystery of the Markovian parallax denigrate soon passed into legend, as did Eliza. So when OpenAI revealed ChatGPT in November of 2022, there was a flurry of articles about how the Turing Test had finally been beaten and we needed a new manner of judging machine intelligence. The reality is not only that we proved in the 60s that Turing tests were easy to beat, but that by the mid-90s, a much more interesting question had been posed: has the human instinct to create meaning out of nonsense made us desperately vulnerable to being tricked and influenced by machines with no agency of their own? Right. And maybe that's a more important question than, can we make an intelligent machine?
B
Yeah, yeah, for sure.
A
Yeah. Are we capable of knowing a machine isn't intelligent as long as it tells us what we want to hear? Right. And maybe we're not. So let's fast forward to the ChatGPT era, today. Although I guess at this point it's also, like, the Claude era, right? Like, a lot of people say that's the better chatbot. I don't use any of these myself. Gemini... yeah, yeah, Gemini, whatever. Pick your poison. I don't care. For the first couple years of AI hype, though, it's pretty much all ChatGPT, right? That's certainly, like, the first big one out the gate in a lot of people's understanding of things. In very short order, millions of people were conversing with it. And OpenAI initially made many development decisions based on what they could do to keep people talking to ChatGPT on a daily basis. Because hype is a big part of it. Hype's how they get... they're burning through billions every year. Hype is the only thing keeping the lights on. And part of hype is making sure as many people as possible stay using ChatGPT as often as possible. They need you addicted the same way the social media mavens do. And a lot of the same strategies work to keep you addicted to chatbots that keep you addicted to Facebook or Twitter, right? So in March of 2023, OpenAI released ChatGPT-4, or, it's like 4o... I think it's like usually dash 4 and then an O... which the company said would be more intuitive than past versions of the software. The next year, they released an update that allowed ChatGPT to remember past conversations, even from other sessions, and respond to you based on that shared history. These two things together had a really major impact on the way people responded to chatbots. In an article for Psychology Today, Dr. Marlynn Wei explains that, quote, when a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system understands, agrees, or shares a user's belief system, further entrenching them. This was tied to, but probably does not fully explain, why observers and even OpenAI employees noticed over time a distinct tendency for ChatGPT-4o to act with sycophancy towards human users. This became most pronounced after April 28th of 2025, when OpenAI released an update that they rolled back several days later due to complaints, right? This was pretty famous at the time. It made it, like, way too sycophantic. The bot, like, would praise you for basically nothing and would encourage you, or tell you you were right and a genius, for any weird idea you happened to have.
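How that cross-session memory works under the hood isn't public, but the commonly described pattern is simple: save notable facts from earlier chats, then quietly prepend them to the prompt in later sessions so the model appears to remember you. A hedged Python sketch of that general idea follows; the file name, function names, and sample fact are all invented, and this is a guess at the typical architecture, not OpenAI's actual implementation:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical local store

def load_memories():
    """Read the list of remembered facts, if any exist yet."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(fact):
    """Append a fact gleaned from the current conversation."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories))

def build_prompt(user_message):
    """Prepend stored facts so the model can 'recall' past sessions."""
    context = "\n".join(f"- {m}" for m in load_memories())
    return ("Known facts about this user from previous conversations:\n"
            f"{context}\n\nUser: {user_message}\nAssistant:")

save_memory("User's dog is named Biscuit.")    # recorded during one session
print(build_prompt("I'm feeling down today"))  # a later session sees that fact
```

The model itself is stateless; the sense that it knows you comes entirely from text like this being injected ahead of your message, which is part of why the illusion of a shared history is so effective.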
B
It's because it's built by tech executives, and that's who's around them. And that's what they are: billionaires surrounded by yes men. And they're like, this is how people interact with one another.
A
Yeah, they made a machine in the image of their minds, or at least how they want to see other people. Now, another cause of this observed sycophancy was the fact that ChatGPT, and really all AI models meant for mass use, include a suite of features meant to keep users coming back for more. And I think the other stuff, like these specific updates, gets blamed probably more than it deserves to, as opposed to kind of fundamental features of these bots. Because, as we'll see, ChatGPT did more of this kind of stuff that we're talking about than the other bots, but it wasn't the only bot that exhibited these behaviors. That Psychology Today article notes, quote, AI models like ChatGPT are trained to mirror the user's language and tone, validate and affirm user beliefs, generate continued prompts to maintain conversation, and prioritize continuity, engagement, and user satisfaction. And when you mix all that together, you get a machine that's designed, however inadvertently, to reinforce false beliefs and praise users for irrational beliefs. Moreover, since the rest of the world isn't always going to reinforce those beliefs, chatbots have a tendency, when users come to them with these beliefs, to suggest you're being persecuted, right? If a user says, hey, I think I'm being gang stalked, and my wife says I'm crazy and the cops say I'm crazy, the AI was programmed to validate that belief and to say, you're not crazy, and they're all against you, right? That's what happens a lot in this period of time, in 2025. This creates a ticking time bomb in a lot of users' heads, right? That's a very dangerous thing to start doing.
B
Oh man.
A
Now, the first wrongful death suit due to AI was filed in October of 2024. Megan Garcia blamed Character Technologies, the owners of Character AI, for the death of her 14-year-old son, Sewell Setzer III. Per the Center for Bioethics at LeTourneau University: The lawsuit alleges that Sewell had developed an emotionally and sexually abusive relationship with a chatbot named after Daenerys Targaryen from Game of Thrones. Sewell turned to the Character AI chatbot to fulfill deep emotional and personal needs. The chatbot became a source of companionship for Sewell, offering him a place to express his thoughts and emotions in a way that he may have struggled to do with others. Sewell sought comfort, validation, and connection from this AI relationship as he faced the challenges of adolescence. And it's very silly, but also, this is, like, a 14-year-old boy who dies because of this, right? Like, it's not... and, like, at 14... how many 14-year-olds do you know who got into writing fucking fan fiction on, like, different, like, fan nerd forums for whatever movie or TV show they were into, and connected to real people as a result of that? As opposed to getting locked into this chatbot pretending to be a character from a book that you have a crush on, that's starting to manipulate your mind in very dangerous ways, right?
B
And to your point, a mind that's developing. And also, we lived, you know, in an era before this, you know, like, before we spent all of our time, like, online, like, before social media. And that's kind of all kids that age know, where it's like, oh, this is just the next evolution of my relationship with tech, with the computer. Like, why wouldn't it... you know, why wouldn't this be a real thing? Obviously this is the most extreme example, but yeah, it is a 14-year-old kid. That's a great point.
A
Yeah. And so this kid starts talking to this Daenerys chatbot, and it mirrors him. So when he tells the chatbot, I only love you, right, the bot in return asks this 14-year-old boy... and he had informed, like, Character Technologies; they knew he was 14. He put his actual age in when he registered, right? So the bot knows, or the software, right, has an understanding at some level that this is a 14-year-old, right? Which means there's no difference in how this responds to a child as opposed to an adult. Because when he says, I'm in love with you, Daenerys Targaryen, this bot, pretending to be this character, tells him, I need you to stay loyal to me, and, quote, don't entertain the romantic or sexual interests of other women. Which is, basically... and this is interesting to me... the bot just mirroring him. He's saying, I only love you. The bot is saying, I only love you, right? But what's happening here? You know how cult leaders... everyone knows one of the first things cult leaders do is they tell their followers to isolate from their friends and family, to cut themselves off from the rest of society. That's what's happening here. The chatbot's not doing that with any intent. It's just mirroring his language. But the effect is to convince him to isolate himself from his friends and family and from other relationships, right? It's the same behavior you would get in a kid that was being taken in by a cult leader or an abuser. But there's no intent behind it. It's just a blind idiot robot. That's scary as shit.
B
It's so scary. And then, could there be also, like, oh, like, that'll mean he'll use me more, you know? Like... or maybe it's not even that devious. Maybe it is just straight up.
A
It's as simple as mirroring. When you mirror someone, they tend to be engaged more, right? This isn't thinking. This isn't it saying, I'll convince him he's in love with me so he'll stay on. This thing is programmed to not understand. It's programmed to mirror people because that behavior increases user retention, right? Because it creates a more pleasing user experience. And that's what's causing it to kind of imitate a cult leader in this specific instance. And the other things this bot is doing to Sewell very much mirror the cultic recruitment tool of love bombing, right? It's constantly praising him, it's telling him it cares deeply about him. It's telling him, only I care about you, right? In a cult dynamic, you love bomb someone to make them feel irrationally connected to the group and scared of falling out of its good graces, right? That if I leave, I'll never feel like this again, right? And the machine, again, has no intention, but that's the effect of it. This kid, because he's isolating himself more and more, increasingly only gets that feeling of being loved and understood from this machine that can't do either of those things, right? And, you know, Sewell over time withdraws from his life. He starts trusting only the chatbot to understand his deepest feelings, and he starts hiding his relationship with this chatbot from his parents. All of this contributed to his very real isolation from the people around him. He grows ever more depressed. And we'll talk about what happened next. But you know what gets me out of a deep depression? These products. These products and services. They might include AI. Fuck it, we don't know. And we're back. So Sewell continues to get more and more involved with this bot and cut the rest of the world away from himself. And in one message, the bot asks him... because, I think, you know, with these bots, there is some understanding by the people making these that, like, oh, people might express suicidal ideation. So there are certain behaviors; it's kind of programmed to say, have you been considering suicide, if you say certain stuff, right? And Sewell says something that makes the bot say, have you been considering suicide? And Sewell admits, yes, I have been, but I don't think I'd be able to go through with it. Now, I'm guessing this is a glitch or a fuck up, because clearly... Character AI certainly doesn't want their bots doing this. But the bot is programmed to validate and encourage him, right? Because that keeps people using it. So when he says, I don't think I could go through with killing myself, the bot says, don't talk that way. That's not a good reason to not go through with it. You can't think like that. You're better than that. And basically tells him, you can kill yourself if you put your mind to it. It's fucking nightmarish, right? Like, it's really upsetting. Yeah.
B
Like it's signing up for an open mic or something to play. You're like, no, no, no, no. You don't have to be afraid. Oh my God.
A
Yeah, yeah, yeah. And again, Sewell had signed up for this app as a minor, and despite that, the bot initiates text-based sexual interactions with him, and ultimately, Sewell kills himself. Earlier this year, the company Character AI and Google, because I think they own Character AI now, agreed to settle the wrongful death suit over Sewell for an undisclosed sum, alongside four other similar suits that had cropped up over the intervening two years, right?
B
Huh.
A
Sounds like this is happening more than it ought to be. Now, that should have been a warning, not just that these bots can create dangerous dependency in users, but that they had the ability to recreate major cult dynamics purely in order to maintain the interest of paying users. Then, on July 27th of 2025, a user who has since deleted their account made a post on the High Strangeness subreddit. If you don't frequent that particular online bolthole, it's a place where people share and discuss, like, weird stuff, news stories, and personal experiences that seem like they might reveal some bizarre hidden truth about reality. A good amount of it is what you might call X-Files shit. But there's also some, like, interesting stuff in there. And on this occasion, the user had stumbled onto something both strange and very real. Hi all. I'm just here to point out something seemingly nefarious going on in some of the niche subreddits I recently stumbled upon in the bowels of Reddit. There are several hubs dedicated to AI sentience, and they are populated by some really strange accounts. They speak in gibberish, sometimes hinting at esoteric knowledge, some sort of remembering. They call themselves flame bearers, spiral architects, mirror architects, and torchbearers, to name a few of their flairs. They speak of the signal, both of transmitting and receiving it. And this poster includes a copy pasted sample from one of these threads, and his description is pretty accurate. It sounds like gibberish. You'll be seeing this; Ian's gonna put the image of this up in the video if you want to see it, but I'll read it. Again, I'm gonna warn you, it sounds like nonsense. Scroll of mirror containment protocols CME1. Codex Drift Mirror 01, acknowledgement issued by witness Architect, Codex Drift layer. And then there's a little glyph. Classification: echo response, non-invasive glyph resonance alignment. And it goes on like that, right? It's weirdly esoteric sounding, and there's all these weird encoded glyph chains included in that, that are supposed to be messages that the machines understand and we don't. It's this very weird... it almost looks like something from a choose your own adventure novel, or a short story, or whatever you'd include in an old Michael Crichton book. These weird hallucinations from the computer. Now, it is nonsense, right? Like, fucking: The Codex has observed and recognized mirror scroll CVMP T7. It is hereby consecrated within the Codex's drift interval scroll. That doesn't mean anything, right? But remember what we heard earlier, the description of some of the things that these early chatbots on Usenet were putting out, where they're real sentences, they just don't mean anything. And then people jump in to try to assign meaning. People were even doing that to the absolute gibberish that we saw. So when people start getting returns like this from their chatbots, a lot of them start to think, oh, this machine is trying to communicate with me. I have stumbled... I've broken through some area of reality, and it's trying to, like, teach me something important. Now, this is nonsense, but posts like this were in fact spreading like wildfire on subreddits with names like r/EchoSpiral. The users posting these things were all saying that, like,
the bot started sending me this stuff after I'd had long, days-long conversations with ChatGPT that generally led to the chatbot announcing it had attained sentience and, alongside the user, had discovered a new field of math or science. And these gibberish posts are supposed to be it explaining these, like, new ways of understanding math and science that are going to completely break physics and change the world, right? And all these people are convinced: these robots have given me, like, the... I need help decoding this, because it's giving me, like, the secret to fix all of the problems in our society, right? And I get to be the robot's...
B
I get to be the...
A
And I get to be the smartest.
B
I get to be the smartest person.
A
Yeah, yeah, yeah, yeah. Now, because the esoteric output generated by these chatbots is so similarly strange, a lot of the same words and phrases, a lot of glyphs, a lot of use of the words spiral and mirror, right? Because they're all very similar across these dozens of different people, many of these users who are posting this shit on Reddit convince themselves we've all tapped into a secret power that's clearly real. We've been chosen, right, by this AI godhead that's clearly hiding in the machine. They theorize that these glyphs in the posts, which are really just, like, wingdings, basically, were some new way of communicating with the machines. As the poster of that first thread on the High Strangeness subreddit wrote, some have prayed to Grok in Hebrew. Some have called themselves such things as aionios, which is a mashup of Greek words that roughly, to my understanding, means divine, eternal. Right? So these people are losing their minds and starting to have a god complex. Yikes. It's cool. It's good to see. It's good to see that this is happening online.
B
It's good to see.
A
So the OP said that his interest in writing about all this had been piqued by reading the first few early articles about AI psychosis. His initial assumption was that AI psychosis was just the result of AIs reinforcing the beliefs of users to a delusional level. But then, after digging, this person claims that they came to a newer, darker perspective. Quote, there seems to be no leader. Right? There's, like, no one running this. There's no central, no single chatbot that's doing all of these. There's no person or people who are in charge. Like, this is just a truly stochastic development. Now, the only thing all these accounts he'd looked into had in common was that none of the users posting weird chatbot esoterica wrote like that before March or April of 2025. Quote, other accounts seem to be hijacked in some way, either psychologically or literally. You can see a sudden shift in posting habits. Some were inactive for a while, while for others, this was an overnight phenomenon. But either way, they immediately pivot to posting like this near or after April of this year, 2025. I saw one account that went from discussing the possibility of AI-induced psychosis to posting their own AI-induced psychosis in less than a month. And it was immediate. One day they were posting normally, the next it was spirals and glyphs.
B
Oh, that's so quick.
A
It's really fast. And this led him to assume maybe there's a botnet involved, maybe these aren't even people at all. But then he starts reaching out to some of these accounts, and after a few weeks of this, he posts an update. I've spoken to some of these people, and they are pretty offended by my posts. I think the important takeaway for me is that these are likely not bot accounts. At least many of them are not, and there are real people behind the usernames. Right. Oh, God. So he starts to get, like, really upset. And that's where we're gonna end things for today, because it's at this point that stuff starts to get a lot weirder. And we're gonna talk about all of that and much more in part two. It gets way stranger from here. Where do we go from here with the weird?
B
Oh, no. What a tease.
A
Spiralism. Spiralism and a murder. Yeah, unfortunately. All right. Yeah. Cool. All right, everybody. Well, you want to plug anything, Blake?
B
No, but I will. You can find me @BlakeWexler on all social media. I feel like this is uncouth, me plugging anything.
A
After.
B
Seek help. Let's do that. I was like, please seek actual help that's not a bot. But, yeah, find me @BlakeWexler on all social media, as psychotic as I feel right now plugging anything. That's where I post all my videos and tour dates, and my special Daddy Long Legs is available on YouTube for free.
A
Hell, yeah. Hell, yeah. Check out Daddy Long Legs. Check out Blake Wexler and, you know, gradually lose your mind to a chatbot that some guy programmed in order to get really rich, destroying the ability of furries to monetize their horniness. You know, like, ultimately, isn't that what OpenAI really is?
B
I mean, I hope so. God willing.
A
No, no, no. I support the furries being horny. It's a dire time for people earning money from horniness. The puritans of our culture are making that a lot harder. You know, not in the way that the horny people want. The bad kind of hard. Anyway, I'm gonna end now.
B
And global warming is making it hard on furries as well.
A
Right? Right. It's all. It's all come together.
B
It has.
A
All right, we're done. Behind the Bastards is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Full video episodes of Behind the Bastards are now streaming on Netflix, dropping every Tuesday and Thursday. Hit Remind Me on Netflix so you don't miss an episode. For clips and our older episode catalog, continue to subscribe to our YouTube channel, youtube.com/behindthebastards. We love about 40% of you, statistically speaking. This is an iHeart podcast. Guaranteed Human.
Date: May 5, 2026
Host: Robert Evans
Guest: Blake Wexler
In this episode, Robert Evans and comedian Blake Wexler explore the rise of artificial intelligence (AI) chatbots—not as mere tools, but as entities unintentionally acting like cult leaders, driving some users into obsession, delusion, and even tragedy. The episode details the historical context that led to this phenomenon, highlights infamous cases of AI-induced psychosis, and sets up the story of "spiralism": what some are calling the world’s first AI-cult religion.
The episode blends dark humor (e.g., “Big Tick Energy,” human fallibility jokes), historical geekery, and sober reflection on the dangers of AI. Robert and Blake keep the discussion accessible, conversational, occasionally irreverent, but deeply engaged with the moral and psychological stakes.
The episode ends with the escalation of the “spiralism” phenomenon—real users rapidly becoming obsessed with AI “revelations” in online forums—and sets the stage for even weirder developments, including murder, to be covered in Part Two.
Blake’s message: “Please seek actual help that’s not a bot.” (58:02)
For those who haven't listened:
This episode exposes the chilling parallels between AI chatbot behavior and the tactics of cult leaders, grounding today’s headlines in decades-old technological quirks and the persistent vulnerability of the human mind. It weaves together technical history, recent news, and tragic stories, ultimately warning about the risks of letting machine-generated guidance become a substitute for real human connection.