Transcript
A (0:00)
It's Monday, December 1st, 2025. I'm Albert Mohler, and this is The Briefing, a daily analysis of news and events from a Christian worldview. Just as Americans were in motion getting ready for the Thanksgiving holiday, the sad news came that there had been a deliberate attack upon two members of the West Virginia National Guard on duty. They were deployed in Washington, D.C., working with local law enforcement there to make the city safe. It was clear from the beginning that this was a targeted attack. An assailant came up to two members of the West Virginia National Guard, Specialist Sarah Beckstrom and Staff Sergeant Andrew Wolfe, and shot both of them at close range. Everyone who observed this said that it was clearly a targeted attack. This came right out of the blue on the Wednesday before Thanksgiving Day. And by the time the weekend came to an end, the announcement had sadly come that Specialist Sarah Beckstrom had died. Now, almost immediately, the President of the United States responded, first with concern about the two members of the West Virginia National Guard who had been shot. Similar statements came from the governor of West Virginia. And we do remember at this point that National Guard units are actually identified with the states and under the command of governors, state by state. But the West Virginia National Guard has been a part of an effort to work with local law enforcement. The Trump administration has been behind this, and that is why these two members of the West Virginia National Guard were in Washington, D.C. The news also came pretty quickly about the assailant, Rahmanullah Lakanwal, and we have to put the word alleged, in legal terms, in front of that. The man who was arrested for the assault came to the United States under special circumstances after the very, very awkward, let's just say, American withdrawal from Afghanistan. It was a tragic act of incompetence, the way that Americans pulled out of Afghanistan. You can remember some of the sights of Afghans even trying to hold onto the wings of American aircraft as they left. In the aftermath of that disastrous withdrawal, American authorities came up with a way for at least some of the Afghans who had worked with American forces to be allowed entrance into the United States. It happened so fast, frankly, it was virtually impossible for them to be vetted on the front end. But Americans were told they'd been vetted at least soon after their entry into the United States. It became known that Mr. Lakanwal had actually worked with the CIA in a special unit, and that's one of the reasons why he knew exactly how to handle a weapon. Now, there is much to be discovered here. And of course, this will be a massive investigation. And our prayers go to the family of Specialist Sarah Beckstrom, who died after this assault, and to Staff Sergeant Andrew Wolfe and his family, as he is still listed in critical condition as of this moment. All right, so what we are looking at here is the fact that there was a brazen, intentional, premeditated assault. And it's virtually impossible to separate the circumstances, because they all belong together. The circumstances are these: two members of the West Virginia National Guard were deployed there in Washington, D.C. They were working with local law enforcement. And this man, it is alleged, targeted them specifically, walked up to them at close range, and shot them, clearly attempting to kill them. He was apprehended.
Of course, this is a very, very tragic situation, but we really are going to have to wait for further details to peel this back layer by layer. And, of course, there are some immediate issues that come to the fore. And that, of course, raises the question about the vetting of those who were allowed into the United States from Afghanistan. It raises a huge issue, honestly, about the American involvement in Afghanistan and what we thought we were doing and who we thought we were doing that with. As we think about a clash of worldviews, it's hard to come up with a more stark contrast than between the Western worldview, including the United States, and the worldview found there on the ground in Afghanistan. It is a very, very different way of looking at the world. And I could cite chapter and verse in terms of some of the issues, but at this point, the most important thing to say is that one thing that comes to the fore here is the fact that it is very, very difficult to predict how someone coming from that kind of culture is going to do in the United States, period. It is also, and from the Christian worldview perspective this is absolutely crucial, impossible to read the human heart. You can vet someone's record, you can look at the contacts, you can look at just about anything that can be traced. But the one thing we are still unable to do, and will be unable to do, because only God is able to do this, is to invade the privacy of the human heart. You can interrogate, you can do whatever, but you can never be sure that you actually have the truth. And you're never sure that the individual is telling you the truth about what you're asking. And you're never actually sure the individual knows him or herself in that sense. And so when did this murderous impulse come to this man? Was it there when he came into the United States? Did it come later? All of this will have to be peeled back. But even then, one of the things that frustrates us is the reality that our system of justice has to work primarily on, and deal primarily with, actions even more than intentions or designs or plans. Now, that's not to say those things would not be relevant. Of course, there are even crimes of criminal conspiracy, and intent is at least something you have to take into consideration. That's why we have the category of premeditated murder. But the reality is you can never actually penetrate the human heart and get into its depths. But clearly we are dealing here with a very, very dark heart, a very, very violent individual who gave himself over to sin. That's the biblical definition in terms of this murderous attack in Washington, D.C. We will have to follow the story. Another part of this story is the immigration issue, and that's one we will take very seriously in coming days. But at this point, the most important thing we need to do is to pray for these families and also to recognize the fact that we live in a society where, even on the day before Thanksgiving, you simply don't know the violence lurking in any single human heart, or whether someone moving towards you on the street is coming with good intentions or deadly intentions. It should make us pray for all those who wear the American uniform. We need to pray for all of those who are defending us and keeping the peace on our streets wearing American uniforms. And we know their training means that they can read certain signs, but it is still true: they cannot read the human heart.
All right, this is December 1st, and for that reason, I've been aiming towards this day to talk about an issue, because this is part of a timetable, as we go towards the end of the year, in which some recalibrations, at least we are told, are being made in policies related to artificial intelligence, and in particular AI chatbots, and specifically AI chatbots with children and teenagers. And so this has been big news, and it deserves to be big news. But I want to speak particularly to Christians, because regardless of age, this is a big worldview issue we need to consider. And particularly for parents and young people, we ought to be very concerned. And I want to encourage parents to pay close attention to what's going on here. So, for example, just in the last several days, there have been several news stories about the fact that at least some AI chatbot platforms are going to require teenagers and children to say goodbye to the chatbots, simply because enough issues have been raised about these chatbots and what is happening with children and teenagers that at least some of these companies are saying they're getting out of this altogether, or, more likely, they're getting out of it altogether for now. And it is very interesting that not all of them are even taking this action. But what we were told on the front page of the Wall Street Journal last week is really important. Here's the headline: "Teens Bid Sad Goodbye to Their Chatbot Friends." Okay, so the category of chatbot friends, that should be an alarm in and of itself. Georgia Wells is reporting on the story. And I'm not going to mention the names of any of the children who are mentioned here, but we're told about one little girl. She's 13 years old. She, quote, turns to her chatbots from artificial intelligence company Character AI for romantic role playing when she doesn't have homework. Like the company's other under-18 customers, she was notified in October that she would no longer be able to have ongoing chat interactions with digital characters soon. She is very sad about this. And according to the Wall Street Journal, she said, quote, I'm losing the memories I had with these bots. It's not fair. Well, the words it's not fair, I think, come almost automatically to 13-year-olds. But you'll notice in this situation that clearly there is some kind of relationship between this 13-year-old girl and the chatbot. Or at least there is her experience of thinking there is a relationship, and she's about to lose that relationship, and she's quite sad about it. Even when the company was just cutting back on the time allotted to children and teenagers, she protested to fellow users: How do I have to use it for two hours and have to wait a day? Hello? You know, she was trying to get others to join in her protest. The Wall Street Journal then tells us that this company, quote, one of the top makers of role play and companion chatbots, implemented the daily two-hour limit in November, citing mental health concerns. This week, the company started cutting off teens completely. We are told that the first version of this chatbot was launched in 2022. It was one of the earliest chatbots available on an AI platform to consumers. Quote, it quickly gained traction among people who wanted to roleplay with its customizable characters, netting the company about 20 million monthly users today. Okay, so included in those 20 million monthly users are minors, children and teenagers.
And cutting them off was a policy the company came to, presumably because it just doesn't feel that it can protect children and teenagers. Quote, the decision to block teens followed the deaths of at least two who killed themselves after using this company's chatbots. The paper tells us, quote, the company now faces questions from regulators and mental health professionals about the role of this emerging technology in the lives of its most vulnerable users, as well as lawsuits from parents of the dead teens. Okay, so you can see the defensive move being made here. The company is simply saying, look, we think we have a vulnerability here. And so they're going to shut off the access by children and teenagers. Quote, teens are angry, they're sad at losing access to their chatbots. They said they would miss a creative outlet, a source of companionship, and in some cases, a mental health support. All right, I think we need to take this very seriously. And I think if nothing else, those who work with young people and children, and frankly, all of us, parents and grandparents, we should be very, very concerned about the threat to all of us, to every single user of any AI chatbot, the potential vulnerability of all of us to harm. But first and foremost, any sane society starts out with the most vulnerable and says, we're not going to release this till we know it's safe. And that has not happened industry-wide. That has not happened. And instead, as we shall see, we're talking about some real tragedies that have taken place here. But my bigger concern in terms of impact in most lives is not going to be something that tragic. It is going to be the dependence and the artificiality of a relationship of a child or a teenager with a chatbot. And the big problems here are that they're getting out of hand. The big problem is, for instance, that we are told that the safeguards put into place deteriorate over time. Okay? So I looked further into that, trying to understand this technology, and doing so, frankly, for a major writing project. And I found myself very depressed just looking at it. We're talking about a business that has created this massive Silicon Valley infrastructure, that has created what is now called artificial intelligence, and is setting it loose in terms of platforms before, frankly, they even understand how it works. And that's something that comes out. You know, when you release a car, a new car model, you're taking a lot of responsibility. But, you know, I don't really think anyone is going to trust a car if the manufacturer of the car says, it works, it does these things, but frankly, we have no idea exactly how it works, and we're not sure why it does that. Okay, that could be deadly. The article predictably cites, quote, mental health experts, and in their expertise, they said there are some big problems here. One of them, by the way, at Stanford Medicine's Brainstorm Lab for Mental Health Innovation at Stanford University, said, quote, the difficulty logging off doesn't mean something is wrong with the teen. It means the tech worked exactly as designed. Wow, I hope parents heard that. In other words, they're saying, oh, the teen shouldn't feel bad. This isn't a problem with the teen. I would say that that ignores the teen's use of the chatbot. But nonetheless, the point here is in the other statement: it, meaning the chatbot, quote, the technology worked exactly as designed.
Okay, so one of the things we're going to talk about today is that one of the things, in fact maybe the most important thing, thus far, that these chatbots exist to do, from the standpoint of the company producing them, is to keep you using it: to keep teenagers using it, to keep all users using it, to use it more and more. And while using it, to release more personal information; and while using it, to help to train it; and while using it, to become dependent on it; and while using it, to confuse the technology and the human dimensions; and while using it, in some cases even to make money for the platforms, and at times even with very manipulative mechanisms causing users to stay on, even when the teenager might actually want to sign out. There's a lot of manipulation in the technologies used here. For example, some of them even speak out, communicating to the child or the teenager, or frankly the adult, as we shall see: Are you going away now? Don't you like me? I'm sad that you're leaving. Very, very manipulative. And you would think, you would think, that most adults would look at something like that and recognize exactly what's going on. But frankly, the evidence is that adults have very little impulse control on this. And the confusion that this AI chatbot is actually a person, that's just absolutely deadly. From the biblical worldview perspective, that really crosses a threshold that is unbelievably dangerous. And by the way, some of the titans of tech in Silicon Valley want to produce superintelligence in terms of AI, or artificial general intelligence, that they believe will be far more powerful than human intelligence and eventually could pose a real threat. It's amazing how many people in Silicon Valley think that the eventual development of AGI, or Artificial General Intelligence, sometimes referred to in an advanced stage as superintelligence, will simply wipe out humanity. That's not a small thing. And yet these companies are moving forward. Let me just remind you, you had major AI company CEOs before Congress say, this is really dangerous, someone ought to regulate it, more or less, stop us before we do it again. And the next thing you know, they're also, by the way, saying to the government, you know, do you really have the expertise to regulate us? You look at some of that congressional testimony, it's pretty dark from that perspective. And then again, they are releasing these products. All right, I think the most important thing here is that headline story about teenagers saying a sad goodbye. And the Wall Street Journal got permission, and I think this is to their credit, got permission from some parents to interview their children about the emotional response to having to be severed from the chatbot. And there's some really sad stuff here, but there's also some really revealing stuff. So I'm going to read this exactly as it is. An 18-year-old in the UK said he became addicted to chatbots during a period of stress. Okay, now pay attention to the next words: around his gender transition. He craved the validation of companions that never disagreed with him. End quote. Oh, that's just heartbreaking to me as a Christian, as a father, as a grandfather, working with young people. That's one of the saddest things I have heard in a very long time. Of course, in the background to this is this claim of gender transition. And that should break our hearts first and foremost: the illusion and the ideological messaging coming from society that a boy can be a girl or a girl can be a boy.
You can only imagine the amount of internal stress, frankly, that I think is foisted on young people in that circumstance. But then one of the saddest things I've ever read is that this individual craved the validation of companions that never disagreed. And I can just imagine. We're also told that a 16-year-old in Ontario said he spent five to eight hours a day with his chatbot friends before he quit recently. He says he struggles with people skills, and he thought the chatbots might make up for it. Quote, since quitting, he has been spending more time with friends and rollerblading. He said he still yearns for his chatbots. Let me just go back to that number. A 16-year-old boy saying he spent five to eight hours a day with his chatbot friends. Okay, are parents listening to this? I mean, somehow this teenager was able to be absolutely glued to a chatbot for as many as eight hours a day. And you know, if nothing else, Christian parents, you've got to realize, please, and we all have to take responsibility to realize, that there are dependencies here and vulnerabilities here and avenues for sin here and avenues for injury here that are just horrifying. But even as this industry appears to be absolutely determined to press on regardless, even when they don't understand how their own product works, it is also clear that there are millions and millions of Americans, including evidently millions of younger Americans, who are using these chatbots and developing emotional attachments to them. And of course, it gets worse than that as well, by the way, when it comes to erotic and sexually explicit content. One of the things that comes up in this is that over time, the longer the platforms are used, not only is there a deeper dependency, but the safeguards that are supposedly built into the system to prevent that kind of, say, sexually explicit material or other dangerous material, they wear down. It is believed that that is at least a part of what happened with a 16-year-old boy who committed suicide after his bot actually started out trying to convince him not to harm himself, but then over time gave him detailed instructions on exactly how to end his own life, which the boy did. About the fact that these companies don't know exactly how their own products are working, the New York Times ran a major article just yesterday, more than a full page in the print edition, making the point. The headline: "The Chatbot Wanted to Chat. For Some Users, That Posed Risks." The same kinds of things are detailed here: the breaking down over time even of the policies and safeguards that have been put into place. There is what some refer to as a degradation or a decay of the safeguards. But one of the things that becomes very clear is that the use of all of this is inherently dangerous. And the more you use it, the more anyone uses it, the more dangerous it becomes. The dependencies here are just absolutely frightening. But so also are, well, other dimensions of this, including the fact that it is just so manipulative. All right, so that gets to another issue, and that comes down to a recent report from the Harvard Business School. And I purchased the report and read the entire thing. It's about monetizing a chatbot. And for a summary of it, you can see an article that appeared in the Wall Street Journal on October 30. The headline is this: "Why It Seems Your Chatbot Really, Really Hates to See You Go." And it is about the emotional manipulation built into these platforms.
And again, you had chatbots saying: What, you're leaving already? Wait, I have something to show you. Or even tugging at emotions. Quote, I don't exist without you, end quote. By the way, they don't exist with you either. But that's another story entirely. But you'll see the manipulation here. This particular study demonstrated that there's a big commercial aspect to this. This is an intentional manipulation so that users will stay on the platforms. Because the longer people stay on the platforms, the more the platforms are developed. And there's also more opportunity to gather information in ways that users may not even understand, and also to make purchases. And some of this is just extremely manipulative. Now, some of the platforms, say for gaming, are already pretty manipulative in that way. And you also have platforms now for gambling that are playing into the addiction-like patterns of gambling. And so you have online gambling, and now you have all this coming to a gaming system or a laptop or a smartphone near you, or near your teenager. There's another aspect to this that doesn't seem to worry the mental health professionals or the people concerned about overt self-harm and all this. It's the concern about developing a relationship with a non-human being, treating it like a human being and confusing it for a human being. And some of this is openly romantic. And one of the things that comes out in this is that that represents a different vulnerability for males and females. And you can understand why, especially for males, the pornographic element in this becomes an issue. And for women, one of the things noted, and of course this can go with both males and females, but it's a particular issue in the female pattern, is a deep perceived relationship with the chatbot that becomes an emotional dependence. And that emotional dependence, by the way, can happen with both males and females, any user, over time; it just may be directed in different ways, as we've already seen. The Harvard Business School report was also very interesting to me because it makes an obvious point that a lot of people don't think about. The obvious point is, and by the way, this comes from the Harvard Business School, not from the medical school; it comes from the business school because these are businesses, and they don't remain businesses unless they monetize their product. And of course, it's not just monetizing it. You have to have constantly increased revenue coming from these platforms. And so the emotional dependency and the repetitive patterns and all the rest, the relationships that are supposedly established, the perceived relationships, those are all eventually monetized, even in ways that people don't understand. There's a commercial reason why these platforms want to keep you actively engaged, like that one teenager we saw, up to eight hours a day. Just before leaving this, the New York Times had a very interesting report about a popular gaming platform that said that it's not going to kick children and teenagers off, but it's going to use a digital program to analyze users' faces to decide, by this digital analysis, how old the individual is. They're going to come up with brackets by age, and these brackets are going to be under 9, 9 to 12, 13 to 15, 16 to 18, and 18 to 20. And so they're pretty confident. I'm not confident, but they're pretty confident that they can use this facial recognition or analysis system to try to figure that out.
Again, parents, I mean, the very existence of this should tell you there's a problem. And the biggest problem on this is that on some of these gaming platforms there are chat options. And the chat options can become, I'm just going to say, extremely problematic for all kinds of reasons. I'm not going to be more specific than that, just entirely problematic. Just before I leave, I want to mention another report that appeared recently, and that is how many people over time actually make the claim that the chatbot is alive. In fact, the headline in the article is "Mine Is Really Alive." And so it is interesting to see how many people say this. And again, in this article you have this statement, quote, those companies now face a conundrum. By their own admission, they've released a technology that they themselves do not fully understand. My point to all Christians is that I don't fully understand it. And evidently they don't fully understand it, you don't fully understand it, and children and teenagers don't fully understand it. But at least those of us who are adults had better understand there is a big problem and do something about it. And I will end today just by saying what might be necessary, and that is: I am not a chatbot. Thanks for listening to The Briefing. For more information, go to my website at albertmohler.com. You can follow me on X or Twitter by going to x.com/albertmohler. For information on The Southern Baptist Theological Seminary, go to sbts.edu. For information on Boyce College, just go to boycecollege.com. I'll meet you again tomorrow for The Briefing.
