Transcript
A (0:04)
It's Thursday, February 19, 2026. I'm Albert Mohler, and this is The Briefing, a daily analysis of news and events from a Christian worldview. One of the biggest issues we are handed these days, and frankly, we're handed this issue virtually every day, is the stewardship of the digital life, the stewardship of digital technologies. Increasingly, those are getting complex. And so one of the things we need to look at is some current debates and developments on the issue of the addictive nature of social media. That's one interesting thing, in particular when it comes to young people. The second big thing that we just need to talk about from time to time, because this requires constant update, is the entire world of artificial intelligence. And so I'm going to begin there. And right now, of course, all over the media, you have news stories, prognostications about some kind of eschatological development with artificial intelligence, maybe all conscious life wiped out by some form of rogue artificial intelligence. You've got a lot of this. And by the way, one of the interesting things about this is that you have people from kind of the far-left ecological fringe and people from the anti-digital political fringe that are kind of coming together in some of these common apocalyptic scenarios. I'm not going to talk about those scenarios today. That's really outside the proper concern of the present. But in the present, there are some very interesting things, interesting in terms of issues raised by Christian theology, issues raised by biblical truth. So, for example, the Wall Street Journal recently ran an article. Here's the headline: Why AI Chatbots Can't Be Trusted for Financial Advice. Okay? So again, just in case you needed this piece of advice, AI chatbots, according to the Wall Street Journal headline, can't be trusted for financial advice. But then the question is asked: why are artificial intelligence chatbots not trustworthy for financial advice?
Here's what comes next in the headline, quote: they're sociopaths. Okay, so we're being told that AI chatbots are sociopaths. All right, let's define some terms. First of all, what in the world is a sociopath? Traditionally, in the moral universe, sociopaths are those who act in such a way that they reject the authority of society in terms of norms and expectations, and thus they turn against the entire society. And generally this means, if not always, then often, something accompanied by some kind of violence, or at least a predisposition against the entire social structure in which they are embedded. Okay, so in other words, a sociopath very often shows up as someone who blames the entire society for his or her predicament one way or another, and then turns in anger and in rejection toward the entirety of the morality and social code held by the society. So that you become an enemy of society, a hater of society; you become a sociopath. Well, here we are told that artificial intelligence chatbots are sociopaths. Let's just ask the question quickly: are they or are they not? Let's just answer the question quickly. And that is that sociopath, in its essential meaning, implies a moral agent. So in other words, a runaway horse cannot be a sociopath, but a human being can be a sociopath. A great white shark doing what great white sharks do is not a sociopath. That's a great white shark. But a human being living outside the moral structure of the entire society, that's a sociopath, generally, and this is the path part, pathos, the feeling, generally with absolute hatred toward the entire society. Okay, so now we're being told that chatbots, artificial intelligence chatbots, can't be trusted for financial advice because they're sociopaths. You just might think this would be an interesting article. I assure you it is. More interesting, I think, than Peter Coy, writing for the Wall Street Journal, may recognize.
He asked the question: should you use artificial intelligence for financial advice? And then he writes this, quote: Andrew Lo, a finance professor at the Massachusetts Institute of Technology's Sloan School of Management, says not yet. Large language models like Copilot or ChatGPT aren't suited for being used as financial advisors because they are the digital equivalent of sociopaths: smooth, persuasive, and devoid of empathy, end quote. Okay, how many landmines can you set off with just one sentence sequence there? They're the digital equivalent of sociopaths. Well, first of all, what in the world would it mean to say that something is the digital equivalent of a sociopath, which requires human agency? It means that people are confusing these chatbots with a human being, and thus they are assigning to these chatbots, and maybe even this article is assigning to these chatbots, moral responsibility, as if they are human beings made in the image of God as moral agents. I just want to point out to Christians: this is one of the huge issues and one of the huge challenges. A lot of people are saying AI is going to do this: it's going to shut down all these jobs over here, it's going to replace all these careers over there, it's going to take over all these areas of the economy. The bigger issue for Christians is the confusion that comes when people start talking about artificial intelligence and those chatbots as persons, as human beings, who, after all, are made in the image of God, a very different thing. We have all kinds of confusions and subversions of the imago Dei, the image of God. This is one of the latest ones, but this one turns pretty funny, I think, accidentally. I don't think they mean this to be humorous. It does kind of turn out to be humorous. Here's how Peter Coy goes on in the article, quote:
If an advisor powered by artificial intelligence is able to communicate both good and bad financial advice with the same pleasant and convincing effect, its clients will rightfully view this as a problem, end quote. I think that's pretty obvious. Andrew Lo, Professor Lo, it turns out, along with one of his graduate students named Jillian Ross, wrote a major piece for the Harvard Data Science Review back in 2024 in which they made many of the arguments found in this Wall Street Journal article. It's really, really interesting that one of the subheads in the article is understanding ethics. Okay, well, that's interesting. Ethics. That means morality. This sense of ethics means knowing right from wrong and doing the right rather than the wrong. If you do the wrong rather than the right, we say that's unethical; that's breaking the ethical code. But when you're talking about artificial intelligence, what sense does it really make to speak of artificial intelligences as if they are moral agents just like human beings? Well, maybe that confusion points to some of the huge problems we're going to confront. Here's how Peter Coy writes the next part, quote: Despite his reservations about current AI models, Lo, that's Professor Lo, believes that large language models will eventually be able to help investors, especially people with small accounts and limited experience with investing. In fact, he is working to build one that is specialized for financial advice. He doesn't plan to charge for it, he says. Listen to this, quote: Lo's goal is to develop an AI financial advisor that is a true fiduciary, namely an entity that always puts its clients' interests first and tailors its advice to their particular needs, including emotional needs. He thinks it will take something less than four more years, end quote. Wait, wait just a minute.
It may take four years to get to the point where they have developed artificial intelligence that would, just to take what's in this article, be able to put the client's interests always first and tailor advice to their particular needs, including emotional needs. Okay, what we have here is just further evidence of a massive confusion. And Christians, if no one else, had better remain sane and clear-minded in the midst of all this confusion. In this case, I think you have a professor at MIT who raises a legitimate issue, and that is very bad advice coming through these large language models, the chatbots, in terms of financial advice, and people being harmed by it. But when he turns around and says that they're sociopaths, or when the argument is made that they're sociopaths, that is a moral judgment. And it's a moral judgment that has to be made of moral entities. And AI is not a moral entity. Every single human being made in the image of God is a moral being. You can even speak of societies in these terms, of this society doing something that is wrong. You find that in the Old Testament, where Israel sins against God. But there you're talking about a specific group of moral agents, in this case the descendants of Abraham, those who are the people of Israel. And so, nonetheless, you have human beings, individually most importantly, but also at times as collectives, referred to in this way and with moral judgment made. But when you're talking about large language models or chatbots, it's something very different. Now, get this. Here's a professor at MIT who I think does appropriately underline the problem. So where's he going to go with this? Well, listen to this quote: to get there, that means to get the chatbots where they need to be, it will need a rich understanding of financial ethics.
For that, Professor Lo proposes, quote, feeding the model all the laws, regulations, and court cases involving questions of financial ethics in the U.S., from the Securities Act of 1933 up to the latest fraud trial. This is the professor speaking here, quote: This rich history can be viewed as a fossil record of all the ways that bad actors have exploited unsuspecting retail and institutional clients. The story then says the hope is that the large language model will learn from its training what not to do. Okay, just in case you don't think this is interesting enough, let me tell you where this turns. Listen to this quote: Professor Lo acknowledges that a large language model might use its newfound knowledge of financial rights and wrongs to choose the wrongs, because large language models don't have ethics built in. To counter such misuse, he says, authorities will need to fight fire with fire, developing AI models that can detect crime by auditing users' tax returns, for example. Wow. Okay, there you have the worldview issues just laid out clearly. We're being told that these chatbots or large language models don't have ethics built in. The suggestion here is that you're going to have to build in ethics. Okay? If you're going to build in ethics, that means, supposedly, at least in this article, very clearly, it means that you're going to create and invent some kind of artificial moral agent. I want to say at this point, Christians just have to understand there is a bright line, a bright, bold red line, and you have human beings on one side of that line, and no other created being is on that side of the line. No other aspect of creation is on that side of the line. When you speak to your dog and you say good dog or bad dog, you are not speaking in terms of moral agency like you would when you say to a child, you did right or you did wrong. A part of this is the image of God, the imago Dei.
Now, the imago Dei means far more than just being a moral agent, but it does, at its very center, practically speaking, mean being a moral agent. And so that's why we hold human beings responsible in a way that we do not hold others responsible. It is also very interesting here that Professor Lo says that one of the problems with these LLMs, or with the chatbots, is that they don't have the ethics part built in. You know, it's almost like this was sent to us with an invitation to do some worldview analysis. In contrast, it is built in, so to speak, in every single human being. Every single human being made in the image of God is a moral agent, and has to be recognized and treated as a moral entity and a moral agent. That's at the very heart of Christian doctrine, Christian anthropology, the biblical understanding of what it means to be human. It's at the very heart of the Christian worldview as well. So how would we explain all this in biblical terms? Well, one essential biblical category is conscience. Conscience. And here's where things are different. No machine is ever going to have a conscience, not in any real sense. No conscious human being fails to have a conscience. And, you know, the statement here is about being built in: large language models don't have ethics built in. This is where human beings do have ethics built in. And a part of this is the internal witness of the conscience. That's not an accident. It's not a product of evolution. It's the action of the Creator making human beings, every single one of us, in his image. And a part of that means we do have conscience. Now, that conscience no longer always tells us the truth. And a part of that is that we can actually corrupt our own conscience, affected by sin. Our conscience can sometimes lie to us. It can alternately tell us the truth and lie. We can suppress the truth in unrighteousness.
Paul says in Romans 1 that we can basically corrupt our consciences, but, you know, it's always there. It's just always there. And furthermore, our moral accountability is always there. All right, I have to get to another part of the article, quote: But knowledge is only part of the solution. An AI advisor will also need digital equivalents of empathy, humility, and a sense of fairness, Professor Lo says. Okay, all right, we're in it now, because you have a series of words there, and the first one is empathy. Now, I've said a lot about empathy. I've done two Thinking in Public programs with authors who I think have written really important things about empathy. The important thing about empathy is that it is a fairly recent word meaning a disposition, a moral disposition. And yet I'm going to argue that biblical words are far superior, including the words compassion and sympathy. Both of these are valorized, or highly valued, in Scripture. Sympathy means feeling with, and empathy kind of means feeling for. And so I think it's rather artificial. It's a very recent word in terms of English usage, and I think almost every time you see it, some other word should have been used if it's real. Now, I won't take that any further, but I'll simply say, whatever it is, this professor thinks it can be built into this kind of chatbot or LLM, large language model. But I'm going to say no. That requires a very deeply seated reality of moral judgment and a knowledge of moral truth. And I don't think that can ever be downloaded or uploaded, transferred or represented by so-called artificial intelligence or any machine. Okay, now listen to this quote: These human-like qualities won't emerge simply by making AI more powerful, Professor Lo said. Instead, the article says, AI models, listen to these words, will require specialized modules that produce analogs of empathy. Okay, so now you have analogs in quotation marks.
So even as I think empathy is largely, I'll just say, an issue abstracted from sympathy, sympathy is far more important, and sympathy and compassion mean you actually do something about it insofar as you have power to do something about it. Now we're being told that with these machines, the best we can hope for is that they will produce artificial realities like empathy. And then this is put in parentheses, quote: since, as machines, they can't actually be empathetic, end quote. Oh my goodness. We're going to have to build empathy into them. Oh, by the way, they're machines, so they can't have it. So we're going to come up with something kind of like that. I just hope you're following the argument here, that there are people who are investing such hope, such confidence in artificial intelligence, even down to making moral decisions, even to financial advisor bots feeling guilty if they give bad advice. And by the way, I love this: downloading all these court decisions and all these laws and all these things in order to develop a conscience. You don't have a conscience, and I don't have a conscience, because we downloaded enough material. Our consciences are to be formed by Scripture. But the reality of the conscience, and the reality, for instance, of even the biblical statement that all have sinned and fall short of the glory of God, and even the statements about conscience, it just makes very, very clear this is innate in every single human being. It may be suppressed, but it is there. It's there because of the will of the Creator. But when it comes to these machines, you can call it whatever you want, you can put quotation marks around whatever you want, but you can't create a machine that is going to have a true conscience, because you're not the Creator. By the way, evolution is behind this.
Professor Lo has developed what he refers to as the adaptive markets hypothesis, quote, which uses the principles of evolution to explain behaviors such as loss aversion and overconfidence. Here are the final words: Evolution occurs through random variation and natural selection. The strong survive and reproduce; the weak perish. Professor Lo wants to use a kind of computer-accelerated natural selection to spur the development of better AI models. Okay, I'm going to leave it at that, other than to say these kinds of things raise all the issues of what we know about what it is to be human, what it is to be a moral being, who is and is not a moral agent, how in the world conscience exists within us, and why it can't exist within a machine. And then we get to the end of it and we notice that even when they make claims about the fact that they're trying to make the machines moral agents, they have to come up with a word like analogs, as if they're almost sort of like, in a strange way, sort of functionally like what it really means to be a moral agent. All right, just a couple of other issues here, very fast, on artificial intelligence. It is really interesting right now. Artificial intelligence, of course, is in many ways driving so much of the stock market and the activity in the financial markets, expectations about artificial intelligence. Something else has come up which is really important to us, and that is that in the last several weeks, and that's how fast some of this is happening, in the last several weeks, I think most of us have come to understand that the warnings about the time when we would reach the point that there are videos and photographs that look just as real as reality, but are not, well, we're there. We are there. I think we just have to recognize this. We have all kinds of things that are now arriving, in some cases having to do with celebrities, in some cases having to do with controversial events.
The fact is, you can't trust your eyes right now, simply because you don't know in some of these cases whether or not this is a corrupted image or a constructed video. We just don't know. And it comes back to the fact that as Christians, we're absolutely committed to the truth. We're committed to what actually happened, to use Francis Schaeffer's term, what is truly true. Something else that plays into this is the fact that even when some of these videos are true, that is to say, they're not necessarily lying, they can be recontextualized and presented in such a way that the effect is a lie. The effect is a misrepresentation or a distortion of reality. There's something for us to think about. Two other big things, especially for Christians to think about, particularly for Christian parents to think about: there have been two very interesting developments having to do with online issues related to children and teenagers. One of them is the fact that there is now very much concern across Europe, and it is also shared by authorities in the United States, concern about hate groups using not only social media, not only online platforms, but in particular video games and gaming communities to recruit children online. And so you are seeing this. By the way, one of the ways some of this showed up was having to do, for instance, with members of the Islamic State. You remember that Islamic militia. It still exists, but it was very much in the front of the headlines going back just a matter of a few years, particularly in Iraq and elsewhere. One of the big issues here was the recruitment that was taking place in the digital world outside the knowledge of many parents. And so, for instance, there were young men, young Muslims, who went and joined ISIS, and their families seemed to be genuinely shocked. And it came down to the fact that, unbeknownst to them, you know, in the bedroom down the hall, their son was being radicalized.
And the new thing here, at least according to this report, is the fact that you have hate groups, including, according to the New York Times and others, even terrorist organizations, exploiting online games. Two were mentioned: Roblox and Minecraft. Listen to this quote: Across Europe and North America, children now account for 42% of terrorism-related investigations, a threefold increase since 2021. That is according to the United Nations Counterterrorism Committee, an agency, quote, that identifies emerging terrorism trends, end quote. Okay, so that just tells you something. Minors as young as 12 and 13 are being recruited by these groups. And increasingly, it's not only on online social media, it's also in video games. Later in the article, we read this quote: Video games are not their only tool. Children are also being radicalized through what the United Nations investigators call sophisticated funnel strategies. These guide young people from mainstream platforms like TikTok and X to more extremist communities on channels such as Discord or Telegram that are less moderated. Okay, so that's just a warning to Christian parents. Here's something else. Go back to the Super Bowl. Super Bowl 60 was held on February 8. Here is something which is now being widely reported, and that is the incredible number of children and teenagers, particularly boys, who sought to wager or bet on the Super Bowl. Nick Penzenstadler of USA Today reports, quote: A widely used age verification vendor for sports betting sites watched in real time during Super Bowl 60 as a horde of kids and teens attempted to create new betting accounts on sites such as DraftKings, Fanatics and FanDuel. One authority with one of these groups said, quote: It was stunning. They were scaling the walls. It turns out that this is something that had the attention of those who run these age verification platforms before.
They were aware of this before, but they weren't even prepared for the number and the energy invested by so many children and teenagers in trying to place online bets on Super Bowl Sunday. In a single hour, we are told, the age verification service had to stop more than 50,000 minors from creating new betting accounts. Later in the article, USA Today reports this, quote: Kids use a variety of methods to evade age rules by giving fraudulent information. In some cases, they use a parent or other adult's ID with or without their knowledge. The scale of underage wagering is hard to measure, but a recent survey of over 1,000 adolescent boys nationwide found that 36% had gambled in the last year. So again, I just want to speak to parents. And of course, this is true of vulnerability for people of any age, but in particular, when it comes to children and teenagers, parents need to be aware of the fact that there could be a radicalization taking place down the hall. There could be gambling taking place right down the hall, or at least the attempts to do so. And then there are all the social media harms we already know about. And that brings me to a final consideration, and that is that right now there is a trial, and that trial has to do with the addictive pattern of social media bringing harms to children and young people. For the sake of time, right now I'm simply going to say that Instagram is a part of this. And Mark Zuckerberg, who is the CEO of Meta, which owns Instagram, actually gave testimony, and National Review's Josh Golin reported it this way, quote: The scale of harm inflicted by Zuckerberg's Instagram is staggering. When Meta surveyed young teen users about their experiences during the previous seven days, nearly one in four reported unwanted sexual advances and one in five suffered cyberbullying. Extrapolated to Meta's 270 million teen users, that means every week tens of millions of young people experience these serious harms. Listen to this quote:
The company understood these harms well when it made increasing teen users and engagement its number one goal in 2024. But it decided, says this National Review article, to prioritize profit over safety. We're also told that Mark Zuckerberg, quote, personally vetoed a ban on plastic surgery filters on Instagram despite pleas from outside experts and his own employees that these filters caused harm to the mental health of teens, end quote. Now, I can simply tell you that the file and the testimony in this case are building up over time. There's going to be a lot more for us to consider, but at the very least, we need to understand that some of these firms are coming back to say, no, no, we really don't think there's any addictive possibility here. Then why is there such addictive behavior? They're saying, well, because we provide a worthwhile experience. Okay, that's the kind of circle you can't square. All right, we'll be watching this. And, you know, again, I just come back and say that Christians, if no one else, can keep their minds sane about these issues. At the very least, it ought to be a distinctive mark of Christians that we know the difference between human beings and a machine. The question is, do our children know and understand the difference between human beings and a machine? And frankly, would we set them loose even among just random human beings in a society with no boundaries? I think the answer is no sane parent would do that. I'll just simply end there. Thanks for listening to The Briefing. For more information, go to my website at albertmohler.com. You can follow me on X, formerly Twitter, by going to x.com/albertmohler. For information on The Southern Baptist Theological Seminary, go to sbts.edu. For information on Boyce College, just go to boycecollege.com. I'll meet you again tomorrow for The Briefing.
