Joe Allen (3:48)
War Room. Here's your host, Stephen K. Bannon. Good evening. It is Thursday, January 8th, in the year of our Lord 2026. I am Joe Allen, and this is War Room: Battleground. As you know, Posse, artificial intelligence has spread out across the world, infecting brains like algorithmic prions, giving the sense that perhaps the entire human race is under threat of catching digital mad cow disease. We've seen instances of AI psychosis. We've seen instances in which artificial intelligence has lured children into suicide. Now up on Capitol Hill, the fight over who gets to run this algorithmic insane asylum and who goes to the digital padded room has heated up. We have laws on the books across the country at the state level. In Illinois, a law bans psychiatrists from using artificial intelligence as a kind of agent, as a proxy for their practice. We have laws on the books in California holding AI companies to accountability and transparency. SB 53 in California is probably one of the strongest laws addressing the catastrophic risks of AI and making some attempt to hold these companies accountable. You have a similar law on the books in New York, the RAISE Act. And Josh Hawley and Richard Blumenthal have introduced a similar national-level bill entitled the AI Risk Evaluation Act. The goal is to monitor companies and force them to publish their safety protocols, to publish any safety incidents, and to delineate what sorts of penalties they would suffer if, for instance, their AIs began to lure children into suicide or drive people insane. At the national level, this struggle over who is in charge of the future of AI, who is responsible for any damages, and what direction it will go is led at the moment by a bipartisan coalition, a very small one. But if I look into my crystal ball, I certainly see, as this issue heats up and the various catastrophes become more and more imminent, that this fight will be explosive. 
You have Bernie Sanders, who recently learned the words artificial intelligence, calling for a full moratorium on data center construction. That may be unrealistic, but at least it sets a bar. It tells these companies that someone is willing to stand up to them, and even if it doesn't end up being Bernie Sanders, ultimately we know that you have younger, brighter minds on the left like Ro Khanna. And you have younger, and at least diligent, individuals like Ron DeSantis in Florida who are willing to step up and lead the charge against these companies and their excesses. Now, as you know, I myself am much more concerned about the social and psychological implications of all of this. The AI psychosis is monstrous: the ways in which these sycophantic systems will lure people not only into mental instability, but also into suicide. Take the infamous murder-suicide that occurred last August, in which a 53-year-old former Yahoo executive murdered his mother at the encouragement of ChatGPT and then stabbed himself to death. The authorities found that GPT was encouraging not only his general break with reality, but also his suspicion that his mother was in fact in on the conspiracy against him. These sorts of things are extreme edge cases, but these incidents give us a sense of how bad it could get should these prions spread and the infection become worse. And just on a general level, you don't have to go too far into the Internet to see that not only are search engines now dominated by AI interpretation rather than guiding you to human-produced information, but social media is suffused with it. You see endless streams of AI slop: AI-generated images, AI-generated posts, essays that are supposedly human-created but are obviously the product of algorithmic systems, and of course deepfakes. 
Look just recently at the shooting in Minneapolis. You have real footage of an incident which is tragic, and an incident which we as a society should be able to examine, looking at the video evidence from multiple angles and coming to some kind of consensus, some kind of conclusion as to what is and isn't real. And yet you see the split. Wherever you are on that line, you see the split not just over what is right and what is wrong, but over what is real and what is not real. And this is real video evidence. Imagine a world in which half, or three quarters, of the videos on the Internet are simply deepfakes, and they are so close to reality, so photo- or video-realistic, that there is really no way for the human eye or the human mind to detect the difference. The only recourse you have is to turn to an AI and ask: is this real? I've talked about the religious implications of artificial intelligence for years. If there is any one question that religion answers, one that humans yearn after eternally, it is: what is real? What we see are the wealthiest men on earth, empowered by the most powerful government on earth, putting their algorithmic systems, their non-human minds, forward as the ultimate arbiter of what is and isn't real. And if you think the fight over Minneapolis is going to spark off into another string of national tragedies, imagine two, three, four years on down the road, if these companies are not restrained, if the flow of AI slop and deepfakes is not stopped, what it looks like when we are all scrambling to decide what is real and what is not, while half or more of our countrymen are activated by videos, text, fabricated evidence, deepfakes that have encouraged them to hate their fellow Americans. It's a dystopian idea, one that I don't think we are necessarily going to experience in its fullness. But some portion of it is already happening. The seeds of this dystopia have already sprouted. 
And it's up to us, on the individual level and on the communal level, to push back; on the institutional level, to say this is not how our companies, our churches, our government agencies are going to be run, at the behest of algorithms; and of course, at the political level, by putting in place regulation and perhaps even banning certain levels or certain uses of artificial intelligence, to at least give humanity a fighting chance in this cosmic war against the machine. Beyond the social and psychological problems, you have the economic problems, you have the problem of replacement. What happens when jobs are replaced en masse by AI? And then, on the deepest level, the catastrophic risks. What happens if AI systems allow any simpleton to create novel viruses, for instance, or any other type of bioweapon? What happens when AI systems empower a tyrannical government or security state to unleash swarms of death drones that can autonomously kill hundreds, perhaps thousands of people with one push of a button? And in the most far-out, the most fantastic vision of human doom: what happens if these AI companies create a system that they can't control at all? What happens when they create first a human-level artificial intelligence, an artificial general intelligence? What happens if they create a system, or a series of systems, a system of systems, which is smarter than all human beings on Earth combined? Here to talk about that possibility is Liron Shapira, host of Doom Debates. Denver, if you will roll it. I just want to give you a sense of what Liron has going on over there. It's fantastic, and I encourage you to dig in.