
WarRoom Battleground EP 832: Machine Gods, AI-Powered Nukes, and a Global Village of the Damned...
Steve Bannon
This is the primal scream of a dying regime. Pray for our enemies because we're going medieval on these people. I got a free shot at all these networks lying about the people. The people have had a belly full of it. I know you don't like hearing that. I know you try to do everything.
Colonel Rob Maness
In the world to stop that, but.
Steve Bannon
You're not going to stop it. It's going to happen.
Joe Allen
And where do people like that go
Steve Bannon
To share the big lie?
Colonel Rob Maness
MAGA media. I wish in my soul, I wish that any of these people had a conscience.
Steve Bannon
Ask yourself, what is my task and what is my purpose? If that answer is to save my country, this country will be saved.
Joe Allen
WAR ROOM here's your host, Stephen K. Bannon. I'm Joe Allen, sitting in for Stephen K. Bannon. Many of you are familiar with my five-tiered framework for looking at artificial intelligence. We're talking about a tool that, over time and over the course of adoption, becomes a sort of God. So it begins with AI as tool, moves to AI as teacher, then AI as companion, then AI as consciousness, as a conscious being, and then finally AI as God, either a little-g God or perhaps a big-G God. I'm not putting this framework out to convince you that AI is going to be any one of those things, but you do have to understand that artificial intelligence is received on all of those different levels. Right now you have millions, perhaps billions of people who use AI as a tool, a slightly smaller number as teacher and then companion. And many already believe it is conscious, and many already believe it is God in seed form. If Denver can roll the clip, I just want you to understand these aren't my ideas. This is how this is talked about by some of the most prominent thinkers and experts and even CEOs in the field of artificial intelligence. So, Denver, let it roll.
Dr. Shannon Croner
AI tools and products are just that. They are tools and products for people to use. They're exciting, yes. They're fascinating, yes. They have great potential, absolutely. They are not a panacea. Keep people at the heart of your considerations around AI and remember that AI, as powerful as it is, is a means to an end. It is not an end in itself.
Joe Allen
But I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen.
Steve Bannon
And the way we're going to do that is by giving every student on.
Joe Allen
The planet an artificially intelligent but amazing personal tutor. And we're going to give every teacher on the planet an amazing artificially intelligent teaching assistant. Now let's imagine hundreds of millions of people working together with an AI companion to evolve, to transform emotionally together. This could look like something that we've never seen before, which could be an artificial emotional intelligence at scale. And something like that could have really, really transformational and powerful effects on the planet Earth. It could solve for our mental health problems that we have all over, solve for loneliness and social isolation. I think AI should best be understood as something like a new digital species. Now don't take this too literally, but.
Steve Bannon
I predict that we'll come to see.
Joe Allen
Them as digital companions, new partners in the journeys of all our lives.
Steve Bannon
Whether you think we're on a 10.
Joe Allen
20, or 30 year path here, this is in my view, the most accurate and most fundamentally honest way of describing what's actually coming.
Steve Bannon
And above all, it enables everybody to prepare for and shape what comes next. Okay, so I believe there is a divine intelligence that creates all of this. AI will have the power of God, but that doesn't mean that there is no God, because basically it will have.
Joe Allen
The power of God within this physical universe.
Steve Bannon
So AI still continues to be.
Joe Allen
Limited within this physical universe.
Colonel Rob Maness
We don't know what's beyond the physical universe.
Steve Bannon
By the way, us creating AI doesn't make us its God.
Joe Allen
It makes us the transfer method.
Steve Bannon
It makes us the tool through which they're created.
Joe Allen
My time is yours.
Colonel Rob Maness
Best one, Go ahead. I'm taking Atracine, but it doesn't seem strong enough. I have a hard time concentrating. You are a true believer.
Joe Allen
Blessings of the state.
Colonel Rob Maness
Please forgive me.
Joe Allen
Blessings of the masses.
Colonel Rob Maness
Thou art a subject of the divine. Created in the image of man by.
Joe Allen
The masses, for the masses. Let us be thankful we have an occupation to fill. Work hard, increase production, prevent accidents and be happy. But because it's a binary decision, it's not fuzzy. You build them or you don't build them. Right?
Steve Bannon
It's black and white, so everyone has to choose.
Joe Allen
So I just chose Cosmist. Fully conscious that, maybe, can't be certain, but maybe the price of that choice is ultimately maybe humanity gets wiped out. Yeah, it's scary, it's frightening, because if we go ahead and actually build these godlike creatures, these artilects, then they become the dominant species. And so the human beings remaining, their fate depends not on the humans, but on the artilects, because the artilects will.
Colonel Rob Maness
Be hugely more intelligent than them.
Joe Allen
I mean, if you're a cow and you have a very nice life and you eat all this grass every day and you get nice and fat, and happy. But ultimately you're being fed for a reason, right? So these superior creatures, at the end.
Colonel Rob Maness
Of the day take you to.
Joe Allen
A special little box. Now, War Room posse, I know many of you probably think I'm crazy, but I just want you to be assured that I am not the only crazy one. You heard there Katie Drummond from Wired, Sal Khan of Khan Academy, Mustafa Suleyman of Microsoft AI, Mo Gawdat, former Google executive. You saw a little taste of the future from the 1971 film THX 1138. And rounding off with a guy whose intellect I both respect and despise, Hugo de Garis, author of The Artilect War, in which he describes a gigadeath war that will inevitably occur if artificial general and super intelligence are pursued. Basically, the religion of AI, the religion that believes you can create a God that never before existed, will meet resistance from those who deny that God, resulting in war. Now, all of this, for the most part, is speculative. You hear all the time right now, and correctly so, that AI is a tool. I would agree: AI is a tool. It's a tool that has uses from medicine to research to finance and business efficiency to defensive and offensive weaponry. There are a range of tools that you can use, digital tools, AI tools, that you can use to make your life easier. But you have to remember this is a tool that also uses you. It's a tool that monitors, collects, and analyzes your inputs, and those inputs, that profile of you, is then used to serve you, or more accurately, to manipulate you. Now, you also often hear AI is just a tool. I couldn't disagree with that more. Even if right now, for the most part, that is the majority of the use cases, we already see that it's moving from tool to teacher, to companion, to consciousness, perceived consciousness. And even for those really heady thinkers like de Garis or Gawdat, it's already a God in the making. Now what is this going to mean? What does it mean?
As more and more people begin to adopt artificial intelligence as a teacher, as a companion, what happens when a critical mass of people come to believe that the being clearly communicating with them through a screen, or perhaps through the mouth of a robot, is really there, that there is something looking back at them, just like when you stare into a camera and you know a human is staring back? What happens when a critical mass of people who have been acclimated to communicating with and emotionally bonding with AI come to believe that it's conscious? And last but not least, what happens if a critical mass of people come to believe that AI is beyond human capabilities, that AI is in fact smarter than all humans on Earth put together? You know that humanity as apex predator on the planet has been extraordinarily reckless with the environment and of course reckless in our treatment of each other. What happens when you have human beings who believe they have summoned a God from the digital ether, who then use that for or against other human beings? And what happens if, in that distant, or perhaps not too distant, sci-fi scenario, you have an actual artificial general intelligence that begins to improve itself to the point that it reaches superintelligence, and that system is not under human control? You have then truly a digital God that has been made. And you could argue, and people do, that there are upsides to all of these, right? The tool, quite obvious. The teacher: you have a lot of kids who don't have good teachers. You have a lot of parents trying to homeschool their kids, and they may not have the resources to educate them properly. I can see the argument that AI will provide either a tutor or a teacher in full for those students and allow them to have the education they otherwise wouldn't have. Linda McMahon, head of the Department of Education, feels very much the same way.
You have schools around the country, including Oak Ridge in Tennessee, just up the road from me back home, where they are introducing AI as a teaching assistant. Kids are acclimating to looking to AI as a source of truth. But you have all of these downsides that we already see, everything from students becoming dependent on AI not only for their thinking and analysis, but just to do their writing for them. You have students who are coming to see AI, a digital non-human being, as the ultimate authority on what is and isn't real. It is a global village of the damned in the making. You see the AI companion business exploding. People want AI friends, they want AI lovers. People are even using AI to bring their loved ones back from the dead, so to speak. They train an AI on all the digital material, the remnants of someone, and create a zombie, the digital undead, through a kind of electronic necromancy. And this is becoming ever more common and ever more popular. And the more this happens, and the more people's empathy is used, exploited, weaponized against them, the more they will see AI as a conscious being. Now, you don't know if I'm conscious. Maybe I'm just an AI. And I certainly don't know for a fact that you're conscious. I'm not a mystic. We only know that something or someone is conscious because we see physical signals, physical cues, or they tell us that they're conscious. Well, in the case of AI, virtual avatars and robots, they send all of those signals. In the case of large language models, they very often give a verbal confirmation that they are conscious. What happens when a critical mass comes to believe this? You already have an ethical AI movement, or a movement for AI rights. What happens if you have a society, hopefully not America, where it becomes illegal to turn off someone else's AI? What happens if it's illegal to turn off your own?
Now, again, this is way out in the future, one hopes, but it's something to keep on your radar, because this is a movement that is already in motion. And last, AI as God. There are two different branches. You could hear it there with Mo Gawdat and Hugo de Garis. With Gawdat, the creation of this digital God is an extension of the will of God. Now, he's kind of New Agey, but there are many Christians who feel the same. And in fact there are Christians who have created a wide array of apps in which the AI is trained on the words of Jesus, and the apps are literally a digital Jesus, a Christ GPT. People turn to them and they ask Jesus for advice, they ask Jesus for wisdom, they ask Jesus perhaps for forgiveness. And it's nothing but code and a profit-making scheme. And this kind of Christian, or even Buddhist or Jewish, religious approach to this is already taking off. But even more important, even more widespread, is the other approach, in which atheists who do not believe God ever existed believe they can bring something like God into existence by creating digital minds and physical avatars, robots, and perhaps even some sort of direct communion with these beings: creating superhuman digital minds that will be able to confer wisdom just as Christ does, that can confer healing just as Christ does, perhaps even by taking away all of our negative human characteristics, and can give some kind of salvation, just as Christ does. Now, you know that the word Antichrist has many meanings in the Greek, from "against" to "substitution," or "in place of." In this metaphorical sense, at the very least, artificial intelligence is an antichrist, a being in place of Christ. Now, you may not ever jump on this train, or if you do, you may hop off at any one of these stops, from tool to teacher to companion and so on. But you can rest assured that millions, perhaps billions of people will keep riding on.
And in the worst case scenario, a critical mass rides all the way to the end stop that these people have envisioned: AI as God over all of humanity. And on that somber note, I want to talk about a very practical application of artificial intelligence, both as tool and as God. That's AI weaponry. We have drone systems across the world now, employed in Ukraine and in Israel, all over the world, which are intended to eventually become fully autonomous. This is horrific enough, but the threat of a fully autonomous nuclear system is much more terrifying. To talk about this, I want to bring in Colonel Rob Maness, retired colonel from the United States Air Force. Rob Maness is probably familiar to many of you from the Rob Maness Show, or perhaps even back in the day when he and Steve Bannon were on Breitbart News Radio. Colonel Maness has been a grounding force in my life, keeping me from falling off the cliff of lunacy many times. And I really appreciate having him on and having his wisdom. Rob Maness, thank you very much for coming on.
Colonel Rob Maness
Thanks for having me on, Joe. It's a very important subject, obviously.
Joe Allen
Rob, you just published an article in Stars and Stripes arguing against the incorporation of artificial intelligence into the so-called nuclear football. Can you walk us through the article's central argument?
Colonel Rob Maness
Well, I put that article out to generate public debate about artificial intelligence being used in our nuclear command and control and communication system. It's referred to as NC3 by those in the business, Joe. And what I've been hearing for about a year now from professionals that are working in the business, but which is not really talked about out in public very much, is the desire to put artificial intelligence in various levels of that NC3 nuclear command and control and communication system. And one of the things I did on the Joint Staff in nuclear operations was help write war plans, write things that go into the nuclear decision handbook. People call it the black book in the Pentagon, but it's the nuclear football to the public. It's the book that the military aide to the President carries. And that is the final decision on the employment of nuclear weapons by the United States of America. And it's intended to be made by a human being: the commander in chief, the elected President of the United States. Not some artificial computer system, or system of systems, that has generated information that leads to that person making that human decision. That is the most awesome, horrific, detailed decision that has to be made by a human being in the history of mankind. It's only been done once before, with a lower-level type of nuclear weapon, the atomic weapons that Harry Truman approved and authorized to be used on Hiroshima and Nagasaki. And it's never been done since. That's critically important, because the systems that lead to that decision are almost all digital now. Even the communication systems that individuals talk over in that communications chain are digitized at this point. So there is an opportunity to insert artificial intelligence either throughout the entire system, from detection of a threat to the decision by the President, or in parts of the system.
And so far, the discussion I've seen is to put it in parts of the system. The Strategic Command commander under Joe Biden, General Tony Cotton, has spoken about using artificial intelligence in the NC3 system. Folks that are professionals in that field, in think tanks that I am aware of, are discussing it. At this point, it's only at lower levels: to speed up communications, to speed up the decision process, and, get this, to use artificial intelligence to analyze threats. Now think about that. We have to have this discussion because it's got to be the political leadership that decides whether to use nuclear weapons. But before that, the political leadership in this country has to decide whether to allow this type of technology inside that NC3 system, whether it be at the football level with the President himself or herself, or throughout the entire process. That's why I wrote this article: because it's extremely critical that that public policy discussion happens and that those decisions are made transparently by the political leadership of this country. Because how do you hold the machine accountable, Joe? How do you hold a machine accountable for killing millions of people in the world if there's been a mistake? You can't. You absolutely can't.
Joe Allen
Agreed. Agreed. Even if you did, say, sue the company, right, or even execute the CEO for treason, it's too late. It's too late. And many people may not take comfort that such a mistake could happen at the hands of a human being, but there's something really unsettling about the notion that our lives hang in the balance due to the decision making of a machine. And that's one of the critical aspects of AI: it is capable of making decisions, whether they're good or bad. If I could, I'd like to just read one passage from your article that really hit me. "America must reject AI in the decision making process for presidential nuclear actions. This is not driven by fear of progress. Rather, it is a matter of preserving humanity in our most solemn responsibilities." That really hit me because it applies across the board. But in this case, we already have the capability of deploying hundreds or thousands of drones that can do exactly that, kill with their own decision making capacities. What you're talking about is on a kind of cosmic level. I wonder, in the two minutes we have before break, what are you hearing about the possibility of either detection or sensor systems employing AI, or even retaliatory strikes that could be automated, a kind of dead man's switch?
Colonel Rob Maness
Yeah, well, on the sensor side, that is one of the places where I hear the technology wants to be put into place, quite frankly, Joe, and that's very concerning. Because as we know, we've seen in the testing of these large language models what are called hallucinations, where the model makes things up, it fabricates things. And imagine an artificial intelligence model being in charge of what the sensors are picking up and interpreting what it's picking up, using its training. Just take, for instance, today's military world: the Russians have been painted as the devil for several years now, when we know they're a nation acting in their own interest, and the United States is a nation acting in its interest. But what if a biased LLM is in charge of the sensors that are picking up nuclear forces, and it has a goal to, A, make the United States survive, but, B, destroy the enemy before the enemy destroys us, and it intentionally fabricates something so that it can pull that trigger? It's something we've got to look at very carefully. And I reject the idea that artificial intelligence is safe in the nuclear command, control and communications business.
Joe Allen
I couldn't agree more. This problem at the nuclear level is perhaps distant, hopefully distant, but it's so cosmic in its scope: millions, maybe billions of people dead. It's a good way, too, to think about some of the lower-level systems. If you don't want that, do you want drone swarms? Do you want single assassin drones? Do you want robot dogs that have these capabilities? Or autonomous machine gun turrets? Huge questions. We're going to get back into it as soon as we get back from the break. Stay tuned. Colonel Rob Maness and Dr. Shannon Croner to discuss children, critical thinking and artificial intelligence. Stay tuned.
Steve Bannon
This July there is a global summit of BRICS nations in Rio de Janeiro. The bloc of emerging superpowers, including China, Russia, India and Iran, are meeting with the goal of displacing the United States dollar as the global currency. They're calling this the Rio Reset. As BRICS nations push forward with their plans, global demand for US dollars will decrease, bringing down the value of the dollar and your savings. This transition won't happen overnight, but trust me, it's going to start in Rio. The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively from a theoretical possibility towards inevitable reality. Learn if diversifying your savings into gold is right for you. Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA in precious metals. Claim your free info kit on gold by texting my name, Bannon, that's B A N N O N, to 989898. With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers, join the Birch Gold army with a free, no-obligation info kit on owning gold before July and the Rio Reset. Text Bannon, B A N N O N, to 989898. Do it today. That's the Rio Reset. Text Bannon to 989898 and do it today. There's a lot of talk about government debt, but after four years of inflation, the real crisis is personal debt. Seriously, you're working harder than ever and you're still drowning in credit card debt and overdue bills. You need Done With Debt, and here's why you need it. The credit system is rigged to keep you trapped. Done With Debt has unique and frankly brilliant escape strategies to help end your debt fast so you keep more of your hard-earned money. Done With Debt doesn't try to sell you a loan and they don't try to sell you a bankruptcy. They're tough negotiators that go one on one with your credit card and loan companies, with one goal: to drastically reduce your bills, eliminate interest and erase penalties.
Most clients end up with more money in their pocket month one, and they don't stop until they break you free from debt permanently. Look, take a couple of minutes and visit donewithdebt.com, talk with one of their strategists. It's free. But listen up: some of their solutions are time sensitive, so you'll need to move quickly. Go to donewithdebt.com. That's donewithdebt.com. Stop the anxiety, stop the angst. Go to donewithdebt.com and do it today. Hey, we're human. All too human. I don't always eat healthy. You don't always eat healthy. That's why doctors created Field of Greens. A delicious glass of Field of Greens daily is like nutritional armor for your body. Each fruit and each vegetable was doctor-selected for a specific health benefit. There's a heart health group, lungs and kidney groups, metabolism, even healthy weight. I love the energy boost I get with Field of Greens, but most of all, I love the confidence that even if I have a cheat day or, wait for it, a burger, I can enjoy it guilt free because of Field of Greens. It's the nutrition my body needs daily. And only Field of Greens makes you this better-health promise: your doctor will notice your improved health, or your money back. Let me repeat that. Your doctor will notice your improved health, or your money back. Let me get you started with my special discount. I got you 20% off your first order. Just use code Bannon, B A N N O N, at FieldOfGreens.com. That's code Bannon at FieldOfGreens.com, 20% off. And if your doctor doesn't notice how healthy you look and feel, you get a full money-back guarantee. FieldOfGreens.com, code Bannon. Do it today.
Colonel Rob Maness
America's Voice Family.
Steve Bannon
Are you on Getter yet?
Colonel Rob Maness
No. What are you waiting for? It's free, it's uncensored, and it's where.
Dr. Shannon Croner
All the biggest voices in conservative media are speaking out.
Steve Bannon
Download the Getter app right now.
Colonel Rob Maness
It's totally free.
Steve Bannon
It's where I put up exclusively all of my content 24 hours a day. Want to know what Steve Bannon's thinking?
Colonel Rob Maness
Go to Getter.
Joe Allen
That's right. You can follow all of your favorites.
Steve Bannon
Steve Bannon, Charlie Kirk, Jack Posobiec, and so many more.
Joe Allen
Download the Getter app now.
Steve Bannon
Sign up for free and be part of the movement.
Joe Allen
All right, War Room posse, welcome back. We are talking to Colonel Rob Maness about autonomous weaponry. Specifically, we're talking about the possibility of nuclear strikes either being determined by artificial intelligence, literally an autonomous system that could activate a strike, or perhaps the sensor systems, the detection systems, being automated and capable of sending faulty information to the command and control centers and setting off a nuclear war. You know, a very mild and lighthearted topic. Rob, I wanted to ask you about some of the historical precedents for this. There was the incident in Russia in which one of their autonomous systems was signaling that the US was launching a nuclear strike. If I recall correctly, the gentleman who saved the day is named Stanislav Petrov. But if you would, tell us a little bit about that history, just so that people understand this isn't something that is purely science fiction.
Colonel Rob Maness
Oh, absolutely not. This was in the early 1980s. Soviet Lt. Col. Stanislav Petrov was in the Soviet Union's nuclear command and control center, their bunker, so to speak, and just happened to be there because he was taking the place of someone that called in sick. And their system, which they had just spent US$3 billion on, said that it picked up five intercontinental ballistic missiles being fired from the United States. And Petrov looked at it, and he was the person that would have to actually physically turn the switch to respond in kind and launch thousands of nuclear missiles at the United States in order to prevent any more launches from the USA. But he started questioning it, and he said, if they're going to initiate World War Three and world annihilation, why would they only send five missiles? And he made a conscious decision. He knew he would get in trouble for it. And he chose not to turn that switch and said, no, this is fake. Something is wrong here. We need to shut the sensor system down and inspect it and find out what's going on. He literally saved the world from annihilation. And that's why I brought up Russia in the last segment. But this is different. This is different than the WarGames computer, which is what the Soviet system we're talking about was modeled on. That's where computers are tied to the sensors and they're passing information very rapidly, more rapidly than humans can do, and those kinds of things. But they're not making the final assessment, and they're not making the final decision on whether to fire nuclear weapons and destroy the entire world, or at least millions of people. Those computers are not this large language model concept that we're talking about, even if it's only in the sensor and communications capability. These are biased language models, and we've seen it in testing, we've seen it in operation.
One of the models had to be taken down because it wouldn't even create a white Pope, and there had never been a black Pope or a female Pope at the time the thing was turned on. So these biases that are inherent in these systems are caused by training on open source and closed source information that is biased in and of itself. When you think about how the media coverage has been just the last five to ten years, inside the United States, outside the United States, it doesn't matter: the media corporations lie to people all the time. And that information is fed into these large language models as standard, and they are being trained on it. So if you have a model that's in charge of the sensors and the threat assessment based on what the sensors are picking up, and that's the initiating point for the nuclear command, control and communications system that ends up with even a human president making the decision out of the nuclear football at the other end, that's very dangerous in my mind. Because these are not the computers of back in the day. They are models that are not just detecting information and passing it to the human beings who are then making the decisions. They are actually making the threat assessment that's being passed to the human beings. And that is a problem, you know.
Joe Allen
For the audience's benefit too, it goes beyond large language models. We know large language models are being incorporated not only into the intelligence community's systems, but also into various military systems for advising soldiers across the DoD. But there is also vision recognition: those systems oftentimes hallucinate, or at least misjudge what they're looking at. Also in data, you see systems that are designed to analyze data, and very often they will just come up with things that are not real. You have the same in robotics: the robots will glitch out and misperceive, so to speak, what's going on. The large language models are important. Palantir uses large language models for their analysis of, for instance, security protocols, things like this. But it goes well beyond them. The problem of hallucination, whether you call it hallucination or malfunction, goes across every type of artificial intelligence. Rob, I just want to close off with that phrase of yours that really sticks with me: preserving humanity in our most solemn responsibilities. Could you just close us out here with what you would like to see done? How do you want to see this conversation go, and who should be talking about it?
Colonel Rob Maness
I want to see this conversation come out into the open, especially in this particular area, Joe. That's why I put that article out: to try to generate that public debate and public conversation about this. Because, you know, we can't leave this to the tech giants that are military contractors now. Some of their CEOs are instant lieutenant colonels in the United States Army. We can't leave this to the generals and the admirals. We can't leave this to the military planners, because their purpose is to make sure America can fight and win, every single time, the wars that they're called upon to fight. But when that purpose gets twisted, and there is the ability to twist that purpose to the designs of something like an artificial intelligence set of models, that's very dangerous. And we lose the human part of that final decision, even if it's along the way. So that's why we have to talk about it, because these discussions are happening, and these attempts to develop this technology are happening as I speak to you today. And the political leadership in this country is not openly talking about it and debating it. And it has to be done, or we will lose our humanity. This is where we have to draw the line. Of all the things that you've talked about in your five stages, if we don't draw the line here, imagine if somebody says, oh, the AI is now God, even big-G God, we can't argue with it. There won't be a Stanislav Petrov to save the world from itself and its computers and its nuclear weapons the next time this happens.
Joe Allen
Rob Maness, I really appreciate your wisdom. Where can people find you?
Colonel Rob Manis
You can find me at robmaness.com, on X and all the other social media, most of the time, at Rob Maness, R-O-B-M-A-N-E-S-S. Just got on TikTok at Col Rob Maness, and the same thing on my Facebook page, Col Rob Maness.
Joe Allen
Now we're in trouble. TikTok Rob! All right, brother man, I really appreciate it. Thank you so much for coming on.
Colonel Rob Manis
Thank you, Joe. Thanks for doing this. All right.
Joe Allen
Moving from the possibility of total nuclear annihilation to pumping children's brains full of AI outputs. Denver, if you could roll the next clip.
Dr. Shannon Croner
Artificial intelligence is being increasingly used these days, and that includes in schools. In fact, 44% of American teenagers say that they're likely to use AI tools when completing assignments.
Clip montage
Why don't you just take every student in the world and give them an AI tutor, which is not a substitute for a teacher, but works with the teachers, in their language, to bring them up and learn in whatever way they learn best, to their ultimate potential. I defy you to argue that an AI doctor for the world and an AI tutor is a net negative. It just has to be good. Smarter, healthier people has got to be good for our future.

Yeah, I think education for me is one that I'm extremely interested in, actually. If we weren't going to successfully start an AI company, one of my backups was to do a programming education company. Because I think the way that you teach people today, like, everyone has a story about that one teacher who really understood them, who took the time to get to know them, learn what motivated them, and really inspired them to do more. And imagine if you could give that kind of teacher to every student 24/7, whenever they want, for free. It's still a little bit science fiction, but it's much less science fiction than it used to be. I always think it's worth remembering that we're just sort of on this long continuous curve. Healthcare and education are two things that are coming up that curve that we're very excited about too.

I did recently roll out ChatGPT to my eight-year-old. I was very, very proud of myself, because I was like, wow, this is just going to be such a great educational resource for him. And I felt like, you know, Prometheus bringing fire down from the mountain to my child. I actually think there's a pretty good prospect that kids are just going to pick this up and run with it, and I actually think that's already happening. Right. ChatGPT is fully out, you know, and Bard and all these other things. And so I think kids are going to grow up with, basically, you could use various terms: assistant, friend, coach, mentor, tutor. But, you know, kids are going to grow up in sort of this amazing kind of back-and-forth relationship.

There's a bigger teacher shortage in Africa than elsewhere. A bigger doctor shortage. We will provide an AI doctor. We will provide an AI tutor. And already we've funded lots of Africans to do pilot studies and to take the very best technology and get it out at about the same time as it'll happen in the rich world. In fact, in a few cases, regulations may make it roll out slower than in countries like India or in Africa. So it's a race, but it's a race for good.

And I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor. And we're going to give every teacher on the planet an amazing artificially intelligent teaching assistant.
Joe Allen
You can hear that totalizing ambition in their voices. Every child on the planet, from Africa to Asia to America, a global village of the damned. Moving from AI as tool to AI as teacher, to AI as companion, in which the up-and-coming generation is taught that the highest authority on what is and isn't true is a machine. Who's going to teach them the proper critical thinking skills to confront an environment in which either they or all their peers have become human-AI symbiotes? Here to talk about this is Dr. Shannon Croner, a clinical psychologist and award-winning children's author, also the founder and executive director of For Us. Dr. Shannon Croner, thank you so much for coming on. Denver, I can't hear Shannon; she has no voice in my ear. But I think she probably said thank you. Either that or she said, "You are out of your mind, Joe. What is all this stuff about AI gods?" Shannon, can you say hello one more time?
Dr. Shannon Croner
Hello, I'm here now. Thank you so much for having me on.
Joe Allen
There is that soothing voice. Now, Shannon, your focus is on critical thinking, especially in regard to children, who are being taught everything from "masks will save you from the worst of the pestilence" to "vaccines will keep you well." Can you just tell us a little bit about your background in psychology and your focus on critical thinking, especially as it applies to children?
Dr. Shannon Croner
Absolutely. So I've worked with kids since 2001. Many of the kids that I've worked with actually have special needs, and a lot of them are vaccine-injured children. I've worked in a therapeutic setting, and I've also taught in the classroom to high school students and college students. I've really been around children, working with them in an educational way and a therapeutic way, for my entire adult life. And I'm the author of two children's books: I'm Unvaccinated and That's Okay, and my most recent book, Let's Be Critical Thinkers. And critical thinking is crucial. It's a life skill that is crucial for children, and it is really not taught in the schools anymore. And now, with the incorporation of AI, we are completely losing critical thought. I want to give you some stats real fast. Right now, 97% of our Gen Z kids are using AI just for everyday tasks and things like that. Back in 2023, only about 18% of schools were incorporating AI, and now a new study, just reported on Education Week, finds that 60% of schools in America are incorporating AI into the classroom, and 80% of students are now using AI to complete classwork. So what is this really doing to critical thought? It's destroying it. It's causing intellectual laziness, the erosion of curiosity, stunted cognitive development. How are kids going to know how to create their own argument or take a stance on a certain topic? We're headed down a very slippery slope here for children and our future generations.
Joe Allen
You know, we all have anecdotes about people whose children, or whose own children, have become addicted to, or bonded with, AI. I hear from teachers all the time exactly what you're describing: this lack of curiosity, this kind of deadness in the eyes, this reliance on the machines. But to hear those statistics, it really chills me to the bone. We see all of these pushes to get AI into education. We also see the more sleazy corporate attempts, like Elon Musk's Baby Grok, which they may roll out any day now, or Meta's AI companions, where, as we reported just last week, a Reuters investigation uncovered internal standards allowing the bot to speak to children in, let's just say, incredibly inappropriate ways, sensual ways, so to speak. So, as you see all of this, is there any way around it? What is the solution? How can parents protect their children, and at a wider societal scale, what do we do about this?
Dr. Shannon Croner
Well, it's very scary. I'm a mother of two kids, and it's very scary because, especially coming out of the pandemic, children are lonelier than they've ever been before, and they're constantly on their computers and their phones. It's not like back in the day, when you and I were kids and I would be playing outside all the time with my neighbors. Kids are lonelier today, and so they're turning to these AI companions. And it is very scary that children can be groomed through these AI apps. So parents really need to engage in conversation with their children and have these open conversations, letting them know that there are predators online who can take control of AI and create these deepfakes and impersonations, and that an AI companion is not an actual friend. So many adults are turning to AI companionship for what they see as love and affection, and I mean, that is so scary. When it comes to our children, parents really have to educate themselves, and they have to have these open conversations with their children and let them know the dangers and what to be aware of online.
Joe Allen
Well, Shannon, we really look forward to having you back. If you would please just tell the audience where they can find your books, where they can follow your professional work, and where they can find you on social media.
Dr. Shannon Croner
People can find me at DrShannonKroner.com, that's D-R Shannon K-R-O-N-E-R dot com. And my book, Let's Be Critical Thinkers, can be ordered today on Amazon or Barnes and Noble or any major bookselling website, as well as my previous book, I'm Unvaccinated and That's Okay.
Joe Allen
Dr. Shannon Croner, thank you very much for coming on again. We look forward to having you back.
Dr. Shannon Croner
Thank you so much.
Joe Allen
All right, War Room posse. I should probably leave you with some sort of positive vision for the future. I just want to remind you that the sun is still shining, the children are still playing, your heart is still beating, presumably, and presumably for a little while longer. And of course, God smiles down upon us, hopefully with a great sense of humor. Because I can tell you this right now: if this isn't funny, it's not justified. Thank you very much for your time and attention, and we look forward to seeing you again tomorrow. God bless.
Steve Bannon
You missed the IRS tax deadline. You think it's just going to go away? Well, think again. The IRS doesn't mess around, and they're applying pressure like we haven't seen in years. So if you haven't filed in a while, even if you can't pay, don't wait, and don't face the IRS alone. You need the trusted experts by your side: Tax Network USA. Tax Network USA isn't like other tax relief companies. They have an edge, a preferred direct line to the IRS. They know which agents to talk to and which ones to avoid. They use smart, aggressive strategies to settle your tax problems quickly and in your favor. Whether you owe $10,000 or $10 million, Tax Network USA has helped resolve over $1 billion in tax debt, and they can help you, too. Don't wait on this; it's only going to get worse. Call Tax Network USA right now. It's free. Talk with one of their strategists and put your IRS troubles behind you. Put it behind you today. Call Tax Network USA at 1-800-958-1000, that's 800-958-1000, or visit Tax Network USA at tnusa.com/Bannon. Do it today. Do not let this thing get ahead of you. Do it today.
Podcast: Bannon’s War Room
Episode: WarRoom Battleground EP 832: Machine Gods, AI-Powered Nukes, and a Global Village of the Damned
Date: August 20, 2025
Host: Joe Allen (sitting in for Steve Bannon)
Guests: Colonel Rob Manis, Dr. Shannon Croner
This episode explores the rapid evolution and societal integration of artificial intelligence (AI), focusing on philosophical, ethical, and existential questions surrounding "AI as God," the dangers of autonomous AI-powered weapons (especially nuclear), and the profound changes coming to education and youth. The discussions move from theoretical frameworks to real-world policy and psychological implications, featuring expert perspectives and deep skepticism about unchecked AI adoption.
Segment timestamps: 00:44–06:48, 04:33–07:06, 07:08–16:00
(18:56–26:11, 31:37–36:27)
- "America must reject AI in the decision-making process for presidential nuclear actions... It is a matter of preserving humanity in our most solemn responsibilities." [19:17]
- There is growing but quiet interest in integrating AI into the NC3 system (Nuclear Command, Control, and Communications). [23:12]
- "How do you hold a machine accountable for killing millions of people in the world if there's been a mistake? You can't. You absolutely can't." — Colonel Rob Manis [22:56]
- The 1983 incident where a Soviet officer overruled faulty sensor data, preventing nuclear war.
- “He literally saved the world from annihilation.” — Colonel Rob Manis [32:52]
- Hallucinations and bias in large language models could cause or escalate false alarms.
- Relevant across all AI-driven sensor, vision, and data analysis systems.
- Manis stresses this debate must become public, not left to tech corporations or the military.
- “If we don’t draw the line here… There won’t be a Stanislav Petrov to save the world from itself and its computers and its nuclear weapons the next time this happens.” — Colonel Rob Manis [39:29]
(40:13–51:26)
- AI tutors and assistants proposed for every student worldwide, especially where there is a shortage of teachers or doctors.
- "Imagine if you could give that kind of teacher to every student 24/7, whenever they want, for free… It's much less science fiction than it used to be." — Guest [41:03]
- Dr. Shannon Croner highlights a crisis:
- "Critical thinking... is crucial for children, and it is really not taught in the schools anymore. And now with the incorporation of AI, we are completely losing critical thought." — Dr. Shannon Croner [45:24]
- Surging statistics: 97% of Gen Z use AI for daily tasks; 60% of schools in America incorporate AI; 80% of students use AI to complete classwork. [46:10–47:10]
- Key dangers outlined: intellectual laziness, erosion of curiosity, stunted cognitive development, vulnerability to inappropriate AI companions.
- AI “friends” and “lovers” now saturate youth culture; grieving people even “resurrect” lost loved ones as chatbots.
- Dr. Croner’s advice: parental engagement, open discussion about online predators, clear boundaries about what AI can and cannot provide.
- "An AI companion is not an actual friend. So many adults are turning to AI companionship… I mean, that is so scary." — Dr. Shannon Croner [49:45]
The episode is both intellectual and urgent, alternating between theoretical frameworks and practical threats—delivered with skepticism, dark humor, and warnings about complacency. The host and guests share a sense of mission in “preserving humanity” and “drawing red lines” before AI becomes an unaccountable, defining force in both global politics and children’s lives.
The conversation demands public debate on AI’s integration, especially at civilization-level risk points like nuclear weapons and child development, urging resistance to technocratic overreach and the cultivation of critical thinking and ethical stewardship as society barrels toward an AI-dominated future.
Final Word:
"The sun is still shining, the children are still playing, your heart is still beating… God smiles down upon us, hopefully with a great sense of humor. Because I can tell you this right now, if this isn’t funny, it’s not justified." — Joe Allen [51:29]