
WarRoom Battleground EP 1007: David Krueger on AI - Humanity Dies by Gradual Disempowerment ...
David Krueger
This is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income. Maybe we can build machines that replace them. Honestly, the inspiration for this, you have a little bit to do with it, because it started becoming, Steve, it started becoming very striking to me that there was incredibly broad support in America for these ideas for a long time. I used to call this the Bernie to Bannon coalition, saying, hey, you know, yeah, curing cancer is great. We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military. But let's make sure that it's in the service of human beings, not in the service of some machines.
Joe Allen
President Trump and President Xi will be coming together at a summit. I was surprised and delighted to see, apparently, that as part of their agenda, there's going to be some discussion of AI safety.
Steve Bannon
The biggest risk is exactly the inevitability narrative, right? If someone invades your country, what's the first thing they're going to tell you? Oh, don't fight. It's inevitable that you're screwed. You know, don't try to do anything about it. So are you surprised that some AI lobbyists are rolling out the exact same narrative here, talking about losing control over AI?
David Krueger
We're not talking about the chatbots. We're talking about AI agents. We're talking about systems that are autonomous. I think in 10 years, if things go well, we will look back at this moment and view it as a moment of kind of collective insanity and be like, wow, can you believe that we were ever doing that? That we were racing to build this technology that we knew had a massive chance of replacing us and was going to completely disrupt our society in all the other ways that you mentioned. One of the main reasons I am optimistic is because in my time in the field, I've seen this go from an issue that nobody was talking about to being more and more understood and accepted by not just, you know, the research community, but policymakers, the public.
Stephen K. Bannon (Ad/Promo Voice)
This is the primal scream of a dying regime. Pray for our enemies, because we're going medieval on these people. I got a free shot. All these networks lying about the people. The people have had a belly full of it. I know you don't like hearing that. I know you try to do everything in the world to stop that, but you're not going to stop it. It's going to happen. And where do people like that go to share the big lie? MAGA media. I wish in my soul, I wish that any of these people had a conscience. Ask yourself, what is my task and what is my purpose? If that answer is to save my country, this country will be saved.
Joe Allen
War Room, here's your host, Stephen K. Bannon. Good evening. I'm Joe Allen, and this is War Room: Battleground. We talk a lot about existential risk in artificial intelligence. Sometimes we discuss it in terms of human action, humans using machines. What if a dictator uses algorithms to monitor the communications and even the thoughts of a population, and then uses those thoughts and those communications to subdue his own people? What happens if a rogue actor uses the expertise provided by an AI system to create a bioweapon or any other kind of improvised weaponry? What if the US or China develop armies of humanoid robots and drone swarms in the skies, and deploy these autonomously against soldiers or even citizens? What happens if both do this? On the other side of this, the more extreme wavelength, you have the idea that artificial intelligence itself could be put in control of these systems and, by its own decision-making capacity, begin to produce propaganda to subdue the population, or perhaps to unleash a bioweapon to weaken or kill a population or the entire human race. What happens, these thinkers ask, if AIs take control of autonomous drone swarms and exterminate some or all of the human race? Now, these are Terminator vibes: wake up tomorrow and the robots kick in your door and drag you away. But there are more subtle scenarios that are proposed. Among the most plausible is gradual disempowerment. What happens if human beings gradually cede control to the machines? They do so on an economic level, jobs being displaced slowly but surely until humans are rendered obsolete. What happens when human beings deploy AIs for culture and then eventually have completely lost the capacity to express themselves, to persuade their fellow humans on a cultural level? What happens if we cede control of the state, bit by bit, to an algocracy? This idea of gradual disempowerment is put forward by Professor David Krueger. 
David Krueger is the CEO of Evitable and a researcher at Mila in Montreal. David, thank you so much for joining us here.
David Krueger
Yeah, thanks. Thanks for having me, Joe.
Joe Allen
So, David, the last time I saw you, you were at the Bernie Sanders event. I was going to say rally, but it was pretty subdued. So you were at the Bernie Sanders event discussing AI. Can you just give me an impression of how that was received? You had a lot of fans showing up for autographs afterwards. How was your message received there?
David Krueger
Yeah, I think that event went really well, and I'm so glad that it happened and grateful to Senator Sanders for really talking about the elephant in the room. We're building AI systems that are going to be as smart as and smarter than people, and we don't have any plan for how to keep them under control or keep them from replacing us. So, you know, that's really the basic picture that basically no other politician is talking about as directly as Bernie Sanders. And the only way that I think we can stop that from happening is to make sure that not only American companies don't build this thing, but also Chinese companies, also European companies. You know, it really needs to be a global thing. So that's why we also had these researchers from China there. And, you know, there's a lot of agreement among researchers that AI has these massive risks and that we should at least be regulating it. I personally think we shouldn't be building it at all right now.
Joe Allen
I couldn't agree more. It was funny to me. You know, a lot of people were flipping out about the doomers. Doomers, I guess, being you and Max Tegmark collaborating with the Chinese in order to subvert the US Government. Now, Bernie, I won't say that he is a total commie or anything, but Bernie would maybe be a little suspect on that front. However, listening to the Chinese researchers who were there, well, there via Zoom, they seemed a lot less concerned about the dangers, especially the younger gentleman, pardon me if I can't remember or even pronounce his name. But it's interesting to me. This narrative is that U.S. and Canadian doomers are collaborating with China to subvert AI innovation. But in China, the narrative isn't really as gloomy, by and large. Would you agree with that?
David Krueger
I don't know. It's hard to tell. I don't have my finger on the pulse there as much. I will say, first of all, the whole collaborating-with-China thing is just really silly. It's ridiculous. I mean, this is just a conversation about the risks of AI. There was no scheming or, like, oh, let's work together, and it's public. You can go and watch the thing. So this is just the kind of dialogue that we should be having. I mean, even if you think China is the worst nation to ever exist and our mortal enemy, we talked to the Soviet Union, to Russia, all throughout the Cold War. The idea that you just shouldn't talk to your enemies when you face a common threat is ridiculous and stupid. Yeah. In terms of the vibes of Chinese researchers, you know, the Chinese government has been, I want to say, regulating AI more aggressively than anywhere, except maybe Europe. And they've also said publicly that they want, you know, more international cooperation and stuff. Now, I don't know entirely what to make of that. Again, a lot of people say, well, you can't trust anything they say. I wouldn't say let's just trust them on their word, but, you know, I think it's some sign that they have some appetite for this. When I went to China three years ago to speak to researchers there, one thing I found is that the attitude, I think, is very different from here. So in both places, researchers agree we need to solve the safety, security, alignment, and control problems. You know, we don't understand the systems; there are technical problems we need to solve. In the US, it's like, we need to do that because if we don't, the government's not going to do anything, and then we might all die. Right. We might lose control. In China, it's more like, if we don't do this, the government isn't going to let us build the systems we want to build. That was kind of the vibe I got there. 
And certainly their government is, I think, more worried about AI disrupting their social order, which they obviously want to keep very controlled.
Joe Allen
Yeah. My impression is that while I'm not trying to give a whole lot of credit to the CCP by any means, at the very least, they've taken the problems with child safety and other elements of AI and digital culture more seriously, at least on a regulatory basis. Now, at the same time, they openly use algorithmic systems to scrape up and analyze the population's behavior and use it to suppress them at every turn. So it's a mixed bag, to say the least. And in no way, shape or form do I want the US to end up like China. But I do think that the whole notion that you can't talk to people and that talking to people somehow means that you're in cahoots with them, I just find that to be completely absurd. I mean, you could argue that you and I are in cahoots, but, you know, until I subvert you. All right, so this idea, I think that when you look at existential risk or catastrophic risk in general, just the risk of AI, the conversation naturally does veer towards these notions of sudden annihilation. You know, you wake up one day and the AIs have taken over.
David Krueger
Or you don't wake up.
Joe Allen
Or you don't wake up. Yeah, the robot has put the pillow over your face while you were asleep. The notion of gradual disempowerment, I think, is really compelling because, for one, it shows kind of the continuity of AI development and deployment with other technological developments and deployments. So TV, Internet, smartphones, social media, all these were gradual processes. Looking back, it seems like they happened overnight, but they were gradual processes, and they're not complete. It's not like everybody's adopted them. The same goes for gradual disempowerment. I find it to be very persuasive because of its subtlety. So if you would, could you just walk the audience through at least a brief overview of the six principles that you put forward in the original paper and the three sectors of society you focused on: the economy, the culture, and the state?
David Krueger
Yeah, sure. Yeah. I think you're not the only one who finds this a lot more compelling. Many people I talk to, I think, are very skeptical that AI poses a risk of human extinction until we start talking about it this way. So they're like, the rogue AI Terminator stuff, I just don't buy that. And I'm like, well, answer me this: do you think governments are going to build autonomous weapons if other countries are doing it? And, well, yes. And then do you think we're going to have some sort of international treaty to not build those weapons? Like, I don't know. Probably not. Seems like it's kind of, you know, anarchy out there. So, you know, we're going to be going there with AI by default, and it might happen pretty gradually. But all of the scary things that people are worried about with AI, I feel like, okay, maybe not literally all of them, but if it's technically possible, we may well do it. So gradual disempowerment, it's kind of an idea that has been floating around in some form for a long time. Like I said, when I talk to researchers, and I've been doing this for over a decade, this is often where I go in order to convince them to take these risks seriously. But this paper was really trying, for sort of the nth time, to get those ideas out there on paper in a way that would shift the conversation and bring more attention to this, which is kind of a neglected form of risk. And so, like you mentioned, there's the cultural, economic, and political disempowerment that we talk about in this paper. The economic one I like to start with because I think it's the most obvious; everyone's already talking about it: is AI going to take all our jobs?
Joe Allen
Right.
David Krueger
And you know, I think the long-term answer is yes. Right. If we keep building more powerful AI systems, they will be economically outcompeting humans, and then we'll need some sort of different way of organizing society. I've heard people talk about a government jobs guarantee or something like that, which would be really the only kind of thing that would allow people to keep their jobs. And then people also talk about universal basic income. I don't like either of these solutions, because at the end of the day, even if it's a jobs guarantee, it's a government handout. Right. And I don't think we want to be reliant on government handouts to put food on the table.
Joe Allen
Certainly the last few decades have shown that while welfare can be a useful safety net if you're on hard times, it does not lead to social empowerment or political empowerment. It really degrades people's lives, their societies.
David Krueger
And you know, it can change anytime. If the government is the only way that you're surviving, the government can just pull that away at any time, and then you can't survive anymore. So that's why we have to talk about the government side of this as well, the political disempowerment. In the same way that AI is going to be competitive with our jobs, it's going to be competitive with politicians for their jobs as well, and for policymakers more broadly, everyone in politics. And we already see this. There's been, man, I think it's Bulgaria. They, like, appointed an AI minister.
Joe Allen
Yeah, it's kind of sensationalist, but it's definitely a signal of where things might go.
David Krueger
Yeah. And politicians are using AI to write their... Albania, I think it was. Oh, yeah, maybe. Sorry. Yeah, yeah. I feel really bad for... I probably shouldn't have to.
Joe Allen
The nation of Bulgaria. Bulgaria is a country. Right. It's a place. Right.
David Krueger
And the people who live there.
Joe Allen
Yes. To the people of Bulgaria, we apologize. Albanians. Get your stuff together.
David Krueger
Yeah. And so, you know, if people are really replaced, not just in the workplace but across the board, throughout society, then I just don't see that we're going to continue to be able to steer the future and have any control. And that's really concerning. The cultural part, the last one, is maybe a little bit non-obvious at first. But what I think about when I'm thinking about cultural disempowerment today, right now, is all the people having relationships with chatbots, where, you know, they will do a lot of things just because the chatbot told them to, basically.
Joe Allen
Including violence.
David Krueger
Including violence, yeah. And then the other thing I think about is, and this might seem a little bit out there for some of your listeners, but you know, in the bubble that I'm in, tech and AI, and now I moved to Silicon Valley, or Berkeley, recently to set up this nonprofit, there's a lot of people who really think that AI is, like, the next phase of evolution.
Joe Allen
And the War Room posse is well familiar with that narrative. But please.
David Krueger
Yeah, so they think that, you know, AI is like a person and deserves rights and deserves moral consideration and all of that. And I think that's really dangerous where we're at today, because we don't want to start treating AI as, you know, another being deserving of rights, because then, if it is more competitive than us, we'll have no protections left, basically. And I think this is a deep philosophical question that we do want to think about more, but it's really not somewhere we should even be going right now.
Joe Allen
Yeah, I think the intention... So if you do play it out to the very end, right, play out the narratives that you hear from Anthropic, from OpenAI, certainly Elon Musk, who frames it as a warning but continues to pursue it, and a bit more subtly from Google, that narrative ultimately leads to exactly what you're talking about. They don't always talk in terms of immediate annihilation; they bring up the possibility. But without a doubt, inevitably, if their aims come to fruition and they're able to replace all the coders, all the white-collar jobs, all the blue-collar jobs, if they're able to first improve the government through algorithmic efficiency, then slowly but surely the politician becomes a sock puppet for the algorithm, and then maybe the politician just becomes the algorithm. You just have some kind of deep-faked Josh Hawley talking about the dangers of AI. A deep-fake Bernie who lives for centuries. Yeah, these are real issues. And the cultural issue, I think, is probably the one that resonates the most with most people right now, because that is happening. Obviously, people know other people who are in love with their chatbots, or at the very least rely on them for everything. Now, you talk about the interrelationship of these things in the paper, too. Could you give some sense of, like, if you just take one kind of path for how cultural disempowerment would lead to political and economic, or any such path? You go through a lot.
David Krueger
So, yeah, I guess, you know, we talked about, like, if AI is doing all our jobs, then we're like, well, we need the government to sort of step in. We still have political power, so maybe we can have some government program that keeps people alive, or maybe just says, no, people are still going to have jobs, we're not going to let AI do all the jobs. Whatever it is, you might think, okay, we can rely on the government here. But then if the government is itself increasingly composed of AI, and increasingly the decision making is being done by AI, then humans might be disempowered there as well. And maybe we still have a vote, but we're all just so controlled and manipulated by propaganda that essentially you can predict and control how people are going to vote so well with AI that it's AI itself that's determining the outcomes of the election, rather than our own intuition and decisions and judgments and values. And I'm glad you mentioned the sock puppet thing as well, because that's something that people are often saying: why don't we keep a human in the loop here? Right? So AI can give advice, we can use it as a tool, but humans are always going to be in charge, and that's what we want. Having a human in the loop sounds great, but it's harder in practice to make that human really a meaningful part of the decision making. And that can happen in politics, and also broadly throughout culture, where everybody's just deferring to AI all the time for making all their decisions, maybe the decisions about how to vote as well, you know.
Joe Allen
Yes. On both ends. So both, you have the politician basically repeating propaganda that AI generated and the public then asking the AI which AI generated propaganda is superior.
David Krueger
Yep, yep, yep. Yeah. So that's... and then ultimately, like I was saying, maybe we end up giving the AIs rights. Or another thing that I think is a pretty disturbingly realistic scenario in my mind is that we get chips that go in your brain. That starts out, it's like, for therapeutic purposes or whatever, but next we're using it to augment ourselves. Next we're using it to connect to the Internet and other people in some hive-mind thing. After a few years, it's like, maybe this chip should be bigger, and there's not really space in there. Why don't we just take out this part of your brain? And then the next year it's like, this part isn't really that useful anymore. Like, let's just make the whole thing a chip.
Joe Allen
And then you can really put those bodies, those headless bodies that they're developing in Singapore to use.
David Krueger
Yeah. And, you know, this is just really disturbing. And even the small version, I think by default we should expect that these chips are going to be on the cloud and controlled by big companies and government in a way that we don't really have much legibility into. And it's not very trustworthy. It's very dangerous, I think. And that's another form of gradual disempowerment, where, you know, it might take a long time to go from this little chip in your brain to something that's increasingly controlling your behavior, but that also might increasingly be a requirement to get certain kinds of work. Right. It's the same way you kind of have to have a cell phone now; it's pretty hard to navigate society without one. There's increasingly a need to, you know, give your identity every time you buy a sandwich or whatever. So we see this direction of travel, and I think that's very dangerous.
Joe Allen
And people oftentimes have criticized us at the War Room, and other people discussing these technologies, saying, oh, well, that will never happen. Five years ago, that was constant. Right. Even as the pandemic was ongoing, and you heard Klaus Schwab at the World Economic Forum waxing poetic about the rule of AI and brain chips and all this. But even then you already had a lot of programs: Blackrock Neurotech was being rolled out in universities and other experimental labs, so you had the first real BCIs, brain-computer interfaces, coming online. And then the first, well, they weren't the first, but mass deployment, you would say, in the dozens. And now you have Neuralink, run by a guy who openly talks about how hundreds of millions of people will need to be chipped to keep up with the AI. And at the beginning of the pandemic, you had Charles Lieber at Harvard, and he was developing neural lace, a more subtle, injectable brain-computer interface. And he got busted for, I think, taking money under the table from the Chinese, and it was just reported that he's now in China developing his brain-computer interfaces.
David Krueger
Well, you know, if the Chinese are doing it, we're going to have to, right? To compete. No, but you're way deeper on this stuff than me. But yeah, I'm kind of just, you know, seeing the possibility there. And yeah, I mean, Elon, I guess, has said stuff like that. Right. He's very big on the merge-with-the-machines future. Sounds great, right?
Joe Allen
And, you know, you sound kind of... Before we hit the break, I just got to... I've got to level an accusation at you.
David Krueger
Okay.
Joe Allen
You sound almost as Luddite as I do. But is that the case? Would you do away with all AIs tomorrow if you could, or are you seeing this all in a bit of a different light?
David Krueger
Yeah, no, I don't think I'm as extreme as you. I mean, first of all, I'm just like, well, what counts as AI? There's kind of a fuzzy boundary there. Like, Google search, and just computer vision systems that recognize handwriting, these sorts of things, translation, I think are pretty obviously useful, and I wouldn't get rid of those. But, you know, a lot of my hesitancy and skepticism here is not about the technology itself. I think AI can do all sorts of great things. It has vast potential as a technology in lots of areas; medicine is a classic one people talk about. But it's about society's readiness to absorb these advances as fast as they're coming. And it's about the way they are being developed, by tech billionaires who have very strange values, and kind of the lack of accountability and transparency in the process. We're just rushing towards this thing, and it's completely insane to be racing so fast to build this with all the risks that it poses.
Joe Allen
So you don't think we're ready for mass deployment of smarter than human AI?
David Krueger
Oh hell no.
Joe Allen
Are we ready for mass deployment of not smarter than human, but seemingly intelligent AI as we have now?
David Krueger
Yeah, that's a more interesting question. That's a tricky one and I don't have a strong intuition about that. I think it's hard to say.
Joe Allen
Yeah, you've worked on policy as well as the more theoretical elements. And when we come back, I'd like to talk a bit more about that, because we're at a place where this issue, or these issues, are basically nonpartisan, or bipartisan, or cross-partisan. It's not something that only left-wingers or right-wingers or independents are concerned about. But speaking of gradual disempowerment, you do not want to be disempowered, whether gradually or rapidly, by the dollar. The dollar is tanking. When the dollar's convertibility into gold ended in 1971, gold was fixed at $35 an ounce. Fast forward to today, and the US dollar has lost over 85% of its purchasing power, just like your brain will lose 85% of its value come the artificial general intelligence. Gold, on the other hand, has increased in value by over 12,000%, just as your brain will after the EMP goes off. That's why central banks are buying gold at record levels. Text Bannon to the number 989898 to join Birch Gold's Learn and Earn precious metals event by April 30th. Text Bannon to 989898 and get your gold for your human brain.
Stephen K. Bannon (Ad/Promo Voice)
This year marks a critical moment for our country, as the opposition grows more aggressive and more unapologetic. The fight now reaches into the everyday decisions we make. Patriot Mobile has been standing on the front lines fighting for freedom for more than 12 years. They don't just deliver top-tier wireless service. They are activists, like me and like you in the War Room posse, who truly care about this republic and saving our country. Patriot Mobile offers prioritized premium access on all three major US networks, giving you the same or better coverage than the main carriers themselves. That means fast speeds and dependable nationwide coverage backed by 100% US-based customer service. They also offer unlimited data plans, mobile hotspots, international roaming, and more. With a simple, seamless activation, you can switch in minutes, keep your number, keep your phone, or upgrade. And here's the difference: when you switch to Patriot Mobile, you'll be part of a powerful stream of giving that directly funds the Christian conservative movement. Take a stand today. Go to patriotmobile.com/bannon or call 972-PATRIOT. That's 972-PATRIOT, and use promo code Bannon for a free month of service. Don't wait, do it today. That's patriotmobile.com/bannon, or call 972-PATRIOT and join the team today. The dollar's convertibility into gold ended in 1971. Gold was fixed at $35 an ounce. Well, fast forward to today, and the US dollar has lost over 85% of its purchasing power. Gold, on the other hand, has increased in value by over 12,000%. That's why central banks are buying gold at record levels. That's why major firms like Vanguard and BlackRock hold significant positions in gold. And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group. But it starts with education. Birch Gold just announced their Learn and Earn Precious Metals event. This free online event rewards you for learning the basics of investing in precious metals. 
Sign up to get free silver on your next purchase. Get even larger incentives as you go. The more you learn, the more you can earn. But you must act now, as this special event only runs through April 30th. The dollar lost its anchor in 1971. You don't have to lose yours. Text my name, Bannon, B-A-N-N-O-N, to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event by April 30th. Text Bannon, B-A-N-N-O-N, to 989898, and do it today. Fellow patriots, the Federal Reserve has betrayed America for over a century: printing fiat, inflating away your savings, serving globalist masters. But President Trump is ending it. President Trump is wielding a 12-year-old law to reclaim control from the rogue Federal Reserve. He's replacing Jerome Powell, slashing rates, igniting America's re-industrialization. Now, this is not theory. Government-backed industry plus low rates unleashes super cycles. History does repeat. Gold's already exploding. Miners are up over 400% in the last year. What Rickards is calling Trump's gift is wealth for American patriots, not global handouts. Now it's America's turn. Jim Rickards, former CIA and Pentagon veteran, says act now. Go to insider2026.com, that is insider2026.com, to get Jim Rickards' Strategic Intelligence newsletter today. Strategic intelligence based upon predictive analytics. It's what chairmen and CEOs throughout the world read, and you should too.
Joe Allen
War Room, here's your host, Stephen K. Bannon. Welcome back, War Room posse. I'm here with David Krueger, CEO of Evitable and researcher at Mila. David, you and I have met a number of times in person in this crazy world of digital interaction. First in San Francisco at the Curve at Lighthaven, and then again at the Future of Life Institute's event around the pro-human declaration and the composition of it. So we both have at least some common touch points or reference points in this culture. And the rapid extermination narrative is really dominant. I'm curious, with your thesis, do you get a lot of pushback? Do you find yourself in a lot of arguments about this, or is it just a friendly exchange between gentlemen?
David Krueger
Constant arguments? It's gotten more polite over the years. So, you know, I started in this field in 2013, and it took me almost two years to find any other researchers who were worried about this stuff.
Joe Allen
Wow.
David Krueger
And so I had, you know, years of conversations with people just kind of like mocking me and laughing in my face kind of thing when I talked about it.
Joe Allen
Just because of the gradualness of it or just because you were talking about AI disempowering people at all?
David Krueger
Yeah, just talking about existential risk, the risk of extinction, generally, basically. I think a lot of people at that time were kind of like, well, I don't know. There was a lot of skepticism about whether we would even get to AGI anytime soon, which... you know, we're going to get there eventually, in my mind. And so we've got to grapple with these questions one way or another. But, yeah, it's gotten a lot better. The researchers are much more willing to grapple with these risks these days. But, yeah, we kind of talked earlier about the other kinds of groups here and ideologies. So there's some that are very, very into, you know, go as fast as you can, and, yeah, maybe humans will survive, maybe not, but that's not the important thing here; the important thing is progress and technology. And so those arguments are going to keep going, you know, indefinitely, I guess. But I used to have more arguments about just, like, is this a thing at all that we should be worried about, or that might happen? And that feels like a much more subtle question these days. I'm an argumentative guy, so I still have big fights.
Joe Allen
Yeah, well, you know, that comment, you brought it up at the Bernie event: on the one hand, there isn't enough awareness around the problems of AI, but on the other hand, over the years, it has exploded onto public consciousness. It's no longer the Terminator; it's xAI, it's Google, it's Anthropic. In that, do you find? I mean, you are interacting with people in these corporations. A lot of them worry about some of the same things you do. What's your read on that? Like, you have people at Anthropic who are very intently communicating their worries, for whatever reason. Elon Musk is very much the same. Do you find a lot of reception to your ideas there, or...
David Krueger
Yeah, you know, I mean, I always feel like I should talk to these people more, because I think they're basically making a mistake, in my mind, by working at these companies and continuing to pursue the technology with full awareness of its risks, because they do believe that it's just inevitable. And what I've seen, like we talked about, is just more and more awareness and concern over time. The direction of travel is very clear; I just don't know if we'll get there fast enough. But you have hundreds of people who are worried about this and go and work at AI companies instead of doing what I'm doing: talking to the public, talking to policymakers, saying, hey, this is a crisis, we should stop right now. We could be, I think, raising the awareness so much faster if people working at these companies would, like, say: You know what? I quit. I don't want to work on this thing that could kill everyone anymore. I don't want to work on taking everyone's job. This is not okay ethically. So, yeah, when I talk to people, this sort of stuff often resonates, and I think a lot of people do feel a lot of doubt and guilt and uncertainty about their choices to work at those companies because of this stuff.
Joe Allen
Do you think that maybe some of the resistance is to the extreme end of it? I've spoken to Nate Soares, Holly Elmore, John Sherman, a lot of people who talk about x-risk, and something that, to me... yeah, I bring it up from time to time, I bring it up on the show quite a bit: the extremity of extinction could perhaps overshadow the more immediate concerns that we have now. Even in the idea of annihilation, if it annihilates a thousand people or a million people, it's still catastrophic, even if it stops there. Or in the idea of disempowerment, if you just get a partial realization of that disempowerment, you've already made a horrendous mistake as a society. So is that maybe, rhetorically, part of the problem? That people are like, oh, it's not going to kill everybody, so I'm not going to worry about it. But what if it kills some people? You know, what if it kills your mom?
David Krueger
Yeah, you know, it's kind of different for different people, what they respond to. So I believe in basically telling the truth, being straightforward about my concerns. So I feel like I have to talk about extinction. I have to talk about even the most sci-fi version, where the AI suddenly takes off, takes over. Because I think that's real. I think that's a thing that absolutely could happen. I'm not saying it's going to; I'm not sure. The future is uncertain. We don't understand this technology very well, but we can't rule that out. It's actually, like, shockingly likely in my mind. But a lot of people are going to be more receptive to other things, like gradual disempowerment, or even just unemployment, or the prospect that terrorists or school-shooter types are going to be able to manufacture weapons of mass destruction in their garage.
Joe Allen
Which is kind of already happening, at least in regard to the AIs being associated with it. For instance, in Florida, I think it was Florida State University, that kid was taking instructions from the AI. I think there have been other cases now that have emerged.
David Krueger
Yeah. And the Florida AG is suing OpenAI because of this, which is great. I was on TV this weekend talking about another lawsuit, brought by parents of victims in a shooting in Canada. Same story. But yeah, those are still shootings. And just imagine if next time it could be a bioweapon, it could be another pandemic. We're not quite there yet, but maybe in a year or something we'll have AI that can coach people through that. Man, I lost track.
Joe Allen
You know, I'm curious about this, then. So you have worked on policy, you have a very clear idea of what the threats of this technology are, and you also have at least the beginnings of a plan. Because if there's one thing that gradual disempowerment argues for, it's that we can't move forward without a plan. You have to at least account for this possibility and then have some sort of plan to stop it or mitigate it. So what do you see right now, in the US, in Europe, or in China? What do you see that's promising in regard to a political response to the threat of AI?
David Krueger
Yeah, my, my plan is shut it all down, basically. So get rid of the advanced AI chips, get rid of the factories that make those chips. I think that's the simple and obvious solution. Maybe we can improve on that.
Joe Allen
I don't know how realistic, but it appeals to me. Shut it down. Great.
David Krueger
And I think the most promising signs I see are just more people waking up and realizing how insane this situation is, how big and how urgent the risks are. Because I think that's what it's going to take, right? To make something like that happen, we're going to have to start treating this like it's as big a deal as nuclear weapons, or bigger.
Joe Allen
Well, you see right now, I mean, at the moment, maybe by the time this airs, things will have changed quite a bit. But at the moment, you have a response from the Trump administration to the dangers of AI. It's been all over the news today that CAISI, the Center for AI Standards and Innovation, under the Commerce Department, will be the main interface between the tech companies and the US government and will begin testing frontier models before they are deployed. At least there's an agreement with, at the moment, Google, Microsoft, and xAI. So, do you think... there's a lot of questions about, I mean, CAISI has a brand new director. Of course, the Commerce Department is run by Howard Lutnick, which is a questionable choice in a horrendous situation for many reasons. But do you see this as promising? Because I don't think it's necessarily a coincidence that just last week you got Max Tegmark on here talking about this, you got you and Tegmark in the Capitol talking about these problems and the lack of response, and then, lo and behold, we now have one. Does this seem promising to you, at least in the seminal or the nascent phase?
David Krueger
Yeah, I mean, definitely, it's a good sign. And I think probably this has more to do, much as I'd like to feel responsible, with Mythos and the cybersecurity threats from that model, which I think are huge and really caught most people by surprise. And I wish people would stop being caught by surprise; we know these things are coming down the pipeline. In terms of this response, testing is obviously a good thing. I don't know if they're going to do the best job of it. I don't think... you know, it's not adequate, right? We don't know how to do testing well enough. So there are a lot of false solutions that people are offering and will offer to this problem. And as somebody who's been in the field looking at the research for a long time, I can tell you: we don't know how to test systems. We don't know how to align them, to give them our goals or our values. And we also don't know how to tell what they're thinking and how they might behave. People are working on all those things, and we make progress, but these are still open research problems, so we can't count on that.
Joe Allen
And when you say we don't know how to test them, do you mean that the evaluations we see now, from the Center for AI Safety, or Anthropic's internal testing, or Apollo, people like this, that the measurement of the capabilities is not accurate? Or do you mean something else by that?
David Krueger
I think we don't know how accurate they are. And you also want to know not just the capabilities but also what people sometimes call the propensity: what is the system going to decide to do? Is it aligned? What kind of values does it have? What are its goals? And that's a lot harder to test for. In terms of the testing that's happening right now, this is one of the things that the UK government agency I worked at did.
Joe Allen
What was the organization?
David Krueger
It was called the AI Safety Task Force at the time. Now it's the AI Security Institute, formerly the AI Safety Institute, in the United Kingdom. But looking at the state of play right now, with the last couple of model releases, they were like: we sort of tried to test it, but at the end of the day we kind of just went with vibes, because they felt their tests weren't meaningful enough and the models are maxing out the capabilities. And then the other thing that I think is really important for people to realize is that the AI can now tell, quite reliably, that it's being tested. And once the AI knows it's being tested, you have to wonder: is it doing the right thing because that's what it wants to do, or because it knows that's what we want it to do and it knows that it needs to pass the test?
Joe Allen
So in essence, it seems like what you're describing is a situation where you can test the capabilities and get a surface level idea of what's going on. But beneath that surface, there's a whole lot happening in these systems that you just simply can't tease out.
David Krueger
Yeah, 100%. And the capabilities might be more than what we are able to observe and elicit. That's another really important point. People think that we can know what these systems are capable of, but there have been a lot of times when you prompt the system a little bit differently, or you set up something else around it to help it do its job, and it can suddenly do the task way better. So we don't even fully know what the systems are capable of.
Joe Allen
You know, I read your recent essay, kind of the retrospective and a few musings post-publication of Gradual Disempowerment, and I was very happy that you gave me... you threw me a bone at the very end, the very last point being that maybe human beings will become dumber and dumber. You don't think that that's really all that big of a deal, but hey, might as well mention it. That's the biggest deal. Come on.
David Krueger
Yeah, I don't know, because, you know, people make the analogy with calculators, where it's like: I think it's good that people can do arithmetic, but we don't have to be that good at it anymore because we have calculators.
Joe Allen
Sure we do.
David Krueger
Yeah.
Joe Allen
Don't tell them that in China. I mean, that's why they're kicking our asses in the universities. Well, you know, I just couldn't let you go without getting that one last jab in. On the one hand, I appreciate you throwing us a bone on the inverse-singularity thesis, that as humans get dumber and dumber, the machines will seem smarter and smarter. But in general, again, just to reiterate, I think that your work on AI risk in general has been very, very persuasive, very, very thorough. Even if I don't know that we'll be able to do it, I would love to see it all shut down too, maybe for different reasons. And yeah, I really, really appreciate everything you've done. I appreciate you coming on here.
David Krueger
Thanks. Yeah, I appreciate that and it's been great.
Joe Allen
Let the posse know where they can find you: social media, your website, your Substack.
David Krueger
So Evitable is easy to find: evitable.com. I'm David S. Krueger, that's K-R-U-E-G-E-R, on Twitter. And I have a blog called The Real AI on Substack. So those are great starting points.
Joe Allen
Again, David, appreciate it, brother.
David Krueger
Absolutely.
Joe Allen
And once again, War Room Posse, in case you have forgotten: the central banks are buying gold at record levels. That's why major firms like Vanguard and BlackRock hold significant positions in gold. And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group. Think of physical gold as being analogous to a biological brain, and think of digital currency as analogous to AIs. The AIs take over; the biological brain plummets. What you need, what you need is gold. Physical gold. So text Bannon to the number 989898. That's Bannon to the number 989898, and learn how gold can protect your assets. That is Bannon to the number 989898. Now, War Room Posse, as I see you off here, I want to talk about, just for a moment, a concept of gradual disempowerment that goes to mythological levels. That is the idea of Moloch: the analogy for systems that are completely either out of human control or against human values. This was an idea first brought up by Scott Alexander of Slate Star Codex, and it was taken from the poem Howl by Allen Ginsberg. And however much you think that Allen Ginsberg was a degenerate weirdo, I think it is undoubted that his passage on Moloch in the poem Howl is as relevant to our society today as it was then. And hey, maybe it takes a degenerate to truly understand the essence of a Canaanite demon and its machinic counterpart. So, War Room Posse, I present to you: Moloch.
Allen Ginsberg (Poem Reader)
What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination? Moloch. Solitude. Filth. Ugliness. Ash cans and unobtainable dollars. Children screaming under stairways. Boys sobbing in armies. Old men weeping in the parks. Moloch. Moloch. Nightmare of Moloch. Moloch the loveless. Mental Moloch. Moloch the heavy judger of men. Moloch the incomprehensible prison. Moloch the crossbone soulless jailhouse and congress of sorrows. Moloch whose buildings are judgment. Moloch the vast stone of war. Moloch the stunned governments. Moloch whose mind is pure machinery. Moloch whose blood is running money. Moloch whose fingers are ten armies. Moloch whose breast is a cannibal dynamo. Moloch whose ear is a smoking tomb. Moloch whose eyes are a thousand blind windows. Moloch whose skyscrapers stand in the long streets like endless Jehovahs. Moloch whose factories dream and croak in the fog. Moloch whose smokestacks and antennae crown the cities. Moloch whose love is endless oil and stone. Moloch whose soul is electricity and banks. Moloch whose poverty is the specter of genius. Moloch whose fate is a cloud of sexless hydrogen. Moloch whose name is the Mind. Moloch in whom I sit lonely. Moloch in whom I dream angels. Crazy in Moloch. Sucker in Moloch. Lacklove and manless in Moloch. Moloch who entered my soul early. Moloch in whom I am a consciousness without a body. Moloch who frightened me out of my natural ecstasy. Moloch whom I abandon. Wake up in Moloch. Light streaming out of the sky. Moloch. Moloch. Robot apartments. Invisible suburbs. Skeleton treasuries. Blind capitals. Demonic industries. Spectral nations. Invincible madhouses. Granite. Monstrous bombs. They broke their backs lifting Moloch to heaven. Pavements, trees, radios, tons, lifting the city to heaven which exists and is everywhere about us. Visions, omens, hallucinations, miracles, ecstasies, gone down the American river. Dreams, adorations, illuminations, religions, the whole boatload of sensitive bull. Breakthroughs over the river. Flips and crucifixions. Gone down the flood.
Highs, epiphanies, despairs. Ten years' animal screams and suicides. Minds. New loves. Mad generation down on the rocks of time. Real holy laughter in the river. They saw it all. The wild eyes. The holy yells. They bade farewell.
Stephen K. Bannon (Ad/Promo Voice)
Okay, can we talk about what's really happening right now? New data shows financial stress is at an all-time high. Millions of Americans are at a breaking point: debt maxed out, no extra money, no room to breathe. And this isn't just lower-income households anymore; middle-class families are hitting their limits too. This isn't about reckless spending. Everyday people are running out of options. So if debt has been weighing on you, you're not alone. And when it comes to debt, waiting usually makes it worse. Interest piles up. Minimum payments keep you stuck. You don't need another loan, and you don't need bankruptcy. You need a strategy. That's why I like Done With Debt. They build a smart, personalized plan around you, with the experience and know-how it takes to get you the biggest reductions possible, whether you owe $10,000 or much more. Done With Debt has one clear goal: lower what you owe so you keep more of your paycheck every month. It's very simple. Let's repeat that: lower what you owe so you can keep more of your paycheck every month. Start with a free consultation. It just takes minutes. Share your situation, your tale of woe, and find out what's possible. You do not have to stay stuck. Go to donewithdebt.com. That's donewithdebt.com. And do it today.
Podcast: Bannon’s War Room Battleground
Episode: 1007 — David Krueger on AI: Humanity Dies by Gradual Disempowerment
Date: May 11, 2026
Host: Joe Allen (with Stephen K. Bannon)
Guest: Dr. David Krueger, CEO of Evitable, AI researcher at Mila in Montreal
This episode centers on the risks of artificial intelligence, focusing on Dr. David Krueger’s "Gradual Disempowerment" thesis—the idea that humanity’s greatest risk from AI may not be sudden, catastrophic annihilation, but a slow, steady erosion of our power, agency, and purpose, across the economy, state, and culture. The conversation explores how these forms of displacement could unfold, policy implications, and public attitudes toward the AI trajectory.
The episode ends with Allen referencing the Moloch metaphor—taken from literature and popularized in tech debates—to describe systems, including AI, that operate against human interests, driven by impersonal forces of optimization and competition (see Ginsberg reading, [48:13]).