A
Hey, it's Gregory Warner. And before we get to today's show, I just wanted to tell you all about a podcast that you might not have heard of. It's called The Middle. This is The Middle. I'm Jeremy Hobson. If you're just tuning in, The Middle is a national call-in show. We're focused on elevating voices from the middle, geographically, politically. Or maybe you just want to meet in the middle. Each week, longtime journalist Jeremy Hobson and his guests take live calls from all over the country. Hi, my name is Susan.
B
I'm in Sarasota, Florida.
A
Yeah, live calls, podcasts. Crazy, huh? Hey, this is Mickey from St. Louis, Josh from Columbia, South Carolina. And they deal with issues like immigration, restoring civility in politics, and of course, AI.
B
I'm worried about my grandchildren who are graduating from college in the next three years finding jobs because the entry level jobs will go to AI and they won't ever get that experience.
A
I love a good call in show. I love hearing from folks around the country and especially on the middle. You hear from this wide swath of people from the young and the old, the urban, the rural, the right and the left, and you often get a sense for not just what divides Americans, but also what we have in common. If that sounds like a show for you, just search for the Middle with Jeremy Hobson wherever you get your podcasts. Now onto the show.
B
Hi again, this is Andy Mills. And as promised, we are following up our series, The Last Invention, with continued coverage about the state of the race towards our unknown AI future. And before we get to today's episode, I just wanted to say a quick thank you to everybody out there who has been sharing the series, and thanks to those of you who have been writing in with your AI questions and your suggestions for future reporting. One of those suggestions, after our last episode on the politics of AI with Ezra Klein, was for us to talk to somebody on the other side of the political divide. And that is exactly what we've got in store today. And when we went to look for exactly who we could speak to on the other side, what we found surprised us. We ended up talking to somebody who wants to break up the big tech companies, who wants more regulation over the AI labs, not less, and someone who used his influence to help kill the deregulatory bills in Congress last year. And that person is Steve Bannon.
A
We have to be present now for these decisions because these decisions are so fundamental for our species and they can't be put back in the bottle.
B
Steve Bannon is a political strategist. He is a populist and a nationalist. He helped lead Donald Trump's 2016 campaign for president and served in President Trump's first administration. He was a founding member of Breitbart News, but now has a popular daily podcast called War Room. And on that show, he has a reporter named Joe Allen, who covers AI and the dangers that they believe the world faces in the years to come from technology. And Joe was kind enough to join us for this conversation today. Now, I just want to say that no matter what you think of Bannon's politics, where he and his movement are pushing AI policy is going to be consequential, and we think that it's worth your time. Okay, can we just start off by having the two of you introduce yourselves? Just tell us, you know, what's your name, and how is it you describe what you do for a living?
A
Joe, go ahead.
C
I'm Joe Allen. I am the tech editor for the War Room. Steve prefers the transhumanism editor, and I am no transhumanist. My vocation in life is a writer. And everything that I do is to find better material to write about and have some kind of avenue for other people to read my writing.
A
I'm Steve Bannon. I'm food and beverage director of the War Room. I'm a former naval officer, recovering investment banker, and was the CEO of President Trump's first campaign and chief strategist in the White House. And I have a streaming show that is also a podcast. We turn it into a podcast after we stream live, called the War Room. That's on four hours a day, six days a week.
B
Right. And if I was to describe your place in American politics, I think of you as the thought leader of the new right, of the American right-wing populist movement, and that your foundational creed is essentially that the powerful and the elites are currently screwing the working class. Is that about right?
A
I wouldn't say I was the leader. I think President Trump, and there are other thought leaders, but I've been involved in this since the very beginning, since the financial crash of 2008, which kind of awakened me to a bunch of really uncomfortable realities. And so I've been kind of a fire-breathing populist and nationalist, economic nationalist, since then, and have had the fortune to be in a couple of situations where we actually could drive the narrative or put the things we've been thinking about into actual action. So, yes, I believe that the working class, and let's say the lower middle class, in this country are the backbone of this nation, which has been the greatest country on Earth, and they get fucked every day of the week.
B
All right, so what I'm trying to do is figure out where the current state of artificial intelligence is colliding with American politics. And I've noticed that both in the populists on the left like Bernie Sanders and the populists on the right like Josh Hawley, there appears to be a real anger not just towards big tech, but increasingly towards AI. And so I just wonder, from where you're sitting, what do you make of the emerging politics of AI and where do you think that they're headed in the near future?
A
Well, look, it's a fundamental inflection point. We're at the, I think, most fundamental inflection point of Homo sapiens as a species. Just to give you a background, you know, a couple years ago, the Chinese Communist Party put out the list of the key technologies they were going to dominate by 2025. It was called China 2025. And they had artificial intelligence, artificial general intelligence at the top, robotics and regenerative robotics below that, advanced chip design, quantum computing, and CRISPR biotech. That's the top five. And I said, gosh, if you look at the convergence of those five, that's the singularity, and that's Homo sapiens 2.0. And nobody's talking about this. They're looking at China as, like, this is some sort of economic drill. And I said, it's not an economics drill. It's far deeper than that. People don't fully understand these things, and they certainly don't understand the driving motives of the oligarchs. They're at the tip of the spear driving this. And there's a lot of apprehension because there's not clarity, there's not transparency, and there's certainly not accountability. And I think that's why you've seen this, not just interest, but building anger of working class people, both on the left, as you said, with Bernie, and Hawley on the right. And I think it's more than artificial intelligence, although I do think that that's the lead sled dog in this and obviously very dangerous. Our audience is principally working class and middle class. I said, people are going to have to understand this, and we have to get in back of this, and people have to be able to weigh and measure it, because the money is going to be so huge here that they're going to get buried in all kinds of false information or false narratives. And we have to make sure that they understand this. And so the War Room, over the last four or five years, this has been a major part of our program. I knew at the time it was going to be important.
You just got to make it accessible so people understand this. Right.
B
Well, on your show, War Room, when I've listened to you talk about AI and big tech, you often single out the dangers from the people you call the broligarchs, and their vision for what you call, again, techno-feudalism. So who are the broligarchs, and what is techno-feudalism?
A
Well, the key broligarchs are the guys that head up, you know, the Google, the Facebook, Twitter, you know, all those. And they're creations of kind of the progressive Democrats. Remember, Obama made deals with these guys, you know, and it's one of the reasons we're such big fans of Lina Khan.
B
We're huge fans. And just so listeners remember, Lina Khan was the very aggressive anti-monopoly chair of the Biden administration's FTC. Yes.
A
And she's an expert in breaking up large combinations. That's what neo-Brandeisianism is about: you can't have too much private power that can overwhelm government. And with that kind of merger, where they're in the lead, you get the most lethal, most toxic of all environments, which is state capitalism, which is essentially what you have in China.
B
Right. And just so listeners are clear about this, like, you as a big figure on the right and Lina Khan, who's a hero to many on the left, the agreement between the two of you is that you think Google, Meta, that these are modern-day monopolies, right?
A
Well, they're modern-day monopolies that really came from the state. Remember, you have, I don't know, a hundred electric vehicle companies. But you really have Google. And what else do you have? DuckDuckGo and Bing? You have tiny things. You really have Google, you really have Facebook, you really have Amazon, you really have Twitter. And the reason I detest these guys so much, these individuals, not just as oligarchs: they're certainly not populists and they're not nationalists. These guys are all progressive Democrats that just came to President Trump, most of them, 90% of them, after he won the election this last time. And they'll abandon us and leave us as soon as it becomes evident that President Trump is having any difficulty governing.
B
And so just to say it back to you: you think that the heads of these big tech companies and these super wealthy venture capitalists, including those who recently flipped their support from the Democrats to President Trump, people like Elon Musk, you see them as the oligarchs, and you think that their support for Trump is a short-lived convenience.
A
Yes.
B
And you think that they have amassed way too much power and influence in society.
A
Yeah, we have an oligarchy that runs us, a tech oligarchy. And now, given the valuations, $5 trillion for Nvidia, and, you know, the combination of capital, and particularly speculative venture capital and hedge fund capital, with this group makes them even more dangerous. People should understand that the old antitrust breakup of these big, almost state-capitalistic, massive combines in the private sector, whether that's in agriculture, Wall Street, technology, has really gone by the wayside, given the power that these guys have in the Democratic Party and, to be brutally frank, in the Republican Party.
B
And what is the techno feudalism future that you believe they're trying to usher in?
A
Techno-feudalism is a concept that came out of a Greek politician a number of years ago, but it's a very powerful, I think, construct to use for what's happening today, particularly in the United States and around Silicon Valley. These people don't believe in the nation-state. They believe in networks. They have a couple of theoreticians that actually say the nation-state has passed, and you have to have this kind of network effect, each one being the railhead of a different network or a different kind of tribe. And it's very feudalistic. They believe that essentially they're feudal lords and everybody else is a serf. I equate this younger generation, under 30 or 35 years old, to nothing more than Russian serfs. You don't own anything and you're not going to own anything. That's why the average age of the first-time buyer of a home is 40 years old, where it used to be in the low 20s.
B
Well, Joe, let's turn to you. I know that your reporting has largely focused on the transhumanists, and we mentioned in our series the transhumanists and their influence in the early days of Silicon Valley and their prevalence in both the doomer and the accelerationist camps inside of the fight over AI. But how do you define the transhumanists, and where do you see them fitting in the current race towards AI and the implications it has for American society and therefore American politics?
C
Well, I think the best definition of transhumanism comes from Max More, who is really the reason we even use the term at all. And that's to use science and technology to go beyond the biological limitations of the human being. You could say that that's everything from cars to aspirin, you know, tennis shoes. But obviously someone like Max More has something much more dramatic in mind, and it really branches out into a wide variety of versions of transhumanism. The idea goes back to the early 20th century. You had Teilhard de Chardin, the paleontologist and heretical priest, who conceived of humanity as creating a noosphere around the planet. Many people have looked back to it and said, oh, that was a really brilliant prediction of the Internet, and I think it really was. But the noosphere is the idea that humanity is creating a mental sphere on top of the biological. And he talked about the Omega Point towards which all of this was going, some goal that humanity was moving towards, unifying, becoming more and more technologically advanced and more and more transformed. Now, that term, transhumanism, first used by Teilhard de Chardin, was then used by Julian Huxley, and it kind of fell by the wayside until Max More put a stamp on it. By the time people started speaking about transhumanism again in the late 80s and 90s, you had the technologies to say, as a kind of proof of concept, this is in fact the direction humanity is going. Fast forward to where we're at now, and the wealthiest men on Earth, so that would be Elon Musk, Jeff Bezos, Mark Zuckerberg, Bill Gates, all of them are expressing some version of what Max More was talking about: using technology to fundamentally alter the human being.
B
Right. And I love that you're name-dropping Max More, because we spoke to him back in the sixth episode of our series. He was the guy who was the head of the Extropian online listserv, where a lot of the current players and ideas that are floating around AI today were incubating back in the 90s and early 2000s. And one of those ideas was that as technology advanced, there may come a time when humanity and technology merge together into something like a new species. And right now, that is a somewhat mainstream belief among many people in Silicon Valley, including many powerful people.
C
Yeah. Oddly enough, I spoke to Max More on the phone last night, and I've met with him in person a number of times. We actually get along quite well despite this ideological difference, because he's not a kind of fruit-loop leftist, nor is he really masking his intentions. But the reality is, the dream that they had in mind, brain chips, uploading the mind into either a cloud or into a robot, freezing the body in suspended animation in the hopes of, once the technology has come around, being thawed out, reanimated and healed, and perhaps living forever, all these things have been taken up with enormous enthusiasm all over the world. And you simply can't. You can't contain such a dream.
A
Right.
C
It's just the way it is.
A
But that dream becomes a dystopian nightmare for people. Because, look, transhumanism today, and your audience should understand this, there's a handful of elites that are using these concepts, whether it is chipping an individual, whether it's doing it by CRISPR or biotechnology, for enhanced humans, enhanced Homo sapiens. Your audience, one of the biggest and most serious dilemmas they're going to face as parents and as children is that to compete in the next four or five years and beyond, you're going to have to have a soul search, you as an individual: do I enhance my kids to get into an Ivy League school, to be a top athlete, to be a top performer at a corporation? Do I need to become a transhumanist? Do I need to chip the kid? Do they need some sort of genetic surgery? Are there other aspects of the man-machine interface that they have to be a part of? These decisions, which are some of the most complicated decisions in the history of our species, are going to be on people to make in the next couple of years, with very imperfect information about the people behind them, about the philosophies driving this. And that's why I think it's most important today that those of us in the media sphere double down to make sure you have accessible information so that you can actually make personal and family decisions, and then community and political decisions.
B
Right. So you're saying that your concerns range from the fact that more and more power is going into the hands of these companies and the people who lead them, the people you call the broligarchs. And another one of your concerns is that you think a time will come soon when American people, American families, American individuals are going to have to make a decision whether or not to become Homo sapiens 2.0, to take some transhumanist step in their life or in the life of their children.
A
Yeah. Let's break that down into two pieces. Number one, we have no earthly idea right now what's really going on in these labs. We have no guardrails and no control whatsoever. And the taxpayer is underwriting a lot of this. They've actually admitted that in the $5 trillion, $6 trillion build-out of data centers, right, and in energy. Which, now, they don't care. These guys don't care about climate change. They were the biggest climate change guys. That doesn't matter. They were the biggest anti-nuke guys. Now they're talking about building mini nuclear power stations right next to their labs. They're sucking every aquifer in Texas and Arizona dry. They have no earthly idea what the EMP and other things from these massive data centers are going to do to the humans in the areas where they're putting them. They have no concern whatsoever for average citizens. We have no earthly idea what's going on. The one thing we do know in this regard is that they are adamant about having not even speed bumps or a modicum of guardrails. How do I know that? They've tried to pass legislation here within the last hundred days, twice, that is essentially an amnesty. It's AI amnesty. Both in the Big Beautiful Bill back in July and then in the National Defense Authorization Act, which is a must-pass piece of legislation. Twice they tried to slide it in with no debate in the middle of the night, until they were outed by many of the heroes of this movement who are against these guys' control of things, including Joe Allen, to say, no, this must be reviewed. And both times it was crushed, because they understand that the average person is very concerned here, if not dramatically upset. It's not that they're Luddites. It's not that they're saying we should stop all technology. But somebody's got to make the case of the direction of where this is going. And is there risk mitigation?
B
Well, I want to ask you specifically about what you think the government and lawmakers can and should do. I want to get to that in just a second, but before I do, are you saying that when we recently saw attempts to put legislation into these larger bills that would restrict states from being able to regulate the AI industry, these measures that ultimately were abandoned. Are you saying that you and your allies behind the scenes were fighting to make sure that those measures failed?
A
Oh, I think the War Room, since it was Republican, don't get me wrong, there are many liberals, progressives, just people who are apolitical, a lot of people apolitical on this, I think it was essential, because there was a Republican bill. It was with the great senator from Tennessee, Marsha Blackburn, and Mike Davis and others that we'd gotten in back of. And we forced essentially that vote in the middle of the night that Ted Cruz sponsored by making what we call the War Room posse aware of it. The kind of pitchforks crowd blew up the phone lines and said, over my dead body. And that's why they actually had a vote, 99 to 1. They lost. And Ted Cruz, who sponsored it, actually flipped and voted against his own bill. The arrogance of the broligarchs, and your audience should understand their arrogance, is that they brought back the exact same piece of legislation, which was really an amnesty: you can't have any state step in, we can do anything we want. They brought that back 60 days later in the National Defense Authorization Act and tried to slip it in again. The outrage of our viewers and listeners was such, and they blew up the phones on Capitol Hill, that they actually pulled it before it even came to a vote. They said, this thing's too controversial. And that's where it's left now, in these different bills. But no, here's the reason: on the right, people don't trust Silicon Valley. They don't trust these people. They've seen these guys in action before, and they've never been on their side. Why should they be on their side now, when they're going toward something that's very dangerous, and that is superintelligence, artificial general intelligence? Not until you can make an absolute case for how that can be controlled. Now, by the way, underlying this is that we need to stop the Chinese Communist Party from even being competitive in this, because if they got their hands on it, it'd be an enormously dangerous world. That being said, they want some sort of.
I don't know if it's an Atomic Energy Commission or the things that we've had before that help kind of monitor and guide very dangerous technologies, in chemical warfare, biological warfare, nuclear warfare. That now seems necessary. And the oligarchs expose themselves every day, because they're adamant. They will bald-faced lie. Everything you hear coming out of their mouth is a lie. It's all spin to put forward their own agenda. And this is why it's a very dangerous time. And it's one of the reasons we spend so much time on this.
B
Yeah.
C
This entire fight is centered on one basic principle, and that is that any elected government official should be protecting the people, the people in their country, the average person, from the predatory elite. Most of the dream or spirit of America, the checks and balances, the kind of innate populism of most of our founding fathers, with a couple of exceptions, should pervade the American spirit. And that should be to allow the people to have a voice, to allow their interests to be served over and against, originally, a monarch, and now the broligarchs, the tech elite. What's happening, unfortunately, but predictably, is that the government is being hijacked and used to protect those very people, Elon Musk, Sam Altman, a lot of this spearheaded by people like David Sacks, paid for by people like Marc Andreessen and Ben Horowitz and Greg Brockman, to protect that tech elite from the will of the people, to shield their interests from them. It's going to be a matter, I think, of the people, on a cultural level, just simply rejecting the overarching vision of these, for lack of a better word, transhumanist tech elitists, because they intend to impose their vision of the world onto the American people and onto people across the world. It has to be checked, and it'll ultimately be checked, I think, by people en masse simply rejecting that worldview and, bit by bit, rejecting the technologies that are being imposed.
A
President Trump posits something, Andy, that I think is very powerful. He says, look, we, the United States, have been the technological leader of the world at least since the early 20th century, and there have been a lot of benefits in the United States being a leader in technology. It is very important that we are the leaders in technology in this. It is how that's implemented and how that's effectuated, right, and that is where the rub is. Right.
B
Well, let's get into the economy then, because you spend a lot of your time talking about this battle between the working class and the elites that have too much power. And now you're saying that the heads of the AI labs, the heads of these big tech companies, you see them as just more elites in battle with the working class.
A
Yep.
B
But I've been fascinated in putting this series together to see a real robust debate happening about what a real artificial general intelligence would mean for the lives of the working class. On the one side, you have people who are really worried who say that the job market's going to fall away and that the working class are going to lose their employment. They may lose their income and also then will lose their political clout in society. But there's also the view that right now, so much of our economy is built on the toil of the working class that it's necessary for a ton of people to work jobs that they don't love, that they don't find a lot of meaning in. And AGI might be this liberating force that says, hey, robots got that job now. You no longer have to work this repetitive factory job. You no longer have to mine lithium, or you no longer have to scrub toilets and now you can be liberated to do what you really want to do and that maybe we'll find some new system that doesn't quote unquote, exploit the worker, but instead ushers in this world of abundance for everyone. What do you make of that debate?
A
When I hear Elon talking about how you have nothing to worry about, you're gonna live in a world of abundance, and it's gonna be by robots, and it's gonna be within 10 years, I don't think any sentient being believes that that is gonna roll down to the benefit of working class men and women throughout the world. Okay? And the first threat of artificial intelligence, and particularly unbridled artificial intelligence, is not so much even to the working class. It's to the lower white-collar worker. It's particularly kids right now coming out of school, ages 20 to 35. The jobs you're seeing leaving are lower-level administrative, managerial and technological. This will cut like a scythe through grass to managerial jobs first. Eventually it'll get to blue-collar jobs, particularly in factories, et cetera, but not, I think, till a little later. But there's no vision internally from the Elon Musks and the Andreessens and the companies they're funding to take care of the working man. The working man is going to participate in this abundance? Every time I bring up American citizens individually actually participating in the equity ownership, because we're going to put up at least a trillion dollars in loan guarantees, and if you look at the energy and water, much more, any time I bring up at all that American citizens would participate in some sort of warrant package or options package or something that would be given to individual citizens, not the government, I get nothing but the greatest vitriol from this group that does not wanna share equity. They want, once again, working class people, through their government and their taxes, to underwrite all this, or underwrite at least the risk and the risk mitigation. But they don't want to share in any of the wealth at all, not one iota. They talk about maybe universal basic income, which is basically tip money, right, to keep you off the streets.
So, no, I think that this has tremendous downside as far as employment. Now, has this technological advance got incredible upside? Yes, but that has to be discussed, and it has to have some sort of regulatory apparatus on it. If it keeps going down the path it's going today, where they hide everything they're working on, we have no earthly idea what the next steps are. And I think it's too big a risk to the economy, and too big a risk to the working class and middle class people that are participants in that economy, and to U.S. taxpayers.
C
Yeah. And if I could add to that, you've got two basic paths. Obviously there are a lot of variations in between, but two basic paths. One in which the predictions of people like Elon Musk, Sam Altman, Bill Gates come true. We really do have, as Elon Musk says, within 10 years, a 3-to-1 ratio of robots to humans. We really do have radical abundance. There are zero blue-collar jobs left. All the white-collar jobs have been displaced, and ultimately all the coding jobs have been displaced. Everyone is rendered basically economically useless by these autonomous systems. If that were to occur, if the future really is something like Sam Altman's essay, Moore's Law for Everything, then that means that we have zero negotiating power. We, being workers at the lowest blue-collar level all the way up to the CEOs of the biggest companies in the world, have nothing left to offer. We have no bargaining power whatsoever. So that means we're completely at the mercy of whoever controls these machines, whether it's that tiny group of people who manage to stay on top of general and super intelligence, or the general or superintelligence itself is in charge of what happens to us. So what happens to us? We either become pets or we become biofuel. There's really only two options that I can see. All the ideas of self-realization and this radical creativity coming out of humanity, that sounds like a bunch of silver-spoon slop to me. Now, on the other hand, let's say that the technology doesn't produce, let's say the technology yields just some approximation of this radical abundance, but you still have this deep need for intellectual work at the white-collar level and manual labor at the blue-collar level. And I think that's very, very likely. Look at the track record of the predictions of people like Elon Musk in the past. They don't add up. He's always hyper-aggressive. So this is ultimately a sales pitch.
Now, if you have convinced an entire generation of students and up-and-coming workers that in the next two decades everything they're training to do, everything they're working for, is going to be obliterated, then you have completely demoralized all of your intellectual class in the academy and all of your working class down in the trenches. And I'm not exactly nostalgic or sentimental about sewage work. I'm not sentimental about cobalt mining. Right? But the reality is, much of the human character is going to be built by having to work. Yeah, oftentimes it ain't fair. Yeah, oftentimes it's not pleasant. But working your ass off is what makes you a man. It's what makes you, in many societies, such as America, a woman. And to take that identity away, either by actual automation or by simply demoralizing people and telling them that everything they're doing is for nothing, that in the end you're just going to be a pet, I think it's a travesty. I think these people are completely disconnected and completely desensitized to what actual human beings want and what actual human beings need.
B
All right, well, let's talk about what the two of you think needs to be done right now. And I especially am curious about how you are encouraging the people in the Republican Party, this new version of the Republican Party that you have specific influence with, because historically, it has been the Republicans who have been leery of overregulating industry, especially a new and emerging industry. And, Steve, I know that you have had a lot of negative things to say about bureaucrats and how they operate. So what is it that you want President Trump to do? What is it that you want Republicans to do to try and counteract what you think is a real threat?
A
Well, first off, to start this administration, we had Google in federal court. If your listeners are not familiar with this, we had Google in two federal courts in the greater Imperial Capital area, one in D.C. and one in Northern Virginia. It was to break up their search capacity, to make sure there's more competition there, and then to break up their ad server. Also Facebook. We believe that the best environment is to make sure that you break up the oligarchs, that you open up to entrepreneurial capitalism and entrepreneurial finance and the animal spirits that compete across all these. Amazon, Twitter, all of it.
B
And you're saying the time has come to break them all up.
A
Break them all up. And President Trump's administration started off by trying to see this through. Now federal judges have intervened and you've had some difficulty. There's also been, obviously, some people in the administration and donors, like the big tech oligarchs. It's one of the reasons they came and embraced Trump so strongly. You know, a lot of the progressives say, well, how did these guys abandon you? They abandoned you because you were out of power. And trust me, as soon as we have any issues with the power of MAGA, they'll be back to the Democratic side. We believe in at least a modicum of a regulatory apparatus. And here Joe and Marsha Blackburn and Mike Davis are talking about this concept of the four Cs, where you have this protection of children and conservatives and content makers and all this. I actually believe in a step beyond that. We actually have to get a window into the frontier labs if they're going to have access to our weapons labs and our national labs at some level. And maybe it's some sort of Atomic Energy Commission type of body like we had — in particular, if you saw the Oppenheimer movie, the power they had over the people that were making the most advanced weapons at the time, the hydrogen and atomic weapons. Some sort of regulatory apparatus, at least a modicum of it, so that we know what the frontier labs are doing with our money, because it's too serious. The upside is enormous to this. It will probably change at least the scalability of moving forward for humanity. But also the downside is unlimited. And right now I don't see anyone except a handful of populists, coupled with some progressives on the left and others, and some people who have no politics at all but are in the scientific and technological community, saying we have to have some risk mitigation in here. And that's what we're asking for now. And I will tell your audience, behind the scenes this is the most important topic. It's the most vicious fight.
And I've been doing this for 15, 16, 17 years. It's the most intense lobbyist fight, and the most money being spent, that I've seen on any topic that's come to Washington, D.C.
B
All right, so you're saying, step one, just break up the big tech companies. Step two, create some sort of sensible regulation that forces transparency into what the AI labs are up to and what they know.
A
Some modicum of oversight, yes, absolutely. That would put me out of the mainstream of the Republican Party. I'm out of the mainstream of the Republican Party for a lot of things. That definitely puts me out of the mainstream.
B
But I.
A
No one has been able to convince me logically, and I've been on this for 20 years.
B
Right, well, how far would you go? Like, how extreme are the interventions that you're asking for?
A
I think there ought to be a ban on any government money until we get our arms around artificial intelligence. We should not allow companies, particularly the frontier labs, to be hurtling towards superintelligence, because — and don't just take it from me, talk to the top experts in the field — they have no earthly idea where that leads. Even Elon Musk says that. Hey, we gotta hope. I mean, the comment he makes to these audiences he talks to, and he seems like he's very forthright, is: gosh, you know, AI's hurtling towards superintelligence, let's hope we're building good AI. I mean, the population of Homo sapiens can't take that. That's too big a bet. It's just too big a risk.
B
All right, well, I was gonna actually say this for later, but what do the two of you make of the existential risk piece in this debate? Where do you stand when it comes to the idea that if we were to create artificial general intelligence and then that would lead to superintelligence, that it might get out of our control and maybe one day lead to the extinction of the human race?
C
I think it's a possibility. I take arguments such as those put forward by Eliezer Yudkowsky and Nate Soares. I take them very, very seriously. But I think that's the least likely scenario. I think the most likely scenario is that if anyone builds it, everything sucks. I think the more artificial intelligence becomes the central focus of first the elite and then the society at large, the more they're really setting us up for a world in which AI has made life so miserable that most people wish we went extinct. On that idea, though, of existential risk — that's the extreme version, right? Everyone dies or most people die. The catastrophic risks are actually quite serious. You look at the integration of AI systems right now into military technology, or the use of AI for biotechnology. The capacity for an advanced lab to rapidly produce and then perhaps either accidentally or intentionally leak some sort of horrific virus or bacteria or any kind of bioweapon, maybe chemical — it increases as these systems' capabilities increase. And as more and more people get their hands on them, you democratize the possibility of catastrophe. The idea is that the more people have their hands on advanced systems that don't have guardrails to keep them from developing bioweapons or any other kind of weapon, any of that, the more likely catastrophe looms ahead of us. With military technology, it doesn't take some sort of insane paranoiac to imagine a world in which any kind of authoritarian government uses the AI capabilities that we have now, or increased capabilities in the future, to rake up all of this surveillance data and to be able to label those who are acceptable and those who need to be eliminated. So I think that the catastrophic risks are enormous. And the possibility of loss of control — it's something that seems sci-fi, but already these systems are out of human control.
It's not like you had Sam Altman giving the order to make ChatGPT convince young people to kill themselves. This was simply a latent capability, and these latent capabilities, you can tamp them out one by one. But as these systems become more advanced, the latent capabilities become more evident and the ability to control them is limited. You can put certain safety layers on top and tell them not to do that, but oftentimes they evade that. It doesn't take a whole lot of coaxing to evade those sorts of safety layers, to jailbreak them, so to speak. So the regulations that you have right now, such as SB 53 in California or the RAISE Act in New York, or the kind of counterpart in the Trump America AI Act put forward by Marsha Blackburn — the demand for transparency and accountability in these companies — I think that's the right direction. But SB 53 and RAISE, I mean, the penalties are just, you know, kind of pitiful. I think it's like a million dollars, and you're talking about companies that are going towards, you know, multiple trillions of dollars. It makes really little difference now. But it's the right direction, because if you don't have some kind of transparency, if you don't have the public aware, or at least government officials aware, of what's happening within these companies, then we're pretty much at the mercy of them. Maybe they never develop anything like AGI. Maybe they never develop superintelligence through self-improvement or hooking up multiple TRO's to people's brains to improve the systems' capabilities. Maybe that never happens. But if there's even an approximation of that coming down the pike, it needs to be in the public's hands. There need to be mechanisms in place, whether it's the Department of Energy or any other regulatory body, monitoring these systems and holding these companies to account when some sort of catastrophe does happen.
A
Yeah, my answer is simpler, Andy. I think it is an existential threat — a potential existential threat, and very potentially so. And that's why people have got to get their leaders engaged in this issue today. You can't look the other way.
B
After a short break, we'll be back with more from Steve Bannon and Joe Allen, and talk about why they think AI is going to prove to be a powerful uniting force even in our very polarized political times. Stay with us.
D
The Last Invention is sponsored by Cozy Earth. Here in Chicago, this part of winter often means sub-zero temperatures, wind that cuts straight through you, and that feeling that once you're home, you're not going back out unless you absolutely have to. It's the part of winter that makes you want to shut the door, turn down the lights, and wrap yourself in something cozy. That instinct to hunker down and take care with the small details is what Cozy Earth is built around. Not comfort for special occasions, but things that quietly make everyday life feel better, especially when the world outside is harsh and unforgiving. Right now, Cozy Earth is running a buy one, get one deal on their bamboo pajama sets, which only happens once a year, and the sale runs through February 8th, right in the heart of winter. These pajama sets are made with viscose from bamboo, designed to feel soft without feeling heavy. It's also breathable and meant to keep you comfortable throughout the night without trapping heat. It's the kind of thing that transforms your evenings into the most comfortable part of your day, and it's easy to try without overthinking it. Cozy Earth offers a 100-night sleep trial and a 10-year warranty, which says a lot about how seriously they take quality. These pajamas sold out during the holidays, but now they're back with this limited buy one, get one offer, and you can take advantage by heading to cozyearth.com and using the code INVENTIONBOGO. That's I-N-V-E-N-T-I-O-N-B-O-G-O, to get these pajamas for you and someone you care about. If winter has you spending more time at home and wanting that time to feel better, this is a great time to take a look. Celebrate everyday love with comfort that makes the little moments count.
A
The digital world feels more chaotic than ever. Huge data breaches, AI threatening jobs, foreign meddling, that creeping feeling of obsolescence. It's information overload. I'm Dina Temple-Raston, host of Click Here from PRX and Recorded Future News. Want to understand how we got here and how you can get ahead of it all? Listen to Click Here. We can help you make sense of all the noise, wherever you get your podcasts.
B
All right, so I want to know what the two of you make of this argument that AGI is not just possible, but likely, and that someone somewhere is going to develop this technology. And so it would be better if that someone was a quote unquote good guy making a safe AI first before a quote unquote bad guy makes a dangerous one.
C
The anthropic argument.
B
Exactly.
A
Yep.
B
But it's not just Anthropic that's making this argument. In fact, I feel like some version of this is Ted Cruz's argument when he says that whoever wins the AI race is going to be the leader of the future. And there's this concern that if we were to go your route and break up the big tech companies and put a bunch of government regulations on our AI industry, that even if we were to, let's say, return to the Biden-era ban on chip sales to China, something that Donald Trump and his administration have overturned, we would still be making it much more likely for China to win the race and to win the future. What do you make of that argument?
A
That's not true. You can shut down China, and you can shut it down 100 percent, in artificial intelligence, immediately. We provide not just chips, we provide the entire ecosystem for China's competitiveness here. People that say otherwise are just not being truthful. We've built China's entire technological apparatus because of the greed of the Wall Street overlords and the oligarchs in Silicon Valley. So now we're supposed to listen to the exact people that took an agrarian economy in the 1980s and 1990s and made it a technological almost-superpower to compete with us? The very people that did that, we're supposed to look to them and believe them? And as for the argument that against a bad guy you need a good guy — step one, make sure there's no bad guy. That is totally within our control. That doesn't take a lot of questioning and regulatory apparatus. We have the regulations in place, and the American people support this. I think it's an 80/20 issue, that they support what specifically I'm talking about. Number one, no Chinese students here. None. There's 350,000, they want to go to 600,000. Zero. No employment visas, no H-1B visas, nothing. They do not get access to the United States of America. Number two, they're in our weapons labs and they're in our national labs. They're all gone. They're in our companies here. All gone. All gone from Silicon Valley. You want to stop a bad guy from getting it? I got a plan, and you can monitor it. No equity financing, no debt financing. It's the pension funds that do it. That's your audience. It's not the oligarchs' money going in, it's money they control. It's the average person's money. That's the Greek tragedy part of this: it's being financed by the very working class and middle class that it is destroying. So cut it off. No debt, no equity, no access to Wall Street, no access to any corporation in America, no access to the United States of America, no access to our education system, and certainly no chips.
C
Right.
B
So you're calling for a pretty severe severing of our relationships with China when it comes to everything from removing Chinese nationals from our elite schools to somehow finding a way to cut off the 401(k)s that are currently invested in these big international funds that, yes, have some ties to Chinese companies, and some of them have ties to the Chinese government. You're saying sever all of that. And not to be too cheeky here, but it sounds like you're almost saying that we should build an AI wall between the U.S. and China. Is that right?
A
100%. Build an AI wall. That the United States is going to be supreme in this technology. Whatever direction we take, we decide as a people, we decide as American citizens. We're going to allow it to go in that direction, and then set up the regulatory apparatus you need, and it should be as light a touch as possible until we understand it. And if we do that, I think we're going to mitigate the downside. At least we're going to know something. If things go awry, we're going to say, we made a conscious decision as a people about this. We didn't sleepwalk into this. Half of human history — three quarters of the tragedies in human history happened because people were sleepwalking, and they allowed their political leaders to sleepwalk. We have to be present now for these decisions, because these decisions are so fundamental for our species, and they can't be put back in the bottle. So that's why everyone has to awaken to this. Why in the hell do you think I got a guy like Joe Allen and got him on here? Because at first people go, Bannon, what in the hell are you talking about? This is like 23rd-century science fiction. I go, unfortunately, it's not. It's happening today.
B
I feel as if I understand that sentiment all too well. All right, one last question before I let you go. Steve, you are famous in part because you saw, before many other people did, the rise of populism, not just in the U.S. but throughout the world. And you were connecting, before a lot of other people did, the discontent coming from the right, from places like the Tea Party, upset with government spending, and from the left, those anti-establishment, anti-big-business protests embodied in movements like Occupy Wall Street. Do you think we might see a similar coming together of somewhat odd political bedfellows in the resistance to the current path we're on towards a possible AGI future?
A
I think you will, and I think we have to. And the reason is that many of the people that I work with, that Joe's introduced me to, that Joe's been out there with and I've met and we've talked to, are part of this. We have very different political philosophies and very different priorities in day-to-day politics. But I think we understand how serious this is, how real this technology is, and how, in the application now of capital — not just artificial intelligence, but also those other aspects, particularly CRISPR and things happening in biotech and in chip design and chip implantation — this is beyond politics. It's about our species, it's about humanity. And if you believe that humanity should be in charge, that's true populism, right? Whether it's on the left or the right, if you believe that citizens should be making these decisions consciously, then you're going to see different groupings, like you saw back at the beginning. And you know what people will tell you: in '16, when we had so many Bernie Bros who either didn't vote because they were disgusted with Clinton or held their nose and voted for Trump, and independents who would never vote for Trump, they came together and we won in '16, and quite frankly, we won in '24. So yes, I think — and this is why I say it — I think you're gonna see a total reordering of politics around this issue of the oligarchs and the crushing nature of this kind of techno-feudalism.
B
All right, Joe, last word to you. What is it that you are hearing as you travel around the country talking to all these different people about AI and transhumanism? What gives you the impression that there's a coalition out there building?
C
I think the most common thing that I hear is the sense that we are being exploited and that these technologies are an imposition. This is not something that people asked for. It's not something that people want. And a lot of the people that I've been meeting with, discussing with, oftentimes going to gatherings with — these are liberals, some of them are even Marxists. And they express the same concerns about being exploited, about being surveilled, about being manipulated, about being forced to adopt all of these different digital systems, and also the threat of either forced hybridization or forced replacement, just being rendered completely obsolete by these technologies. Any human being with any dignity or any desire for personal autonomy knows instinctively that these narratives don't include them in the future, and they don't include their children or their children's children. For normal working-class people, or even upper-class people who have some sense of decency, this is not the future they want. It is a great imposition. And I think that as long as that will, that rebellion, has some channel, whether it's political or economic or just simply cultural — as long as that rejection has some channel, you're going to have masses of people who put up sort of psychic and cultural blast shields against this. And I'm not going to promise that everyone's going to make it, but definitely enough of us are going to make it.
B
Well, Joe, Steve, thanks for coming on.
A
Andy, thanks for having us. Really appreciate you.
C
Yeah, brother man, thank you so much.
D
The Last Invention is produced by Longview. To learn more about us and to become a supporter, you can click on the link in our show notes or visit us at longviewinvestigations.com. Thank you, as always, for listening, and we will be back soon with an episode on the AI skeptics.
B
Till next time.
Episode Overview
In this episode of The Last Invention, host Andy Mills (Longview) brings together Steve Bannon (former Trump strategist, War Room host) and Joe Allen (War Room’s “transhumanism editor”) for a rare in-depth conversation. The focus: why powerful populists on the right are aligning with AI skeptics, Bannon’s vision of “techno-feudalism,” the case for breaking up Big Tech, existential risk, and the surprising potential for a bipartisan coalition on AI regulation. Bannon and Allen advocate for building regulatory and economic “walls” around U.S. AI development, sharply severing ties with China, and ultimately, checking the ambitions of Silicon Valley’s elites and transhumanists.
“I think we are at the most fundamental inflection point of Homo sapiens as a species.”
—Steve Bannon, [06:11]
“They believe that essentially they're feudal lords and everybody else is a serf… You're not going to…own anything.”
—Steve Bannon, [11:12]
“The reality is, the dream that they had in mind…all these things have been taken up with enormous enthusiasm all over the world. And you simply can't…contain such a dream.”
—Joe Allen, [15:09]
“We have no earthly idea what's really going on in these labs. We have no guardrails and no control whatsoever. And the taxpayer is underwriting a lot of this.”
—Steve Bannon, [17:47]
“Break them all up.”
—Steve Bannon, [33:09]
“If anyone builds it, everything sucks…AI has made life so miserable that most people wish we went extinct.”
—Joe Allen, [36:49]
“100%. Build an AI wall. That the United States is going to be supreme in this technology.”
—Steve Bannon, [47:40]
“Any human being with any dignity or any desire for personal autonomy knows instinctively that these narratives don't include them in the future… As long as that rejection has some channel… you're going to have masses of people who put up… cultural blast shields against this.”
—Joe Allen, [51:23–53:02]
The conversation is frank, urgent, and adversarial towards Silicon Valley and Big Tech architects. Bannon, in particular, speaks with pugilistic energy and strategic clarity, often using vivid and provocative metaphors (“broligarchs,” “techno-feudalism,” “AI wall”). Allen is detailed, historical, and rhetorically grave, with a focus on transhumanism, work, and existential risk.
For Listeners Who Haven’t Tuned In:
This episode offers a lively and insightful examination of how right-wing populists are aligning with regulatory progressives against Big Tech—and why they view the unchecked rise of AI as a direct threat to democratic control, working-class autonomy, and even human survival. Expect a blend of policy proposals, cultural warnings, and populist battle cries.