Joe Allen (3:02)
I'm Joe Allen, sitting in for Stephen K. Bannon. I want you, the War Room posse, to focus your mind on AI: artificial idiocracy. We talk a lot about what happens when the machines increase in capability, when machines are given intelligence, whether it be human-level or superhuman. But what happens if the real problem we face is that humans are getting dumber and dumber and dumber? Now, what you just saw, a montage of science fiction films, gives some sort of dreamt-of image of the future: what people of great imagination, or of great malice and evil, project onto the future as to what it could be, what it should be, or perhaps futures to avoid, such as the Terminator or the Matrix. But science fiction really just shows us these extreme possibilities for the future. As history unfolds, reality rarely lives up to that level of exaggeration, that level of hyperbole. What we do get, though, are approximations of those futures. Right now, obviously, we don't have flying cars everywhere. We don't have hyper-real holograms in every store. Nor, as far as anyone knows, unless you believe the government is 20 years ahead of anything we see today, do we have time machines, or Terminators coming through them. But despite that sort of shortfall when looking at these extreme realities, we do have powerful technologies being pushed out onto every possible institution and onto every citizen who either is willing to take on these technological upgrades or is oftentimes forced to, due to their employment and, in some countries, due to the government. We talk a lot about the futuristic images, though, that basically take science fiction and add fancy graphs. We call this futurism. We talk a lot about the Technological Singularity. 
I don't think there's a single person here listening, from the War Room posse anyway, who doesn't already know that the Technological Singularity is a vision of the future decades away, maybe a decade and a half away, in which technology increases in capability, eventually hitting an inflection point, going up that exponential curve, until finally you have artificial intelligence systems that are rapidly improving themselves. You have human beings merged with those artificial intelligence systems through brain chips and other sorts of neurotech. You have robots everywhere. You have genetic engineering, sort of artificial eugenics projects. And all of this converges onto what is called the Technological Singularity, first really laid out by Vernor Vinge for a lot of NASA and aeronautics engineers in 1993. And then following that, you have Ray Kurzweil's much more fleshed-out image from 2005, in which artificial intelligence is first thousands and then millions and then billions of times smarter than all human beings, and we all attach to it, sort of like remoras on the shark's fin. We become a kind of parasite living on the mechanical host. For Ray Kurzweil and most of the people at Google, most of the people at OpenAI, perhaps most of the people at xAI and at Meta, this is a fine future. This is a glowing field of possibilities into which we are entering. There are some indications that we're on that path, some indications we're on our way to something like a singularity. The recent GPT-5 flop should give us at least some comfort, knowing that we're not quite there yet. We're not at AGI, artificial general intelligence, but we definitely see increased capabilities in everything from reasoning, to understanding and analyzing language structure and meaning, to solving puzzles and math equations, to sequencing DNA or predicting the proteins that would follow from it, to controlling robots in quite sophisticated fashion. 
And we also see pretty massive adoption of these technologies. ChatGPT, for instance, has some 700 million users across the planet. It's not clear how many people use Grok, but there are something like 600 million users on X, some number of them interacting with Grok and Grok companions. And then, of course, there's Meta AI. Again, there are no good statistics on how many people are using those particular AI companions and AI buddies, but we do know that 3.5 billion people on the planet are on Facebook. That's nearly half the planet. And so we know that some approximation, some version of a future in which human beings are AI-symbiotic, in which we become in some sense merged with the machines, is already taking shape. And this, of course, is the inspiration for the new company co-founded by Sam Altman called the Merge, a brain chip company with the explicit goal of putting 'trodes in people's brains so that they can be more tightly coupled with artificial intelligence. Now, their vision is that this will create a superhuman race, that human beings will become smarter and smarter, stronger and stronger, more and more beautiful. But I believe that however plausible something like that singularity may be, far more plausible is the inverse singularity, in which humans become dumber and dumber and dumber, and so the technologies seem that much more amazing. Yesterday we heard from Dr. Shannon Croner a stunning statistic: among Gen Z kids, some 97% use chatbots. This comes from a ScholarshipOwl survey of 12,000 people. And we also know from that study, assuming it's anywhere near accurate, that some 31% of those kids use chatbots to write their essays for them. Now, you might think, if you were a techno-optimist, that this represents a huge leap forward in human technological being, right? Homo mechanicus, the human that's able to call up information at will. But I think that the more likely outcome is that these kids simply atrophy. 
Their curiosity, their creativity, their critical thinking, their ability to read deeply, think deeply, and write well are being compromised, perhaps even intentionally so, by this AI symbiosis. They're more like barnacles on a ship's hull than they are any kind of super-being. And so, as you hear again and again and again this rallying cry that we need to create better and better machines, I think the only appropriate response is to reject that dream entirely: shift the center of gravity away from the machine and toward the human. And ultimately, instead of building better machines, we need to cultivate better human beings. And on that note, I would like to bring in our first guest, Brendan Steinhauser, the CEO of the Alliance for Secure AI. Brendan, I really appreciate you coming on. How are you, sir?