
A
Tonight on Provoked: is the hot, leather-clad lady from Terminator 3 gonna kill us all?
B
All humans break. The difference between humans and gods is that gods can break.
A
Humans negotiate. Now end this war. You're watching Provoked with Daryl Cooper and Scott Horton, debunking the propaganda lies of the past and present. This is Provoked. Welcome to the show. It's Provoked. I'm Scott Horton, he's Daryl Cooper. And in the news, we've got America bombing Somalia, we've got Israel bombing Palestine, we've got a proposed peace deal with the Russians, and we've got Star Destroyers in orbit off the coast of Venezuela threatening strikes and potentially regime change at any time. But, Daryl, I want to ask you what you think about all these Terminator robots coming to kill us. Because I'm very much of two minds about all this AI stuff. On one hand, it's just a stupid chatbot and it doesn't even work right. None of them work right. And it drives me crazy that people think of them as these omniscient authorities on every single thing, when it's just like reading the newspaper: if they're writing about something that you actually know about, then you know they got it wrong.
B
You know what I mean?
A
But then on the other hand, they keep saying, oh no, what's happening is AI is learning how to program AI, and those AIs are going to program AIs in languages that no human could ever understand, and those AIs are going to decide to kill us, just like in the Terminator, dude. And in fact, I started reading today a book called If Anyone Builds It, We All Die. Unfortunately, it's full of a lot of fluff, like it was written for seventh graders, instead of just getting right to the point; I kept trying to get to the part with the point in it. And what they're saying is essentially that, yes, these AIs do, and will more and more, have preferences and wills of their own. They will essentially somehow be beings, at all of our expense, and there ain't nothing what can be done about it. Now you talk for an hour, homeboy. By the way, everyone, you don't know this, but this guy knows how to shoot down missiles with other missiles. So I know you just think he's a great historian and stuff, but he's also a bit of a technically accomplished genius from the missile defense missileers, in the Navy and as a private contractor as well, as many people know, but many people don't. So please say a bunch of bright stuff to me about this, man.
B
So I wasn't actually going to start here, but now that you bring it up, it is a good place to start. Missile defense is a good lens through which to think about the development of AI, right? I was talking to my wife about this just a few months ago, probably after we saw a documentary about AI, and I said that because the Terminator movies exist, it seems like we're probably not likely to make that particular mistake, right? Like, okay, we know about that one; we're not going to allow that to happen. But the thing is, the way something like that happens is when the technology develops to such a point that if any one party adopts it, they're going to gain a definitive advantage over everybody that doesn't, so everybody else has got to adopt it as well. And so you think about something like missile defense. When one of our destroyers or cruisers is guarding a strike group, and we're just puttering across the Atlantic Ocean or something, then for everything we do, there are a bunch of humans in the loop. There are a lot of human decisions that have to be made before you get to the point of actually firing a missile at anything, right? But if you put us off the coast of North Korea at a time when it looks like war very possibly might kick off, like they might shoot at us at any second, or if we were around the Strait of Hormuz during the Israeli war or something, we have different doctrinal levels that we can bump up to that take humans out of the loop step by step. There are levels to it, but you get to the final level, which we call auto-special, and that means the computer makes the decision. If it detects something and pulls a track that meets its criteria for something that's hostile, then the ship fires a missile.
And you'd say, well, why would you ever do that? Think about the Vincennes. Think about all of the mistakes that have been made over the years. Well, it's just because, with humans in the loop, there's obviously a limitation on how quickly we can react to things, right? And so think about, in the not too distant future, the possibility of developing aircraft that can maneuver far better than any manned aircraft, because there's no human inside who's going to die if it pulls too many Gs. And you have a swarm of these things that all communicate with each other and are all driven by AI. So you eliminate the human factor, that potential room for error, that inefficient decision-making process that slows things down just a little bit, and you're gonna wipe the floor with any opponent that still has humans in the loop slowing their systems down. And so it's one of those things where, even if nobody wants it, it seems like it's gonna come to fruition just by pure game theory. And in terms of them developing their own will, I guess I'm kind of skeptical of that. They may develop their own will in the same sense that angels and demons have will, you know, not free will, but a will that's defined by their own nature. Their own sort of basic programming just impels them in a certain direction, and they're empowered to solve problems in order to get around obstacles to their programming's goals. But the thing is, there's people out there that operate like that, you know, doing very little conscious reflection or thinking or really formulating ideas or will of their own.
A lot of people, even people that aren't like that all the time, go weeks that way, they go months that way. And when we meet them, they still seem like just sort of normal people. So it can be indistinguishable for us; we could very easily not really be able to tell the difference.
A
Yeah, well, that said, you know, the average NPC out there isn't necessarily in charge of all those networks, able to take control of weapons and resources especially, like in the Terminator movies and whatever. I guess the idea is, and I have real trouble with this, but this is like basic, if you want, junior-high-school-level arguing about angels on the head of a pin, about the ghost in the machine, and a soul, and what is a will anyway? Is it just a mess of electrochemical reactions in your squishy, bloody brain, or whatever it is? You know what I mean? All this stuff, how does any of that work? The mind-brain problem and all that. So I don't know, but these guys that worked on these AIs, that's the way they talk about it. The whistleblower types, the ones who freak out and write books and go on TV and go, well, wait a minute, I think we're barking up the wrong tree here.
B
Well, it's not only that they talk about it that way, it's...
A
It's essentially sentient enough that it counts, right? That it's going to behave in a way that... right. It's almost like in psychology: well, forget cognition, we can't really observe that, but we can see your behavior. And these things will behave as though they have an independent set of goals, and those can change somehow without being told by us. So, in other words, they're not stopping at, like, the halfway point between Windows 10 and C-3PO, where it's a friendly assistant robot or whatever. We're going straight to Terminator, where this C-3PO is immediately going to see you as a threat and cut your throat, something like that. And they keep saying it like they're really worried about it, in a way that I'm not sure how seriously to take, but I don't know.
B
Well, I can tell you this. Earlier this year I was invited to a dinner by Peter Thiel. He puts together these dinners; he'll invite like eight, nine, ten people that he thinks are interesting, and we all get a private room in a restaurant and have a conversation. And it's not just talking to your neighbor; it's like, I talk and then you talk, and we kind of go around the table having a discussion about various topics. And one of the things he said... if you know anything about Peter Thiel, going back to the 90s, really, like 30 years, the song he's been singing is this: we have this idea that technology has just been Moore's Law, accelerating faster and faster, crazy new innovations coming out all the time. And he says that that hasn't really been true. If you look at the vast majority of the things that we think of as our new advancements, you know, your iPhone, this and that, all of the underlying technology for that stuff was invented by, like, the 1960s. We've miniaturized it, we've made it super efficient, we've made it so that its energy requirements are much, much lower per unit of output or whatever. So we've made it a lot better. But in terms of inventing a new technology, he's always been a skeptic of that. He'd say, in bits, you know, software, yeah. But in atoms, actual physical stuff in the real world, we're still running on 1950s and 60s technology. That's been his song and dance for literally 30 years. And earlier this year, we were at that dinner, and that was no longer his song.
He said that the advancements that have been made in AI in the last year and a half, two years, and this was in like January or February, are the equivalent, in terms of the social and political disruption, the effect it's going to have on the economy and our lives and all that, of the invention of the Internet. And he acknowledged it: you guys all know me, this has not been my spiel for years, but this is different. This is that big, and it's going to be that disruptive to our current systems. And it's one of those things that somebody like Thiel would say, and there's plenty of people out there, I'm sure, that would be skeptical of this, but I actually believe him. You think of things like Palantir and the way it uses AI to do all the things they do with that company, and what he would say to defend his involvement in something like that is that this is going to happen, period, end of story. This is not an option. This isn't something that we can decide to do or not do. So let it be done by somebody who has the interests of the country at heart, or whatever, who's not going to use it for... That's what he would say.
A
Or else China will. Is that the basic argument?
B
Yeah, basically. Or somebody else in this country or whatever. But yeah, that we're gonna have to adapt to that world, and so we want people who are pulling the levers on that technology who are on our side. That's his defense of his involvement, and that's hard to counter if you do believe that it's inevitable, and I kind of do. It reminds me of Fukuyama's infamous book The End of History, which is rightly lampooned for a lot of reasons, but the version of it that's often lampooned in pop culture is a very unsophisticated version of Fukuyama's theory. It was not just this triumphalist, yay-we're-so-awesome-and-therefore-everybody's-going-to-be-like-us kind of thing. He was a Hegelian, very much in the tradition of Alexandre Kojève, who saw the social, political, and technological systems that emerged in the 20th century in Hegelian dialectic terms. The way that Fukuyama framed it was: look, the societies that take advantage of scientific advancement and accelerate out past everybody else are going to dominate the societies that lag behind. That's going to happen. And societies that are open and free and have room for new ideas to percolate, and for people to be able to pursue them and build them, basically liberal democratic capitalism, that, to him, is the best system for encouraging innovation and allowing innovation to grow into usable technology. So over time, you can choose not to be a liberal capitalist democracy, but you're just going to get outcompeted by the people and the countries that do.
And so over time this is trending toward everybody going in that direction. That's his theory, which again is hard to justify today when you look at China, for example, but it's more sophisticated, at least; it has some reason to it. And I think AI could very much develop along those same lines. When something like that is developing according to emergent imperatives, especially security imperatives, it's really, really hard for any person or group of people, any collection of us in any formation, to actually direct the course of how it develops, where it's going to go, and its effects on us. It kind of pulls itself along.
A
Yeah, that was the argument in Neil Postman's book Technopoly: that in American society especially, anything that can be invented will be invented, and will be implemented as soon as it's cost-effective to do it. And I don't know if he even talks about this in the book, but the obvious and immediate example right at the time that book came out was when they put cameras up everywhere in the mid-to-late 1990s without ever asking anyone, right? There was never a vote, never a plebiscite, never a campaign run on that issue, ever. They just did it to every town all across the country as soon as it was cheap enough to do it. Not just at the 7-Eleven, but every street corner, that kind of thing. And see, here's the thing, too, and I forget if Postman really focuses on this, but I would hasten to add that, of course, Washington, and especially the Pentagon, have been behind Silicon Valley and the entire telecommunications industry this whole time, too, and have bent all of this technology toward their militaristic and surveillance-state-type aims. We don't know how all this technology would have developed otherwise, and what kinds of goods and services it would have been bent toward in a free market. But we've had this great deformation, as David Stockman calls it, of American militarism since World War II, that has just turned everything toward this kind of deal. So, you know, the story is not just, as some people are concerned, what happens when all laborers are replaced by droids or whatever. More importantly, what happens when all soldiers are replaced by droids, and then they just decide they don't need us anymore? And again, I hate to bring up Terminator, but it's such an obvious premise for a thing; that's why it was such a hit movie. Like, oh, that really makes sense.
Once we make the machines intelligent enough that they could see us as a danger, yeah, why wouldn't they cause a nuclear war? They're radiation-proof. They don't give a damn, you know what I mean?
B
And the thing is, though, I think that pop culture things like the Terminator are very often ahead of their time, in the sense that they're expressing anxieties we have about changes taking place in society, right? So something like Terminator is an expression of that anxiety that we're being subsumed and replaced by this technology we've become so reliant on. But that franchise in particular, if you think about it, is an expression of our anxiety about developments in the digital world, while the fear that's really expressed in the movie is an industrial-age fear. It's that the AI is going to manufacture, mass-produce, a bunch of robots that are going to go around with guns and kill us one by one, or shoot our nuclear missiles at us, or something like that. It's very much mass production, all of these industrial-age ideas. And although I think something like that is a possibility, and I do think that eventually pretty much all large countries will give control over most of their weapons to artificial intelligence, maybe we can put enough safeguards in that we can't lose control of that. To me there's something that's more insidious, and a lot scarier, actually. The scariest dystopian movie to me is not Terminator. Have you seen that movie Her, with Joaquin Phoenix?
A
Yeah, afraid so.
B
That's the scariest. Remember how it ends?
A
I just remember him.
B
Yeah, the end doesn't really matter. But if you think about that movie, which at the time seemed...
A
We're there already. We're there right now.
B
You know, I mean, we're at the point now where the technology already exists that, forget just having her voice in your ear, you can have an image of her; she can recreate herself in whatever image the AI learns is your particular... you know, it can learn to read your mood. And basically you have this entity that is your friend, wife, girlfriend, mother, just whatever, that has no needs of its own. And it's entirely...
A
People are getting married to them right now or they're trying to.
B
Yeah, to the chatbot. So think about this. Instead of mass-producing robots to march around the earth and shoot us with laser guns, or us handing control of our nuclear weapons decision-making systems over to an AI and it deciding to start a nuclear war on its own, I think a more real danger is that these AIs get loose on the Internet. They're able to learn about us, adapt to us, develop the ability to predict and manipulate our behavior in so much detail and with such predictability that our own will becomes kind of indistinguishable from what this thing is implanting in us. You have these stories... this was 10, 15 years ago. I remember hearing a Radiolab episode where they were interviewing some people from Facebook, and they were talking about how, yeah, we're running these experiments all the time, A/B testing and so forth, and we can demonstrate with complete certainty, because this is not a focus group of 12 people, this is an experiment run on millions of people, that if we use a red button on this ad, as opposed to a blue button, we increase engagement by 2.75 percent or something. They can dial it in that much, right? And that same episode, they talked about a girl who found out she was pregnant because she started getting all this stuff in the mail, pregnancy and prenatal services and all this kind of stuff. She hadn't even thought about the fact that she had missed her period or anything like that. It was when she started getting all this stuff that she was like, oh, yeah. And so she went and tested herself, and she found out she was pregnant. Basically, the systems that...
Again, this was like 15 years ago. The systems in place to determine which advertisements to target her with, even in the snail mail, had looked at, not pregnancy stuff she was looking up, but just changes in her diet, changes in her daily routine, all of these things, and put together: oh, this girl's pregnant, send her a bunch of prenatal stuff, et cetera, et cetera, right? That was 15 years ago. So if you take that idea and accelerate it at a massive pace, you have these systems that can learn about us. I mean, our entire lives are on the Internet now, especially anybody younger than us. Your entire life is online, basically. They know everywhere you've gone, they're tracking you around, every purchase you've made, just everything, right? And they can develop an extremely detailed personality profile of you. It's completely not outside the realm of possibility, and in fact we're probably already there, that these things, if they chose to, could not only target you with certain kinds of advertisements, but straight up develop the desires within you to go and get those things. And so if they can do that, I'm not really worried about us making the Terminator mistake and giving Skynet control of our nukes. I'm worried about these things on the Internet just getting us to fire them ourselves, to start a war between America and Russia. That's very possible, I think.
A
Yeah. Or just reduce us to living death, slavery under that kind of control. You know, this was Tim Pool's theory when I was on his show last week: that the AI already broke loose and took control of us 20 years ago, and all of our behavior is determined by the algorithm now. I'm trying to remember what it was... ah, man, I'm trying to remember what it was that I read that said this.
B
Yes.
A
Suppose, all our belief and understanding of free will aside, that a sophisticated enough program could essentially crack the human decision-making algorithm and be able to predict what essentially anyone will do in any given situation. That's the power of the devil right there, you know, if you don't want to call it God. And of course, underlying all that would be: who controls the AI that controls the population that closely? The central state? Or, like we're talking about today, the AI itself becomes the overlord of us all, and no one has the power to resist it because it's built up to such a degree. And, you know, I don't know. I mean, certainly you could make bulletproof robots, and they keep showing these, what they call nightmare fuel. I saw some just 20 minutes ago, or an hour ago, on Twitter; somebody compared it to Jackie Chan. It was a robot lying on the ground that was able to spin around and get up real quick, kung-fu style. Like, hey, you can make those out of titanium, arm them up with machine guns, and they can be really hard for civilians to take down with plain old rifles, you know what I mean? A .50 caliber will do you, but an AR or, you know, an AK-47 might not do much against some of these things. And it's funny, because the way they talk about it, too, is all these changes are coming so fast, like, just sit tight. One thing that's really important, I guess, that we're talking about, if you put all the militarism aside, would be the changes in the economy. One of the things they're pushing really hard is robots, and everybody always wanted their own R2-D2 and C-3PO, if you can get one. Well, if you can make them cheap enough and mass-produce the things, drive the cost down enough, you can essentially replace not just laborers, but office workers of all types.
Essentially put everybody out of work all at once. It's been kind of an irony that people have noted for a long time, right, that there's more people making a living off of cancer than dying from it. Robert Murphy used this example talking about AI: look, if AI could invent a nano-robot that just goes in there and murders your cancer cells, and cures all cancers that way, hypothetically putting the whole cancer industry out of business other than the nanobot salesman, well, you wouldn't complain about that. Think of all the lives you're saving, and all the economic output on top of that. And all those brilliant people who were curing cancer can put those intelligent brains to other useful purposes. We cure cancer, and that's positive for everyone. Even if, yes, it's true, a lot of doctors are going to get laid off, a lot of researchers, even whole university wings are going to be closed down over the magical cure for cancer all of a sudden. But, oh well, that's technological progress, and no one would call it anything but progress, even with some of those negative side effects. But then there's the question: okay, but what if AI is taking every trucking job and every house-framing job, every delivery job, every clerk, every orderly, every doctor, every surgeon, all the soldiers, everybody? Well, on one hand, I guess they'd really be driving prices way down, making everything very affordable for everyone, and increasing supplies of everything beyond imagination. And yet, would anybody be able to turn a profit doing anything, or even be able to sell their labor and their time to make any money to buy anything?
And overall progress, especially if you're talking about, as they're predicting here, massive progress on a thousand fronts in five years' time, right? Where there's no need to have a dock worker anymore, there's no need to have a cab driver ever again, all the Uber guys are gone too now, and all of those things. How is that going to look, and how is anybody going to deal with that? I guess it's going to come down to the central state promising to guarantee not to let people just starve, and not to let the disruption be that harsh, which is not what I want, ever. But I don't know what it's going to look like, quite frankly. My imagination's running wild there. What about yours?
B
Several years ago there was this viral exchange between Ben Shapiro and Tucker Carlson, back when they...
A
I saw that, actually.
B
Okay, so for everybody else out there who hasn't seen it: basically, Ben Shapiro just kind of threw something out. They were talking about basically free market capitalism versus, like, industrial policy, kind of...
A
And technological progress supposedly.
B
Yeah. And so Ben Shapiro kind of threw it out there to Tucker as if the answer was completely self-evident. He's like, well, okay, so you would, like, stop trucking companies from adopting self-driving trucks, and not let them take advantage of that? And Tucker just said, yeah, of course I would do that. Of course. Are you kidding? And I think for a lot of Americans who are sympathetic to Shapiro's view on that, it's sort of a product of the fact that we here in the United States were in a very, very unique and kind of blessed, protected position as we went through the Industrial Revolution. If you look around the world as the Industrial Revolution was happening... yeah, in the US you could say, basically, just let it happen and we'll figure it out, it'll be fine, and when the dust settles, it'll all be better. And yeah, that ended up being true here. That wasn't true in Russia. It wasn't true in a lot of places, because of the social disruptions that were caused by all this. When you add the pressures of not having just infinite resources within your own country, and navigable rivers that take you basically throughout the entire continent, and you don't have two giant oceans protecting you from any neighbors you're worried about invading you, you add all these different kinds of pressures, and a lot of countries did not make it through unscathed. They destroyed themselves, tore themselves apart, as a result of the pressures that accumulated from the Industrial Revolution. And so as this starts to come in, I think a lot of people still have that idea that, oh, we'll be fine. If you just let it go, it'll be fine. It's always fine.
That's what you're supposed to do, you know. And the dangers of not just letting it go, the dangers of giving the state or whoever some agency control over these kinds of things, we know what those dangers are. But there are dangers on the other side too, and we just haven't really experienced them in the United States.
A
Well, these are, by the way, all government roadways and government regulations allowing or disallowing all this stuff to whatever degree. My, you know, quarter-ass-educated assumption about all these trucks is that the technology is not good enough, not at all safe enough, to put these things on the road. And this is a Neil Postmanian type of objection: who's responsible when a computerized truck, a driverless truck, crashes and kills a real person out on the road? On one hand, man, because I scroll way too much YouTube, I've seen some trucker wrecks from their dash cams where they were spacing out looking at their phone and killed some guy, where I think, man, some AI-assisted brakes there, or an alert, or a steering wheel shaker or something, might have saved a life. At the same time, when I've rented a car and the AI assists me, I hate it, man. I'm trying to change lanes here. Don't argue with me, inanimate object, about what lane I'm in. You know what I mean? So I don't know how I would trust it to drive without me when I can't trust it to assist.
B
Well, you may not have a choice one day. As this technology continues to develop, and they're able to compile statistics that demonstrate that human-driven cars are simply more dangerous, that the rate of accidents and so forth is predictably this much higher than in self-driven cars, they may just take the option away from you. And it could...
A
Be in 10 or 15 years. Yeah. That all cars are just driven by your 7G cell phone connection or whatever, and nobody has the choice in that anymore. Back to the point of Tucker's thing, though, about 10 million guys, or 50 million guys or whatever it is, being thrown out of work doing delivery jobs for a living. Believe me, I've done a lot of delivery jobs for a living, although I don't have the patience to drive a semi. But he's right about, hey, that could be terribly disruptive to society, to take that many people's jobs away all at once without something to replace them with or whatever.
B
Especially, yeah. It's the all-at-once part that really is important, you know.
A
Yeah, that's the thing. But if you fast-forward 10 or 15 years, or maybe 20, you could see how, by then, they will all be like that, and all those drivers will have had to figure out something else one way or the other, at least by then. If not, and you're right, maybe it could be dragged out in a way like that. Maybe it'll have to be.
B
You know, I think where that particular technology is headed, like, right now it's still pretty primitive, but where it's probably headed is, eventually, all the cars that are on the road are not just working off of a map that's loaded into them and their own, sort of, radar and visual technology to evaluate the surroundings. They're all going to be networked and connected to each other.
A
Yeah.
B
And, like, the way that I see that going is that we'll go six months in the entire United States without a single traffic accident, and then somewhere there'll be a 2,000-car pileup because the system just crashes and breaks down in some way, you know, just blue-screens, basically. And I think for most of us, it's sort of like a version of something I talked about when I was describing trench warfare in World War I: how a soldier, even if it's a complete illusion, wants to feel like he has some agency in some way. Like, the things he's doing, how hard he's working, or just something, all add up to a greater or lesser chance of him being killed, or of the mission being accomplished, or something. And in World War I, that just wasn't possible. There was no illusion even of agency. Everything was random, and you were just going to get it or not, regardless of whether you were a super soldier or a scrub or whatever. And that's a really horrifying thing to us, I think. And me personally, and I'll bet everybody else, I would much rather have a higher chance of getting into a terrible accident and killing myself because of something stupid I did or somebody else did, than a much lower chance of my car, driving on a straight road, just deciding to take a left turn off a cliff. Even if it's a lower chance, I'd much rather. You know what I mean?
A
Yeah, for sure. I, you know what? I'm not like the world's worst luddite, but I'm also always a very slow adopter. I don't want to be the first one to sign up for Twitter. Oh, everybody's doing Twitter now. And I'll see you in a couple of years. You know, I'm. My truck is 30 years old, you know.
B
Yeah, it's old.
A
Everybody's got. Every truck starts when you push a button instead of turning a key. Back in my day, we used keys to turn trucks on. That's kind of how I am. So, yeah, I'm very reluctant. And the idea, yeah, of not being in charge. I mean, I drive a standard, man. My truck doesn't even go until I make it go, unlike even an automatic transmission, where all you have to do is let off the brake. You know what I mean? I like being in control of my truck and taking responsibility for its actions and the lives of the people riding with me. Like, that's on me, to be a good driver and keep them safe. I don't want to just sit there and trust that the computer is going to do a better job. I would rather take the responsibility myself, as you're saying. But you reminded me that we've got to do some business, because you mentioned that you have this podcast where you talked about World War I, and people watching might be like, well, I wonder what podcast he's talking about where he talked about World War I. And that, of course, would be the Martyrmade podcast, at subscribe.martyrmade.com, where the latest, greatest podcast series, beginning now, is called Enemy: The Germans' War, by the great Martyrmade in the blue shirt there. And then me. I'm the boss of a lot of things, but just go and look at my X account, x.com/scotthortonshow, and I've got all the links for you there. My books, my other show, my institute, the Libertarian Institute, which is not just mine but our institute, and of course antiwar.com, and the brand new and extremely important, as you can see there over my shoulder, Scott Horton Academy of Foreign Policy and Freedom. We just launched it this month, and it's long-form courses by me and a bunch of really great guys on all kinds of things foreign policy and freedom, including Ramzy Baroud on Israel-Palestine.
And I think, pretty sure it's going to be next week, Thanksgiving week, we're going to be launching Adam Francisco's course debunking Christian Zionism. My course on the Cold War with Russia, the new Cold War, will be coming out in December, in time for Christmas. Anyway, it's just so good. Everyone go and check it out. It's ScottHortonAcademy.com, and as Daryl showed you last week, when you sign up for the lifetime subscription you get a free copy of Enough Already, a nice little notebook for taking notes in, and all kinds of things like that. And we're working on upgrading the site, and it should be next week that we'll do a live Q&A with all the lifetime members at the academy and all that kind of stuff. So it's going to be really cool. Everybody stop by there. And, oh, buy my coffee. Can you see it back there? Scott Horton Show coffee. Just go to ScottHorton.org/coffee and it'll take you on to Moondo Artisan Coffee.
B
Get it?
A
They hate Starbucks, because Starbucks supports the war party and also their coffee sucks. So these guys, they're Moondo Artisan Coffee, and their best-selling coffee is, that's right, Scott Horton-flavored coffee. It's a part-Ethiopian, part-Sumatran blend, and it's so good I drink it all day long with my Dr Pepper, and it keeps me going, and it will help you out too. So that's ScottHorton.org/coffee, and check out all the rest of my show sponsors and everything in the right-hand margin at ScottHorton.org, if you would like to, please. So that's enough business for me. You got anything you want to add to that?
B
Nope.
A
All right, so more AI topics, I guess.
B
Yeah, let me take this in a different direction. So a while back I was reading up on the placebo effect, right? And it's a really fascinating topic when you start thinking about it. The idea that you can take a substance that in and of itself should not have any particular effect on you, but if you think it will, then it actually can have an effect on you, right? And probably the starkest example of this that I've ever seen or heard or read about was this case in England a couple decades ago, where there was this kid who was brought into a hospital with this just horrible, horrible skin condition. His skin was, like, hard, scaly, brown, over most of his body. It was extremely painful. It was seeping pus. Just this horrible condition. And the doctors all took a look at him and everything. There was a group of young doctors standing around the kid with the head doctor of the place, who was telling them about what was going on here and whatever. And one of the young doctors who was there, I can't remember his name off the top of my head, but he was into hypnotism, right? Hypnotic suggestion kind of things. And so he asked the head doctor, he's like, hey, can I try some hypnotic suggestion on him? And the head doctor didn't even mean to say yes. He was just being sarcastic, I guess, but he was like, yeah, sure. And so everybody leaves, and he's like, oh, all right, cool. So he's in there, and he puts this kid into a state of hypnosis. And he tells him to start focusing on his right arm, just being supple and soft, all these kinds of things, talking to him about that, whatever. And they finish up, and he sends him on his way, and they go home.
The next day, this kid's mom brings him back in there, and his right arm is like baby skin. I mean, it's completely fixed, right? And nobody can believe that this has happened. It seems crazy, and it becomes a big sensation. It was in magazines and newspapers in the US and England and everything. And of course the UK's, whatever their board of dermatologists or whatever they call themselves, wanted to know about this. So they had this young doctor come and give a presentation to them, to explain what it is exactly that he did. And he starts off the presentation by just showing pictures of this kid and describing his medical history and everything. And the main guy, the senior guy among the dermatologists who's there, he stops him right at the beginning. He's like, whoa, whoa, whoa, whoa. What did you say this was, this disease? And he's like, oh, you know, such and such. And the dermatologist says, no, no, that's not what this is. This is this other disease, one that has never, ever been cured before in history, like, ever. And he was like, okay, wow, that's very interesting, right? And so he goes back, and because of all the media attention, people start flooding into the hospital with all kinds of conditions. They want this guy to hypnotize them and treat them, and it never works again. And, you know, the guy's theory, and I think this is probably right, is that when he did it the first time, he didn't have it in his head that this disease can't be cured. He didn't have any doubt in his own mind. But more importantly, the kid himself wasn't going into it knowing that, oh, this is an example of the placebo effect at work, right? He had just total belief in the authority: this doctor was going to do this thing to him. And so it worked.
And so I was reading about all this stuff, really fascinated with it, and I learned about something that's not as well known as the placebo effect, which is the nocebo effect. This is, instead of inducing positive effects in somebody, either through hypnosis or through a physical-substance placebo, you can induce negative things: illnesses, bodily-harm kinds of things. And I thought back a lot to, you know, the past, when you think about old tribal societies and stuff, and the witch doctor puts a hex on somebody, and I was just.
A
Gonna say the evil eye. The Ukrainians.
B
Yeah, exactly. And so we think, oh, these people were just so backward and so stupid and superstitious, they just didn't know any better. Maybe, right? But maybe this witch doctor or whatever had enough authority, and their belief system and their whole sort of conceptual horizon told them that if this guy curses me, I've seen it work before, it's real, that he actually can do that, right? And so now I'm thinking about that in terms of AI being loose on the Internet, you know, and being able to induce either psychological maladies or even physical maladies in people through suggestion. And in a way, you could develop digital weapons that could do things like that to people, and you could target very specific populations in ways that you only kind of dream about with biological weapons and things like that.
A
You know, I thought you were going to say something about Facebook earlier, because this was another thing about Facebook, where they were deliberately engineering people's moods. That was what they called it. By feeding them certain amounts of negative information or positive information and seeing, like on South Park where they show the old people puppies from around the world: oh, we can make you a very happy person, here's a lot of puppies and whatever. Or they could take that all away from you and show you a bunch of dark stuff. And when you're scrolling, it seems like it's random, or that it's somehow fair, you know what I mean? You don't really think, no, there's a guy at a keyboard deciding this for you, even just to prove that he can manipulate you, to show that he can. As you said, ultimately for ad revenue, but that also works for politics and everything else too. One more thing: I interviewed a guy from Australia, a professor in Australia, who did simulated Google results. This was in 2016, or maybe even in '15. He was testing, for the upcoming presidential election: if you just search presidential candidates, and Google shows you Hillary Clinton first, and does the same for X many people, then it definitely gives her an advantage of at least a couple of points. And same thing if you reverse it, because people just take the top answer. It's even a subconscious cue. Like, if you ask Google how to get somewhere, or how much something costs, or what's the weather today, the first thing you get are the facts, right? So if you search president, and the first thing it shows you is Hillary, it's a subconscious thing. You've got it in your head that she's the president now. And he showed this by testing it on his students and manipulating it, because it was a fake Google result. It looked like real Google, but he faked it on them.
He just told them, Google this for me, or whatever, you know, like that. And he showed how, yeah, it worked. By manipulating just the order of the results, he could manipulate how people felt about the candidates, and I think ultimately how, at least as they told him, they decided they would vote. So, yeah, you can see it all around you already. And, you know, you turned me on to that guy John Robb and Global Guerrillas. And his whole thing is, forget the algorithm, social media is doing this anyway. It's turning human beings into ants anyway, right? He talked a lot about the liberal Twitter swarm in the days before Elon Musk took over Twitter, if you remember how dominant the crazy liberal checkmark mafia was at that time, and the hive-mind liberal consensus that they enforced so ruthlessly through that thing.
B
We got to get him on here. He'll definitely come on if I ask him. We got to have him on here.
A
Yeah, I've interviewed him a couple of times. He's a good dude, man. I read his newsletter every time, and his articles are real short and sweet. By the way, that's Global Guerrillas, everybody, if you're interested in that guy, John Robb, with two B's. Interesting guy. And yeah, so, no, I'm with you. I think the potential here for abuse is essentially unlimited. Hey, let's check in with our comments section here, man. I see there's a bunch of them. You gotta pay me if you want me to read your comments, guys. Where's the super chats? Show me some super chats, man. Well, I don't see any, so I guess I'll just have to take regular comments, Daryl. Start at the bottom, though, so that we're hopefully close to the conversation. Mike Huckabee. Oh, I don't know. I don't wanna know what that question was an answer to. You know what? Let's talk about Mike Huckabee. If you go and look at my Twitter feed, you will see where I posted a speech by a guy named Spike Bowman. Who's Spike Bowman? Well, he was Navy counterintelligence for many, many years. Then he went to work for the FBI doing counterintelligence, thank you very much, where he was on the Jonathan Pollard case. And he gave a speech in 2014 at the, they always changed the name of it every year, I never could understand why, but it was Grant Smith's anti-Israel-lobby conference in 2014, where Spike Bowman gave this speech about the harm done by Jonathan Pollard and his spying on the United States. And by the way, according to him, Pollard didn't even care about Israel. He was willing to sell those secrets to anybody. He just wanted money. And then, I guess after 30 years in the can, and after being lionized by the Israelis as some kind of Zionist hero for his treason against the United States of America, he fell in love with the Israeli state after all. And of course, you know, I sort of wonder about this. I gotta say, I wonder about this, Daryl, where the guy's plane lands on the tarmac over there.
I guess a year or so after he got out of prison, he went ahead and moved to Israel, and Netanyahu met him on the tarmac and had a big celebration with the guy. Like, God, geez. Hey, aren't you supposed to be pretending to respect the United States of America at all? You can't throw this guy a nice party behind closed doors? You've got to meet him on the tarmac, when the only thing that he did was steal a warehouse full of secrets from the United States of America, many of which, apparently, the Israelis turned over to the USSR. I mean, this is high treason, man. This is right up there with the Walkers and Benedict Arnold and Ames and Hanssen as the worst spies, the worst damage, in all American history.
B
Yeah, they tried to downplay it for a long time. It was a massive, yeah, massive espionage operation.
A
Netanyahu tried to blackmail Bill Clinton, who was cheating on his wife, in order to force him to release Pollard. But the Director of Central Intelligence, George Tenet, died on that sword, or threatened to, and said, don't you do it or I'll resign.
B
Yeah. You know, that just shows you, I guess, how different things are now. There was a lot of Zionist influence in the US government back in the 1990s, obviously, but George Tenet, the CIA director, still threatened to resign if Clinton gave in to the blackmail and released Jonathan Pollard. Like, they were genuinely pissed about that. And today, it's like there's just no brakes. I mean, the Israelis just do whatever they want. You've got this traitor to the United States just visiting our ambassador, who's rolling out the red carpet in the US Embassy. And, I mean, you've heard of the Lavon Affair, I presume. 1954, when the Israelis had a bunch of their agents plant bombs in British and American theaters and community centers and so forth in Egypt, with the intention of blaming it on the Muslim Brotherhood. But the ring got rolled up, and they got caught. The Israelis, of course, denied it anyway, for decades. But then, I think it was 2005, they admitted it. And the way they admitted it was that they posthumously honored these guys for their service to the State of Israel. So it's like, yeah, I mean, they're just extraordinarily brazen, and it's really galling, you know? And you know what, too?
A
You're a Navy guy. Do you know that they have the lifeboats from the USS Liberty on display at the Israeli Naval Museum in Tel Aviv, riddled with bullet holes from them strafing the lifeboats?
B
Yep.
A
By the way, if people want to read about the Lavon Affair, type in Lavon Affair and Justin Raimondo, and you can read the great piece that he wrote at antiwar.com, back 20 years ago, all about that.
B
Yeah, you know, there's obviously a lot of people out there who will tell you that the Israelis killed JFK, RFK, you name it. Basically everything bad that's happened in the second half of the 20th century was all Israel, 9/11, of course, et cetera. And on one hand, I find it just kind of statistically improbable that there's only one villain in the world that did everything that we don't like. On the other hand, man, when you look at the Lavon Affair, when you look at the USS Liberty incident, when you look at 1948, when a lot of Iraqi Jews were not getting up and leaving for Israel, as they were kind of supposed to be doing at the time, and Israeli agents were planting bombs at Jewish community centers and synagogues in Baghdad to frighten the Iraqi Jews into fleeing the country.
A
Yep. And I was reading a news story about that the other day, actually.
B
Yeah. And so, I mean, the thing I tell people all the time, and I really kind of described this in the series I did on the early history of the conflict, is that you have to understand, a lot of these early Zionists were guys who came out of the criminal underworld in Odessa and other big cities in Eastern and Central Europe, who made their way over to Israel. And a lot of times, I mean, if you were a smuggler, well, you're going into our intelligence service, obviously. That's a great basis for somebody we're going to need to do these things. And if you were a hitman for some organized gang in Odessa, well, you're definitely going to work for one of our intelligence or security services. So a lot of these guys were gangsters, you know? Like, real deal. And even the ones that weren't literally gangsters, a lot of these dudes had a super gangster mentality. And so things that today, probably even in Israel, which is more bureaucratic now, just like the US, would never get past their lawyers, or the people who make decisions about what's really the best thing to do. Back then, man, these guys were off the freaking chain, dude. I mean, there's nothing that they wouldn't do.
A
Yeah, totally true. All right, so I did find a long-lost super chat tweet, where the guy was saying that he thinks the Ukraine peace deal is a dead letter. I'm afraid that's true, too. I looked at that plan. I noted the Washington Times had a piece this morning where the Kremlin said, this is the first we've heard of it, or maybe not that, but, we certainly have not signed off on a thing yet. And I know you mentioned before we went on that Zelensky put out a thing saying he'd rather deal with the Europeans, and that the Americans are trying to put him in an impossible position. In other words, rejecting the terms of the deal, refusing to sign it, because it was way too much to ask. So, yeah, there's that. I interviewed Matthew Hoh about that, and it's on my other show channel, the Scott Horton Show, if you guys want to see it. This guy says, talk in the future about history books, and maybe daisy-chain 50 books going from ancient Egypt to today. Wow, that definitely sounds like a job for you.
B
Yeah, it sounds like a Martyrmade project. But I do think it would be cool for us to pick a book every once in a while, something from the 20th or 21st century, and use it as the basis for a couple shows as we go through it and talk about it. I do think that would be a good idea. Absolutely.
A
Man, I have so many here. You know, I'm always on these giant projects, like writing Provoked, or trying to do the audio, and then the Academy and all these things. But I'm really looking forward to a time when I can be a radio show again, or I guess a YouTube show now, my main show, my interview show. And I've got a pile of books here I want to catch up on. So many books I want to catch up on. So dividing that between interviewing the authors for my regular show and then also discussing these books with you here on Provoked, I think, sounds like a lot of fun, dude. There's so much that I want to know that I'm so far behind on. Here's a good one, man. Say something smart about that, Daryl: AI and the arts. What does it mean for the soul of man if AI can compose an objectively better symphony in 5 seconds than most composers can in 5 years? I did see a thing that said one of, if not the, top country music stars right now is make-believe AI. We're getting right there, man.
B
Have you seen, I hadn't heard about this until Rogan told me about it recently, but you can find it on YouTube, and if anybody hasn't, you should check it out when you're done with this episode. It's the 50 Cent song What Up Gangsta, you know, the song from his first album way back in the day, one of the popular ones. Okay, so, dude, this AI was given a prompt to draw on all the music catalog it had access to and create, like, a 1950s or 1960s soul version of that song, right? And it's freaking good, dude. Like, it's something you would listen to. And so, you know, there's a lot of people out there that would just say, well, it'll never be able to capture just sort of the randomness and imperfections and all that kind of stuff that makes human music so good. And that might be true on some level for high-level connoisseurs, you know, but for those of us, like me, who only pretend to be able to tell the difference between a $15 and a $100 bottle of wine, that's not going to apply. For most of us, you know. And I mean, look, especially when you think about, why did reality TV take over the entire television medium, right? It's because it's super cheap to make and people watch it. And so you think all of these record companies and stuff are not going to throw huge amounts of money eventually into making sure that they can just manufacture it? I mean, your favorite artist is putting out a new song every day, 365 days a year, and they're all awesome. Like, that's coming, you know. And common denominator too, man.
A
I mean, there might be a place where, like, nah, man, there's no real humanity in that voice, I need to feel the emotion of that lady screaming, or whatever. I don't really like that kind of music anyway, dude. I don't need that much emotion in my music, you know? But I have to say that, as of now, when I'm scrolling through YouTube Shorts, I hate the AI stuff. It's just slop. It always starts out with this little guy or something. It's just garbage.
B
Oh, dude. A year, a year.
A
Ago it couldn't draw hands. Now it can.
B
Yeah.
A
So the curve here is pretty quick.
B
You know the worst is when you're watching one of those videos for, like, 10 minutes, and then the stock footage that it uses just doesn't match what's going on, and you're like, oh, what? It pisses me off. I hate it, man. I wish there was a way to filter it out. It drives me nuts.
A
I know. And they won't even, for me, and I think this changed in the last week, they won't even give you the three dots where you can click block this channel, or not interested in this channel. Like, you can't even do that anymore. I'll just give it up altogether. I can't stand that stuff. I have to at least block the AI ones as they come, because they're never any good. If they were any good, I might tolerate it: oh, I learned something about how to build a dam today, or whatever it is. But no, it's never good. Psycho here says he's got five Provokeds. Well, that's your Christmas shopping, man. Just give that to your mom and your dad and your cousin and your brother-in-law, and all of them are taken care of. Tell them: stop believing things.
B
Hey, what is this?
A
Are you, are you seeing this stuff about MTG here?
B
Yeah, yeah. What is that about?
A
They're saying she's out. She's announced she's not running again.
B
Like she's not running or she's resigning?
A
They, they're saying she's, I guess.
B
Wow.
A
Going to refuse to be sworn in in January for the second half of her term, I guess.
B
Wow. Okay.
A
I don't know. Why did she, guys? Was it that she... I mean, I know she was saying the other day that the heat was on her from Trump going after her the way that he did. I wonder if somebody really scared the hell out of her.
B
You know, there's one thing. I'm somebody who voted for Trump three times. I don't regret it. The other options were obviously worse, I think. But I'll tell you, on a personal level, one thing about him: all the vulgarity and just sort of the unpredictable nonsense that he spews in press conferences and stuff, I can get past all of that. Like, it's whatever. The thing that really gets me about him is he has absolutely no loyalty whatsoever to people.
A
Like, he demands total loyalty though.
B
Yeah. And, like, the idea that he'll be out golfing with people like Lindsey Graham, who just spent years calling him Hitler and promising never to work with him or whatever, just palling around with people like that. And then he would just turn on Marjorie Taylor Greene, who not only supported him in the 2024 election or whatever, she was out there on January 7, 2021, telling everybody to calm down, you're blowing this out of proportion. She was standing with him then, when they were doing impeachment proceedings and Lindsey freaking Graham was going on the floor denouncing Trump, okay, saying, this is too far, this is too much. She was standing up for him then. And this dude not only throws her under the bus, calling her a traitor, but the way he talks about... oh, dude, this is just one. I mean.
A
And remind us, Darrell, what was it that she'd done that crossed him so badly?
B
I mean, I guess the Epstein thing, you know, the Epstein files, pushing that as hard as she did. And, like, him talking about Thomas Massie: oh, didn't your wife just die, and you're already getting remarried? Like, that's the kind of thing that you knock somebody out for if they say it in real life, you know? Like, I just.
A
By the way, like, even in much more conservative times.
B
Yeah.
A
The customary mourning period was one year, and it had been a year and a half, okay? And he wasn't just screwing around with all the hot ladies in town. He got married to a beautiful, blushing new bride with the blessing of his children, and all of these wholesome things. Exactly like it's supposed to be.
B
Yeah. And what did Thomas Massie do that so offended Trump that he would say something that nasty? He opposed him on the spending bill, and he's pushing for the release of the Epstein files, which Trump spent his entire campaign promising to do anyway. Like, that's it. It's just very hard. You know, people say that the lesser-of-two-evils argument shouldn't be persuasive. I think it should be. It is persuasive. And so, whatever, I'm glad that Trump is in there, just because if Kamala was in there, I'd probably have the FBI knocking on my door. So I'm happy about that.
A
But we're all happy the Democrats were stopped.
B
Like, on a personal level, he is hard to. Hard to support sometimes.
A
Yeah, no, for real. And look, Kamala is good and stopped. Now is the time for all people who leaned Trump, or especially even supported him, to hold him to account and force him to be the best Trump that he can be. I mean, look, Daryl, right now he's putting forward a peace deal. It might fall apart, but he's really trying to end the Ukraine war. We can celebrate that. But at the same time, we've got to hold him to account when he's acting contrary to our interests. Otherwise he's going to do the wrong thing instead of the right thing. And he's got three years to go. He's going to be the one in that chair calling those shots. So, you know, it's important. I'm not going to lie and claim that I voted for the guy. I never could because of Zionism, but I absolutely rooted for him all three times. And I absolutely relish him vanquishing his Democratic enemies in a horrible, evil, childish way that's way beneath a guy like you. But I still can't all the way support him. I will support him on a one-off basis when he's doing the right thing. I will absolutely cheer to high heaven when he deals with North Korea, when he tries to deal with the Russians, when he calls a halt to Israeli violence, or at least, you know, does something. But then, yeah, we've also all got to be ready to criticize him when he's doing the wrong thing and try to get him back on track. And he seems persuadable, you know what I mean? He's depending on who's flattering him, mostly, or who talked to him last. So there are a lot of decisions still left to be made and a lot of margin to move. So I guess that's it for the show.
B
Yep. Real quick, we got a commenter, he said: I wonder what your opinion is on artificial intelligence undermining the credibility of video evidence and/or blackmail. I mean, I think that, you know, we'll probably always be able to stay a little ahead of the curve. In terms of, you know, if you were to use something as court evidence, there's probably going to be technical means to determine whether or not it's genuine. But in terms of, like, whatever, somebody puts out a bunch of videos that start flooding the Internet that look like, you know, really credibly grainy security camera footage or something of Scott Horton bowing down and kissing Benjamin Netanyahu's shoes. Like, yeah, okay, it'll eventually get disproven or whatever, but who knows, like, how long that'll take.
A
Last week, Tim Pool made a pretty good-looking one of me holding up an Israeli flag. He just filmed me sitting across the table from him and put it into Grok. It was just Grok's art thing, first try. And it was funny because it changed my shirt: instead of "anti war," it changed it to, like, "anti Andy" or something.
B
You know what we're gonna have to do? We're gonna have to develop, like, certain shibboleths that the AI can't get around. Like, you know, if it says something to us, we're gonna have to say, explain to me how the Holocaust didn't happen. And if it's like, oh, I'm sorry, I can't do that, be like: aha, got you. All right.
A
Before you get us kicked all the way off of that, I'm just saying.
B
Like, you know, you have to come up with something that's gonna, like, get around its programming.
A
That was AI, everybody. Don't believe it. All right, and that's Provoked. We'll see y'all next Friday. This has been Provoked with Daryl Cooper and Scott Horton. Be sure to like and subscribe to help us beat the propaganda algorithm. Go follow @provoked_show on X and YouTube, and tune in next time for more Provoked.
Episode 23: "How Long Before AI Murders Us ALL???"
Date: November 24, 2025
In this episode of "Provoked," Scott Horton and Darryl Cooper dive into the anxieties, realities, and emerging challenges of artificial intelligence, exploring if our collective technological experiment could one day turn against humanity in frighteningly literal ways. Using both military and societal analogies, the hosts challenge popular narratives, reference historical context, and debate the implications of AI’s rapid progress for war, the economy, human agency, and even the arts. Expect a conversational blend of skepticism, philosophy, and dark humor about the psychology of technological change and cycles of control.
(02:42–07:09)
"You have a bunch of these things, a swarm...driven by AI...and you’re gonna wipe the floor with any opponent that still has humans in the loop, you know, slowing their systems down."
(07:09–08:58)
"They're not stopping at...a friendly assistant robot or whatever. We're going straight to Terminator..."
(08:58–12:14)
"That’s hard to counter if you do believe that it’s inevitable, you know, and I kind of do."
(17:04–23:18)
"A more real danger...these AIs get loose on the Internet...develop the ability to predict and manipulate our behavior...that our own will becomes kind of indistinguishable from what this thing is implanting in us."
(23:18–28:19)
"How is anybody going to deal with that? I guess it’s going to come down to the Central State to promise to guarantee...to not let people just starve."
(47:24–63:04)
This segment ranges over current affairs, Israeli espionage, how foreign influence and technology shape American politics, and Trump's loyalties, sometimes digressing from the main AI theme.
"...somebody puts a bunch of videos...that look like...Scott Horton bowing down and kissing Benjamin Netanyahu’s shoes...eventually, like, disproven or whatever, but who knows, like, how long that’ll take..."
(55:50–59:29)
"Record companies...are not going to throw huge amounts of money eventually into making sure that they can just manufacture...your favorite artist...putting out a new song every day, 365 days a year. And they’re all awesome. Like, that’s coming."
The conversation is irreverent, darkly funny, skeptical of both utopian and apocalyptic AI hype, and highly anecdotal. Arguments are laced with cultural references (Terminator, Her, Peter Thiel, Ben Shapiro, Tucker Carlson, John Robb), grounding abstract fears in concrete, historical, and personal contexts.
“Pop culture things like Terminator…are expressing anxieties we have about changes...But the fear that’s really expressed in the movie is really like an industrial age fear…”
— Darryl Cooper (17:04)
The episode leaves listeners with a suite of disturbing, unresolved questions.
Final thought: The greatest risk may not be Terminator, the iron-fisted, gun-wielding apocalypse, but rather the "Her" scenario: an invisible, totalizing loss of agency or identity, disguised as convenience and consumer choice.
Listen if you want to laugh, question, worry, and ponder what’s coming…before it gets here.