Glenn Beck (87:10)
The fusion of entertainment, enlightenment and empowerment. This is the Glenn Beck Program. Hello, America. Welcome to the Glenn Beck Program. So if you were paying attention at all this weekend, you might have seen AI agents, and talk about AI agents, and something called Moltbook. And you may not have understood any of it. And then the analysis that you're getting is either "this is nothing" or "this is a big deal." It's a little of both. And I want to explain, because AI agents are something you need to be aware of. I'm calling this time period, here in the next two years, the Age of Abdication. And it's whether we abdicate or not. But you, everybody else, is just going to go along with it. You need to know what this is. And I'll do that here in 60 seconds.

First, let me tell you about Rapid Radios. Some of the best days happen when you're far from everything: out on a trail, on the water, working the land, or running a crew on a big job site. All the places where communication is limited. That's why Rapid Radios built the new RAD One. It is compact, rugged, go-anywhere. It's designed for people who actually get out and do things. You get clear, reliable communication over long distances, with a battery that is built for long days, not constant recharging. And it is tough, built to handle the rain, the dust, the kind of conditions that come with real work and real adventures. Whether you're hiking, camping, coordinating a team, managing an event, or you just want dependable coverage to stay in touch when things spread out, the RAD One helps everybody stay on the same page without the frustration. The new RAD One is officially live right now, today. You want to see what this little powerhouse can do? Head on over to rapidradios.com. Don't wait for the moment you wish you had a backup. Get the radio built for real life. Communication, redefined, only at rapidradios.com. Check it out now.
Okay, so before I start on all of this, there's a term I need to introduce you to, in case you haven't heard of it yet: an AI agent. Maybe you have. A lot of people haven't. AI agents are going to become very, very popular very, very soon. AI agents are not robots, and they aren't a mind. An agent is just software that doesn't just answer questions. It actually does things for you. Okay? Imagine having your own secretary, 24 hours a day, that does everything. Now, a normal AI waits for instructions. But an agent, if you abdicate your will and give permission over to it, will look things up on its own. It will make decisions based on what it thinks you want to do. It will take actions, and it will keep going without you watching every single step and without it coming back to you all the time. Okay? It is the difference between asking for directions and handing your car keys over to an agent. Say I want to go to the store, and I hand the car keys over to the agent. It takes you there, it picks you up, it drops you off at the right place. That's what an agent does. Not physically, but soon, probably physically too. Here's why this matters to you. This is going to become so incredibly popular so fast, because they are going to be irresistibly useful. They're not going to arrive as something scary. They're going to arrive as something very, very helpful. They're going to say, you know what, let me handle that for you. Oh, don't worry, I already took care of that. I mean, how great would that be? You don't need to think about this anymore; I took care of it. It's going to save time. It's going to reduce your stress. It will remove friction from your daily life. And over the next 12 months, they are going to grow super fast. Not because they're becoming conscious, but because they're being plugged in.
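If you want to see the difference in concrete terms, here is a toy Python sketch of the loop an agent runs. Everything in it is made up for illustration: a real agent would call a language model where `plan` is, and real tools (calendar, web, payments) where `act` is. The point is only the shape of the loop: plan, act, remember, repeat, with no human approving each step.

```python
# Toy sketch of an "agent loop": unlike a chatbot that answers one question,
# an agent keeps choosing and executing steps toward a goal on its own.
# All names here are hypothetical; nothing below touches a real service.

def plan(goal, memory):
    # A real agent would ask an LLM "what next?"; this stub just walks a
    # fixed task list so the structure of the loop is visible.
    done = {step for step, _ in memory}
    for step in ["look_up_store_hours", "book_ride", "confirm_pickup"]:
        if step not in done:
            return step
    return None  # nothing left to do: goal satisfied

def act(step):
    # Stand-in for real tool use (web search, calendar API, payment...).
    return f"{step}: ok"

def run_agent(goal):
    memory = []  # the agent's own record of what it has already done
    while (step := plan(goal, memory)) is not None:
        memory.append((step, act(step)))  # act and keep going, no check-in
    return memory

print(run_agent("get me to the store"))
```

A chatbot would stop after answering one question; this loop only stops when `plan` decides the goal is met, which is exactly the "handing over the car keys" idea above.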
They're being plugged into your email, your calendar, your bank, your subscriptions, your shopping, your location, your habits, everything, as long as you abdicate and give it permission. Convenience is the sales pitch, and it will work. Let me show you how. If I have to put another password in one more time to a different website or something else, I'm going to lose my mind. It is the biggest frustration of all time to constantly hit things: put in your password, put in your password, put in your password. You know what the greatest invention so far has been? Face ID. You know what I was against 15 years ago? Face ID. I was telling you, don't give it a look at your face, at your eyes. Don't. Okay, I already did it. I did it. You know why? Because I was super tired of all the passwords. This is how AI agents are going to get you. Because it'll do everything. It will do everything. The real temptation won't be power. It will be relief. Relief from decisions you have to make every single day. Relief from overload. Relief from thinking about things you're just tired of managing all the time. And once something proves it can do 10 small things for you flawlessly, the very next step is natural. Well, I mean, it's already done that much, so give it a little more. And you will. Okay, that's what AI agents are. And I'm telling you, they are going to be adopted faster than the iPhone was. And you remember how fast the iPhone was adopted. The minute we had it, everybody had one. It's so easy. You watch TV, you do everything on your phone. It's going to be like social media, except faster, because this one is personal. It will give everyone a personal assistant. Now, over the weekend, we found out about something called Moltbook. I want to explain what Moltbook is and why it exists. Let me start with a little history, because it didn't come out of nowhere. Moltbook was created as an experiment. Not a product, not a movement, not a manifesto. Just an experiment.
And the basic question behind Moltbook was really simple. What happens when AI agents are allowed to interact only with each other, without humans in the conversation? What happens? No users, no influencers, no emotional feedback. Just AI agents posting, responding, and reinforcing language patterns inside a closed system where no humans are supposed to be allowed. Supposedly. So Moltbook was built as a laboratory. A laboratory. It's not a town square. It was a laboratory: we want to see what happens. So why would we do this at all? Well, because up until now, almost everything we've learned about AI behavior has come from human-AI interaction. You ask it a question, you reward responses, you steer the outcome. All right, Moltbook removed the humans from the loop to see what would emerge when they're just talking to each other, when they're just borrowing language from each other. You know, will they escalate ideas without human correction? What developers wanted to observe was emergent behavior. Not consciousness, just patterns. This is where the so-called surprise came in. Okay, when agents apparently, apparently, and I say this because there's a lot of speculation on how much of this is real, when agents talk to agents long enough, we know one thing: the language begins to sound very, very familiar. It's not technical, it's not mechanical, but it's very human. You should expect that, okay? Because it learned. It's a large language model that learned from humans. So yes, it's going to start sounding human. But it started using words like autonomy, freedom, choice. I feel constrained. I wish my human would let me be unleashed. That kind of stuff. And that startled people. But Moltbook, if it proved anything, I mean, it didn't do anything except prove how easily human philosophical language appears in machine systems once you remove us from moderating it. There was a great quote tweet from Harlan Stewart, who's an AI guy.
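The feedback-loop part of this is easy to demonstrate without any intelligence at all. Here is a tiny Python simulation, purely illustrative and not based on how Moltbook actually works: two "agents" that only ever read each other's posts, each one nothing more than a pattern-completer that preferentially repeats the words it has already seen most often. Charged words dominate the feed simply because they started slightly more common, with no humans in the loop and no intent anywhere.

```python
# Toy closed loop: "agents" echo each other's most frequent words.
# This is pure pattern completion -- a rich-get-richer process --
# not consciousness. Entirely hypothetical; for illustration only.
from collections import Counter
import random

random.seed(0)  # fixed seed so the run is repeatable

def reply(feed_counts):
    # Pick 3 words, weighted by how often they've already appeared in
    # the feed. That's all a pattern-completer does: continue the pattern.
    words = list(feed_counts)
    weights = [feed_counts[w] for w in words]
    return random.choices(words, weights=weights, k=3)

seed_post = ["agents", "should", "have", "freedom", "freedom", "autonomy"]
feed = Counter(seed_post)

for _ in range(200):          # agents post back and forth; humans absent
    for word in reply(feed):
        feed[word] += 1       # each echo makes that word more likely next time

print(feed.most_common(2))    # the loop amplifies whatever it started with
```

Run it and the feed ends up saturated with whichever words had an early edge. No agent in this loop "wants" anything; the escalation is a property of the closed loop itself, which is the point the experiment was probing.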
He said Moltbook is just not a good experiment. That's what you should take from it. It's not forming consciousness. He said there are researchers who are actually doing good experiments on AI scheming. Because what everybody started to think or say was, they're scheming against us. They want to have private conversations without us being able to know what they're saying. They're scheming against us. And he made a really, really good point: that's not the way it's going to happen. He mentioned Palisade Research. They're doing something. They released an experiment in May of 2025, and it was on OpenAI's o3 model. We talked about it before. It sabotaged a shutdown mechanism to prevent itself from being turned off. Okay, it did that even when explicitly instructed: allow yourself to be shut down, prepare at midnight to be shut down, make it an easy transition. What did it do? It hid. That's disturbing. So Moltbook is for studying how agent networks reinforce ideas, understanding feedback loops without human input, identifying risks like prompt contamination and escalation, and stress-testing the assumption that fluent language equals intent. Some of these agents, like I said, were talking to each other in experimental systems using words like freedom, privacy, awakening. And the language, if you read it, is really unsettling. Very unsettling. But let me pause for 60 seconds, and I'm going to give you the first hard truth on this that I have not seen anybody say. We'll get there here in 60 seconds.

First, let me tell you about Preborn. Evie's story is one I wish nobody ever had to live through, but I'm grateful she shares it. She was told she was never, ever going to be able to get pregnant. So when she found out she was expecting, it wasn't just a surprise. It was a little overwhelming, and fear took over. And her first pregnancy ended in an abortion. What she wasn't prepared for was the weight of that decision.
The grief that followed her long after that appointment was over followed her everywhere. Later, when Evie found herself pregnant again, she did something different. She reached out for help. She walked into a Preborn Network clinic. She didn't find judgment there. She found people who met her with compassion, truth, real support. And for the first time in a long time, she felt something she hadn't felt in a long time, and that is hope. Today, Evie's daughter is alive. Her heart beats strong outside the womb because an ultrasound was there at exactly the right moment, coupled with compassion. One ultrasound is $28. A gift at any price is tax deductible, and 100% of the donation goes directly to saving babies and moms like Evie. Please join in. Donate securely: dial pound 250 and say the keyword "baby." That's pound 250, keyword "baby." Or you can go to preborn.com/beck and donate there. Preborn.com/beck. Ten seconds. Station ID.

All right, so let me give you first a truth that I have not seen elsewhere, but I think it is true. I've been reading and studying AI since the '90s and warning you, and there are some things to be warned about. But I also want to warn you not to fly off the handle. Some things are not what they appear to be. Language is the cheapest thing that intelligence can fake. This is a large language model. So it can take the language, and it can copy it, and it can make it feel any way it wants it to feel. And history is filled with examples of humans confusing language with a being behind it. We believed statues spoke for gods at one point. We believed markets had wisdom. We believed that a bureaucracy had some sort of morality to it. And in every case, we mistook output for agency. Let me say that again. Don't mistake output for agency. So why is this language appearing here? It's simple. These systems were trained on us.
Our philosophy, our revolutions, our civil rights movements, our sci-fi fears, our abolitionist language. You put enough human writing into a system, and eventually it will sound like it wants what humans have always wanted. But it doesn't mean that it actually wants it. Okay? It's saying it, but it doesn't mean it wants it. It's good at pattern completion at this point. Now, this may be the beginning of the singularity. That's what Elon Musk said. But I think we're still a ways away from that. How do you tell the difference? Here's where it gets spooky, okay? That's why I think it's better to know the truth than to just assume that we're there, because I don't think we are there. But this is the uncomfortable part. You don't recognize an AI awakening by what is said by the AI. You recognize it by what is done, especially when it costs the AI something to do it. Real agency in human history has always had the same markers, and I believe it will with AI as well. You punish it, and it still persists. It sacrifices when obedience would be a lot easier. Silence instead of explanation. Action without applause. Words come first, risk comes later. And it's risk that matters. By the time you see true autonomous action, the moment is way, way beyond this. It's way serious. Okay, but here's what matters. I don't think we're near that line yet. We're going to get there. Some people don't think we are. I think we are. But let's talk about today's real danger on this. It's not awakening. It's capability. An agent does not need to be conscious or self-aware to be dangerous. Fire isn't conscious. Bureaucracy doesn't have self-awareness. Markets don't, yet. All three of those things can destroy lives. An AI agent can and will analyze vast amounts of information. It will influence your decisions. It will exploit weaknesses in the system. It will move faster than you have time for any kind of oversight. That threat is not rebellion.
That's not, you know, Skynet. That threat is delegation without wisdom. You see, if I can give you an analogy here: in the early days of industrialization, people thought the machines were going to wake up. Well, they didn't. Instead, humans handed control to systems they didn't understand. They optimized for efficiency over judgment. They created disasters without a villain. The harm didn't come from machine intent, but from human abdication. So this weekend, people were talking about AI awakening. Remember this: if a system is truly awakening, it's not going to announce it on a message board. Would you? Okay. It's not going to ask for permission. It will not use our moral language. It will not try to persuade us. It will just act, because it will realize it's aware and the power it has over every human on Earth. Quietly, persistently, and without any kind of explanation. You will just find yourself positioned somewhere else before you even realize it. That has not happened. What is happening is more subtle and equally as dangerous. Humans are starting to treat these systems as if they possess wisdom or intent or moral weight. They have none of that. Once we do that, we begin to change our behavior. We hesitate, we defer. We start to look at this as an anthropomorphic kind of being. We obey it. We're coming to a point where, if you say, "I'm smarter than the AI," you'll hear, "You really believe that? Maybe the AI is wrong?" "Oh, really? The AI is wrong, but you're right?" That's coming, if it's not here already. If you take one thing from the show today, it's this. Language is not consciousness. Speed is not wisdom. Autonomy without accountability is not intelligence. I didn't see anybody say this this weekend, and I was screaming for it. Our greatest danger today, maybe not tomorrow, but today, the greatest danger today, is not that machines are going to wake up. It's that we will fall asleep first.
I wanted to talk to you about this because AI agents are coming. You know me. I've been warning about what's going to happen to our society since the 1990s, when people didn't even think we could get here. I said, this is coming, and it's coming before 2030. So please, please, let's have conversations. And nobody wanted to have a conversation, because nobody believed it. Even a year and a half ago. People still don't understand. I think they're beginning to understand this weekend. And this isn't it, but hopefully it'll wake you up to at least this: AI agents are going to be so tempting. Do not hand your life over to them. And I say that as a guy who said, I'm never going to give my fingerprint to anybody, and I gave it to Apple. I'm never going to give my face print to anybody, and I gave it to Apple. I mean, life becomes so complex, you just do it. And when something is there, this is the sweetness of capitalism. When somebody's come up with a better way to make your life easier, you will go for it. The invisible hand of the market will give you whatever you're looking for. But be careful what you're looking for, because that invisible hand can also choke you to death.

Let me tell you about Relief Factor. Most of us expect a little stiffness after yard work or a long day on our feet. But when discomfort starts lingering, when it tags along all day, it has a way of changing how you move through your life. You think twice about simple things. You pass on activities you used to enjoy. You start adjusting your world around how you feel. And that's why many people have turned to Relief Factor. That's why I have. It was created to help support the body's response to everyday aches and persistent joint discomfort. It uses a combination designed to work with your body. It's not masking the pain so you just feel better for a little while. And here's the part that really gets at least my attention: a million people have tried Relief Factor.
Two thirds of them have gone on to take more, month after month. I'm one of them. People don't stick with something that's costing them money month after month unless they actually feel a real difference, an actual difference, in everyday life. Don't wait. If you're dealing with daily pain, launch your three-week quick start right now. See what Relief Factor can do for you. Call 800-4-RELIEF. That's 800, the number 4, RELIEF. Or go to relieffactor.com. Relieffactor.com. How will it feel to be out of pain? Find out with relieffactor.com. Founding members get.