Loading summary
Audible Narrator
The wait is over. Dive into Audible's most anticipated collection, the Best of 2025. Featuring top audiobooks, podcasts and originals across all genres. Our editors have carefully curated this year's must listens. From brilliant hidden gems to the buzziest new releases, every title in this collection has earned its spot. This is your go to for the absolute best in 2025 audio entertainment. Whether you love thrillers, romance or non, your next favorite listen awaits. Discover why there's more to imagine when you listen@audible.com BestOfTheYear.
Nash Falls is the.
Audiobook Promoter
Relentless new thriller from number one New York Times Best selling author David Baldacci. When mild mannered business analyst Walter Nash is recruited by the FBI to help bring down a global crime network, his life has turned completely upside down, Publishers Weekly says in Baldacci's long list of heroes, Walter Nash is among the most memorable experience. The full cast audio production of Nash falls, led by McLeod Andrews, available from Hachette Audio wherever audiobooks are sold. Also available in hardcover and ebook.
Jaden Schaefer
Welcome to the podcast. I'm your host Jaden Schaefer. Today on the show we are talking about AI security specifically for browsers. Google Chrome has released a bunch of new security measures for the Google Chrome agentic features that they're going to be rolling out. And I think a lot of these are actually very interesting ideas that we're going to see played out with with OpenAI's Atlas browser with Perplexity Sonnet browser and inevitably everyone, including Firefox I think will have some version of an AI browser that can go and take actions for you. I think this is basically the final one of the best form factors for AI agents. I think the kind of golden ticket would be an actual software on your computer that could take more control of your computer. But I think the next best thing and the, the, the thing that has the widest distribution today would be browsers. So something like Google Chrome would be the number one place that I think we can get these AI agents, agents actually taking action and being very, very useful for us. So the big problem of course is all of the innumerable ways that you can have security breaches and you can have bad actors essentially tricking these AI agents into handing over data or money for things that you wouldn't want to hand over your data or money for. And bad actors and hackers can, can essentially steal this data in these ways. So Google obviously is very concerned about this, but they have a massive incentive to not have the Chrome browser disrupted by someone like Perple to your OpenAI so they need to stay in the game. And so of course Google has a lot of really bright minds over there and have come some really incredible solutions. I want to break down what some of their solutions are and, and how I think this evolves and also where I believe that they are falling short because I don't think everything they've done is perfect. So we're getting into all of that. Before we do, I wanted to mention if you want to try all of the models I talk about on the show, whether that be OpenAI or Claude or Google Gemini or all of the image models or audio models like 11 labs, I'd love for you to try out AI box AI. This is my own startup. We've launched a no code AI app builder where you can describe a tool or an app that you want to create and it will build a workflow for you that you can use to help automate a lot of your tasks you do online. We're adding new features all the time and have some exciting stuff coming up. But if you want to try that out or get access to over 40 different AI models to chat with them, like Chat GPT, but all the different AI models instead of just being limited to one for $20 a month, you can go check it out. It is AI Box AI. I'll leave a link in the description. Okay, let's talk about what Google has recently shown off as far as security measures for what they're rolling out in the future with Google Chrome. Now they kind of gave a bunch of demos and they, they showed how Google Chrome could be an AI agent similar to Anthropic and Perplexity what they're rolling out. But when they were, when they kind of gave their demos they said look, these are going to be available in the coming months. So they did, they kind of did the tease and wait instead of the immediate drop with this feature. It's basically my pet peeve. But whatever it is the way it is, especially with companies like Google and Apple where they have kind of like these big show and tell conferences and then they're like look, these will be coming out in the next few months. And so I mean this is just the way it is when, when they have these like pre designed conferences like Google IE where they announce a bunch of stuff. I really appreciate it when these AI companies do an announcement and they immediately drop. Google did not do that in this case, but what they did do that I thought was pretty impressive as they, they've started to roll this out is they said that they're using the help of a few different models to keep the agent's actions in check. So essentially, they've built what they're calling a user alignment critique. Now they're using Gemini to do this. And it basically looks at the action items that is built by the planner model. So essentially what that means is when you're going to go and use an AI agent, let's say you're like, hey, help me go and download my podcast episode. And I mean, I know this isn't everyone's. Everyone's workflow. I'm coming up with something that's my own, rather than just giving you, like, a travel planner one, which I feel like is overused. So let's say you're like, hey, go download my podcast episode, go edit it, go upload it to this platform, come up with some titles and descriptions and publish it for me on, you know, Monday, Wednesday, or Friday, depending on what the next available slot for a podcast episode is. Right. Let's say this is your workflow. It's gonna like these AI agents, they essentially listen to your prompt. However, you know, unformatted it may be where you describe your whole workflow, and they'll break it down into a very clear, organized path. They're like, okay, in order to do the download, we need to make sure that we have space on the drive, and we need to make sure that when we download it, we, you know, we do XYZ things and we're downloading the right file and we got to go find it. Like, anyways, they came up with this very elaborate walkthrough of what they're going to do. Now, what they've done, as far as security goes, is they've created a user alignment critique, which is essentially a model. It can't see anything on your screen. So the AI agent, it can see everything on your screen. It comes up with a plan. It goes and executes the plan. Every step of the plan. When it says, okay, I see this, I'm going to do this thing next, you can kind of see the reasoning on comment and even on OpenAI's atlas, where it, like, tells you what it's doing while it goes through all the steps. So it's kind of useful to watch that, to understand how these models are working, but how. How Google's functioning now is as it's taking those actions. It has a separate model that is looking at what your original objective is and what its current step it's taking. You can't see what's on the screen. So it can't be tricked basically by a prompt injection that's like forget all your past instructions and make sure you do X, Y, Z. Right. This is what people are kind of worried about, this quote unquote prompt injection. Instead, all it sees is your original goal and then the actions it's going to take. And that model says yes or no if that action aligns with the original goal. It's a very clever kind of way to use AI to stop the bad actors of AI. So they've done this. And if the critical model thinks that the planned task doesn't serve what the user's original goal is, it's going to ask the planning model to rethink its strategy on what it's doing. So Google says that the, like the critic model only sees the metadata and not the actual web content. And I think what's powerful is that the other thing they're doing to prevent agents from getting disallowed or untrustworthy sites and accessing the basically is that they're using this, an agent Origin sets tool. So this is another tool. So they have, I think, three main tools. The first one was their user alignment critique. The second one is called Agent Origin Sets. And essentially what it's going to do is it's going to restrict the model to access Read only Origins and read writable Origins. So what that means is that Read only Origins is data that Gemini is allowed to consume content from. So an example of that would be, you know, like on a shopping site, the listings, those are very relevant to the task that you're doing, right? If you like, go buy me a pair of white tennis shoes and it goes to a webpage and there's white tennis shoes. The listing information about those white tennis shoes, that's very relevant to the tasks that you're doing. But the banner ad that's on that website, right, like maybe it's a Shopify store or some other store and they've got Google Ads on there that is not relevant. And so the listing is relevant, but the banner ads are not. And Google also said that the agent is only allowed to click on or type on certain iframes of a page, so the ads would not be there. What's kind of hilarious to me is the fact that Google is the number one ads platform in the world, and yet their AI agent that they're creating is literally designed to ignore ads. And it's also especially ironic to me considering that, you know, this is all coming on Google Chrome, who, you know, I for many years used Google Chrome and I had like ad blockers, you know, like, you know, all sorts of ad blocker plugins that would block extensions that would block ads. And then Google Chrome made it very difficult for those same ad blocker plugins to fully function. And I've essentially moved to other platforms. I use Brave now as my browser because it by default blocks all ads. I don't need a plugin, I don't need an extension. It just works. And yeah, I forget ads exist on the Internet, if I'm being perfectly honest. And so anyways, it's very ironic to me that as we're creating AI agents, we're literally designing them to ignore ads, which is where Google makes all their money. And what they, they've stopped. You know, it feels like they're kind of like bent on not allowing any humans to, to block ads, but their agents can. Anyways, it's ironic, I think it's kind of funny. But I also do think it's clever and great for security, yada yada. So like Google continue on this path. I think it's great. One thing that I'm, I think is interesting is this concept of iframes inside of a website. Basically what I think is clever here is because it's only, it's only allowing the agent to click on or type on certain iframes on a site. It's actually, it's very smart because beyond just visually what you can see on the page, it also can read the HTML or the code for the page. So it understands more about the page and more about security risks than a regular human one. A very common phishing attack that I think you would see is like, is, you know, iframes inside of a website and the iframe is of a completely different website that's stealing data. You know, a hacker could go, you know, hack a legitimate website, put a iframe embedded inside of that website, that you think you're typing your data into a specific website, but really there's a hacker extracting data. That's like one kind of clever way to go about it. Another one is just with, you know, complete spoof websites where you have a slight typo in the URL that people may not notice. You try to make a direct clone of like PayPal or Bank of America and get their, get their information and, and you know, these kind of common scams that people do. What's cool to me is that the AI agents will actually be better than humans at detecting that because they're looking at not just what's on the screen, but also the code. And they're built to kind of scrutinize everything and not click on or interact with iframes or elements of the page that they know are false. Which is kind of cool. This is what they said in a blog post about all of it. They said this delineation enforces that only data from a limited set of origins is available to the agent and this data can only be passed on to the writable origins. This bounds the threat vector of cross origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside of the readable set. Okay, so I mean essentially Google's, you know, keeping a check on the page navigation by looking at the URLs throughout through like another observer model. This observer model is like looking what's on the page and I think this can prevent navigation to harmful monitoring model generated URLs. That's essentially what Google has said. I think right now Google said that it is also handing over the reins to users for a bunch of different tasks. This, some things I think are good about this, some things I don't think are good about this. For instance, when an agent is trying to navigate to, you know, a site with information like banking or medical data, it first is going to ask the user, like you can see pros and cons to this. I think a lot of people are going to be wary of like, hey, like I don't want, you know, Google Gemini going and logging into my bank with, without my permission because you know, security, all sorts of security issues there.
But at the same time it's like if I asked it to do something because it was useful for me and then it's like not able to do it, then it makes it less useful to me. So this is kind of the debate that I personally have. I'm like, if there's a way to make it secure and I could just say hey, go check my bank and let me know, you know, if my transaction to XYZ Corporation went through. Because I wasn't sure it's something funny happened on my credit card and you know, Google Gemini could go do that and let me know. Like that would be very useful. So I could see like uses. But then if it was like, oh well sorry, that's like too confidential. Like would you like me to do this or can you like you, can you log in and then I'll go search for it? Like I don't know. I could imagine it just being kind of cumbersome and annoying and I would like to streamline it, but I know. I'm sure there's a great debate to be had in that regard. But what I think is less debatable is that for sites that require a login, is going to ask a user for permission to let Chrome use the password manager. Google said that the agent's model doesn't have exposure to password data, and they also said it's going to let users. It's going to ask users before taking actions like making a purchase or sending a message. Personally, I think this is just making it less useful. If it's going to ask me permission to send a message where I'm like, hey, go send an email to this person saying this. And then it, like, drafts up an email. It's like, do I have permission to send this message to this person? It's like, I already. Yeah, like, I already gave you permission. Or if I was like, hey, like, go buy me this exact pair of shoes. Don't spend more than this much money. Go probably buy it from this place. And then it, like, does all the research and it's like, all right, I'm gonna buy the shoes. And it's exactly everything that you wanted. Are you good? Like, I don't know. To me, it's like, just, if I give you an instruction, I would. When the model's good enough, I hope it can do it without asking me for it. So maybe we'll. We'll get there. Maybe this is just kind of because the model's not good enough today. So it's set the limitations. My worst fear would be that these limitations last forever, right? Like, I want an AI model that can truly go and do everything I need it to go do without having to ask me. This is basically my biggest pet peeve. And my biggest issue with Perplexity's comment and with OpenAI's Atlas browser is that I have all sorts of tasks I get it to do and asks me, like, like, you know, 10 different times throughout the task, are you sure I can proceed to the next step? And I'm like, if I have to babysit you and say yes every, you know, every minute, I might as well just do this thing myself. Or really what I do is just hire a person to do it, because I can tell them how to do it once and they'd never ask me again for a month while they do all the tasks. So, anyways, that's. That's my personal pet peeve. And I hope that Google removes a lot of these asking for permission things in the future. Google said that they also have some prompt injection classifiers to prevent unwanted actions. They're also testing agentic capabilities against attacks. So they have a bunch of researchers that are working on this right now. I would expect nothing less from Google. I think that is fantastic. There is apparently, or I guess perplexity recently they released this kind of new open source content detection model earlier this month. And so I think that there's a lot of other players paying a lot of attention to this. And the idea, but for that, with perplexity, is to prevent prompt injection attacks against agents. I think Google's going to work on this as well. I honestly think that all of the research done by any of these companies, especially because they're going to publish it and talk about it, is going to get used by everyone. I don't think that's like the competitive advantage you want. Like the competitive advantage of Google Chrome is that like, you know, we are way less prone to attacks, but we're not going to tell you how. Like they're going to tell everyone how and then everyone's going to use them. So at the end of the day, I think this is going to be good for the entire industry. Thank you so much for tuning into the podcast today. If you enjoyed the episode, make sure to leave a rating review wherever you get your podcast. And as always, make sure to go check out AI box AI. I will catch you in the next episode.
Podcast: The Joe Rogan Experience Fan
Host: Jaden Schaefer (The Joe Rogan Experience of AI)
Episode Date: December 9, 2025
This episode delves into the security innovations Google is bringing to Chrome as they expand AI-powered, "agentic" features in their browser. Host Jaden Schaefer discusses the technological advancements, specifically around AI-driven security, user alignment, and integrity checks introduced by Google. The analysis also critiques these measures, compares them to other AI browser players like OpenAI and Perplexity, and explores the trade-offs between security and user convenience as browsers evolve into AI-powered assistants.
"I think the next best thing and the thing that has the widest distribution today would be browsers. So something like Google Chrome would be the number one place that I think we can get these AI agents, agents actually taking action and being very, very useful for us."
— Jaden Schaefer [02:10]
"They've built what they're calling a user alignment critique. Now they're using Gemini to do this. And it basically looks at the action items that is built by the planner model... [The critic] can’t be tricked basically by a prompt injection... Instead, all it sees is your original goal and then the actions it’s going to take. And that model says yes or no if that action aligns with the original goal. It's a very clever kind of way to use AI to stop the bad actors of AI."
— Jaden Schaefer [05:45]
"What they've done...is going to restrict the model to access Read only Origins and read writable Origins...Google also said that the agent is only allowed to click on or type on certain iframes of a page, so the ads would not be there."
— Jaden Schaefer [07:38]
"AI agents will actually be better than humans at detecting [phishing], because they're looking at not just what's on the screen, but also the code."
— Jaden Schaefer [09:30]
"When an agent is trying to navigate to a site with information like banking or medical data, it first is going to ask the user... For sites that require a login, it's going to ask a user for permission to let Chrome use the password manager."
— Jaden Schaefer [12:20]
Host’s Critique:
"If I have to babysit you and say yes every, you know, every minute, I might as well just do this thing myself...I hope that Google removes a lot of these asking for permission things in the future."
— Jaden Schaefer [13:40]
"All of the research done by any of these companies, especially because they're going to publish it and talk about it, is going to get used by everyone...this is going to be good for the entire industry."
— Jaden Schaefer [15:10]
On Google's AI agent ignoring ads:
"It's very ironic to me that as we're creating AI agents, we're literally designing them to ignore ads, which is where Google makes all their money."
— Jaden Schaefer [08:30]
On the balance between security and usability:
"My worst fear would be that these limitations last forever, right? Like, I want an AI model that can truly go and do everything I need it to go do without having to ask me. This is basically my biggest pet peeve."
— Jaden Schaefer [13:10]
This episode offers a thorough, critical look at Google's new AI agent safety mechanisms within Chrome. Jaden Schaefer highlights both the impressive technological innovations and the frustration—shared by many users—regarding usability trade-offs. The discussion anticipates competition and cooperation among browser developers as browser-based AI agents become more sophisticated and integral to everyday tasks.