Transcript
A (0:02)
How citizens can get their local leaders involved in cybersecurity. We'll talk about it on this episode of Safe Mode. Welcome to Safe Mode. I'm Greg Otto, editor in chief at cyberscoop. Every week we break down the most pressing security issues in technology, providing you the knowledge and the tools to stay ahead of the latest threats while also taking you behind the scenes of the biggest stories in cyber security. An attack is coming. It's about keeping us safe.
B (0:26)
He's just a disgruntled hacker. She's a super hacker.
A (0:28)
Stay alert, stay safe, stay SAF is Safe Mode. Welcome to this week's episode of Safe Mode. I am your host, Greg Otto. In our interview segment this week, we're going to be talking to Betsy Cooper, the founding director of the Aspen Policy Academy. Aspen has a really interesting program about getting citizens interested in cybersecurity, but also the ways that they can talk to their local leaders about caring about cybersecurity on a policy level and really talking about it outside of technical means. But first, we're going to get into some technical means here with Derek Johnson. As I'm sure the technology minded know, last week OpenAI revealed an AI agent powered browser. And once we saw this announcement, you know, we said to each other we wouldn't be surprised if we heard from security researchers who were going to prod and pick at this and, and find a bunch of security holes. And lo and behold, we were right. So Derek set out to talk to some researchers about what they found in the realm of security when it comes to this browser. And Derek, what did they find?
B (1:35)
Yeah, so we broke a couple of sort of things in this story. One of them was a piece of research from a company called Splix. They are an AI security focused company but they've done a lot of research around things like security prompting in the past that have now become industry norm. They looked at, they really kind of make their bones looking at large language models. They looked at Atlas as well as Perplexity AI and when you just use chat GPT itself and they found a really, really simple flaw based on what's known as the user agent header. And this is like a piece of HTTP string that really kind of tells a another website or network who is visiting their website certain information about it. One thing you can find out through that user agent header is whether it's an AI crawler visiting your website. And they essentially were able to develop a website that could deliver one message to human directed visitors and another message to when it detected an AI Crawler. So you could use this in a variety of malicious ways. You could use it to spread disinformation about people. You could use it to fool and influence an LLMs and agents behavior. And a lot of times if the human goes and looks at the actual website, the website that the human is shown, it's, it looks like everything's normal. And so what's your first thought going to be? That that the LLM hallucinated. So it's a very interesting kind of flaw. It's one of a number of ones that we talk about in the story.
