Transcript
Alex Johnson (0:00)
Today on the podcast, I want to talk about an issue with AI and cybersecurity. Now, a lot of the time people say, oh my gosh, hackers and malicious people are going to use AI to destroy companies. And sure, AI can be used for good and for bad, and there are red-teaming groups inside all of the top LLM companies working to mitigate that. But that's not actually what I'm talking about today. I want to talk about AI slop: false-positive bug reports, generated by LLMs, that claim a company has a security vulnerability when it doesn't, and how hard these are to deal with. Some companies are getting so inundated with AI slop that they're completely overwhelmed and shutting down their bug bounty programs. And now that those programs are gone, real vulnerabilities are not getting reported or seen, which could create its very own security problem. So I want to get into all of that today on the podcast.

But before I do, I wanted to mention that if you want to try any of the top models I talk about on the show, I would love for you to check out my platform, AI Box AI. It's a platform where you get the top 40 AI models in one place for one price: Gemini, Claude, Grok, and a whole bunch of image and text models you may not have tried before. It's only $20 a month, so you can try all of them. If you're interested, there's a link in the description to AI Box AI.

All right, let's get into what's going on with these AI slop fake reports that are exhausting some bug bounty programs. There has always been a problem, of course, with hackers finding a vulnerability, exploiting it, and causing great financial harm to a company. In response, a lot of companies have created bug bounty programs: if you find a bug or a security vulnerability, you can report it, and if it's a big one, you'll get paid for it. Meta has been kind of famous for this in the past, but a lot of companies do it. The pitch is basically: don't hack us; if you found a breach, we'll pay you and you can go away quietly without breaching our customers.

What's interesting, though, is this quote from Vlad Ionescu, who has talked about the problem: "People are receiving reports that sound reasonable, they look technically correct, and then you end up digging into them trying to figure out where the vulnerability is. And then, of course, it turns out there is no vulnerability. It was just a hallucination all along. The technical details were just made up by an LLM."

These LLMs are very good at making up plausible-sounding issues, and it goes beyond the casual attempt. If your average person tried this as a side hustle, submitting fake reports for bounties, I don't think it would be a very good side hustle, because they just don't know that much. But if you're already a hacker, you know what a plausible vulnerability looks like, so you could prompt ChatGPT with something like: hey, I found a vulnerability in this platform for X, Y, Z reasons.
Please write a detailed report of the vulnerability and how it works, with sample code and an explanation of how it could have been integrated into their code base, using the tools they use. If you know enough to be dangerous, these reports can look really, really real, and it's genuinely hard to dig in and figure out what's real and what is not. This has been completely overwhelming a lot of different companies, and some of them have literally shut down their entire programs.

Now, there are a lot of people with different opinions on just how prolific a problem this is across the industry. TechCrunch did an interesting report where they went and asked a whole bunch of different companies and got different responses. One that was interesting: they asked Ionescu, who used to work on Meta's red team, and he said yes, this is absolutely a problem. If you ask these AI models for a report, they will give you a report, and people copy and paste it in. He said you end up with a lot of stuff that looks like gold but is actually just completely made up.

One thing I did think was pretty funny, well, not funny, actually alarming, and it gets right at the problem: one open source developer who maintains the CycloneDX project on GitHub pulled his bounty program down earlier this year after he said he got, quote, "almost entirely AI slop reports." That was basically the only thing the project was getting, so he shut the bug bounty program down completely. Obviously, if that happened at every company, it would be a serious issue.

At the end of the day, a lot of people have made a lot of hype about this: oh my gosh, bug bounty programs are getting completely shut down because AI is overwhelming them, so no one will be catching these security vulnerabilities, the AI industry will basically have destroyed bug bounties, and we're going to have a whole bunch of bad hacks and leaks. I don't think it's going to have that big of an impact on big companies. For smaller companies, yes. This developer is basically maintaining his GitHub project on his own; it has 137 stars, so it's not a massive project per se, or perhaps that's just the fork he was working on. In any case, for smaller projects it's going to be a bigger issue, but for bigger projects I don't think it will be as huge.

TechCrunch asked a bunch of other people too. One in particular was Michiel Prins, a co-founder of HackerOne. He said they had encountered some AI slop, but it didn't seem to be the end of the world: "We've also seen a rise in false positives, vulnerabilities that appear to be real but are generated by LLMs. These low-signal submissions can create noise that undermines the efficiency of security programs." So he's somewhat concerned about it.
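To make that concrete, here's one cheap pre-screen a maintainer could run before spending hours on a report. One tell that keeps coming up with hallucinated reports is that they cite functions or files that don't actually exist in the target codebase. This is just a minimal sketch of that idea in Python; the regex, the threshold, and the screen_report helper are illustrative assumptions on my part, not anyone's actual tooling.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical first-pass screen, not any vendor's real tooling:
# hallucinated reports often cite symbols that don't exist in the
# target codebase, so check whether the names a report cites appear.

# Very rough: grabs anything that looks like a function call.
IDENT_RE = re.compile(r"\b([A-Za-z_][A-Za-z0-9_]{2,})\s*\(")
STOPWORDS = {"while", "switch", "return", "sizeof", "print"}


def extract_identifiers(report_text: str) -> set[str]:
    """Pull function-call-looking names out of a report's prose."""
    return {m for m in IDENT_RE.findall(report_text) if m not in STOPWORDS}


def identifier_exists(repo: Path, ident: str) -> bool:
    """True if the identifier appears anywhere in the repo, via git grep."""
    result = subprocess.run(
        ["git", "-C", str(repo), "grep", "-q", "-F", ident],
        capture_output=True,
    )
    return result.returncode == 0


def screen_report(repo: Path, report_text: str) -> dict:
    """Flag reports where most cited symbols are nowhere in the codebase."""
    idents = extract_identifiers(report_text)
    missing = sorted(i for i in idents if not identifier_exists(repo, i))
    return {
        "cited": len(idents),
        "not_found": missing,
        # A signal for human triage, never a verdict on its own.
        "suspicious": bool(idents) and len(missing) / len(idents) > 0.5,
    }
```

So if a report claims a buffer overflow in some parse_auth_header() function and that symbol is nowhere in the tree, a human can deprioritize it in seconds instead of hours. It's only a triage signal, of course, since a legitimate reporter can paraphrase code too.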
One company that said this is absolutely not a problem is the leading bug bounty platform, Bugcrowd. Basically, you can use Bugcrowd to help you find bugs or security vulnerabilities on your platform, and they'll crowdsource it: people will go in and actually look at what's going on there. What I thought was really interesting is that Casey Ellis, the founder of Bugcrowd, said that some researchers definitely use AI to find bugs and write reports and then submit them to companies, and that Bugcrowd is seeing an overall increase of 500 submissions per week. His direct quote: "AI is widely used in most submissions, but it hasn't yet caused a significant spike in low-quality slop reports. That will probably escalate in the future, but it's not here yet."

Now, he basically makes all of his money off of sending these reports to companies and crowdsourcing the work, so it's in his best interest to say this is not a problem. His take is: yes, we're seeing more reports go out, but it's not slop; people are just using AI to write the reports. And I get that side of the argument, because it is so much easier to write these reports with ChatGPT or some other tool like that. But at the end of the day, it's in his best interest to say the AI slop reports are not a huge issue.

One thing Bugcrowd said they do is have a team that analyzes submissions. They look at them all manually using, quote, "established playbooks and workflows," and they work with machine learning and AI assistants as they review. So there is some sort of quality control there, versus just dumping AI slop straight onto a company. But I don't think that solves the problem; it will rear its ugly head again at some point.

Smaller projects completely shutting down their bug bounty programs because the slop is overwhelming them kind of makes sense. It doesn't take that many emails to overwhelm a solo dev who doesn't want to be dealing with all of them. But when you get to a bigger company, what actually happened? TechCrunch asked Mozilla, and the Mozilla employees who review bug reports for Firefox say they don't use AI to filter reports, because they're worried they would reject a legitimate bug report. They literally don't use AI to sort through them at all. And they said they have, quote, "not seen a substantial increase in invalid or low-quality bug reports that would appear to be AI-generated." They said about five to six reports a month get flagged as invalid, less than 10% of all monthly reports. So really not a lot of false reports there. But this is a big company that can handle a lot of traffic; they have a big team to look through a lot of reports. So I do think there is an issue in the industry for smaller companies, but bigger companies like Mozilla are going to be just fine.

It's interesting that Microsoft and Meta, which have both bet really heavily on AI, didn't want to comment on this issue in particular. It makes sense; it's not really in their best interest.

But one thing Randy Walker from HackerOne described is that they're creating a new triage system that combines humans and AI. The new system leverages, quote, "AI security agents to cut through noise, flag duplicates, and prioritize real threats. Human analysts then step in to validate bug reports and escalate as needed." So at the end of the day, the solution to AI slop might be AI review.
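Here's a minimal sketch, in the same spirit, of what that kind of human-plus-AI triage queue might look like. Everything in it is hypothetical: the ai_score field stands in for whatever score a model assigns a report, and the near-zero cutoff is deliberately conservative so that borderline reports still land in front of a human.

```python
from dataclasses import dataclass, field

# A minimal sketch of a human-in-the-loop triage queue. Hypothetical:
# ai_score stands in for whatever a model or heuristic assigns, and the
# cutoff is deliberately conservative so borderline reports still reach
# a human reviewer.


@dataclass
class Report:
    reporter: str
    title: str
    body: str
    ai_score: float  # 0.0 = almost certainly noise, 1.0 = likely real


@dataclass
class TriageQueues:
    human_review: list[Report] = field(default_factory=list)
    low_priority: list[Report] = field(default_factory=list)


def triage(reports: list[Report], noise_cutoff: float = 0.05) -> TriageQueues:
    """Dedupe on title, deprioritize near-certain noise, escalate the rest."""
    queues = TriageQueues()
    seen_titles: set[str] = set()
    # Highest-scoring report for a given title wins; later ones are dupes.
    for report in sorted(reports, key=lambda r: r.ai_score, reverse=True):
        key = report.title.strip().lower()
        if key in seen_titles:
            queues.low_priority.append(report)  # likely duplicate
            continue
        seen_titles.add(key)
        if report.ai_score < noise_cutoff:
            queues.low_priority.append(report)  # near-certain slop
        else:
            queues.human_review.append(report)  # a person validates these
    return queues
```

The design choice that matters here is the asymmetry: the AI is only trusted to set aside near-certain noise and fold duplicates together, never to confirm or reject a real-looking vulnerability on its own.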
Now, of course, over at Mozilla, they're worried that exactly this kind of filtering could miss real bug reports. But I think we're going to have to land on some happy medium where you use AI to estimate how likely a report is to describe a real vulnerability. And at some point in the future, it would be fantastic if the AIs were good enough to read a report, go do some testing and digging, and discover whether the vulnerability is real. But these things are very tricky. Security vulnerabilities aren't always sitting right there in the code; there are all sorts of ways to hack into stuff, and you have to think outside the box. Social engineering, for example: there are all sorts of very creative attack paths, and I don't think AI models are always the best at that, or even capable of a lot of it. So it's going to be interesting to see where this goes.

Thank you so much for tuning in to the podcast today. If you learned anything new, make sure to leave a rating and review on the podcast. I appreciate everyone who listens, gives us a thumbs up on YouTube, and subscribes. I hope you all have a fantastic rest of your day, and I'll catch you in the next episode.
