Leo Laporte (44:57)
So the news late last week was of the discovery of another serious local privilege escalation in the Linux kernel. And it had been there for a long time. And yes, before you ask, it was found by an AI vulnerability discovery system operated by a security firm named Theori. They wrote, quote: "An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system and use that to gain root." A simple 732-byte, nine-line Python proof of concept has been posted to GitHub which immediately elevates any normal user to root, and of course that's not something you want to leave unpatched. So this is important. The Linux distros known for certain, Debian, Ubuntu and SUSE, have immediately issued patches for the problem, and the overseers of many other distros have as well. Red Hat initially said it was going to defer the fix, but later changed its guidance to indicate that it will be going along with the other distros and will be patching promptly. The CVE has been rated as high severity at 7.8 out of 10. And yes, "only" 7.8. Still, that's bad; it's about as bad as it gets for a local privilege escalation. The attacker first needs access to a non-root account where they're able to execute this script in order to obtain elevation. But on the other hand, anybody who has local access to a machine is able to use this, so it's a complete breach of Linux account security.

At the end of one of the reports of this, I ran across the statement that AI-assisted vulnerability research recently prompted the Internet Bug Bounty, that's the IBB, the Internet Bug Bounty program, to suspend awards until it can understand how to manage the growing volume of reports. I thought that was interesting, and it was news to me, so I went hunting. Here's what I found: near the end of March, the Internet Bug Bounty program, which is run by HackerOne, paused its acceptance of new vulnerability submissions due to what HackerOne described as an increasing imbalance between vulnerability discoveries and the ability of open source maintainers to remediate them. And of course, yes, AI is the underlying driver of all of this. Okay, but let's back up a little bit. Recall that the Internet Bug Bounty is a crowdfunded vulnerability reward program that was started 14 years ago, back in 2012, and is operated through the HackerOne platform. Its purpose and intent is to reward, and thus incentivize, independent security researchers to find and responsibly disclose vulnerabilities in widely used open source software. The funding for the program comes from a consortium of major tech companies, including Facebook, GitHub, Shopify, TikTok and others, who all contribute to a shared bounty pool. The underlying idea is that since everyone depends on open source infrastructure, everyone should share in the cost of helping to secure it. And the vulnerability discovery payout structure is pretty simple: 80% of each awarded bounty goes to the researcher who reported the vulnerability, with the remaining 20% contributed to the open source project where the trouble was found, to help fund its repair and remediation. That's what makes the program go. It's been widely seen as a success, having paid out more than one and a half million dollars since the program began.
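A quick aside before we get into what AI has done to the bounty model: if you want a picture of the boundary that kernel bug defeats, here's a minimal illustrative sketch. To be clear, this is not the posted proof of concept, just a demonstration, on my assumptions about the bug class, of the protection that's supposed to hold. Under normal conditions the kernel flatly refuses to give an unprivileged process a writable shared mapping, which is to say write access to the page cache, of a file it opened read-only:

```python
# Illustrative only: NOT the posted proof of concept, just a demo of
# the boundary the bug is reported to bypass. The kernel normally
# refuses a writable MAP_SHARED mapping (i.e. write access to the
# page cache) of a file the process opened read-only.
import mmap
import os

fd = os.open("/etc/passwd", os.O_RDONLY)  # root-owned, world-readable
try:
    mmap.mmap(fd, 0, flags=mmap.MAP_SHARED,
              prot=mmap.PROT_READ | mmap.PROT_WRITE)
    print("writable mapping granted?! that would be the bug")
except PermissionError as err:
    # EACCES: the check doing its job. The reported vulnerability
    # lets four attacker-controlled bytes land in the page cache anyway.
    print("kernel said no, as it should:", err)
finally:
    os.close(fd)
```

The reported bug lets an attacker land four controlled bytes in that page cache anyway, and four well-chosen bytes in a fully trusted file, with /etc/passwd being the classic target for attacks of this shape, can be all it takes. Anyway, back to the Internet Bug Bounty, which had been humming along nicely.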
But almost predictably, AI has messed everything up. HackerOne stated, quote: "The discovery landscape is changing. AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantially shifted." So the problem is being called triage fatigue. And the trouble is not just the increased volume of reports; that alone would be bad. What's interesting is that it's also not the signal-to-noise ratio. The actual problem is the nature of the noise. Weirdly, the quality of the noise, while still noise, has increased. We all know Daniel Stenberg, the creator of Curl. He expressed it this way: "More convincing crap is worse than obvious crap. You can't dismiss it quickly, you have to investigate it, and you waste real time getting to the point where you can prove it's nonsense. At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people who are responsible for security." Which is, like, yikes. A consequence of AI.

So, turning the clock way back 31 years to 1995: Netscape launched the first widely recognized paid bug bounty program, offering to pay researchers for their responsible reporting of significant bugs they discovered in Netscape Navigator 2.0. They were really ahead of the game at that point. Of course, they also had a web browser that was ahead of the game. And that model, the notion of paying researchers for responsibly reporting the bugs they find, has been functioning vibrantly ever since. So the notion that AI may be driving a fundamental change to this long-standing vulnerability discovery and reporting model is important enough, as I said at the top of the show, to be a contender for today's main topic. Except that the idea of Google going off half-cocked and adding an explicit AI interface for JavaScript in Chrome also needed ample discussion space today, and we're going to cover Mozilla's pushback against that at the end of the podcast. Meanwhile, the company Aikido, which is deep into automated vulnerability discovery as a business, recently interviewed not only Curl's Daniel Stenberg, whom I just quoted, but also Casey Ellis. Casey is the founder of Bugcrowd and, as such, one of the people who helped establish and formalize bounties for bugs starting back in 2012. Aikido titled their report "Bug Bounty Isn't Dead, But the Old Model Is Breaking." I'm going to share what they wrote and also what my intuition immediately suggests about the nature of the change. They wrote: "Bug bounty has been a very hot topic lately. We're seeing high-profile programs go offline or fundamentally change. The Internet Bug Bounty, one of the most important programs for open source projects, is pausing submissions, Curl is removing payouts, and Node.js is removing its bounty entirely. That's not noise, that's signal. We wanted to understand where bug bounty is actually heading. So we sat down with two of the most credible voices on opposite sides of this conversation: Daniel Stenberg, creator of Curl, who's living the maintainer reality and recently halted bug bounty payments, and Casey Ellis, the founder of Bugcrowd, one of the people who helped establish the model in the first place."
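Before we get to what they found, it's worth putting rough numbers on Daniel's "convincing crap" point, because the economics here are the whole story. Every number in this little sketch is made up purely for illustration; only the shape of the outcome matters:

```python
# Toy arithmetic for Daniel's "more convincing crap is worse than
# obvious crap" point. All numbers are invented for illustration.

def triage_hours(reports: int, share_convincing: float,
                 hrs_obvious: float = 0.1,
                 hrs_convincing: float = 2.0) -> float:
    """Maintainer hours spent dismissing a batch of bogus reports."""
    convincing = reports * share_convincing
    obvious = reports - convincing
    return obvious * hrs_obvious + convincing * hrs_convincing

# Pre-AI: say 20 bogus reports a month, most of them obviously wrong.
print(triage_hours(20, share_convincing=0.1))    # 5.8 hours

# Post-AI: five times the volume, and most of it now *looks* plausible.
print(triage_hours(100, share_convincing=0.8))   # 162.0 hours
```

Volume goes up five-fold, but because each bogus report now survives a quick sniff test, the triage burden in this toy model goes up nearly thirty-fold. That's the denial of service Daniel is describing. Okay, back to Aikido's report.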
What we found was that the bug bounty model is at a crossroads, and we're in the midst of a big shift. Before we get into where the model is headed, let's take a step back and understand why it's been one of the most effective ideas in security over the last decade. It all stems from the idea of letting the Internet try to break your stuff before attackers do. And it worked because it gave companies scale they could never hire. As Casey put it: "If you're trying to outsmart a global pool of attackers with someone working 9 to 5, the math for that is wrong." They said that's the magic of bug bounty. Instead of relying on a handful of internal people, you tap into a global pool of different skill sets, different perspectives, and different motivations, all attacking your system in ways your internal team never thought of. And that's without the significant overhead required to hire specialist experts internally and then work to keep them busy. All of this explains why bug bounties became fundamental to modern security programs. What's changing now is not the demand for security; it's the economics of how bug bounties operate. AI has altered the balance, and not in a good way. Finding bugs is now cheaper than ever, writing reports is even easier, and submitting them has become effectively frictionless. Meanwhile, the cost of validating those reports and then actually fixing the issues has not changed at all. Those final two required steps, validating and then fixing bugs, remain as labor intensive as ever.

We are seeing this play out in practice. There are three types of report submitters. First, there are companies that use a new approach for legitimate reports: reports produced by layered AI approaches that combine the strengths of multiple AI models, guardrails, orchestration and context, such as Aikido's own AI pen testing capabilities. Aikido is, of course, plugging their own solution, as we would expect them to on their own website. But we know that Anthropic also set up their Mythos preview system to do the same. Both are discovering and, importantly, verifying suspected vulnerabilities to produce much higher quality reports, which in the case of Mythos include proof-of-concept exploits. Aikido continues enumerating these three classes of bug sources. They said: then there are individuals who escalate their research and report writing using AI as a tool. And finally, there are individuals who are able to upskill by virtue of these AI models. They generate reports that seem technically plausible but are still completely wrong. Daniel described it perfectly, and this is where we quoted him earlier: "More convincing crap is worse than obvious crap." They said you can't dismiss it quickly; you must investigate it, right? Because it looks real. And then you waste real time getting to the proof that it's nonsense. At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people responsible for security. And the impact, they write, has been truly devastating. The Internet Bug Bounty program paused all new submissions because AI has dramatically increased discovery volume beyond what their maintainers can handle. Node.js lost its bounty when funding disappeared.
The reports still come in, but the payouts are gone. And Curl removed financial rewards after being flooded with AI-generated reports. Casey emphasized that this isn't a new problem; it's an old one, just massively accelerated. He said, "We're doing stupid things faster with more energy." Bug bounty, they write, has always had an issue with being a level playing field. One person submits a report and another person has to validate it. That sounds equal on paper, but in practice it has always been difficult for one person to keep up with validation, even before AI existed. Now it's practically impossible. We're now in a world where anyone can generate dozens of reports, make them appear credible, and submit them instantly. On the receiving end, however, the constraints have not changed. It's still humans reviewing, triaging and making decisions. Open source has been the first to feel this impact. Open source is where the pressure has shown up first, largely because it was already operating close to its limits. Most projects are maintained by small teams, often volunteers with limited time and resources, yet they underpin massive portions of the web. Of course, we all think of that XKCD cartoon, right? With the little tiny block that's holding up the whole creaky infrastructure. They said: add financial incentives, global participation, and now AI-generated submissions, and the system is quickly overwhelmed. The Internet Bug Bounty program said it directly, quote: "AI-assisted discovery has shifted the balance between findings and remediation capability." Translation: we're finding more bugs than we're able to handle.

So now the bounty is gone, and yet the expectation of reporting remains. But the question is: is the way bug bounty programs have been used to effectively scale security teams and improve security posture still viable without financial incentives? Bugcrowd's founder Casey Ellis doesn't necessarily believe so. Every organization should have a vulnerability disclosure program, because if you're on the Internet, people will find issues. But not every organization is in a position to run a public, reward-driven bounty program. In Casey's words, Curl likely should not have had one to begin with. Casey said, quote: "I don't think every organization should run a bounty program. The Curl program should not have been a bounty program in the first place," unquote. And yet Daniel's experience shows something more nuanced. Daniel views the bounty program as a success because it incentivized real scrutiny of the code. He said, "I've always thought about it as a success because it's a great way to actually encourage people to scrutinize the code." So what happens when you remove financial incentives? You'd assume that removing them would get rid of the AI slop, but that you'd also reduce the likelihood of genuine vulnerabilities being disclosed. However, when Curl removed its financial incentives, something interesting happened. The low quality AI-generated noise largely disappeared. Daniel said, quote: "We have stopped getting AI slop security reports. Instead we get an ever increasing amount of really good security reports, submitted at a never-before-seen frequency, which put us under serious load," unquote. Okay, so I'm going to interrupt here to mention that I have a theory about why that is. Back when discovering vulnerabilities required long hours of painstaking, grueling work to step through and reverse engineer code, it was no fun.
The only motivation, and it needed to be significant, was the promise of a big pot-of-gold payout at the end of that tunnel. AI-driven vulnerability discovery has changed that. Today, AI makes bugs both fun and easy to find. It allows less skilled users to participate, thus broadening the bug hunter base. And there are plenty of people who would sincerely like to give back and contribute. Until now they haven't been able to, but now they have the means. They don't need a monetary incentive; they truly want to help. I think it makes sense. Aikido continues their report, writing: instead of drowning in low quality reports, maintainers are now dealing with a high volume of genuinely useful findings, many of which are powered by AI-assisted research. The barrier to entry has dropped, not just for bad reports but for good ones too. But this creates a new kind of pressure. Even high quality reports take time to understand, to validate, and to repair. And many of these good findings still fall into gray areas: bugs that may not meet security thresholds but still require some attention. The result is a sustained, and in some ways increased, load on already constrained teams. So in a strange way, the system has not been relieved; it's been refined. And this is where it gets interesting, because while this is painful in the short term, it might actually be a step in the right direction. By removing financial incentives, we strip away a large portion of the noise. What's left is a signal that is, on average, of higher quality, more intentional, and more aligned with actual security outcomes. AI is lowering the barrier for researchers to do meaningful work. It's enabling more people to find real issues faster than ever before. That combination, less noise, more signal, but still overwhelming volume, suggests we're in a transition phase. The historical model is breaking under the pressure, but what's emerging underneath it might be better.

This would look like a system where disclosure is expected, not incentivized; rewards are more targeted, not broad; and the focus shifts from more reports to better outcomes. We're not there yet. Right now we're in the messy middle, where the old model no longer works and the new one hasn't fully formed. But if this plays out correctly, we don't end up with less bug bounty; we end up with a more sustainable version of it. What we're likely moving toward is a model where vulnerability disclosure becomes a baseline expectation across the industry rather than something optional or incentivized. Public bounty programs don't go away, but they become more controlled, more targeted, and more aligned with organizational maturity. AI will inevitably play a larger role in filtering and triaging the growing incoming volume of reports. It won't solve the problem entirely, but it will become part of how we manage it. We'll also see a shift in what gets rewarded. As automated systems become better at finding low level issues, the value of those findings will drop. Instead, incentives will move toward higher impact work, the kind that requires creativity, context, and a deeper understanding of the systems. That means researchers will increasingly focus on areas like chaining vulnerabilities, exploiting business logic, and breaking complex or emerging technologies, where automation may continue to struggle. Okay. So think about this from the bounty provider's standpoint.
Taking Curl as an example: Daniel terminates bug bounty payouts and observes an immediate drop in the total number of reports. But it's predominantly the bogus reports that disappear, not the useful reports describing true problems. Given that, why would he ever resume bounty payouts? The Internet Bug Bounty is likely to observe the same thing. As I noted, what appears to be happening is that bugs are now so much easier to discover, even fun to find and report, that it's no longer necessary to dangle a carrot. Actual human altruism, which, believe it or not, in 2026 still exists, is now sufficient to drive what once required the promise of payment. It'll take a while for this to percolate throughout the industry, but my prediction is that the 31 years of bug bounty programs we've had, ever since Netscape first offered payment for reports of bugs in Navigator 2.0, are probably going to wind down over time. And the reason our programs are currently overwhelmed by good bug reports is that, unfortunately, our programs are very buggy. It's going to take a while. I mean, this is the new phase, where AI is truly finding problems that were not known to exist. Those will wash out of the system over the next six months or so, and then the volume of really good reports will necessarily drop, because there won't be nearly as many bugs left to be found. And as AI then continues to check code before it goes out the door, we're not going to have new bugs introduced into the ecosystem. I think it's really interesting that we are potentially talking about a major shift in the way bugs are discovered. Moving forward, it won't be nearly as much for money as it has been in the past. Leo.