A
Hello everyone, this is Tom Uren. I'm here with the Gruck for another Between Two Nerds. G'day, Gruck. How are you?
B
G'day, Tom. Fine, and yourself?
A
I'm well. This edition is brought to you by Knocknoc, the platform that links your identity service with your network access controls. Find them at knocknoc.io. So Gruck, I was looking around the Internet for inspiration, as per usual, and I came across this Microsoft puff piece. Basically it's the 2026 Work Trend Index annual report. I've never heard of these things. Its title is "Agents, Human Agency and the Opportunity for Every Organization." So it's basically just, you know, AI is going to be so wonderful, this is what we need to do. The job of every leader is to re-architect work. Every firm is a learning system. AI lifts the ceiling on individual potential. So I'm very cynical about this. And actually the first thing that occurred to me is that criminal organizations are way better placed to take advantage of this than a traditional, normal commercial business or organization. I feel like criminals are in a high-risk, high-reward environment. They're comfortable with the risks of being a criminal. The downside of failure is you don't steal money. And the upside is you get a lot more money. So it feels like the incentives to experiment with AI are very strongly aligned. Like, I've heard of software engineering organizations where they're basically having AI shoved down their throats. You know, here's the budget you've got for all of the frontier labs, you must use them. And I think the incentives there are, well, there are potential benefits, but there are also potential downsides, because if you stuff up in a big way, that could be reputationally damaging, right?
B
I mean, you might be introducing more technical debt that you'll just have to pay down, and it might be due very, very soon because it'll just be so bad, right? So you can have developers there going, like, I don't want to do this, because I don't think the code quality is safe enough for us to run in production. And if we start doing this, then, you know, we're going to have so many bugs, it'll just take forever to keep fixing it. It's just going to be a nightmare.
A
And already you hear stories about, you know, people saying, yeah, the AI just destroyed our production environment, despite being told not to. And that is reputationally damaging. Whereas I don't see that for any cybercrime group. Like, what's the reputational damage if they happen to do that?
B
Yeah. If they don't hack someone, do they get laughed at in the forums?
A
Exactly. Yeah.
C
Yeah.
A
So this didn't strike me as good news.
B
So I think one of the big reasons that criminals are set up to take advantage of AI is very much that they're a manual business. As an industry they pull in billions of dollars a year, and everything they do is basically run on instant messaging and web forums and the equivalent of pieces of paper in a binder. Right? Like, they just take notes and whatever. They keep trying to use project management tools, but they haven't been to a project management course, and so they use them incorrectly.
C
Right.
B
But the thing is, right, they have a lot of processes that they could automate that they haven't, but now they absolutely can, because they can just get AI to do it. So I think if you're a normal business and you need, like, a customer relationship management system, a CRM, you don't necessarily sit down and build one yourself. You go and find, I guess, Salesforce or whatever, and there you go, you have one. That's a huge part of your business process, now done, because there's a solution you could just go and buy. Or maybe you're a firm that's been around for a long time; you might have developed some sort of legacy system and you're still using it. So again, you have an automation already in place that can manage this entire process for your business. And it's very unlikely, if you're making millions of dollars a year, or a month even, that you're managing your sales process on Post-it notes.
C
Right.
A
So I was just thinking about why criminals wouldn't use those services. And I suppose one reason is that if you're a criminal organization, it's probably a bit hard to just buy Salesforce. I would hope they're not using Salesforce. And I suppose the other reason is, if you're making really good money, the opportunity cost of stopping to build a really good process is just not worth it, because you can steal more money instead.
B
Right. They're probably not thinking in terms of, like, compounded over the next seven years, this 3% increase, you know, blah blah blah, it pays for itself in just four years. They're not thinking like that. But now they could just sit down and be like, make me a tool to manage my portfolio of hacked companies. They might have to word it slightly differently to get the AIs to do it, but they're not saying, you know, write me an intrusion kit that can evade EDR. They're saying, write me a business software tool that will allow me to be more productive and more effective. Because if you look at Fin7, the Fin7 bottleneck is not gaining access to systems and then converting that access into money. They have solved that pipeline. You put initial access in at one end, they go through their business process, and they get money out the other end.
A
Yeah, I was looking at a quite old Fin7 report from Mandiant, and basically it describes a well-organized, quite adaptable group. They changed quite rapidly over time and had a variety of tactics. I haven't heard about them recently. Are they still around?
B
No, I think they got broken up.
A
But that type of organization must still exist somewhere.
C
Right.
B
So, like, if you take down LockBit, it doesn't solve ransomware. The reason I picked Fin7 is that they actually had a sort of corporate structure. There was a C-suite, and then there were these people that they hired to do other things for them, and then there were the affiliates. So it was very much run like a business. Like, it very much had, you know, a CFO, a COO, a CEO who was, you know, doing strategy and vision planning and going away for ayahuasca and coming back. Yeah.
A
Were they the ones who would hire people on the pretense that they were a pen testing company?
B
Yep, yep.
A
Yeah.
C
Right.
A
And so they'd have like, I guess, unbeknownst, like unwitting.
B
Unwitting, that's right, yes.
A
Pen testing mules.
B
Right. Which I find a little bit hard to believe because there would have been so many red flags.
A
It feels like a don't ask, don't tell policy.
B
Yeah, it's one of those: I work for a legitimate business, I just don't ask them any questions.
A
and they pay me in cryptocurrency and
B
they tell me it's just more convenient for everyone. You know, I strongly suspect that for all of these ransomware groups that are doing this, their big wins are going to be in things like portfolio management: being able to manage just the sheer number of things that they've hacked, so that they can increase that and keep building on it. I know that the ransomware-as-a-service software does that to a degree, but I still feel like you'd be able to have people doing reconnaissance to find opportunities, and being able to put that in as an input that could then get farmed out better to people who could try and hack it. Like, there's just, there's quick wins and...
C
Right, yeah, you're doing it, you're doing
B
it manually and it works. But I think you could scale up. They have more room for growth.
C
Right, right.
A
Yeah. So I was thinking about what the problem actually is. It seems to me that there's all this concern about the rise of models like Mythos and now GPT 5.5, which seems pretty much just as good. But the criminals, I think, are on the whole the bigger problem for most people. Like, it feels like there's a national security problem at the sharp pointy end, where perhaps having advanced exploits makes a difference. But for the vast majority of people the problem is crime and fraud. And that's never relied on having access to zero days or exquisite vulnerabilities. It's just this constant background noise that goes on regardless. And it feels like AI makes a difference there, but not because of the bugs. The difference is that, well, it actually falls into Microsoft's sales pitch, where it's an enabler of the individual and they can do just so much more work. And like you say, they're sort of poised to take advantage, because when it comes to corporate software as a service, they've been second- or third-class citizens that have been shut out of the ecosystem.
B
Yeah, yeah, absolutely. And it seems to me that, you know, people are freaking out about this 0-day potential of unleashing Mythos or GPT 5.5 or whatever. And I'm not super worried about it. Like, I guess it's not going to be good for a while, but...
C
Right.
B
But I don't feel like that is the main problem.
C
Right, right.
A
I think a lot depends on what you think "not very good" looks like.
B
I'll be fine. I mean, you guys are screwed.
A
Well, and I guess I'm thinking about it now just in the context of the instruments of state, you know, the military, banks. I think well-defended organizations will do okay.
C
Right.
B
I think if you're well defended, you are not one 0-day away from the worst day of your life.
A
You shouldn't be.
B
You shouldn't be. Right. And if you talk to anyone who's been doing cybersecurity at, like, the real top end for, you know, one or two decades? Absolutely. There is no way they would allow a customer to have a posture of: everything's absolutely fine as long as none of our software is ever hacked.
A
Yes. Yeah.
B
Like, that's just not the posture that you want at all. It's always going to be: right, everything's fine, because if they do this, then we have segmentation, and if they get in here, then there's least privilege and they can't escalate to there. And if they do that, then they're still in, like, you know, the DMZ area, and you can just...
A
Good old defense in depth.
B
Right. Like, you just have a lot of things in place, such that one 0-day or even a hundred 0-days doesn't necessarily cause a problem, because you just don't have an architecture that can fall like that.
A
Yeah. So I recently read a book called The Art Thief. It's about a guy who steals billions of dollars of art and just keeps it in his room. He's a collector. And it struck me that this is like the cybersecurity environment. There are some places which are of high interest, and they invest in security and do a good job. And then there are a lot of places which are not of high interest, and they sort of fall opportunistically.
B
Yeah, they've invested in enough security that they don't get vandalized by a kid running down the block.
A
Yeah.
B
But if someone was to like walk in and just.
A
That was what this guy was doing. He would walk in with a screwdriver, and when the one security guard walked out of the room, he would unscrew a screw. He'd walk back in and look at something else. And apparently he was quite good at it. A skilled thief.
B
He was really, really good at unscrewing things. Absolutely.
A
The other trick was he never sold the art. Mostly they would catch people because if you're trying to sell a unique piece of art, you have to market it as a unique piece of art; otherwise it's just random, could be anything. And it feels like that for probably the vast majority of IT on the planet. And would that be true? Maybe "vast majority" is the wrong way to think about it. But for a lot of things, they're just not that important to either criminals or the people who run them. And so no one cares. No one defends them that well, but no one bothers to attack them either. So it's security through irrelevance, I guess.
B
So many, many years ago, Roelof Temmingh pointed something out. I don't know what his actual point was, because I forget it. But he said: look, you and I both know that every web application can be hacked. We've done enough pen tests, enough web application assessments, that if you sit down in front of a company, or in front of the web app, and you just go at it for a couple of days, all of this stuff will fall out. So how come everything isn't hacked? If everything is hackable, how come everything isn't hacked? There's something going on there. And I think it's what you're bringing up: it's just not worth it for most people. The process of turning a hacked company into money is not straightforward.
A
I think it's like the art thief. It's a bit riskier as well, because there's a money trail.
B
So I think there's a lot of room for growth with criminal organizations adopting agentic development practices, but not for exploits. And there's an article I sent you a couple of weeks ago that I just fundamentally disagreed with.
A
So it's by this guy called Drew Bruinick: "Cybersecurity looks like proof of work now. Is security spending more tokens than your attacker?" So it ends with a question mark. I think this is a think piece; I don't know if Drew entirely believes it. But you don't like it.
B
I had one or two complaints. He comes up with this thing called the security economy. So to him, security is the side that knows about the most vulnerabilities.
C
Right, right, right.
B
Like, if you're aware of more vulnerabilities, then you beat me because you know more vulnerabilities.
A
Right, yeah. So he references a chart that has, basically, on the, what is it, the X axis, the number of tokens you've spent.
B
Right.
A
And on the Y axis, it's how far you get on standardized tests run by the UK's AI Security Institute. And below that, he has in bold: to harden a system, we need to spend more tokens discovering exploits than attackers spend exploiting them.
B
I'd just point out that's not quite right. I mean, technically, you need to spend more tokens patching vulnerabilities than criminals spend exploiting them.
C
Right.
A
I guess a criminal would spend some to get the exploit as well.
B
Right.
A
So there's differing costs potentially depending on how hard those tasks are.
B
So patching is probably the easier task overall. But here's the thing: I don't think criminals get to be criminals by spending a lot of money.
A
Well, I guess right at the beginning we pointed out that they're not running based on zero days, typically. There are a few exceptions, but most criminal organizations don't need them.
B
Right. And they've never needed them. When there was that one group, Clop, that did find zero days, the thing is, they were very targeted. Like, they went out and they selected software. They said, okay, this particular software that's used for, you know, file management, that's always on the edge. We want a 0-day in that; that gets us access. Yeah, they had that one good thing.
A
They had, I guess it would be, a business model: find enterprise devices that hold data on the edge of networks, find zero days, compromise them all within a week or so, take all the files, and then do extortion. And that seems very clean. Like the Underpants Gnomes, but with an actual three steps. Made sense.
C
Yeah.
A
Because of that. That all fit together in a business process.
B
Right. And so I was going to say, like, I think the thing is you don't need AI to find bugs in enterprise software like this.
C
Right, right.
A
And my understanding is the payment rates have actually also gone down a lot. And I think, you know, there could be several reasons. Organizations could go, well, having files just sitting there at the edge of our network is a bad idea, and just have, you know, data deletion policies: it's there for transfer, but once you've transferred it, let's just get rid of it.
B
There's basically any number of ways to secure a system whose default is a very vulnerable piece of software connected to the Internet and your internal network. There's any number of architectural changes you could make that would improve that. I mean, literally any change is going to be an improvement at that point.
A
And I guess this speaks directly to that point. There are, in terms of the terminology, compensating controls, so that even if there is a zero day, they can mitigate it or render it ineffective.
B
There are a lot of steps to preventing and detecting hackers, and they are not all bypassed by the existence of one 0-day; that doesn't magic away everything else. It's a fundamental misunderstanding of what makes places secure. You know, if you look at Chrome, Chrome has multiple layers of defense, to the point where when you do get an exploit, you have to do it in an area that's been scoured for years of all possible issues, or you have to, you know, break out of jails and sandboxes and all this stuff. You would need multiple chained exploits. It's not an easy thing to do. If you hack someone's Chrome and you are NSA, you can then figure out what you're doing to achieve your mission objectives and do that. It's a new tool in an existing toolkit, and it doesn't change your business processes. But I don't think there are a lot of ransomware groups that are going from Chrome exploits to pivoting to then take over everything else, when they could instead send a phishing email with an attached EXE called, like, zip.exe, or just...
A
buy infostealer logs or...
B
Yeah. I mean, if you look at, you know, the Shiny Hunters, they were prolific in what they were hacking, and they weren't actually hacking in any sense that Mythos would recognize. They were just taking infostealer logs and trying everything. And then once they got inside, they'd just take the git repos, look for embedded secrets, and then use those. It was a 0-day-free process. There's no exploiting involved.
A
Yeah, yeah. But I mean, a while ago we spoke about that person who was using Claude and ChatGPT to help once they'd compromised, well, both compromising and getting into a network. And that felt a bit like the art thief hacking, where it was an organization no one had bothered to hack before. So that feels like, I guess, joyride hacking.
B
Yes.
A
And it doesn't seem like they monetize that particularly well.
B
So, like, I think criminals are more risk tolerant, and they're also less exposed when things go wrong for them, because they're not structurally exposed to anything that AI could get wrong. If they have AI handle some process for them, absolute worst case, they stop making money for a while because the AI is not doing its job. And that's not great, but it's not the end of the world, because you pull the AI out and you keep going.
A
Yeah. It seems like there's opportunity costs, not actual costs.
B
Right.
A
And so it's not good if you're not making money, but it's, I think, relatively easy to see how you could do better using it.
B
Still, it's not going to impact your future revenues in any way. Right. It's a point in time. Like, at this point in time I'm not making money; if I change things, then that will change. I saw a tweet recently that was pointing out that AI is going to crater the bug bounty market.
C
Right.
B
Google has already revised all of their payments down significantly. And so the argument that was made is that you have all of these talented security people who now no longer have a living doing bug bounty stuff, and they're going to have to do something. Like, they still need to live.
A
Yeah.
B
And so they propose that there's probably going to be a crime wave from that, that they would then turn to crime. And I'm not going to say it will never happen, but it seems to me that that misunderstands how criminals make money, because they don't make it from finding bugs.
C
Right, Right.
B
They make it from knowing how to turn a compromised company into money. And they have a process where, when you hack a company, you do X, Y, Z, and at the end of it you have money some percentage of the time. So they will find the bugs so they can do the hacking-the-company part and start that process. But if you don't have that process in place, you know, you're not actually going to make money out of it. Like, there are skills involved.
A
I mean, I think one of the things I talked about was how there are sort of different segments of cyber adversaries. There are the advanced ones, where I think maybe zero days do make a difference. There's the vast majority, the commodity criminals, where I think it probably doesn't make a difference, the zero days in particular. AI, I think, could actually be more beneficial for criminals, for the reasons you've talked about. And then there's also the, I guess I'd call it joyriding, where...
B
Right.
A
They do it because of the intellectual stimulation and challenge they get out of it.
C
Right.
A
And if you make it super easy for everyone, does that take that away?
C
Yeah, yeah, yeah.
A
Like, you're left with the risk without the fun. And it's unclear to me whether that will go up or down, because I think the people who used to do that really enjoyed it.
B
Like, it's joyriding because you go out, you steal a car, and you do dumb stuff in it. If you just put your Roomba in a car and let it drive off...
A
it's like you steal a Tesla and you put it into full self-driving and that's it.
B
You're right. At that point, it's no longer joyriding, it's just riding. That's right. Thanks a lot, Tom.
A
Thanks, Gruck.
Podcast: Risky Bulletin by Risky Business Media
Episode Title: Between Two Nerds: The AI-first crime gang
Date: May 12, 2026
Hosts: Tom Uren (A), The Gruck (B), occasional interjections by a third speaker (C)
This episode explores the idea that cybercriminal organizations are uniquely positioned to benefit from artificial intelligence (AI) more rapidly and flexibly than legitimate businesses. Tom and The Gruck discuss the evolving structure and operational sophistication of cybercrime gangs, how AI could serve as a force multiplier for these groups, and why this could be more concerning than advanced state-sponsored threats in terms of day-to-day cyber risk.
Setting the Stage (00:10-03:05):
Tom skims through Microsoft’s optimistic 2026 Work Trend Index, and cynically notes that criminal organizations are likely to gain more from AI than risk-averse corporate environments.
"The downside of failure is you don’t steal money. The upside is you get a lot more money." – Tom Uren (01:01)
Technical Debt & Reputational Risk:
The Gruck describes developer resistance in legitimate orgs who fear AI-generated code introducing bugs and leading to "nightmares" that criminals just don’t worry about.
"If they don’t hack someone, do they get laughed at in the forums?" – Tom Uren (02:57)
Room for AI-driven Automation (03:08-05:22):
The Gruck notes that most cybercrime is still run in a "manual" way, with processes ripe for AI-driven automation (think business tools, not hacking tools).
"Now they could just sit down and be like, make me a tool to manage my portfolio of hacked companies." – The Gruck (05:05)
Fin7 as an Example (06:22-07:49):
The hosts discuss Fin7, a corporate-structured cybercrime gang, as a model for how future AI-powered groups might operate.
Criminals as Main Threat (09:03-10:55):
Tom argues that, for most people and orgs, the "background noise" of cybercrime and fraud is a bigger risk than state-sponsored, high-exploit attacks.
"AI makes a difference there... it's an enabler of the individual and they can do just so much more work." – Tom Uren (09:40)
Well-defended Organizations vs. the Rest (11:00-14:19):
The Gruck believes competent security teams with "defense in depth" will fare fine, not being "one 0-day away from the worst day."
"You are not one 0-day away from the worst day of your life." – The Gruck (11:19)
Why Everything Isn’t Hacked (14:19-15:20):
Anecdote: Even though every web app has vulnerabilities, few are targeted due to low reward and logistical hurdles to monetizing hacks.
Process vs. Exploits (15:20-18:30):
Gruck pushes back on the idea that cybercrime is about 0-day exploits and says skill is more about business process: turning access into money.
Architectural Security (19:06-21:02):
Effective security often comes from basic architecture changes (least privilege, segmenting), not from racing attackers for vulnerabilities or exploits.
"There are a lot of steps to preventing and detecting hackers and they are not all bypassed by the existence of one 0-day; that doesn’t magic away everything else." – The Gruck (19:40)
Commodity Crime & Automation (21:04-22:54):
The norm is "0-day-free" crime: using infostealer logs, extracting credentials, abusing secrets. AI can speed up these processes but doesn’t change the fundamentals.
Bug Bounty Market and Talent Migration Fears (23:04-23:54):
Gruck rebuts the Twitter panic that AI-driven bug finding and falling bounty prices will push top researchers to turn criminal.
"They make it from knowing how to turn a compromised company into money... if you don’t have that process in place... you’re not actually going to make money out of it." – The Gruck (23:55)
Cyber Adversary Spectrum (24:23-25:21):
Gruck and Tom break adversaries into three segments: advanced actors for whom zero days may matter, the commodity-criminal majority for whom they don’t, and "joyriders" who hack for the intellectual challenge.
"If you make it super easy for everyone, does that take that away? Like, you’re left with the risk, without the fun." – Tom Uren (25:08)
On AI Enabling Crime:
"Now they could just sit down and be like, make me a tool to manage my portfolio of hacked companies." – The Gruck (05:05)
On Security Through Irrelevance:
"For a lot of things, they’re just not that important to either criminals or the people who run them. And so no one cares. No one defends them that well, but no one bothers to attack them either. So it’s security through irrelevance, I guess." – Tom Uren (13:32)
On Defense in Depth:
"You are not one 0-day away from the worst day of your life." – The Gruck (11:19)
On Monetization Over Exploits:
“They make it from knowing how to turn a compromised company into money... if you don’t have that process in place... you’re not actually going to make money out of it.” – The Gruck (23:55)
On Joyriding in Hacking:
"If you just put your Roomba in a car and let it drive off, it’s like you steal a Tesla and you put it into full self driving and that’s it... it’s no longer joyriding, it’s just riding." – The Gruck and Tom Uren (25:21-25:39)
00:10–03:05 | AI and Criminal Incentives:
The hosts discuss why cybercriminals are more incentivized to experiment with AI than businesses.
03:08–05:22 | Automation Opportunities in Cybercrime:
Cybercrime's manual processes and how AI can address this gap.
06:22–07:49 | Fin7 and the Corporate Crime Model:
Breakdown of Fin7’s structure and AI's potential for similar criminal organizations.
09:03–10:55 | AI’s Real Impact: Work Enabler, Not Exploit Generator:
Why AI is more a productivity boost for criminals than a tool for discovering bugs.
11:00–14:19 | Security Through Irrelevance and Defense:
Analogy: The art thief, why some targets are ignored.
15:20–18:30 | Monetizing Access vs. Exploit Chases:
Distinguishing real-world cybercrime profits from theoretical exploit-based models.
23:04–23:54 | Will Bug Bounty Hunters Become Criminals?:
Challenging simplifications about hackers’ motivations as markets shift.
24:23–25:39 | The End of Joyriding?:
How making hacking easier with AI could sap the joy for "curious" adversaries.
This episode robustly challenges alarmist views about AI’s introduction to cybersecurity, arguing that the more significant shift isn’t new exploits or intelligence breakthroughs, but the massive productivity gains AI can offer to already process-oriented, risk-tolerant cybercriminals. For most organizations, simple, basic security practice—and being an uninteresting target—remains the best defense. The fascinating implication: the biggest winners from the "AI-first" wave might just be the crooks with the most chaotic workflows—and the imagination to automate them.