
A
A lot of the underground forum topic discussions were, you know, soliciting advice regarding deepfake models of how do I turn, like, which model will help me turn images or avatars into deepfake content that I can monetize, or asking about which deepfake tool they could use to mass produce inauthentic profiles to pretend a platform is populated by genuine users. There's a lot of use of deepfakes to masquerade as humanity to help further along their malicious operations.
B
Welcome to another episode of Mandiant's Defender's Advantage podcast. I am your host, Luke McNamara. Joining me today, I have the privilege of, I believe, welcoming back Michelle Cantos, Senior Threat Intelligence Analyst here in Google's Threat Intelligence Group. Michelle, great to see you again.
A
Yeah, thanks for having me. It's been a minute, but thanks for having me back.
B
What were you on here for last time? I know you've been on here before, but I'm blanking for a second.
A
I think it was something to do with contractors. I think it was role of third party contractors, possibly. Okay, it's been a blur.
B
Yeah.
A
Yeah.
B
Well, today you're here to talk about a piece of research that is very specific to, I think a topic for the last several years that comes up in conversations with customers all the time, which is AI and specifically how threat actors are using AI. You just finished a research report looking into this and so we're going to dive into that today. Maybe you could start with what was the scope of the research? What were the sort of main questions you were setting out to answer with this report?
A
Yeah, so with the market for AI tools and just AI in general sort of exploding in recent years, we wanted to find out what threat actors in underground forums were doing. So we wanted to figure out what tools are being advertised in these underground forum spaces and what they were just sort of talking about related to AI. What were the discussion topics? Kind of like a vibe check of these platforms to see where their heads were at in terms of AI, how they were treating it, and how they were using it. So myself, along with a team of amazing researchers, pulled about a year's worth of conversations and ads from Russian and English language underground forums. And the resulting report is our best effort at combing through that data. I have to say, big shout out to our researcher, Ramin Chorish, who collected a bulk of this information. I ended up with somewhere around 500 pages worth of posts that I had the privilege of filtering through thanks to all of his efforts. So that was fun.
B
Yeah. And I would position this as: if you think about the report we released earlier this year, Adversarial Misuse of Gemini, that was a look specifically into threat actors misusing or attempting to misuse Google's Gemini model. There's been similar research from researchers at OpenAI and Microsoft on those models. This is looking specifically at how, in these underground forums and marketplaces, threat actors are leveraging or looking to leverage more illicit tools. So not necessarily these more well known models, but models that may be purpose-built for some of the malicious activity they're doing.
A
Right, yeah. This is kind of like the foil to the Adversarial Misuse of Gemini report and blog, which is amazing and incredible, and we had so much great insight into that specific use case. This is kind of like the everything else: what's happening in underground forums that might not be using Gemini, Claude, all the legitimate tools. What is being offered in these spaces? And these tools might not have the same sort of security guardrails that the more legitimate tools offer. These tools have the ability to do uncensored searches, produce uncensored outputs. I believe some of the ads did actually highlight the fact that there was no data retention. So if the cops came knocking down the door: we don't have the log data, so they can't get at what you've been searching for online. So it's very much the opposite of the Gemini abuse report.
B
So we're going to get to some of the specifics here, but just for the kind of high level TL;DR, what were some of the key findings from this project?
A
So it's very interesting how a lot of these spaces mirror what you see in legitimate spaces. In terms of the marketing strategy that these ads use, a lot of it is human-driven support and customer support and accessibility and versatility and making tools that will optimize your efficiency and your workflow. It's very much a lot of the same language that they use in more conventional spaces. And on the discussion side, it's a lot of the typical discussions that you would normally have. Is AI going to replace our jobs? All the existential crises that come along with conversations like this. Law enforcement leveraging AI for surveillance, killer robots, it's a lot of those same conversations. But in addition to that, you'll also have, you know, threat actors who were just asking the group writ large to compare and contrast, you know, "a deepfake generator that I'm going to use for this horrible thing or operation." And it's like, oh yeah, this is a reminder that I am in an underground forum and that these people are here to support malicious activity in some cases. But overall it's a reminder that threat actors are incorporating AI tools into various stages of the attack lifecycle and it's become commonplace now. A few years ago it was a novel thing if you saw a threat actor leverage LLMs in any sort of capacity. But now it's another tool in their toolkit. And these tools are transformative force multipliers that can help those lower-level actors augment the frequency, scope, efficacy, and complexity of their intrusions, even though that low-hanging fruit might have limited technical abilities and financial resources. This report is our best effort at answering: how are these tools helping the low-hanging fruit? What do these tools look like? What specifically are they advertising? What utilities do these lower-tier actors need help with?
And that was the basis of our report.
B
I know this is kind of beyond the scope of what you were focusing on, but I do think of it as an extension to all of the as-a-service offerings that we've seen for quite some time in underground markets. So obviously people will be familiar with ransomware as a service, but, you know, web skimming as a service, any sort of activity you can imagine, there is a something-as-a-service offering for it. And I think kind of similar to that, my sense from reading this report is that there's a mixture of both the actual products and tools themselves being offered for sale, or access to those models, but then also services on top of that. So maybe an actor who's offering some capability isn't going to sell that tool outright, but he's going to provide some sort of service that you can pay for. Is that kind of the sense in terms of the mixture of what you saw?
A
Yeah. When it comes to the tooling itself, what I've seen a lot of is all-in-one services, where it's tools that are combining LLMs and deepfake generators into a single tool. It's tools that are supporting every aspect of the attack lifecycle: if you need initial access help, or if you need help maintaining presence inside a system. A lot of these tools really want to be the one-stop shop for every facet, depending on what your needs are. So it's been fascinating to see. I thought they would be sort of unitaskers, just one tool that's really good for one facet of the whole operation. But a lot of these tools, to give you the most bang for your buck, especially when some of these run as high as $3,000 a month, which I think was the highest-priced tool, it's like they want to make sure you're getting the most out of it. So they've made this, you know, all-in-one Swiss army knife of an AI tool for some of these offerings.
B
And we were talking about this a little bit before we started recording, but one of the other pieces of this, you know, marketplace that kind of mirrors what you see in more legitimate spaces is that there are reviews. And again, for folks who are familiar with underground forums and markets, that is a very common component: people will leave reviews on various services or tools that they've purchased. And so you kind of get a window into at least how people perceive these capabilities. What was some of that research like?
A
It was kind of funny because it's a mixed bag when you're going off of customer reviews, the paradox of: if you're going to leave a review, you're either really happy or really angry with the tool. So how much do you trust the reviews that are coming out? I just thought it was fun because I do the same thing when I'm looking at restaurants, but seeing someone do it for an AI tool was kind of funny in a way. For some of these, you could tell that some of the creators were soliciting feedback, but only if it was positive, wanting better customer reviews for their tools. I think I saw one or two where people were outright calling tools a scam because they didn't really work, and they accused the dev of being a middleman who was just secretly using ChatGPT to give the person an output for a trial run. It was pretty fascinating to see them have this sort of righteousness of "the customer is always right" in an underground platform; it was kind of surreal. But yeah, because there are a ton of AI tools out there, we based the scope of this report off of, you know, which tools seemed to have the most engagement. And part of that engagement included customer reviews, good or bad, regarding these tools.
B
And when you look at this in, I guess, comparison to... well, let me back up for a second. Thinking about some of the use cases here, and this goes into, I think, the capabilities that are being built in or offered. You know, you mentioned deepfakes, and that's something where, kind of early on, the last several years, maybe even further back, we started to first see adoption. And, you know, we expected some of these capabilities to certainly get better over time. I remember that now infamous image that's in a few of our blogs, a deepfake of Zelensky supposedly, you know, announcing a surrender in Ukraine. And you look at that image and you can tell just by looking at it that it is a very poorly done, very poor quality deepfake. The capability for that sort of image generation and full-motion video is much, much greater now. So is it the case that as the capabilities have progressed, we've seen more utilization of that? Or is that still kind of a subset of actors, you know, actors engaged in disinformation, or some of the North Korean groups that are using it for maybe the AI face-swapping capabilities or some of the resume lures?
A
So for a while we've actually seen a lot of financially motivated actors leverage deepfakes as part of their malicious activity, whether that's fraud or blackmail. We've seen a lot of these services out there for a few years now, whether it's people offering courses on how to make deepfakes or people offering to create deepfakes for others. I think I saw a few advertisements where it was: give us some video, give us some audio of the person, and we will make a, you know, deepfake video of whatever you want related to this person. In our recent research, a lot of the underground forum topic discussions were, you know, soliciting advice regarding deepfake models: which model will help me turn images or avatars into deepfake content that I can monetize? There's a lot of those types of discussions, or asking about which deepfake tool they could use to mass-produce inauthentic profiles to pretend a platform is populated by genuine users. There's a lot of use of deepfakes to masquerade as humanity to help further along their malicious operations. So that's something that's continuing to be embraced by underground communities and by financially motivated actors. It's no longer just sort of the toolkit of the IO actors or the state-sponsored actors.
B
I think, and some of the examples that we highlighted in the Gemini report are a good illustration of this, when we've seen threat actors that we've tracked for some time seek to use AI tools, legitimate or otherwise, it's often been in ways that make sense with that actor's previous pattern of activity and kind of what they tend to focus on. So you get some of the Iranian actors like APT42 using it for a lot of reconnaissance. Well, they do a lot of high-value targeting and reconnaissance on their targets prior to reaching out, so it makes a lot of sense that they would be incorporating these tools and these capabilities. When you saw the discussions in some of these forums and marketplaces, did it seem like there were, you know, threat actors who had a particular existing use case and were trying to figure out which model, which tool is right to kind of, you know, accelerate what they're already doing? Or, as you kind of noted, it seems like there are also threat actors talking about what new business models, of an illicit nature, are out there that they could utilize these tools for.
A
Yeah, so I'd say it's kind of the former, where in a lot of these underground discussions you'd see a lot of comparison of tooling, or a lot of topic posts where they're going: what tool is best for deepfakes, XYZ? What tool is best for scripting? I have X amount of money to spend on an AI tool; which one gives me the most bang for my buck? So a lot of these actors have an inkling of how they want to use it, how they want to reduce their version of analyst toil. They're just using these platforms to try and cut through the research, reviewing all the customer reviews: someone tell me which tool is best for this facet, I want to use AI for this specific part of my ops.
B
Yeah, you mentioned analyst toil, and that's the sort of toil-and-talent piece that you'll hear people refer to with AI's usage for defense. In the countless examples that we've seen now of threat actors leveraging AI, regardless of the model, I go back to those two things because I think they're very similar. The threat actors are using and finding a lot of similar use cases around just being able to automate more, or to scale up and do something that maybe would have taken them a lot of time in the past, or where they were less capable with certain types of coding, for example. There seems to be kind of this mirror focus around capabilities happening on the adversary side, parallel to what we're discussing on the defense side.
A
Yeah, I think a big part of that is phishing, the amount of tools related to phishing operations. It was a popular feature advertised on a lot of the tools that we found in these forums, because it's easy for actors to integrate this sort of utility into their ops. It can create more engaging lure content at scale. It can help threat actors distribute phishing emails to a wider audience. It's a push of a button, and they can do more with less. So phishing is one of those really stellar use cases: doing it at scale, reducing that toil, and leading to more frequent and potentially more successful campaigns, if you have the money and the resources. I mean, you don't even need the technical acumen; you just need money to, you know, get these tools.
B
One of the interesting things in this report is reading some of the comments around capabilities, the sort of comparisons threat actors make to legitimate AI models. Again, the well known ones, you know, Gemini, ChatGPT, the models that everyone's using, and kind of comparing and contrasting the benefits of each. So obviously you have the various models that you researched in this report that are all geared towards more illicit activity, but you do see threat actors discussing: well, actually, for this one particular function, or if I'm coding something, something like Gemini or ChatGPT might be preferable. How do you see that playing out? Are there enough data points at this point to say we do seem to see threat actors prefer illicit models for these use cases, but then for these other use cases, where they're doing something much more benign and not overtly malicious, some of the more well known legitimate models are what they tend to gravitate to?
A
Yeah, I think it's a reminder that there's no honor amongst thieves, because in these forums some users are afraid that the underground tools being advertised might themselves be scams to get Bitcoin, or might be part of an op in and of itself. So that's one thing to think about. And for a lot of these commercial offerings, there's a lot of discussion regarding reputation, reliability, accuracy of the outputs, and performance on more technical questions versus, you know, which model gives more creative outputs that sound more human. From a logistical perspective, it's whether a model has, like, API issues causing lag. Price is a big point of contention. Yeah, I'd say if you're not doing, you know, the illicit searches, and that's not necessarily a priority for you, and you're just doing research, reconnaissance, technical support, trying to figure out the best way to write a utility or a script, there is a tendency to lean towards more conventional AI platforms and shy away from the illicit offerings.
B
And you even have, I think there were a couple of points where I saw discussion around using models from non-Western companies. So using Qwen: we saw, I think, the APT28 activity recently in the news, where they were leveraging the Hugging Face API, but they were specifically leveraging Qwen, which is from Alibaba Cloud. I think there are discussions in there about DeepSeek models as well. So there's that larger kind of geopolitical component too that threat actors seem to be considering.
A
Yeah, it was kind of surprising, going back to the previous point of just how much it mirrors conventional discussion topics, to see in these spaces the rise of non-Western models and the reliability and reputation of something coming out of China versus something coming out of the West, and the comparing and contrasting of these models: is it good, can I do illicit searches on here versus there, the data retention question, and, you know, who is logging my data? Is it okay if the West has it, but not okay if China has it? The same conversations are happening in these spaces. Some users don't really care; they just care about performance, about the ability to get the most accurate output, whereas others are a little more concerned with data retention and, I guess, the reputation, the geopolitical implications, if you will, of using DeepSeek versus using Gemini.
B
So tying this all together. And again, I think this is a very useful addition to the emerging research around threat actors' leveraging of AI. What we see with this, similar to what we saw with the Gemini report and others, is that threat actors are seeking to use it across the attack lifecycle, and for maybe other more specific use cases, like the deepfake stuff, we see more utilization in the illicit model space. What are some of the areas going forward that you're interested in researching further, or where you think there needs to be more work done around this larger topic of how adversaries are going to increasingly adopt these tools and in what ways they might do so?
A
I mean, we touched on it a little bit: the rise of non-Western models, seeing the growth of that market, and comparing and contrasting what type of actor would use DeepSeek versus Gemini. Then the rise of agentic AI, and once those types of tools hit the market, what does that even look like? Is there any way to try and extract that? It's like one level removed. Before, when we had these tools, we kind of knew what functionality they were going after: using it for tech support, using it for research and reconnaissance, vuln research, all that stuff. But when you're just selling agents, that agent can do anything. So we have to find a way to dig deeper to find out how threat actors are going to use that as part of their ops. And this research covered just Russian and English language forums; we didn't cover Portuguese, we didn't cover Mandarin. There's a whole slew of other underground platforms that might be treating these tools differently and might be treating this whole conversation differently that we also need to dive deeper into. We just examined 10 tools. There are way more tools out there that I want to keep looking at. Our researchers have found so many more tools since we published that report, so we might have to do an update just to see how much more has come out in such a short time. This research is just going to continue, because the market is there and they're not going to stop anytime soon. They have access, it's cheap or free, and it's helping them conduct more successful and more profitable campaigns. So they're not going to stop now that they know this is going to help them get their bag.
B
It'll be interesting to see. One area I'd be curious to see us do research in is around the increase in speed, because, going back to what you're noting about the efficiency component, speed is something we track in stats like dwell time in the M-Trends report every year: how long is an actor active in a victim environment before they're detected? How long does it take for them to achieve actions on target if they're doing something like ransomware? To compare that in years past to some of these groups that we know are utilizing AI models, illicit or otherwise, in their operations, and say, okay, maybe from a capability standpoint, you know, they're still carrying out phishing as the initial intrusion method, and then they're, you know, escalating privileges once they get into the network, and maybe there's nothing incredibly novel in what they're doing, but are they able to do it faster? And if we're looking at that for maybe a whole class of adversaries, let's say actors engaged in extortion or ransomware, or even specific groups, I think that would be an interesting thing to study.
A
Yeah. Even just vulnerability exploitation: when something hits that you know is a vuln that hasn't been patched yet, how long does it take the threat actor to find that and spin up something to exploit it in real time, even before Patch Tuesday happens? The timescales of this have just shortened so dramatically in all aspects. Whether it's state-sponsored threats using it, IO actors spinning up fodder related to geopolitical events, or fin actors spinning up extortion or fraud campaigns, the time we have to detect and stop this has dramatically shortened given the access to these tools. Something to look forward to. Yeah.
B
The whole n-day space, and oh boy, the fast followers there, how that will change with vulnerability exploitation is a good point. So we'll have to have you back at some point when you do further research into this. Again, I think it's a topic that, regardless of whatever industry or type of adversary you care about, is going to continue to be of interest. So I think it's some great research to further our understanding of this down the field.
A
Yeah, this is just the tip of the iceberg. There's so much more. There are so many ways to spin this; there's so much fodder. I always say it's a good problem to have with this stuff. There are so many great opportunities for deeper dives into any niche aspect of this. And it being geography- and industry-agnostic, it's fun. I'm having fun. It's great.
B
Well, we'll end it right there. Thank you as always, Michelle. Take care and have a great day.
A
Thanks for having me, Luke.
Host: Luke McNamara (Google Threat Intelligence Group)
Guest: Michelle Cantos (Senior Threat Intelligence Analyst, Google Threat Intelligence Group)
Date: August 18, 2025
This episode delves into the use and perception of artificial intelligence tools within underground cybercrime forums, focusing on how threat actors are adopting AI for malicious purposes. Host Luke McNamara interviews Michelle Cantos, the lead researcher on a major report about underground AI tool markets. They break down the types of illicit AI tools available, user sentiment, pricing, common use cases, the growing sophistication of products and services, and evolving trends including the implications of non-Western AI models.
On Deepfakes for Monetization:
"A lot of the underground forum topic discussions were, you know, soliciting advice regarding deepfake models of how do I turn, like, which model will help me turn images or avatars into deepfake content that I can monetize..." — Michelle (00:01, 11:49)
On the Tool Marketplace:
"A lot of these tools really want to be the one stop shop for every facet depending on what your needs are... they’ve made this bulk sort of Swiss army knife of an AI tool." — Michelle (07:26)
On Phishing Use Cases:
"Phishing is one of those like really stellar use cases of just making it at scale, reducing that toil and leading to more frequent and potentially more successful campaigns." — Michelle (15:32)
On Paranoia Among Cybercriminals:
"There's no honor amongst thieves because in these forums some of these users are afraid that the underground tools that are being advertised might themselves be scams to get Bitcoin." — Michelle (17:22)
On the Future of Research:
"There's a whole slew of other underground platforms that might be treating these tools differently... Our researchers have found so many more tools since we published that report." — Michelle (21:00)
This episode provides a comprehensive look at the evolving landscape of illicit AI tool adoption in cybercrime communities, highlighting both technological and social factors shaping the underground market. Michelle and Luke emphasize the need for ongoing vigilance, broader research across languages and regions, and deeper dives into rapidly developing tools and tactics.
“It’s just the tip of the iceberg. There’s so much more.” — Michelle (24:54)