The Defender's Advantage Podcast
Episode: AI Tools and Sentiment Within the Underground Cyber Crime Community
Host: Luke McNamara (Google Threat Intelligence Group)
Guest: Michelle Cantos (Senior Threat Intelligence Analyst, Google Threat Intelligence Group)
Date: August 18, 2025
Episode Overview
This episode delves into the use and perception of artificial intelligence tools within underground cybercrime forums, focusing on how threat actors are adopting AI for malicious purposes. Host Luke McNamara interviews Michelle Cantos, the lead researcher on a major report about underground AI tool markets. They break down the types of illicit AI tools available, user sentiment, pricing, common use cases, the growing sophistication of products and services, and evolving trends including the implications of non-Western AI models.
Key Discussion Points & Insights
1. Scope and Purpose of the Underground AI Research
- Research Focus:
- Understanding how threat actors discuss, advertise, or sell AI tools in underground forums (primarily Russian and English-speaking).
- "A vibe check of these platforms to see where their heads were at in terms of AI and how they were treating it and how they were using it." — Michelle (01:49)
- Data Collection:
- Approximately a year’s worth of data: ads, posts, and discussions, totaling ~500 pages.
- Thanks given to Ramin Chorish for data collection.
2. Nature of AI Offerings in Underground Markets
- Illicit vs. Legitimate Tools:
- Many advertised tools lack security guardrails, offer uncensored outputs, and claim no data retention (making them attractive for criminal use).
- "These tools have the ability to do uncensored searches, uncensored outputs … no data retention. So if the cops came knocking down the door, we don't have the log data." — Michelle (03:29)
- Mirror to Legitimate Tech Marketing:
- Sellers emphasize customer support, versatility, workflow optimization—mirroring commercial SaaS marketing strategies.
- "A lot of these spaces mirror what you see in legitimate spaces. In terms of the marketing strategy that these ads use... It's very much like a lot of the same language that they use in more conventional spaces." — Michelle (04:32)
- Pricing and Packaging:
- High-end, all-in-one tools can command prices as high as $3,000/month.
- The majority are Swiss-army-knife-style products that combine LLMs, deepfake generation, and support for every stage of the attack lifecycle.
3. Underground Forum User Sentiment & Reviews
- Review Culture:
- Forums feature customer reviews, both positive and negative, similar to mainstream app stores or marketplaces.
- Suspicion of scams is high; users debate the legitimacy and technical efficacy of the tools.
- "It's a mixed bag... You're either really happy or really angry with the tool. So how much do you trust the reviews that are coming out?" — Michelle (09:08)
- Customer Engagement as a Success Metric:
- Tools with the most engagement (reviews/posts) were analyzed for the report.
4. Core Use Cases for Illicit AI Tools
- Deepfakes:
- Mass creation of fake profiles, fraud, blackmail, and disinformation.
- "There's a lot of use of deepfakes to masquerade as humanity to help further along their malicious operations." — Michelle (11:49)
- Phishing and Social Engineering:
- Automated, large-scale campaigns with tailored content.
- "Phishing is one of those like really stellar use cases of just making it at scale, reducing that toil and leading to more frequent and potentially more successful campaigns." — Michelle (15:32)
- Technical Support, Scripting, Reconnaissance:
- Automated coding, vulnerability research, and target profiling—mirrors legitimate productivity uses.
5. Comparison Between Illicit and Legitimate AI Models
- Selective Use Based on Task:
- Threat actors often prefer mainstream models (Gemini, ChatGPT) for technical/research tasks, and illicit models for activities needing uncensored output.
- "If you're not doing the illicit searches... there is a tendency to lean towards more conventional AI platforms and shy away from the illicit offerings." — Michelle (17:22)
- Caution and Paranoia:
- Users are suspicious of attacks and scams within their own underground ecosystem, with concerns about being tracked, logged, or set up for law enforcement stings.
6. Emergence of Non-Western AI and Geopolitical Considerations
- Popularity of Non-Western AI Models:
- Notable use/discussion of models like Alibaba Cloud’s Qwen by Russian/Chinese actors; comparisons made between DeepSeek and Western models like Gemini.
- "The rise of non-Western models and the reliability and reputation of something coming out of China versus something coming out of the west…" — Michelle (19:05)
7. Ongoing and Future Research Directions
- Broader Linguistic and Platform Coverage Needed:
- Current research focused on Russian/English; a need for similar studies in Mandarin, Portuguese, and other languages.
- Expansion to more tools as the market rapidly evolves.
- Rise of Agentic AI:
- Increasing use of autonomous, task-based agents will complicate attribution and defensive strategies.
- Need for Research on Speed/Efficiency Gains:
- Open question whether threat actors’ dwell time and cycle times are accelerating due to advanced AI tools.
- "The time to detect and stop this has dramatically shortened given the access to these tools." — Michelle (23:35)
Memorable Quotes & Notable Moments
- On Deepfakes for Monetization:
"A lot of the underground forum topic discussions were, you know, soliciting advice regarding deepfake models of how do I turn, like, which model will help me turn images or avatars into deepfake content that I can monetize..." — Michelle (11:49)
- On the Tool Marketplace:
"A lot of these tools really want to be the one stop shop for every facet depending on what your needs are... they’ve made this bulk sort of Swiss army knife of an AI tool." — Michelle (07:26)
- On Phishing Use Cases:
"Phishing is one of those like really stellar use cases of just making it at scale, reducing that toil and leading to more frequent and potentially more successful campaigns." — Michelle (15:32)
- On Paranoia Among Cybercriminals:
"There's no honor amongst thieves because in these forums some of these users are afraid that the underground tools that are being advertised might themselves be scams to get Bitcoin." — Michelle (17:22)
- On the Future of Research:
"There's a whole slew of other underground platforms that might be treating these tools differently... Our researchers have found so many more tools since we published that report." — Michelle (21:00)
Timestamps for Key Segments
- [01:48] — Research scope and data collection
- [03:28] — Illicit tool marketing and lack of guardrails
- [04:31] — Key findings and underground sentiment
- [07:26] — Packaging and pricing of AI tools for cybercrime
- [09:05] — Presence and impact of customer reviews in underground markets
- [11:49] — Deepfake tools and monetization/fraud use cases
- [15:32] — Phishing campaigns at scale using AI tools
- [17:21] — Users’ selection of illicit vs. legitimate AI models
- [19:05] — Non-Western AI models and geopolitical discussion
- [21:00] — Research gaps and agentic AI
- [23:35] — Efficiency gains and speed in cyber operations
Conclusion
This episode provides a comprehensive look at the evolving landscape of illicit AI tool adoption in cybercrime communities, highlighting both technological and social factors shaping the underground market. Michelle and Luke emphasize the need for ongoing vigilance, broader research across languages and regions, and deeper dives into rapidly developing tools and tactics.
“It’s just the tip of the iceberg. There’s so much more.” — Michelle (24:54)
