Risky Bulletin: "Between Two Nerds – How Threat Actors Are Using AI to Run Wild"
Podcast: Risky Bulletin
Hosts: Tom Uren and The Grugq
Date: September 1, 2025
Episode Overview
In this "Between Two Nerds" installment, Tom Uren and the Grugq dig into a recent threat intelligence report from Anthropic, examining how cybercriminals are using large language models (LLMs) such as Anthropic's Claude to augment, and sometimes transform, their operations. The discussion covers the real-world impact LLMs are having on the cybercrime landscape: lowering technical barriers to entry, supplying domain expertise criminals previously lacked, and enabling new levels of operational and social-engineering sophistication.
Key Discussion Points & Insights
1. The Changing Role of AI in Cybercrime
- Agentic AI systems are now directly executing attacks, not just advising criminals (00:25–01:00).
- Barrier to entry is lower than ever: Even unsophisticated actors can now execute complex attacks, including developing ransomware, via AI tools.
- AI is embedded throughout cybercrime workflows: From automated victim profiling to in-depth data analysis for extortion.
Tom Uren: "The barrier to entry was dramatically lowered because you could use Claude or whatever." (03:14)
2. Why Threat Actors Gain More from AI Than Legitimate Businesses
- Criminals find value quickly: while a widely reported figure holds that 95% of companies see no ROI from their AI projects, threat actors readily integrate AI for tactical advantage.
- Reasons:
- Businesses have complex, legacy workflows; criminals build single-purpose tools with no need for lasting integration (04:34–05:14).
- Criminal workflows are short-lived and proof-of-concept grade: tools only need to work once, not be maintained.
Grugq: "Maybe businesses can't deploy AI unless their business is crime." (04:02)
3. Where LLMs Shine: Knowledge Work, Not Hacking
- LLMs excel at domain analysis and context-heavy tasks:
- AI parses financials, sorts stolen data, and crafts custom extortion notes based on victim details.
- The real value: Helping criminals make smarter ransom demands or sift for "juicy" data, not just automating attacks.
- Customization of attacks:
- Ransomware notes are tailored to each victim, with context-aware threats and financial details (07:13–09:22).
Grugq: "Claude can do things you can't do... look through an arbitrary business... and say, this is sensitive, and this they care about." (09:23)
4. Scaling Operations and Filling Skill Gaps
- AI fills expertise gaps for criminals:
- Not all attackers are certified accountants or market experts, but LLMs bring "good enough" business and domain understanding.
- This enables scaled, semi-targeted extortion at a volume previously unmanageable (10:39–11:53).
- Victim profiling and report generation: AI automates tedious research (e.g., compiling lists of law firms to target or drafting breach reports).
- "Good enough" is plenty: Cybercrime doesn’t demand perfection—just a statistical edge.
Grugq: "You need to be competent enough to make money with it. You don't need to not be wrong, I guess." (11:33)
5. State Actors and Commercial Espionage
- Chinese threat actor:
- Used Claude extensively to enhance operations against Vietnamese targets, not just for hacking but also for context and domain knowledge (23:35–24:19).
- LLMs offer deep, multi-domain, multilingual support, which is especially valuable for operators working outside their native language.
- North Korean IT worker scam:
- LLMs enable under-skilled individuals to land and keep complex overseas IT jobs, automating everything from communications to problem-solving, further fueling North Korea's revenue streams (27:23–28:53).
Tom Uren: "Operators who cannot independently write basic code or communicate professionally in English are now successfully passing technical interviews, maintaining full-time engineering positions..." (28:20)
6. AI-Driven Romance Scams & Social Engineering
- Emotional manipulation at scale:
- Romance scam bots now use LLMs to generate sophisticated, emotionally intelligent interactions across multiple languages and styles, removing the linguistic red flags that once gave away non-native scammers (29:29–31:15).
Tom Uren: "AI enables non-native speakers to craft persuasive, emotionally intelligent messages that bypass typical linguistic red flags." (30:18)
7. LLMs’ Limitations and Defensive Perspectives
- AI doesn't fundamentally expand the pool of vulnerable victims: the number of potential victims stays roughly constant, while AI grows the pool of capable attackers.
- Criminal vs. legitimate AI deployment: Illicit actors take bold risks (such as using public models) because the downside (losing access) is minor compared to the upside.
- Potential for a "criminal ChatGPT": criminal groups already run GPU-heavy password-cracking rigs and could plausibly repurpose that infrastructure to host their own "bulletproof" LLMs, purpose-built models with no guardrails for illicit use (22:15–23:35).
Grugq: "You could purchase access to a criminal LLM that has no guardrails... Ships with an MCP for like Kali and whatever. That seems like a sort of sell shovels approach to making money from the ransomware gold rush." (22:52–23:15)
8. Implications for Security Industry
- Questioning the relevance of entry-level tasks:
- Speculation that LLMs could soon take over basic penetration-test report writing and automated assessments, pushing humans into higher-value, specialized work (21:14–21:33); a rough sketch of what that drafting step might look like follows below.
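To make that speculation concrete, here is a minimal sketch of what automating a first-draft report section could look like, using the Anthropic Python SDK (fitting, given the episode's subject). Everything in it is an illustrative assumption rather than anything described in the episode: the findings data, the prompt wording, and the model alias are all hypothetical.

```python
# Minimal sketch: turning structured scan findings into a first-draft
# pentest report section with an LLM. Illustrative only; the findings,
# prompt, and model alias are assumptions, not from the episode.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Hypothetical structured output from an automated assessment tool.
findings = [
    {"host": "10.0.0.12", "issue": "TLS 1.0 enabled", "severity": "medium"},
    {"host": "10.0.0.15", "issue": "Default admin credentials", "severity": "critical"},
]

prompt = (
    "Draft a penetration-test report section for the findings below. "
    "For each finding, write a one-paragraph description, the business "
    "impact, and a remediation recommendation.\n\n"
    + "\n".join(
        f"- {f['host']}: {f['issue']} (severity: {f['severity']})" for f in findings
    )
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # alias chosen for illustration
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# A human tester would still review and edit this draft before delivery.
print(response.content[0].text)
```

The design point matches the hosts' speculation: boilerplate drafting is the easiest layer to automate, while scoping, exploitation, and verifying findings remain human work.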
Notable Quotes & Memorable Moments
- On AI's practicality for criminals: "Maybe businesses can't deploy AI unless their business is crime." (Grugq, 04:02)
- On LLMs providing new critical skills: "Claude can do things you can't do... look through an arbitrary business... and say, this is sensitive, and this they care about." (Grugq, 09:23)
- On scaling ransomware operations: "It's a numbers game in a way. And if this improves your numbers, then you win overall." (Grugq, 09:23)
- On North Korean LLM-powered workers: "Operators who cannot independently write basic code or communicate professionally in English are now successfully passing technical interviews, maintaining full-time engineering positions..." (Tom Uren, 28:20)
- On outsourcing emotional manipulation: "AI enables non-native speakers to craft persuasive, emotionally intelligent messages that bypass typical linguistic red flags." (Tom Uren, 30:18)
Timestamps for Important Segments
- 00:12–03:42: Introduction and Anthropic threat report summary
- 04:00–05:44: Why threat actors benefit more from AI than businesses
- 07:13–09:04: Extortion analysis, victim-specific ransom note crafting
- 10:39–11:53: LLMs provide general business/domain knowledge for smarter extortion
- 15:02–16:50: North Korean IT worker scam & using LLMs for social/cultural blending
- 22:15–23:35: The prospect of "bulletproof" criminal LLMs
- 23:35–25:31: Chinese state-linked actor, domain-specific knowledge via LLMs
- 29:29–31:15: AI-powered romance scams and emotional manipulation
- 31:15–32:07: LLMs for domain-specific, post-breach victim manipulation
Closing Thought
The episode closes with the observation that while AI may not create more cyber victims, it radically multiplies the capabilities and reach of threat actors, above all by plugging their knowledge gaps. The hosts agree that as LLMs become more deeply embedded in both crime and defense, the balance of advantage is shifting fast, perhaps faster than the security industry can keep up.
Stay tuned for further updates and in-depth security analysis from the Risky Bulletin team.
