Safe Mode Podcast
Episode: What’s Powering the ‘Steroid Era’ of Cybercrime?
Date: January 15, 2026
Host: Greg Otto, Editor in Chief at CyberScoop
Guests:
- Adam Meyers, Head of Counter Adversary Operations, CrowdStrike
- Elia Zaitsev, CTO, CrowdStrike
- Matt Kapko, Cybersecurity Reporter
Episode Overview
This episode explores how artificial intelligence is accelerating cybercrime, ushering in a "steroid era" of attacks and defenses alike. Host Greg Otto speaks with CrowdStrike experts Adam Meyers and Elia Zaitsev about the ways AI is transforming everything from phishing and intrusion techniques to defenders’ capabilities. The show opens with breaking news of a major takedown of a global cybercrime marketplace, then dives into how AI is both empowering adversaries and arming defenders, and what enterprises should (or shouldn’t) automate as AI capabilities grow.
Key Discussion Points & Insights
1. The Red VDS Takedown: A Sign of the Times
[01:21 – 07:32]
- Red VDS, a major cybercrime marketplace, was taken down in a joint operation by Microsoft, industry partners, Europol, and German authorities.
- Scope: Offered criminals remote desktops, VMs, and more, acting as a SaaS platform for cybercrime—remarkable for its visibility and SEO.
- Victim sectors included real estate, healthcare, education, logistics, and legal services.
- Notably, Alabama-based H2 Pharma lost over $7.3 million; a Florida condo association lost $500,000.
- Technique: Real estate fraud was prominent—criminals timed attacks to closing deals, diverting payments at just the right moment.
- Red VDS had global reach, with 191,000+ compromised Microsoft email accounts and 130,000+ organizations impacted.
- Key Insight: The “blocking and tackling” of cybercrime has become easier, not more sophisticated. Services like Red VDS facilitate opportunistic, scalable attacks, with bulletproof hosting and global infrastructure evading basic security measures.
“I was googling through and sure enough, if you go to Google Red VDS, it still comes up...like it was any other SaaS product out there.”
— Greg Otto [02:27]
“They rented servers from third party hosting providers from multiple countries, which allowed cybercriminals to use IP addresses that looked like they were located close to targets.”
— Matt Kapko [06:34]
2. The ‘Steroid Era’ of Cybercrime: Three Defining Changes
[09:40 – 16:35]
Adam Meyers describes three key shifts:
- A. Shift from Endpoint to Social and Voice-based Attacks:
- Mass adoption of endpoint detection has pushed adversaries away from “dumb” phishing (malicious documents) to voice phishing (vishing) targeting help desks.
- "Voice-based phishing has really taken off. We saw a 442% increase...adversaries are increasingly calling the help desk and pretending to be a user." — Adam Myers [10:33]
- B. Speed:
- Average “breakout time” (from initial access to lateral movement) dropped from 62 to 48 minutes between 2023 and 2024. Fastest observed breakout: 51 seconds.
- “...the fastest breakout time we saw was 51 seconds, which is less time than it takes to make a cup of coffee.” — Adam Meyers [14:44]
- Fingerprints of AI-generated code are now turning up in exploits, indicating adversaries are using LLMs to help build them.
- C. Technical Novelty via AI:
- Adversaries use AI for everything from crafting social engineering messages to automated code/exploit development and security evasion.
- AI lowers barriers to entry for amateurs (“AI slop” in open-source code) and introduces new vulnerabilities through poorly supervised agentic systems.
“Now adversaries have, basically at the margin, zero cost, super effective technologies to create very convincing, native language sounding voice recordings and emails...those techniques have gone much cheaper, much quicker, much more effective.”
— Elia Zaitsev [13:01]
3. How AI Is Prompting and Automating Intrusions
[18:17 – 26:46]
- Promptable Intrusions:
- With advanced models, every phase—recon, exploit selection, even privilege escalation—can potentially be automated via prompts.
- Reports show LLMs can be steered into guiding actions with malicious intent; model choice matters due to “loyalty” or ideological manipulation of outputs.
- Example: Chinese LLMs introduce vulnerabilities or omit security features for specific (politically sensitive) groups.
- “The model that you choose is also really important in kind of determining what that outcome is.” — Adam Meyers [19:15]
- Supply Chain & Data Poisoning Risks:
- Open-source models may introduce unknown or intentional backdoors (“data poisoning”), especially if sourced or trained on untrusted data.
- National laws (e.g., China’s) can force ideological constraints into commercial models.
- “...even a legitimate, trusted provider might unintentionally scoop up, in reams and reams of data, malicious instructions...” — Elia Zaitsev [21:35]
- The Looming Age of Autonomous Malware:
- We are moving past human-directed remote C2; future malware may run agentic models locally, deciding in real-time how to operate without needing to “phone home.”
- “...it's probably not too long before an enterprising adversary takes a lightweight LLM and builds it into the malware.” — Adam Meyers [28:57]
- Real-world example: “LameHug,” Russian GRU malware, used Hugging Face’s API to profile systems, search for documents, and exfiltrate files—all driven by prompts (one detection angle for this pattern is sketched below).
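Prompt-driven malware of this kind still has to reach a hosted model endpoint to receive its instructions, which suggests one defensive angle: watch egress traffic for unexpected processes talking to LLM APIs. The sketch below is a hypothetical illustration, not CrowdStrike tooling; the domain list, allowlist, and process names are all assumptions.

```python
# Hypothetical egress-monitoring sketch: flag processes that contact hosted
# LLM inference APIs, since prompt-driven malware like LameHug must reach a
# model endpoint to receive instructions. The domain list and allowlist are
# illustrative assumptions, not a vetted detection rule.

LLM_API_DOMAINS = {
    "api-inference.huggingface.co",
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

# Processes expected to talk to model APIs in this (hypothetical) environment.
ALLOWLISTED_PROCESSES = {"chrome.exe", "approved-ai-assistant.exe"}

def flag_llm_egress(connections: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (process, domain) pairs where a non-allowlisted process
    contacted a known LLM API endpoint."""
    return [
        (proc, domain)
        for proc, domain in connections
        if domain in LLM_API_DOMAINS and proc.lower() not in ALLOWLISTED_PROCESSES
    ]

if __name__ == "__main__":
    observed = [
        ("chrome.exe", "api.openai.com"),                 # expected: browser traffic
        ("svchost.exe", "api-inference.huggingface.co"),  # suspicious: system process
    ]
    for proc, domain in flag_llm_egress(observed):
        print(f"ALERT: {proc} contacted LLM endpoint {domain}")
```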
4. AI for Defenders: Leveling the Playing Field
[29:19 – 37:44]
- Defense Catches Up: AI doesn't just supercharge adversaries; it also enables real productivity boosts for defense teams.
- “For the first time, we're starting to see that these systems, these AI systems, can actually enable the defender to operate at speed and at scale...” — Adam Meyers [29:58]
- Lowering the Cost of Defense:
- Tasks like alert triage, historically labor-intensive and fatiguing, can be offloaded to agentic AI systems with 98.6% accuracy (a minimal sketch of this triage pattern follows this list).
- “The marginal cost of having our agentic system triage a detection is next to nothing...” — Elia Zaitsev [34:45]
- Adds resilience against analyst burnout and enables more aggressive detection strategies (more “noise” can be handled if an AI rapidly filters it).
- Reducing Human ‘Context Switching’:
- AI can handle multiple analysis roles (malware, reactive response, dark web monitoring), relieving humans from constant tool-hopping and distraction.
- “When we look at it, [analysts] have like seven different hats. They're constantly switching off...moving that context switch into the AI so that they can stay in one tool...” — Adam Meyers [36:09]
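As a concrete illustration of the triage pattern Meyers and Zaitsev describe, here is a minimal, hypothetical sketch of the loop: get a verdict plus a confidence score for each detection, auto-close high-confidence benign alerts, and escalate everything else. The `triage_verdict` stand-in, the 0.95 threshold, and all names are assumptions for illustration; a production system would call a model or an agentic platform at that step.

```python
# Minimal sketch of an agentic alert-triage loop: score each detection,
# auto-close clearly benign alerts, escalate the rest to a human analyst.
# triage_verdict is a rule-based stand-in for a model call.

from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    process: str
    command_line: str

def triage_verdict(alert: Alert) -> tuple[str, float]:
    """Stand-in for an LLM/agent call; returns (verdict, confidence)."""
    if "-EncodedCommand" in alert.command_line:
        return "malicious", 0.97
    return "benign", 0.90

AUTO_CLOSE_CONFIDENCE = 0.95  # assumed threshold; tune against benchmarks

def triage(alerts: list[Alert]) -> None:
    for alert in alerts:
        verdict, confidence = triage_verdict(alert)
        if verdict == "benign" and confidence >= AUTO_CLOSE_CONFIDENCE:
            print(f"{alert.alert_id}: auto-closed (benign, {confidence:.0%})")
        else:
            print(f"{alert.alert_id}: escalated to analyst ({verdict}, {confidence:.0%})")

if __name__ == "__main__":
    triage([
        Alert("A-1", "powershell.exe", "powershell -EncodedCommand SQBFAFgA"),
        Alert("A-2", "svchost.exe", "svchost.exe -k netsvcs"),
    ])
```

Note the design choice: a benign verdict below the confidence threshold still goes to a human, which is how the accuracy figure stays compatible with a conservative risk posture.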
5. Where Should AI Autonomy Stop? What Needs to Stay Human?
[37:44 – 45:38]
- There is no universal red line; risk tolerance, regulatory posture, and business context set individual organizations’ boundaries.
- “I think it'd be naive to say that there is any specific category or type of decision that should never ever be made by AI. The answer to me is it depends.” — Elia Zaitsev [38:21]
- Empirical evidence should guide where to automate; if an AI is benchmarked to outperform humans in a given context, autonomy can be justified (see the benchmarking sketch after this list).
- Historical analogy: Intrusion Detection Systems (IDS) vs. Intrusion Prevention Systems (IPS). Full automation failed until systems could operate with high enough accuracy not to disrupt business.
- In critical sectors (e.g., healthcare, manufacturing), some actions (device containment, service disruptions) should require a final human check.
“You'd want a human to at least check the work before a device is contained or something gets disrupted, particularly in manufacturing or healthcare...”
— Adam Meyers [41:50]
“If I can prove to you...the machine makes mistakes less often than the humans...I would think even a risk averse organization would prefer then to have the more effective system taking that action.”
— Elia Zaitsev [44:04]
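To make Zaitsev's "prove it with data" argument concrete, here is a minimal sketch of one way to benchmark human versus AI error rates on the same labeled alert set, using a two-proportion z-test. All counts are invented for illustration.

```python
# Hypothetical benchmarking sketch: do humans mis-triage alerts more often
# than the AI on the same labeled dataset? All counts below are made up.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(err_a: int, n_a: int, err_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, one-sided p) testing whether group A's error rate exceeds group B's."""
    p_a, p_b = err_a / n_a, err_b / n_b
    pooled = (err_a + err_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 1 - NormalDist().cdf(z)

if __name__ == "__main__":
    # Invented benchmark: humans mis-triage 120 of 2,000 alerts; the AI 80 of 2,500.
    z, p = two_proportion_z(err_a=120, n_a=2000, err_b=80, n_b=2500)
    print(f"human error {120/2000:.1%}, AI error {80/2500:.1%}, z={z:.2f}, p={p:.1e}")
    if p < 0.01:
        print("Strong evidence the AI errs less often; automating this step is defensible.")
```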
6. “Hallucinations” and Risk: Should We Trust AI?
[45:38 – 49:48]
- Hallucinations (errors) in LLMs are not fundamentally new; every detection system operates with a measurable error rate.
- The challenge is transparency: many AI systems lack clear, published benchmarks for their accuracy (a minimal sketch of such measurement follows this section).
- “We're somehow treating this category of technology because it's new and scary and different in this radically different way than we've already come to terms with...” — Elia Zaitsev [47:45]
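One way to read that point: an LLM component's "hallucination rate" can be measured and reported exactly like any detector's false-positive rate. A minimal sketch, with invented labels:

```python
# Sketch of the argument above: score any system (human, IDS, or LLM) against
# ground-truth labels and publish its error rates. Labels are invented.

def confusion_rates(predictions: list[bool], truths: list[bool]) -> dict[str, float]:
    """Compute false-positive and false-negative rates from labeled outcomes."""
    fp = sum(p and not t for p, t in zip(predictions, truths))
    fn = sum(t and not p for p, t in zip(predictions, truths))
    negatives = sum(not t for t in truths)
    positives = sum(truths)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

if __name__ == "__main__":
    # predictions: did the system flag the event as malicious? truths: was it?
    preds  = [True, False, True, True, False, False, True, False]
    truths = [True, False, False, True, False, False, True, True]
    print(confusion_rates(preds, truths))
```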
7. The Malware-less Future: Hands-On Keyboard Attacks
[49:49 – 51:05]
- Increasingly, attacks involve no traditional malware at all (rising from 79% to 81% of observed cases): just persistent hands-on-keyboard, living-off-the-land tradecraft (PowerShell, WMI). A toy detection heuristic for this tradecraft follows this section.
- The defender’s best tools are context-aware, agentic systems capable of analyzing diverse, often obfuscated behaviors in real-time.
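Since living-off-the-land activity abuses built-in tools rather than dropping binaries, detection leans on behavioral scoring of things like PowerShell command lines. Below is a toy heuristic with an invented indicator list and threshold; real agentic detection would weigh far richer context than a single command line.

```python
# Toy living-off-the-land heuristic: score PowerShell command lines for
# suspicious indicators. The indicator weights and alert threshold are
# illustrative assumptions, not production detection logic.

import re

LOTL_INDICATORS = [
    (r"-enc(odedcommand)?\b", 3),                  # encoded payloads
    (r"downloadstring|iwr|invoke-webrequest", 2),  # remote fetch
    (r"-nop\b|-noprofile\b", 1),                   # profile bypass
    (r"hidden\b", 1),                              # hidden window
    (r"wmic?\b|win32_process", 2),                 # WMI process abuse
]

ALERT_THRESHOLD = 3  # assumed; tune against your own baseline

def lotl_score(command_line: str) -> int:
    """Sum the weights of every indicator present in the command line."""
    cl = command_line.lower()
    return sum(weight for pattern, weight in LOTL_INDICATORS if re.search(pattern, cl))

if __name__ == "__main__":
    samples = [
        "powershell -NoProfile -w hidden -enc SQBFAFgA",
        "powershell Get-ChildItem C:\\Users",
    ]
    for cmd in samples:
        score = lotl_score(cmd)
        status = "ALERT" if score >= ALERT_THRESHOLD else "ok"
        print(f"[{status}] score={score}: {cmd}")
```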
Notable Quotes & Memorable Moments
- “Voice-based phishing has really taken off. We saw a 442% increase…adversaries are increasingly calling the help desk and pretending to be a user.” — Adam Meyers [10:33]
- “Now adversaries have, basically at the margin, zero cost, super effective technologies to create very convincing, native language sounding voice recordings and emails.” — Elia Zaitsev [13:01]
- “The model that you choose is also really important in kind of determining what that outcome is.” — Adam Meyers [19:15]
- “At some point you have to imagine we're going to see autonomous malware, right? That...can basically live off the land...without the need to have constant instruction.” — Elia Zaitsev [25:36]
- “For the first time, we're starting to see that these systems...can actually enable the defender to operate at speed and at scale...” — Adam Meyers [29:58]
- “The marginal cost of having our agentic system triage a detection is next to nothing.” — Elia Zaitsev [34:45]
- “If I can prove to you with data, with math and with confidence that the machine makes mistakes less often than the humans...I would think even a risk averse organization would prefer...the more effective system.” — Elia Zaitsev [44:04]
- “Hallucinations are nothing new in software and security...What does it really matter? Like, what's the difference between a hallucination and a false positive?” — Elia Zaitsev [47:45]
Timestamps of Important Segments
- [01:21] – Red VDS takedown details; victims and impact
- [06:34] – “Bulletproof” hosting & cybercrime infrastructure
- [09:40] – Defining the ‘Steroid Era’: Speed, social engineering, technical sophistication
- [14:44] – Breakout times accelerate; AI fingerprints in exploit code
- [18:17] – LLM-driven, “promptable” cyber attack phases
- [19:15] – Loyalty/ideological bias in commercial AI models
- [21:35] – Data poisoning and supply chain vulnerabilities
- [25:36] – On the horizon: Autonomous, agentic AI-powered malware
- [29:58] – AI as a force-multiplier for defenders
- [34:45] – Transforming alert triage with AI (Charlotte agentic platform)
- [38:21] – The line between automated and human judgment: it depends
- [47:45] – Hallucinations vs. traditional false positives
- [49:49] – Hands-on keyboard: Stealthier, “malware-less” attacks
Tone & Takeaways
- The tone is analytical, candid, and sometimes playful (especially as the host and guests riff on self-driving cars and “The Simpsons”).
- Clear consensus: While AI drastically accelerates current cyber threats, it equally empowers defenders if implemented with empirical evidence and smart controls.
- The “answers” to defending against today’s AI-driven attacks are still rooted in classic security fundamentals—just faster, with new tools.
- Enterprises must thoughtfully decide where to automate, guided by data, their risk appetite, and the ongoing measurement of human and AI effectiveness.
For listeners seeking the upshot:
The “steroid era” of cybercrime is real—attacks are faster, smarter, and less detectable thanks to AI. Yet, for every advantage gained by adversaries, defenders now have new agentic tools to match. The real differentiator is how enterprises choose, benchmark, and monitor the balance of human and machine in their security operations.
