The Defender's Advantage Podcast — "Vishing in the Wild"
Host: Luke McNamara (Mandiant / Google Threat Intelligence Group)
Guests: Nick Cutilla (Consultant, Mandiant Offensive Security Services), Emily Astronova (Associate Consultant, Mandiant Offensive Security Services)
Date: June 4, 2025
Episode Overview
This episode tackles the evolving cybersecurity threat of vishing (voice phishing), with deep dives into both traditional and AI-powered voice-cloning attacks. Host Luke McNamara speaks with red teamers Nick Cutilla and Emily Astronova about their hands-on experiences, real-world red team engagements, insights from recent attacks, and practical defensive measures. The team also examines the future of vishing as AI models rapidly lower the barriers for sophisticated voice impersonation.
Key Discussion Points & Insights
1. Red Teaming and the Role of Offensive Security Services
- Red Teaming Defined: Simulating real-world attack scenarios to proactively identify weaknesses (01:27).
- "We perform assessments from the perspective of a malicious actor … to better prepare the customer for those threats." – Nick (01:27)
- The emphasis is proactive security: not only identifying weaknesses but emulating threat actors' actual methods.
2. Vishing: What It Is and Why It’s Gaining Attention
- Definition of Vishing (Voice Phishing):
- Social engineering over phone calls to manipulate targets into sharing information or taking compromising actions (03:37).
- Effectiveness Factors:
- Rise in remote work dilutes personal, face-to-face familiarity, making impersonation easier (04:00).
- Increased communications with “strangers” in large organizations (03:37–05:30).
- Less organizational control over personal devices (05:51).
- Threat Actor Example:
- Scattered Spider (UNC3944), known for using vishing in high-impact attacks (05:16).
Quote:
"When you're on the phone with somebody and speaking and there's intonation... there’s more of a believableness... it becomes a little more convincing." – Nick (07:14)
3. Vishing in Red Team Engagements: Tactics and Examples
A. Preparing and Scoping Vishing Engagements
- Engagements are tailored to client needs: employee sectors, IT, call centers, management, etc. (08:15).
- Can be used in both initial access and lateral movement stages (09:53–10:22).
B. Real Red Team Example (Emily)
- Process:
- Open source intelligence (OSINT) to identify new employees and key IT systems (12:51).
- Spoofed the IT department's phone number to call a new employee who had never met or spoken with the real IT staff.
- Used a realistic pretext (installing an urgent patch via TeamViewer) to harvest the user's credentials and install a payload.
- Outcome:
- Gained control of a low-privilege user account, then escalated to domain admin on the first day through ADCS (Active Directory Certificate Services) misconfigurations (12:51–15:18).
C. Lateral Movement
- Monitored internal channels for support issues, then called employees with believable knowledge of their real IT issues (15:57).
- Exploited weak identity validation processes to reset passwords, register new MFA devices, and eventually gain admin access (15:57–19:32).
Quote:
"We followed up on their profile, see if we can get their direct phone number that was linked ... and then called those users individually knowing their real IT issue..." – Nick (15:57)
4. AI-powered Voice Cloning: Raising the Stakes
A. Real-World AI Vishing Attack (Emily)
- Challenge: Vishing a security team member who was friends with the red team’s point of contact — higher suspicion bar.
- Trained an AI model using only 10 minutes of recorded voice from the PoC (24:05).
- Result: Successfully impersonated the PoC, passed as their “boss” in real time during a call, and harvested credentials (20:54–23:15).
Quote:
"All this is real time. I'm on the other end of the call and sure enough we got our payload deployed and rest is history." – Emily (22:32)
B. Data Sourcing for Voice Models
- Access to internal call recordings or publicly available data (e.g., conference talks, YouTube), even for C-level executives or admins, enables highly accurate cloning (24:23–25:14).
- Critical Incident:
- Found a network folder containing every recorded corporate phone call, a potential goldmine for attackers seeking to train voice-impersonation models (25:14).
Quote:
"I was like, this could absolutely be used to train models for AI voice cloning, lateral movement." – Emily (25:14)
C. Utility and Feasibility
- Even if organizations don’t provide data, attackers can often source enough public audio (26:18).
- The barrier to entry for AI vishing is now "super low"—consumer hardware and open-source models suffice (35:51).
5. Defensive Strategies & What Actually Works
A. Technical Controls
- Universal, enforced MFA for all access points, which prevents exploitation even after a successful password reset (28:31).
- Strong identification processes:
- Manager approval or third-party verification for resets of password or MFA (28:31).
- In-person or video verification with ID for sensitive changes (28:31–29:50).
- Alerting on correlated changes: password and MFA resets in quick succession.
Quote:
"The biggest one: enforcement of MFA as a whole ... getting that individual's manager involved directly for that process..." – Nick (28:31)
B. Employee Training
- Encourage “call back” verification: if users receive urgent or suspicious calls, they should hang up and call back using a number from the internal directory, which defeats caller-ID spoofing (32:03).
- Use “out-of-band” communication or rotating code words for high-privilege actions (33:09).
- Training on social engineering awareness: never act on urgency or secrecy alone.
Quote:
"As attackers, we can spoof the phone number ... but if you call back, it’s going to go to the actual intended individual—not back to us, the attacker." – Emily (32:03)
6. The Future of (AI) Vishing
- Both Nick and Emily predict the technique will grow more prevalent as organizations harden other attack surfaces while voice-based tradecraft remains effective (34:17–35:51).
- AI video impersonation is emerging, and defenders may respond by leveraging AI for voice and video verification (34:17).
- The FBI and others are now tracking cases of high-level AI impersonations in the wild (35:51).
Quote:
"I don't think phishing as a whole is going away anytime soon. Especially the more we rely on these methods of remote interaction." – Nick (34:17)
Quote:
"The barrier to entry is already super low for voice at least ... and video starts to become more accessible as well." – Emily (35:51)
Notable Quotes & Moments
- On the psychology of vishing:
  "When you're on the phone and there's intonation ... it becomes a little more convincing." – Nick (07:14)
- On red team creativity:
  "We had her navigate to an attacker-controlled portal ... stole those, stole her session ... [then] escalated from that low privilege user to domain admin in the first day." – Emily (12:51)
- On AI voice cloning efficiency:
  "We only needed 10 minutes of audio data ... enough to create a really convincing voice model." – Emily (24:05)
- On defensive best practices:
  "The biggest one: enforcement of MFA ... and have the manager involved directly for that process." – Nick (28:31)
- On the low bar for adversaries:
  "This is all being done on consumer hardware with open source models ... I don't see why that trend wouldn't continue." – Emily (35:51)
Key Timestamps
| Time | Segment / Topic |
|-------------|----------------------------------------------------|
| 01:27–02:15 | Red Teaming: Role & Philosophy |
| 03:37–07:14 | Definition and Effectiveness of Vishing |
| 08:15–12:51 | Scoping and Examples of Vishing in Engagements |
| 12:51–15:18 | Detailed Case Study: Vishing to Domain Admin |
| 15:57–19:32 | Lateral Movement & Identity Exploits |
| 20:54–23:15 | AI Voice Cloning: Real Attack Example |
| 24:05–25:14 | Voice Model Training, Internal Audio Sources |
| 25:41–28:31 | AI Utility, Sourcing Public Data |
| 28:31–33:09 | Defensive Strategies: MFA, Verification, Training |
| 34:17–36:41 | The Future: Proliferation & AI-powered Attacks |
Conclusion
The episode delivers a rich, practical look at how vishing, both traditional and AI-enhanced, enables attackers to bypass defenses, exploit human trust, and move laterally within organizations. With the rising prevalence and low barrier to entry for sophisticated voice cloning, the guests underscore the urgent need for layered defenses, robust training, and creative policy controls. The arms race between attackers leveraging AI and defenders deploying both technical and procedural safeguards is only just beginning.
Recommended action:
- Review organizational policies around identity verification, especially for high-privilege actions and remote processes.
- Enforce universal MFA and strengthen escalation procedures for resets.
- Regularly train staff on vishing awareness: encourage callbacks, out-of-band verification, and critical thinking for all urgent requests.
- Monitor developments in AI-driven impersonation and consider defensive AI countermeasures.
