Podcast Summary: "Fighting AI with AI"
The Indicator from Planet Money – October 6, 2025
Host: Darian Woods (with reporting from various NPR staff)
Theme:
This episode delves into the rapidly evolving landscape of AI-driven voice fraud, focusing on scams that use cloned voices to impersonate real people, and examines how companies and banks are fighting artificial intelligence… with artificial intelligence.
Main Theme & Purpose
The episode explores how criminals increasingly use AI to replicate voices and scam both customers and institutions, focusing on the vulnerabilities in banking and the countermeasures being developed to detect and block these threats. It features firsthand anecdotes, expert commentary, and a look at the technological arms race between scammers and defenders.
Key Discussion Points & Insights
1. How Voice Deepfakes Are Used for Scams
- Darian Woods pranks his colleague Angel Carreras with an AI voice, simulating a common scam technique of impersonating a coworker to request gift cards.
- Angel quickly detects the ruse, exposing the growing sophistication but also the limitations of AI-generated voices (00:27–00:57).
- Commentary highlights how millions of Americans have already lost substantial amounts to AI voice scams, with convincing audio easily tricking victims—especially when urgency is invoked (01:17–01:40).
2. Banks on the Front Line
- Darian Woods explains that banks are major targets of AI voice fraud because that is, quite literally, where the money is (03:05–03:10).
- Mark Guapozeski, Chief Information Officer at PNC Bank, discusses the constant pressure to strengthen security as criminals evolve with digital banking (03:22–03:37).
- A typical scam involves recording a victim’s voice for a few seconds to create an AI clone, then using it to bypass phone-based voice ID systems (03:37–03:49).
- Quote (Mark Guapozeski, 03:49):
“Because it’s so easy now to reproduce your voice, you really can’t rely on any one vector to say, okay, I’m just going to accept it’s you because I hear you.”
3. Rise of ‘Reality Defender’ and Detection Tech
- Ben Coleman, a former Goldman Sachs employee and co-founder of Reality Defender, has been tackling AI fraud risk since 2021, anticipating a coming “tsunami of fraud” before terms like “deepfake” were in everyday use (03:59–04:19).
- Coleman explains that their detection relies on “inference”: identifying subtle features in the audio that signal AI generation but are imperceptible to human ears (05:00–05:13); a toy illustration of this idea follows this list.
- Quote (Ben Coleman, 05:13): “There’s indicators of AI, which means that the information is, yes, it’s yours, but it’s being used by somebody who’s not you.”
- Reality Defender is now used by the majority of the top 20 banks.
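The episode does not describe Reality Defender’s actual models or features, so the following is only a hypothetical sketch of the general idea Coleman gestures at: a classifier that picks up on statistical traces in audio that humans cannot hear. The spectral features, the stand-in “audio,” and the logistic-regression classifier are all assumptions made for illustration, not the company’s method.

```python
# Illustrative sketch only: a toy binary classifier that flags "AI-like" audio
# from spectral summary features. Reality Defender's real system is proprietary
# and not described in the episode; everything below is an assumption.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

SR = 16_000  # assumed sample rate (Hz)

def spectral_features(waveform: np.ndarray) -> np.ndarray:
    """Summarize a waveform as a small feature vector (mean/std per frequency band)."""
    _, _, spec = spectrogram(waveform, fs=SR, nperseg=512)
    log_spec = np.log(spec + 1e-10)
    return np.concatenate([log_spec.mean(axis=1), log_spec.std(axis=1)])

def toy_clip(synthetic: bool, rng: np.random.Generator) -> np.ndarray:
    """Stand-in audio: 'synthetic' clips are made unnaturally clean in this toy."""
    t = np.arange(SR) / SR  # one second of samples
    tone = np.sin(2 * np.pi * rng.uniform(100, 300) * t)
    noise = rng.normal(scale=0.01 if synthetic else 0.2, size=t.size)
    return tone + noise

rng = np.random.default_rng(0)
X = np.array([spectral_features(toy_clip(s, rng)) for s in ([True] * 50 + [False] * 50)])
y = np.array([1] * 50 + [0] * 50)  # 1 = AI-generated, 0 = genuine

clf = LogisticRegression(max_iter=1000).fit(X, y)
probe = spectral_features(toy_clip(True, rng))
print("probability clip is AI-generated:", clf.predict_proba([probe])[0, 1])
```

The point of the toy is only that the telltale differences live in statistics of the signal rather than anything a listener would consciously notice, which is why Coleman frames detection as “inference.”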
4. Limitations and Need for Multi-Factor Security
- Ben Coleman warns that reliance on voice biometrics—“your voice is your password”—is dangerously outdated, advocating for stronger multilayered systems (05:42–05:56).
- Mark Guapozeski details PNC’s approach: using location, caller device, security codes, and other data points to supplement voice authentication (06:11–06:49); a rough sketch of this kind of layered scoring appears after this list.
- Quote (Mark Guapozeski, 06:11): “If you’re only using the one dimension, there’s risk in everything… you always want to have layers of security.”
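To make the “layers of security” point concrete, here is a minimal, hypothetical sketch of combining several independent signals (voice match, device, location, a security-code check) into one risk decision. The signal names, weights, and threshold are invented for illustration and are not PNC’s actual system.

```python
# Illustrative sketch only: combining several signals instead of trusting voice
# alone. Weights and threshold are invented; no single signal clears the caller.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    voice_match: float       # 0..1 score from a voice-biometric model
    known_device: bool       # call placed from a device seen before
    expected_location: bool  # caller location consistent with account history
    passed_code_check: bool  # customer answered a one-time security code

def risk_score(s: AuthSignals) -> float:
    """Higher means riskier; each missing layer adds risk."""
    score = 0.0
    score += 0.40 * (1.0 - s.voice_match)   # a weak voice match adds risk
    score += 0.25 * (not s.known_device)
    score += 0.15 * (not s.expected_location)
    score += 0.20 * (not s.passed_code_check)
    return score

def decide(s: AuthSignals, threshold: float = 0.35) -> str:
    return "allow" if risk_score(s) < threshold else "step-up verification"

# A cloned voice alone (perfect voice match, every other layer failing) still
# triggers extra verification rather than being waved through:
clone = AuthSignals(voice_match=0.99, known_device=False,
                    expected_location=False, passed_code_check=False)
print(decide(clone))  # -> "step-up verification"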
5. Target Shift: From Banks to Customers
- Reality Defender and similar tools have hardened bank systems, shifting fraudsters’ focus to exploiting bank customers directly (07:11).
- Scammers call customers pretending to be the bank, urging them to move funds or invest in crypto/gold, but Mark warns:
Quote (Mark Guapozeski, 07:36): "We will never ask you to move your money. If we think an account has been compromised, the bank will move your money for you, you know, within the bank."
- He describes how fraudsters build a sense of urgency and notes the importance of verifying the institution’s identity.
6. Practical Defenses: Family Safe Words
- Mark shares that his family uses a “safe word” in case anyone calls asking for money due to supposed emergencies—a simple but effective tactic (08:06).
- Quote (Mark Guapozeski, 08:06): “We all know if there’s ever a situation where somebody is either really in trouble and asking for money, ...we will ask for the safe word.”
7. The Need for Broad AI Detection Regulation
- Ben Coleman asserts that banks alone won’t be enough; all online content should carry an indicator if it has been AI-generated, from voice to video (08:27–08:44).
- He explains their push for regulations, having demonstrated deepfakes in Congress, even mimicking Senators’ voices to prove the threat (09:13–09:24).
- Quote (Ben Coleman, 08:53): “I think we’re gonna look back and say, I can’t believe there was a time when we didn’t have automated deepfake detection. Our challenge is just the technology is moving quicker than regulations.”
8. Real-World Scams and the Societal Stakes
- The hosts note celebrity impersonation scams (such as fake pro golfers on Instagram) are proliferating (08:44–08:53).
- Darian notes even The Indicator has been impersonated by scammers, warning listeners to check for official NPR email addresses.
Notable Quotes & Memorable Moments
- Darian’s AI prank on Angel:
Angel Carreras (00:27): “You sound like AI.”
Angel Carreras (00:55): “Too sloppy.”
- Mark Guapozeski on risk:
(03:49): “Because it’s so easy now to reproduce your voice, you really can’t rely on any one vector…”
- Ben Coleman’s vision for detection:
(08:53): “I think we’re gonna look back and say, I can’t believe there was a time when we didn’t have automated deepfake detection.”
- Use of safe words:
Mark Guapozeski (08:06): “We all know if there’s ever a situation where somebody is…asking for money, ...we will ask for the safe word.”
Important Timestamps
- 00:11–01:05: Darian pranks Angel with AI-generated voice
- 01:17–01:40: Overview of AI scam prevalence and losses
- 03:05–03:10: Banks’ vulnerability explained
- 03:22–03:49: Mark Guapozeski (PNC Bank) on fraud evolution
- 04:19–05:13: Ben Coleman introduces Reality Defender’s approach
- 05:56–06:49: Risks with single-vector authentication, move to multi-factor
- 07:11–07:59: Customer-targeted scams, advice from PNC
- 08:06–08:24: Safe word practical tip
- 08:44–09:24: Regulatory need; demonstration in Congress
- 09:39: Reminder for listeners about official Indicator contacts
Tone & Style
The episode blends wit, urgency, and expert commentary, keeping jargon minimal and translating complex technical threats into relatable, everyday risks. The hosts simultaneously entertain (via pranks and banter) and educate, making the topic both clear and memorable.
Conclusion
With AI-generated voices now capable of fooling both ears and security systems, defenses must be both technical (like Reality Defender’s detection software) and practical (such as safe words and awareness). The episode emphasizes multilayered security, continued vigilance, and the need for regulation as technology leaps ahead of policy.
Next episode teaser: A look at what’s “supercharging” data breaches.
