AI Is Here. Deepfakes Are Real. Trust Is Not Automatic Anymore
RSAC Podcast • March 10, 2026
Hosts: Tatiana Sanchez, Casey Zirkis
Guests: Clarissa Serta (Chief Legal Officer, Pindrop), Stephanie Fogel (Vice Chair, Global Markets and Sectors, DLA Piper)
Episode Overview
This episode examines the shift under way as AI-powered deepfakes and synthetic identities move from speculative threats to real-world risks. Clarissa Serta and Stephanie Fogel, both leading voices in legal and technical responses to deepfakes, explain how identity assurance is being fundamentally challenged across sectors, from enterprise security to hiring. The discussion unpacks how legal frameworks, enterprise standards, and practical safeguards must evolve to restore and maintain trust in a world where seeing (or hearing) is no longer believing.
Key Discussion Points & Insights
1. The Evolution and Impact of Deepfakes
[02:55 - 05:53]
- Deepfakes are now mainstream tools for identity fraud.
- "A couple of years ago when people talked about deepfakes, it was mostly about viral videos or political misinformation. Today... it's about whether you can tell if the person on the other end of a video call is real." (Clarissa Serta, 02:56)
- Cites a case in which an employee authorized a $25 million transfer after being deceived on an executive video call where every other participant was an AI-generated deepfake.
- Industrialization of identity fraud:
- "Over the last years, deepfakes haven't just improved – they've industrialized identity fraud. Voice and video synthesis is faster, it's interactive, it's scalable." (Clarissa, 05:08)
- Reports over 15,000 unique AI voice bots targeting a single customer environment in one summer (05:23).
2. Legal and Regulatory Perspective
[05:53 - 09:48]
- Existing statutes already apply to deepfake threats.
- "The courts aren't evaluating whether the technology is innovative. They're simply applying the existing statutes like biometric laws, wiretap frameworks, privacy regulations." (Stephanie Fogel, 05:55)
- Enforcement focuses on whether reasonable steps were taken, rather than technological novelty.
- Regulatory ambiguity around ‘reasonableness’:
- "Regulators and legislators do love that word reasonable...but also...it leaves organizations very uncertain into exactly what it is they're supposed to do." (Stephanie, 07:58)
- Ultimately, "defensibility, not perfection, is the standard here that will be applied." (Stephanie, 09:23)
3. The Shift from Gates to Continuous Signals
[09:48 - 11:29]
- Traditional perimeter security is obsolete:
- "Security used to be about gates. Log in once and you're trusted. Now it's about signals. Are you still who you say you are? That's a very different architecture." (Clarissa, 10:09)
- Legitimacy requires both technical and legal rigor:
- "Product innovation...gets you live. Legitimacy means it works technically in the system and legally in the courtroom. And discipline is what keeps you operational and at scale." (Clarissa, 11:15)
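The gates-to-signals shift Serta describes can be sketched as code: instead of trusting a session forever after one login, trust is a score that is continuously re-evaluated as risk signals arrive and revoked mid-session when it drops. This is a minimal illustrative sketch; the signal names, weights, and threshold below are assumptions for illustration, not anything specified in the episode.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Session:
    user: str
    trust: float = 1.0          # fully trusted right after initial login ("the gate")
    last_seen: float = field(default_factory=time.time)

# Hypothetical risk-signal penalties -- illustrative values only.
SIGNAL_PENALTIES = {
    "voice_liveness_fail": 0.5,   # voice fails a liveness / anti-spoof check
    "new_device": 0.2,
    "impossible_travel": 0.6,
    "behavior_anomaly": 0.3,
}
REVOKE_BELOW = 0.4  # assumed cutoff; real systems would tune this per risk tier

def observe(session: Session, signal: str) -> Session:
    """Fold one observed risk signal into the session's trust score."""
    penalty = SIGNAL_PENALTIES.get(signal, 0.0)
    session.trust = max(0.0, session.trust - penalty)
    session.last_seen = time.time()
    return session

def is_trusted(session: Session) -> bool:
    """'Are you still who you say you are?' -- re-checked on every request."""
    return session.trust >= REVOKE_BELOW

# Usage: a session that logged in cleanly, then shows deepfake indicators.
s = Session("alice")
observe(s, "new_device")            # trust drops to 0.8 -- still allowed
observe(s, "voice_liveness_fail")   # trust drops to 0.3 -- below threshold
assert not is_trusted(s)            # access revoked mid-session, not at the gate
```

The design point is that `is_trusted` runs on every request, so compromise detected at minute forty ends the session at minute forty, rather than surviving until the next login.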
4. Building Defensibility and Enterprise Alignment
[11:29 - 16:40]
- Organizations shouldn't wait for AI-specific legislation; proactive alignment is possible now:
- "You don't have to wait for new AI specific legislation...Existing frameworks already apply." (Clarissa, 12:18)
- "What reasonable looks like in the AI era...is not going to be a policy on paper, it's going to be alignment in practice." (Clarissa, 13:31)
- Documentation and cross-functional alignment are crucial for legal defense (Stephanie, 14:00–15:07).
- Compliance by design:
- "The event-based consent laws weren't written for these always-on systems...Compliance by design is really critical to this evolution and can help to systematize safeguards and reduce manual failure." (Stephanie, 15:45)
- "Consent alone isn't enough. Identity is being evaluated continuously...Purpose limits, proportional safeguards and audit trails [must be] built into the architecture from day one." (Clarissa, 16:40)
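The "compliance by design" idea above, purpose limits and audit trails built into the architecture rather than bolted on afterward, can be sketched in a few lines: every identity check must declare an authorized purpose, and every call is logged automatically. The function names, purposes, and fields here are hypothetical illustrations, not anything from the episode or a real Pindrop API.

```python
import json
import time
from functools import wraps

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

# Purpose limitation: checks outside these declared purposes are refused.
ALLOWED_PURPOSES = {"fraud_prevention", "account_recovery"}

def audited_identity_check(purpose: str):
    """Decorator: enforce purpose limits and audit every identity check."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_id: str, *args, **kwargs):
            if purpose not in ALLOWED_PURPOSES:
                raise PermissionError(f"purpose {purpose!r} not authorized")
            result = fn(user_id, *args, **kwargs)
            AUDIT_LOG.append(json.dumps({
                "ts": time.time(),
                "user": user_id,
                "purpose": purpose,
                "check": fn.__name__,
                "outcome": bool(result),
            }))
            return result
        return wrapper
    return decorator

@audited_identity_check(purpose="fraud_prevention")
def verify_voice(user_id: str, liveness_score: float) -> bool:
    # Placeholder check; a real system would call a liveness / anti-spoof model.
    return liveness_score >= 0.9

# Usage: the audit trail exists from day one, with no extra developer effort.
verify_voice("alice", 0.95)
assert len(AUDIT_LOG) == 1
```

Because the safeguard lives in the decorator rather than in each caller's discipline, it "systematizes safeguards and reduces manual failure" in exactly the sense Fogel describes.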
5. Moving from Abstract Concern to Concrete Action
[18:33 - 21:53]
- Risk management starts with trust inventories and tabletop exercises:
- "Treat authenticity as infrastructure. You start with a trust inventory...stress test those assumptions...run a tabletop exercise." (Clarissa, 19:21)
- "If you don't rehearse this...you're going to be figuring it out for the first time in the middle of a crisis." (Clarissa, 20:45)
- The stakes are personal and immediate:
- "We've moved from fake content to fake people...the victims aren't just headlines, they're individuals." (Clarissa, 21:12)
- "Adaptive security requires adaptive legitimacy." (Clarissa, 21:48)
6. Evolving Playbooks and Embracing Multidimensional Defenses
[21:53 - 26:38]
- Integrate AI into defense, not just human-driven checklists:
- "It's imperative that companies are not relying only on a human ability to follow a checklist, but to have actually integrated AI in a successful way to address bad AI." (Stephanie, 23:41)
- Use multidimensional tactics that mirror the sophistication of new threats.
- Governance gaps and real-time response:
- "Some of these failures are governance gaps that were revealed under pressure...if you have that kind of environment, there's no time to do the old school, let’s go do these checks after the fact." (Clarissa, 24:30)
Notable Quotes & Memorable Moments
On deepfake risks:
- "That's the shift we're living through...If someone can deep fake you in real time, then every system built that's based on sounds right or looks right is exposed." — Clarissa Serta, 03:16
On regulatory standards:
- "Defensibility, not perfection, is the standard here that will be applied." — Stephanie Fogel, 09:23
On adapting enterprise security:
- "Security used to be about gates...Now it's about signals. Are you still who you say you are?" — Clarissa Serta, 10:09
On aligning law and security:
- "What reasonable looks like in the AI era...is not going to be a policy on paper, it's going to be alignment in practice." — Clarissa Serta, 13:31
On stress-testing trust:
- "Treat authenticity as infrastructure...If you don't rehearse this, you're going to be figuring it out for the first time in the middle of a crisis." — Clarissa Serta, 19:21/20:45
Timestamps for Key Segments
- [02:55] – The transition from viral deepfakes to industrialized identity fraud
- [05:53] – Existing laws and regulatory focus on “reasonableness”
- [09:48] – Perimeter security versus ongoing identity verification
- [12:18] – Applying and documenting oversight, detection, and safeguards
- [15:45] – Aligning “always on” systems with event-based consent laws
- [18:33] – Moving from abstract discussion to actionable enterprise alignment
- [21:53] – Evolving playbooks to anticipate synthetic identity failures
- [24:06] – Security as an immune system; governance and fast-moving threats
Actionable Takeaways
- Immediate action is possible: Use existing frameworks to define and document reasonableness.
- Enterprise-wide alignment is key: Security, legal, and operations teams must collaborate on risk assessments and system design.
- Identity assurance must be continuous and transparent: Build controls, audit trails, and human oversight in from the start.
- Proactive stress-testing: Run regular tabletop exercises to identify and remedy trust vulnerabilities before a crisis.
- AI vs. AI: Integrate AI-driven detection and resilience tools to contend with AI-generated threats.
- Adaptive governance: Update playbooks and procedures to reflect the reality of multifaceted, evolving threats.
Summary prepared for listeners and cybersecurity professionals seeking insight on how AI-driven deepfakes are remapping the landscape of trust, regulation, and enterprise risk—and what steps are essential to stay ahead.
