Eye On A.I. – Episode #328
Guest: Kevin Tian, Co-founder and CEO of Doppel
Host: Craig S. Smith
Date: March 27, 2026
Overview & Main Themes
This episode dives into the rapidly expanding threat of AI-driven social engineering: how generative models are weaponized for deepfakes, fraud, impersonation, and misinformation. Guest Kevin Tian discusses Doppel, an "AI-native social engineering defense platform": its origin, its technical approach, and the broader societal risks inherent in generative AI. The conversation covers attacks, defense strategies, enterprise and individual impacts, and the ongoing battle to preserve digital trust.
Key Discussion Points & Insights
1. The Existential Threat of AI-Driven Manipulation
- Manipulation of Digital Reality:
- “It’s really that broader thesis of this is the existential threat with AI, right? It can manipulate digital reality.”
— Kevin, [00:00]
- Erosion of Trust:
- “Ultimately what's eroding here, it's not the individual attack, it's trust in general.”
— Craig, [00:06]
- Deepfakes as a Symptom:
- Deepfakes are just one of many social engineering attack vectors; AI enables increasingly convincing and scalable attacks via various channels (social, search, phone).
2. Doppel’s Platform: Scope & Capabilities
- Comprehensive Defense:
- Doppel detects and shuts down impersonations (executive, brand, VIP), simulates attacks (red teaming), and provides security awareness training and human risk management ([00:15], [06:13]).
- “We are the first social engineering defense platform that enables you to detect impersonation attacks...and now even enable you to simulate and train against them.”
— Kevin, [00:15]
- Coverage & Reach:
- Serves enterprises (especially Fortune 500s), protecting everything from C-suite executives to brands.
- Current coverage: "upper 90s percent of business communication channels" ([18:55]).
3. Channels & Attack Vectors
- Vast Attack Surface:
- Beyond email: YouTube, LinkedIn, paid ads, phone (deepfake calls), search engine poisoning, SMS/WhatsApp/Telegram ([02:24], [09:46]).
- Trending Attacks:
- “The biggest...recent trend is definitely the phone call attacks...these bad guys...they’re doing phone calls to customer support lines, IT service lines, HR lines, and those phone calls are then how they get into compromising...” — Kevin, [09:46]
- Social media and search engines being poisoned by malicious content, often upranked via trusted third-party sites or paid ads ([10:46]).
4. AI vs. AI — The Technical Arms Race
- AI Agents for Defense (and Offense):
- Doppel uses AI agents for:
- Threat scanning, analysis, and automated takedown
- Offensive (red-teaming) simulations, e.g., "vibe phishing" via phone calls, emails, and SMS ([20:19])
- “We built one of the very first security agents with [OpenAI] to go auto analyze these attacks and take them down.”
— Kevin, [20:19]
- Red Teaming and Simulation:
- Doppel can simulate multi-channel attacks (deepfake voice calls, phishing emails/messages) to train clients; simulation based on real-world data from current threats ([06:13], [31:24]).
5. Escalating Arms Race & Societal Impact
- Adversarial Landscape:
- Cat-and-mouse between attackers and defenders
- “The reality is it’s an adversarial industry where as you build more defense capabilities, the bad guy is building more offensive capabilities.”
— Kevin, [35:44]
- Scale as a Challenge:
- “You could manually have security analysts look at every single impersonation alert and try to take it down, but again, it’s just not scalable. So how do you actually deploy AI effectively to build up your own capacity?”
— Kevin, [38:18]
- Society-Wide Ramifications:
- Misinformation erodes public trust and endangers democracy
- “When that gets to the level...of serious news...and people can no longer...figure out what's true and what's not true...it's certainly going to erode public discourse and the ability for democracies to function.”
— Craig, [16:40]
6. Practical Tips & Verification Strategies
- Verification as an Individual:
- Out-of-band checks: using other communication channels, knowledge-based questions, random/fictional discussion prompts to detect AI ([24:26])
- “It's harder to do it across all the different channels...it is important to have out of band verification channels.”
— Kevin, [25:56]
7. Human Risk Management
- Evolving Security Training:
- “It's not just phishing email simulations, but it's really multi channel AI native simulations.”
— Kevin, [19:23]
- Holistic management: combining awareness training, interventions, red teaming, and insider risk assessment ([28:17])
8. Enterprise, Individual, and Platform Roles
- Business Model:
- Currently an enterprise SaaS, with white-glove services and plans for broader individual coverage ([44:40], [45:40]).
- Relationship with Social Platforms:
- Doppel partners as a “trusted reporter” with APIs and hotlines to expedite takedowns ([37:18]).
- Platforms invest in integrity, but have different incentives and operational challenges versus a specialist defense company.
- “We have an advantage where...we get to learn what ground truth is from every company in the world.”
— Kevin, [41:04]
Notable Quotes & Memorable Moments
- On AI’s existential risk:
- “If AI were to go destroy the world...it's the fact that it can manipulate any digital surface. And so that's where our mission around protecting the world from social engineering attacks every day comes from.”
— Kevin, [14:39]
- On the erosion of trust:
- “I don’t believe anything that I see.”
— Craig, [00:06]
- "What’s eroding here, it's not the individual attack, it's trust in general."
— Craig, [00:06], reiterated at [26:20]
- On the scale of risk:
- “We're seeing consumers lose tens of billions, if not hundreds of billions of dollars today to things around fraud and phishing and social engineering.”
— Kevin, [26:51]
- On the attack chain:
- “We’re actually building out a platform to stop every single piece of that attack kill chain.”
— Kevin, [06:13]
- On practical verification:
- “A couple of the tactics...like I could ask you, Craig, to put up your phone and then with your phone, you know, show the selfie view...or ask about some random topics...to see if you’ve got pre-canned AI answers.”
— Kevin, [24:26]
- On the future of defense:
- “Our job is to make it not whack-a-mole...How do we be strategic and proactive? You know, that is our mission, right? Is to solve this problem.”
— Kevin, [39:52]
- On platforms’ response:
- “The reason why they built these programs is because they want to consume this intelligence. They want to shut down these campaigns.”
— Kevin, [47:49]
Timestamps for Important Segments
- [00:00] – Existential risk: manipulating digital reality
- [00:15] – Introduction to Doppel’s mission and capabilities
- [02:24] – The invention of “social engineering defense”
- [06:13] – How Doppel disrupts the social engineering attack chain
- [09:46] – Key current attack trends: phone calls, LinkedIn, search poisoning
- [14:39] – Doppel’s origin story and AI safety motivators
- [18:55] – Platform coverage and technical depth
- [20:19] – Doppel’s AI agent architecture & red teaming
- [24:26] – Practical verification tactics for individuals
- [26:51] – Quantifying trust erosion and financial loss
- [28:17] – Human risk management explained
- [31:24] – How AI-native defense and simulation work
- [32:31] – Outlook for AI-driven threats and election manipulation
- [35:44] – Case study: Shutting down a real-world attack
- [37:18] – Building partnerships for speedy takedowns on platforms
- [38:18] – The scale problem; need for scalable AI responses
- [39:52] – Industry arms race, aims to go beyond whack-a-mole
- [41:04] – Platform vs. specialist defense business models
- [44:40] – How companies/individuals become clients; SaaS model
- [45:34] – Individual vs. enterprise offering
- [46:36] – Working with insurance and other sectors
- [47:49] – Platforms’ willingness and policy responses
Takeaways & Implications
- AI-native social engineering is escalating in sophistication and frequency, driving a paradigm shift in both threat and defense.
- Doppel’s model leverages AI to counteract AI, enabling real-time threat detection, takedown, and simulation/awareness training.
- The challenge is both technical and societal: as digital trust erodes, the urgency for both organizational and individual defenses grows.
- Ongoing adaptation, multi-channel monitoring, and trusted relationships with major platforms are crucial.
- No panacea exists; the only way forward is continual innovation and partnership in a never-ending adversarial context.
For anyone concerned with AI risk, online trust, or cybersecurity, this episode offers a sobering and strategic view—plus practical tips for both professionals and everyday users.
