CISO Series Podcast: "Wait, SMS Doesn’t Stand for 'Super Mega Secure?'"
Hosts: David Spark, Andy Ellis, Mike Johnson
Guest: Brian Long (Co-founder & CEO, Adaptive Security)
Date: September 16, 2025
Episode Overview
This episode dives deep into the evolving cybersecurity landscape, focusing on challenges and adaptation strategies for hiring in a remote world, the implications of AI in code generation and social engineering, the realities of deepfakes, and the persistent insecurity of SMS-based authentication. The hosts and guest explore not only technical controls but also the cultural and organizational factors that must change to keep pace with threats, offering candid, practical, and sometimes humorous insights for practitioners and business leaders.
Main Discussion Segments & Key Insights
1. Cybersecurity Representation at Conferences
[02:23-04:07]
- Diversity Issues: David Spark raises concerns over the lack of visible diversity at major conferences like Black Hat, challenging companies to "represent outside of your organization if you claim representation inside."
- Organizational Responsibility: Andy Ellis notes, “Conferences are both opportunities for your staff and a chance to display your organization’s makeup. If you’re hiring women but never sending them to conferences, maybe rethink your process.”
- Caregiver Burdens: Recognized as a factor, but companies must solve it, not use it as an excuse.
David Spark [03:17]: “If you claim the representation is within your organization, represent it outside your organization.”
2. Hiring Risks in the Remote & AI Age
[04:33-11:40]
- Real-World Example: Two North Korean operatives were nearly hired remotely by a security company; they passed initial digital checks but failed in video interviews due to AI-generated answers and poor live interaction.
- Broken Processes: Andy emphasizes, “We’ve built on insecure protocols entire business processes, and adversaries will exploit this.”
- Escalating Threats: Brian Long notes, “[AI models are] getting cheaper and better so fast that soon anyone can scale attacks for minimal cost. The ROI for attackers is clear.”
- Verification Workarounds:
- Offline or “off the books” authentication methods (e.g., shared secrets within families or teams); see the sketch after the quote below.
- Open question: can digital authenticators themselves be trusted once attackers pivot to compromising them?
- Awareness Gap: Brian warns, “Technology is moving a lot faster than awareness is,” urging urgent education for everyday users and employees.
- Risk Decision-Making: David and Andy both stress that it is safer to act cautiously; sometimes the “worst” outcome is preferable to taking a risky step.
Andy Ellis [10:20]: “Anytime you have someone saying ‘Hey, can you X,’ your brain should consider: is this person a friend or an adversary?”
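The “off the books” shared-secret idea can be lightly tooled for team callbacks: keep only a salted hash of the agreed passphrase and compare it in constant time when someone phones in an unusual request. The sketch below is illustrative only and not something prescribed in the episode; the function names and parameter choices are assumptions.

```python
import hashlib
import hmac
import os

def enroll_passphrase(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the team's agreed passphrase, never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return salt, digest

def verify_callback(attempt: str, salt: bytes, stored: bytes) -> bool:
    """Constant-time comparison during an out-of-band callback, before acting on the request."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```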
3. AI-Generated Code: Security Paradigm Shift
[11:46-17:04]
- Code Ownership: Even with AI, developers are expected to “own” their code, but at scale, human review and assurance are unsustainable.
- "Vibe Coding": Rapid, prompt-based coding shifting security and QA to the end of the pipeline—question: does this help or harm?
- The ‘Kindergarten’ of AI Coding: Brian: “Where we are now… is going to look like the kindergarten of AI coding.”
- Agentic Future: Andy foresees multi-agent workflows where AIs write specs, test cases, and prompts for other AIs, leaving humans to focus on the “art” of software rather than the tedium (see the sketch after this section's quote).
- Risk of Learned Insecurity:
- Andy: “If you have an LLM write your code, it’s trained on all software—most of it insecure.”
- Optimism: With the right evolution, AI can free engineers to focus on creativity and architectural excellence.
Andy Ellis [15:30]: “Humans should not write prompts. You need AI to write prompts for AIs to write prompts before you even write code.”
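Andy's agentic-pipeline point maps to a simple orchestration shape: one model turns intent into a spec, others write tests and a code-generation prompt, another generates the code, and a review pass flags insecure patterns before a human looks at anything. The sketch below assumes a hypothetical call_llm(role, prompt) wrapper around whichever model API a team actually uses; it illustrates the shape, not a specific product or the guest's implementation.

```python
from typing import Callable, Dict

# Hypothetical wrapper around a real model API; swap in your provider's client.
CallLLM = Callable[[str, str], str]  # (role, prompt) -> response text

def agentic_build(intent: str, call_llm: CallLLM) -> Dict[str, str]:
    """Sketch of a multi-agent workflow: spec -> tests and prompt -> code -> security review.

    Humans review the spec and the test/review output, not every generated line.
    """
    spec = call_llm("architect", f"Write a precise, testable spec for: {intent}")
    tests = call_llm("qa", f"Write executable test cases for this spec:\n{spec}")
    prompt = call_llm("prompter", f"Write a code-generation prompt that satisfies this spec:\n{spec}")
    code = call_llm("coder", prompt)
    review = call_llm("security", f"Flag insecure patterns in this code:\n{code}")
    return {"spec": spec, "tests": tests, "code": code, "security_review": review}
```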
4. "What's Worse?" Game: Telemetry Without Context vs. Context Without Capability
Game starts: [18:50]
Scenario 1 – Telemetry Everywhere, Insight Nowhere:
- Enterprise has every security tool imaginable, endless alerts, but no context, outdated asset mapping, and useless dashboards.
Scenario 2 – Context Rich, Capability Poor:
- Business-aligned threat models and contextual risk understanding, but legacy tooling, no automation, talent gaps, and only minimal compliance possible.
Debate Outcomes:
- Andy: Scenario 1 is worse—teams are “spending money but getting no value” and leadership is blissfully unaware.
- Brian: Scenario 2 is worse—at least in scenario 1, “you have the opportunity to do something… to draw attention.”
- Key Insight: Teams need both context and capability, but the inability to act (Scenario 2) can be existentially demoralizing.
5. Deepfakes: Detection is Not Enough
[26:27-33:38]
- Detection Futility:
- Andy: “Detection alone will soon be irrelevant. Everyone is about to be deepfaking themselves for minor things like cleaned-up profiles. The real problem is not technical detection, but trust and verification.”
- Deepfake Personas:
- Brian: “It’s not just video or voice—AI can create hyper-realistic persona attacks thanks to limitless OSINT. Most attacks won’t even use visuals.”
- Example: AI, given a LinkedIn URL, produced sensitive personal information within seconds.
- Evolving Threat Model: Attackers may now impersonate middle management or peers ("middle up, middle out"), not just execs; requests for "help" or favors draw less suspicion yet are effective for social engineering.
Brian Long [29:41]: “The amount of OSINT you can pull off the LLMs now is staggering.”
6. SMS 2FA Security: A Structurally Broken Trust Chain
[34:56-38:54]
- Industry Exposé: Millions of “secure” login codes are intercepted by contractors through convoluted, multi-layer routing chains.
- Technical Reality:
- Andy: “You have no confidence in SMS—every time you add a vendor, you add risk, and the structure is not designed for security.”
- Law enforcement and state access further undermine trust.
- Analogy: “It’s like taping codes to pigeons’ feet and hoping they fly in the right direction.”
- Supply Chain Complexity:
- Brian, whose prior company sent 50+ billion texts/year, confirms “there’s always yet another middleman,” and only huge-scale companies can go direct.
- Bottom Line:
- If you must use a code, use an authenticator app or another out-of-band system, not SMS (a minimal TOTP sketch follows the quote below).
- Hence the episode’s titular joke: SMS does not stand for “Super Mega Secure.”
Andy Ellis [36:28]: “You should have no confidence in SMS… If you have a relationship, use an authenticator app.”
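For context on the “use an authenticator app” advice: a time-based one-time password (TOTP, RFC 6238) is derived locally from a shared secret and the current time, so no code ever transits the SMS aggregator chain described above. A minimal, standard-library-only sketch (the secret shown is a well-known demo value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code (RFC 6238) locally from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # moving factor: 30-second window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HOTP core (RFC 4226) uses HMAC-SHA1
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real deployments enroll a randomly generated secret per user.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Both the server and the authenticator app run this same computation; the only shared state is the secret, enrolled once (typically via a QR code), with no third-party routing in between.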
Notable Quotes and Moments
- On AI Social Engineering, Brian Long [09:03]: “Everyday people really do not have any idea of what the capabilities are here… The technology is moving a lot faster than the awareness is.”
- On Deepfake Attack Scaling, Brian Long [30:02]: “They [models] can use information at unlimited scale… That’s where the difference is coming.”
- On the Meaning of Security Stack Investments, Andy Ellis [24:14]: “The first one is worse simply because you are spending money and getting no value out of it. But you have a bunch of happy vendors.”
Key Timestamps
- 02:23 – Lack of diversity at conferences
- 04:33 – North Korean remote hiring attempt; hiring/verification in remote world
- 07:21 – Off-book verification practices for employees, families, and teams
- 11:46 – AI coding: ownership, vulnerabilities, “vibe coding” culture
- 18:50 – “What’s Worse?” scenario debate on context vs. capability
- 26:27 – Deepfakes: why detection can’t save you; rise of “deepfake personas”
- 34:56 – SMS 2FA investigation: multi-vendor supply chain insecurities
- 39:55 – Adaptive Security offers and hiring; field growth stats
Sponsor and Innovation Spotlight
- Adaptive Security (Guest: Brian Long) [31:36, 39:55]
- What they do: AI-native security awareness, simulating next-gen attacks (deepfakes, vishing, smishing).
- Unique workflow: Realistic, OSINT-based simulations using actual company data and deepfake persona attacks; bespoke interactive training.
- Special offer: Demo requesters mentioning the podcast receive discounts and a custom simulated deepfake attack.
- Jobs: Actively hiring across all functions (engineering, marketing, sales, more).
Closing Thoughts
- Rapid evolution of AI, in both attack and defense, demands urgent awareness and fundamental business process reform.
- Deepfakes and AI-generated code pose new challenges that detection alone rarely addresses; security must embed both context and verification into every workflow.
- SMS-based authentication is structurally broken and should be phased out for anything where security matters.
For Further Engagement
- Adaptive Security Demo and Jobs: adaptivesecurity.com (mention CISO Series for special offers and custom simulations)
- More Podcast Content: cisoseries.com
This summary captures the candid, direct, and sometimes witty style of the hosts and guest, condensing a lively episode filled with timely warnings and actionable intelligence for security practitioners and business leaders alike.
