Podcast Summary: Cybersecurity Today – The Complex Landscape of AI and Cybersecurity
Date: September 27, 2025
Host: Jim Love
Guest: Rob T. Lee, Chief AI Officer and Chief of Research, SANS Institute
Episode Overview
In this rich and candid conversation, host Jim Love sits down with Rob T. Lee of the SANS Institute to dissect the evolving relationship between artificial intelligence (AI) and cybersecurity. The discussion navigates the unique risks, immense opportunities, and everyday realities leaders and practitioners face as AI becomes embedded in business operations. Both host and guest stress the necessity of continuous learning, vulnerability, and community in tackling this seismic technological shift, emphasizing that no single expert holds all the answers in this rapidly developing arena.
Key Topics & Insights
The Inherent Tension: Business Needs vs. Security (00:00–08:00)
- Jim Love’s Realist Stance: Love opens by framing AI as both “unstoppable” due to its potential for business transformation, and fundamentally insecure because security is often bolted on after the fact, driven by business demands for speed and innovation ([00:00]).
- “If you resist, the business will either steamroller over you or they'll subvert the rules and bring it about surreptitiously.” – Jim Love ([01:30])
- Business Rewards Action Over Prevention: Leaders are incentivized for success rather than risk avoidance, which shapes technology rollouts.
- The "Governance" Dilemma: Introducing governance well is genuinely hard; doing it without stifling the business requires rare wisdom and courage.
Practitioner-Led Security Education at SANS (03:44–06:06)
- The Value of Practical Experience: Rob T. Lee details SANS’ focus on hands-on teaching and practitioner-led instruction, not just theory.
- “You want Navy SEALs to be taught by people that were in combat... you actually want that level of experience.” – Rob T. Lee ([05:29])
- Critique of Academia vs. Practice: Both agree research is important but stress that practical learning from real-world situations is crucial for results.
Schism Between Business and Security in the Age of AI (08:45–13:30)
- Three-Way Juggle for Security Teams ([08:45]):
- Create appropriate governance frameworks/policies
- Utilize AI internally to enhance security operations
- Protect the organization as others independently experiment with AI
- Shadow AI as Top Concern: At a recent industry event, Rob found that non-cybersecurity business leaders were more worried about "shadow AI" (unauthorized and hidden use) and “business trust” than about direct technical threats like hacking or prompt injection.
- “The one thing top like 90% [mentioned] is shadow AI. And that got me thinking...” – Rob T. Lee ([09:45])
Debunking "Governance" – Moving from Restriction to Enablement (12:24–16:00)
- Language Matters: Love and Lee critique jargon like "governance" and "GRC" for being confusing or off-putting ([12:24]).
- Governance as Enablement: Lee uses the analogy of giving a child a helmet with their bike—security should enable innovation with reasonable safety, not stifle it.
- “Security's job in governance is to create that bicycle helmet... Security done properly is going to enable the business.” – Rob T. Lee ([13:33])
- Security as Community Safety: Security teams should see themselves as community enablers, akin to good community policing, rather than as enforcers.
Facing the AI Learning Curve—with Vulnerability (17:13–24:30)
- Admitting Not Knowing: Rob analogizes the current AI landscape to the early days of information security—no playbook, no experts, and the need to “get off the couch” and start learning by doing ([17:13]).
- “Anyone who would get up on stage and say, here's how to secure AI, they are guessing as much as I was guessing.” – Rob T. Lee ([17:20])
- Crisis of Competency: A striking moment: at a conference, 80% of attendees admitted to “faking it” with their AI knowledge ([22:10]).
- “How many of you are faking it with your current knowledge about AI and ML technologies? 80% of people in the room... hands went up.” – Rob T. Lee ([22:10])
- Daily Practice: The advice for keeping up with AI is practical: get daily hands-on exposure. Like sleep, diet, and exercise, interacting with AI tools for 30 minutes a day is key.
- “You have to sleep, eat well, exercise every day... You have to do AI for 30 minutes.” – Rob T. Lee ([24:32])
AI in the Business Context: Why Everyone Must Engage (27:14–31:26)
- Executives—and Everyone—Must Play: Rob recounts a boardroom experience, arguing that every executive must personally try AI, not delegate understanding.
- “If you outsource that, you and your business are at risk... You need to be the person.” – Rob T. Lee ([29:46])
- Don’t Wait for a Magic ROI: Innovation often happens in the play and experimentation phase, not via centrally sanctioned projects.
The “Framework of No” and Problems with Security’s Defaults (31:29–35:05)
- Shadow AI as an Inevitable Outcome: Security and legal often default to “no”—blocking tools until fully understood. This drives employees underground to use AI, creating more risk.
- “The framework of no... has created the exact security issue that everyone's actually worried about in that room. Shadow AI.” – Rob T. Lee ([34:20])
- Organizational Change Needed: Leaders must move from a rigid “no” stance to enabling measured, safe adoption—else employees will simply route around them.
Learning, Community, and Resource Building (40:03–44:50)
- Empowering Safe Experimentation: Give teams the chance to play, but with safeguards—like a practice dataset rather than production data ([57:03]).
- Learning as Community Practice: Assemble groups like Project Synapse (coffee chats for learning), reference materials, and real conversations—learning by sharing.
- “It's research, it is coming together, it is community forming. It is let's solve this together.” – Rob T. Lee ([41:00])
- SANS as a Hub: SANS offers practical resources (AI critical controls, summits, consensus papers), but stresses these are starting points, not full solutions.
Embracing Vulnerability for Real Leadership (46:10–50:00)
- New Leadership Mindset: Both host and guest underscore that leaders must be open about not knowing everything and embrace this as a new kind of strength.
- “I don’t have to know everything now. I don’t have to say I know everything. ... I think you've talked about this vulnerability. I think it's an important piece.” – Jim Love ([46:10])
- Learning from the “Week Older”: Find and follow “AI champions” who might only be a little ahead in learning cycles—replicate this grassroots, peer-driven progress ([47:56]).
Practical Action Steps for Cybersecurity Teams (53:06–58:00)
- Sunlight AI over Shadow AI: Shift the default approach from “no” to “yes, with supervision.” Assign security personnel embedded in teams to watch and guide experimentation.
- “Move from a framework of 'no' to sunlight AI. ... Enable them a little bit, say yes, yes, yes... do an experiment, a monitored one.” – Rob T. Lee ([53:06])
- Restructure Security for Engagement: Like the military’s practice of embedded safety officers, assign specialists to work side-by-side with functions experimenting with AI—finance, HR, operations, etc.
- Learn by Doing and Watching: Security’s role becomes facilitating and learning from what users are doing, codifying protections in live context rather than dictating from above ([57:29]).
Final Practical Advice (58:24–64:42)
- Join the Community: Tap into SANS summits, AI Exchange, open resources, and group discussions. The answers are evolving and must be built collaboratively.
- “Realize that the only way to learn is by joining the community.” – Rob T. Lee ([58:47])
- Pick a Pillar, Become a Champion: In AI security, leaders should pick a focus (Govern, Utilize, or Protect), go deep, and bring others along.
- Adopt a Growth Mindset: Accept starting fresh; no one is expected to have it all figured out.
- “It’s even more than that. It's like I am now in kindergarten again and I'm learning a brand new language...” – Rob T. Lee ([63:30])
Notable Quotes & Memorable Moments
- “How many of you are faking it with your current knowledge about AI and ML technologies? 80%... hands went up.” – Rob T. Lee ([22:10])
- “Security done properly with proper governance is going to enable the business in the perfect world… instead of thinking about restrictions, it needs to be considered enablement safely.” – Rob T. Lee ([13:33])
- “If you outsource that [AI experimentation], you and your business are at risk... You need to be the person. And... start playing with it.” – Rob T. Lee ([29:46])
- “The framework of no... has created the exact security issue... Shadow AI.” – Rob T. Lee ([34:20])
- “Everyone needs a Yoda... my Yoda is... Kate Marshall.” – Rob T. Lee ([47:56])
- “It’s like I am now in kindergarten again and I’m learning a brand new language.” – Rob T. Lee ([63:30])
Timestamps for Key Segments
- 00:00–03:16 – Jim Love’s introduction framing AI risk and business realities
- 03:44–06:06 – SANS’ practitioner-led approach to education
- 08:45–13:30 – AI’s threefold challenge for security, the business/security schism
- 13:33–16:03 – Governance as enablement, security’s evolving mindset
- 17:13–24:30 – The vulnerability of not knowing, the crisis of AI competency
- 27:14–31:26 – The imperative for everyone, especially executives, to experiment
- 31:29–35:05 – Shadow AI and the consequences of the “no” approach
- 40:03–44:50 – The value of hands-on, collective learning and community
- 46:10–50:00 – Embracing vulnerability as leadership; learning from peers
- 53:06–57:29 – How CISOs & security leaders can enable engaged, safe experimentation
- 58:24–64:42 – Concluding advice: join the community, specialize, keep learning—together
Structured Takeaways for Listeners
1. Acknowledge Uncertainty and Learn Openly
- Don’t pretend to be an AI expert—admit what you don’t know, and prioritize daily practice.
- Share vulnerabilities and lessons learned in peer groups or public forums.
2. Shift Security from Restriction to Enablement
- Replace the enforcement-heavy “framework of no” with monitored, experimental, and enabling "Sunlight AI."
- Embed security personnel in every department experimenting with AI.
3. Make AI Learning and Experimentation a Company-wide Practice
- Executives must personally try AI; learning cannot be fully delegated.
- Create internal communities, share discoveries, and embrace collective imagination.
4. Embrace Community and Collective Wisdom
- Join professional summits, group discussions, and tap into practitioner-led resources.
- Seek a “Yoda”—someone even slightly ahead in the journey.
5. Develop and Iterate Governance—Don’t Wait for Perfection
- Establish basic “helmets” (safeguards & ground rules), then adapt as learning evolves.
- Move quickly but responsibly—perfect frameworks will only come with time and experience.
For resources mentioned, visit sans.org/ai and explore AI critical controls, summits, and consensus papers.
“If you’re leaning on your ‘AI expert’... just hand them the reins of the company and you step down, because you are effectively not useful any longer.” – Rob T. Lee ([30:10])
“We all need to be in this together. We're in that foxhole, all scared. But if we work together, that's the way we get through this and we're going to be able to protect our families.” – Rob T. Lee ([62:46])
Subscribe to ‘Cybersecurity Today’ for weekly updates, and check out Project Synapse for more collaborative AI learning journeys.
