RSAC Podcast Summary
Episode Title: Cyber at the Top: Shadow AI – The Hidden Threat Inside Your Organization
Date: April 2, 2026
Host: RSAC
Guest: J.R. Williamson, SVP and CISO, Leidos
Episode Overview
This episode dives into the emerging threat of “Shadow AI” – the unauthorized or unsanctioned use of generative AI tools within organizations, especially in sensitive sectors like defense. J.R. Williamson, SVP and CISO at Leidos, shares operational insights, risk assessment strategies, and key governance approaches for identifying and mitigating the risks associated with Shadow AI, all while fostering innovation and maintaining ethical standards.
Key Discussion Points
1. Leidos' Mission and the Unique Defense AI Landscape
- [01:27] J.R. describes Leidos as a leading aerospace and defense contractor with diverse operations in defense, intelligence, homeland security, health, and biomedical research, spanning the US, UK, and Australia.
- AI is not novel in defense; automated systems have existed for decades, e.g., unmanned vehicles.
- The transformative leap is generative AI’s natural language interface, allowing human-like interaction and expanding productivity and operational efficiency.
Quote:
“AI has been around a long time. What's really different is this whole sort of generative AI thing... the fact that humans can interface with the machine quite a bit simpler or easier with natural language processing.”
— J.R. Williamson [02:31]
2. Defining Shadow AI and its Risks
- Shadow AI is the informal, unauthorized use of AI tools outside enterprise governance and approved policies.
- It mirrors the historical problem of “shadow IT” but comes with new risks due to AI’s powerful and adaptive capabilities.
Quote:
“If they're operating outside those norms, we may accidentally be doing something that really is inappropriate or potentially unethical. That's not part of our brand. Ethics is one of our core six values across the enterprise.”
— J.R. Williamson [06:28]
3. Detecting and Governing Shadow AI
- Draw on lessons from combating shadow IT: shine a light on the places where misuse is most likely to occur.
- Detection mainly involves monitoring network activity, proxy logs, and cloud service usage.
- Use anomaly detection—AI itself can help identify unauthorized AI activity.
Quote:
“AI can actually help us find the shadow AI... setting up your governance, making sure folks know how to do it right. It's less about, ‘No, you may not use this,’ and more about, ‘KNOW—this is how we do it properly.’”
— J.R. Williamson [08:32]
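The detection approach above (monitoring proxy logs against a watchlist of AI endpoints) can be sketched as a small log-scanning pass. The domains, log format, and sanctioned list below are illustrative assumptions, not Leidos' actual tooling.

```python
from collections import Counter

# Illustrative watchlist of generative AI endpoints; a real deployment
# would maintain this from threat intel and the sanctioned-tool policy.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"api.openai.com"}  # hypothetical enterprise-approved endpoint

def flag_shadow_ai(proxy_log_lines):
    """Count per-user hits to AI endpoints that are not sanctioned."""
    hits = Counter()
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <domain> <url-path>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    return hits

log = [
    "2026-04-02T10:15:01 alice chat.openai.com /c/new",
    "2026-04-02T10:16:40 bob api.openai.com /v1/chat/completions",
    "2026-04-02T10:17:02 alice chat.openai.com /c/new",
]
print(flag_shadow_ai(log))  # only alice's unsanctioned traffic is flagged
```

In practice this pass would feed an anomaly-detection layer rather than a hard allowlist, matching the "AI finds the shadow AI" theme.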
4. Protective Policies, Guardrails, and Technical Controls
- Implement a combination of education, process, and technical controls.
- Speed bumps instead of outright blocks: Leidos preferred educating users at critical junctures (e.g., warning banners) rather than restricting access entirely.
- Leverage firewalls, application whitelisting, API monitoring (especially as APIs are a prime integration and leakage point), and CASBs.
- Technical solutions should complement policy and governance: use advanced tools, including AI, for detection.
Memorable Approach:
“One of the things we did very early on when ChatGPT showed up and shocked everybody was we put in a speed bump... we just had a little reminder, ‘Hey, click here if you will, just to verify and validate you understand what these risks are.’”
— J.R. Williamson [10:25]
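The "speed bump" pattern can be sketched as a one-time risk reminder interposed before access, rather than a hard block. In a real deployment this logic would live in a web proxy or CASB; the domain and banner text here are invented for illustration.

```python
# Users hitting an AI tool for the first time see a risk reminder and
# must acknowledge it before traffic is allowed through.
acknowledged = set()  # users who have clicked through the reminder

WARNING = ("Reminder: do not paste proprietary, export-controlled, or "
           "personal data into external AI tools. Click to acknowledge.")

def gate_request(user, domain, ai_domains=frozenset({"chat.openai.com"})):
    """Allow the request, or interpose a one-time warning banner."""
    if domain in ai_domains and user not in acknowledged:
        return ("speed_bump", WARNING)
    return ("allow", None)

def acknowledge(user):
    acknowledged.add(user)

# First visit triggers the banner; after acknowledging, traffic flows.
print(gate_request("alice", "chat.openai.com")[0])  # speed_bump
acknowledge("alice")
print(gate_request("alice", "chat.openai.com")[0])  # allow
```

The design choice is the point: the control educates at the moment of risk instead of driving usage further into the shadows.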
5. Balancing Enablement and Security: The Role of Education
- Education and controls must go hand-in-hand.
- Focus on teaching principles and use-case-based understanding rather than blanket prohibitions.
- Employees need to understand both the potential and the boundaries of AI—align innovation with organizational ethics.
Quote:
“We really want to teach people how to fish, not just fish for them... Put those principles together, share them, train on those, and teach the right way to use these tools.”
— J.R. Williamson [14:51]
6. Evolving Data Protection and Classification in the AI Era
- Data Loss Prevention (DLP) is challenging, especially with unstructured data and conversational AI.
- The future lies in granular tagging and portion marking, potentially automated via AI—track sensitive fragments even when extracted into prompts.
- Build more surgical, context-aware policies, moving toward automated detection and prevention using AI-assisted classification.
Quote:
“We can train the machine... when I hear this term or these certain terms coming together, that is sensitive. If I'm using the word Social Security... relate those, disrupt that. And so we're training the machine by building language models to detect those situations.”
— J.R. Williamson [17:37]
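The term co-occurrence idea in the quote can be sketched with simple pattern rules: individually benign terms become sensitive in combination. The rules below are hypothetical examples for illustration, not the language models described in the episode.

```python
import re

# Co-occurrence-based sensitivity detection: a rule fires only when
# ALL of its patterns appear together in one prompt.
RULES = [
    ("ssn_context", [re.compile(r"\bsocial security\b", re.I),
                     re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")]),
    ("export_control", [re.compile(r"\bITAR\b"),
                        re.compile(r"\btechnical data\b", re.I)]),
]

def classify_prompt(text):
    """Return the names of all sensitivity rules the prompt trips."""
    return [name for name, pats in RULES
            if all(p.search(text) for p in pats)]

print(classify_prompt("Summarize: employee social security number 123-45-6789"))
print(classify_prompt("What is a social security program?"))  # no number: clean
```

Real AI-assisted classification would generalize beyond regexes, but the principle is the same: detect sensitive fragments by context, not by keyword alone.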
7. Bringing Shadow AI Into the Light: A Supportive, Not Punitive, Approach
- Offer “amnesty” programs, inviting employees to disclose unsanctioned AI use without penalty to transition toward approved, safe usage.
- Lead with understanding and guidance rather than punishment, while maintaining seriousness about certain risks.
Quote:
“It always starts with a good amnesty program... The mindset is, we want to help teach you how to use these tools properly and effectively... not just provide the hammer.”
— J.R. Williamson [26:29]
8. Adversarial AI, Explainability, and the Future of Cyber Defense
- The threat landscape is rapidly evolving: adversaries use AI to craft sophisticated attacks, necessitating speedy, AI-driven defense.
- Explainable AI is vital—systems must provide auditable reasoning to enable trustworthy automation.
- Graph neural networks and digital twins: model your environment to continuously probe and improve defenses using AI.
Quote:
“You can almost create like a digital twin of your operating environment... Now I can do what-if scenarios. I can actually attack the model like the adversary would... That's where we go from detect and respond to predict and prevent.”
— J.R. Williamson [29:40, 36:35]
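The digital-twin what-if idea can be sketched as reachability analysis over a graph of the environment: run the adversary's search, apply a hypothetical mitigation, and re-run. The topology below is invented; a production twin would model real assets and telemetry, potentially with graph neural networks as the discussion suggests.

```python
from collections import deque

# Toy "digital twin": hosts as nodes, exploitable paths as edges.
twin = {
    "internet": ["web01"],
    "web01": ["app01"],
    "app01": ["db01"],
    "db01": [],
}

def adversary_can_reach(graph, src, target):
    """Breadth-first search: can an attacker at src reach target?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(adversary_can_reach(twin, "internet", "db01"))     # baseline: True
patched = {**twin, "app01": []}  # what-if: segment app01 from the database
print(adversary_can_reach(patched, "internet", "db01"))  # False
```

Running such scenarios continuously is what moves a program from "detect and respond" toward "predict and prevent."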
9. Actionable Takeaways and Final Advice for CISOs
- “Know where your data is.” Visibility and understanding are foundational.
- Don’t seek to block AI entirely—focus on ethical usage, education, and principled governance.
- Prepare for AI integration everywhere, including AI agents operating beyond prompts and desktops.
Quote:
“It's 10pm—do you know where your data is? Because if you don't... I guarantee you they're going to be using AI. Start with the why: what does it mean to use these tools in an ethical, responsible, and hopefully soon explainable way.”
— J.R. Williamson [39:03]
Notable Quotes & Memorable Moments
| Timestamp | Speaker | Quote |
|-----------|---------|-------|
| 02:31 | J.R. | "AI has been around a long time. What's really different is this whole sort of generative AI thing... the fact that humans can interface with the machine quite a bit simpler or easier with natural language processing." |
| 06:28 | J.R. | "If they're operating outside those norms, we may accidentally be doing something that really is inappropriate or potentially unethical." |
| 10:25 | J.R. | "We put in a speed bump... we just had a little reminder, 'Hey, click here if you will, just to verify and validate you understand what these risks are.'" |
| 14:51 | J.R. | "We really want to teach people how to fish, not just fish for them... Put those principles together, share them, train on those, and teach the right way to use these tools." |
| 17:37 | J.R. | "We can train the machine... we're training the machine by building language models to detect those situations." |
| 26:29 | J.R. | "It always starts with a good amnesty program... The mindset is, we want to help teach you how to use these tools properly and effectively... not just provide the hammer." |
| 36:35 | J.R. | "You can almost create like a digital twin of your operating environment... Now I can do what-if scenarios. I can actually attack the model like the adversary would..." |
| 39:03 | J.R. | "It's 10pm—do you know where your data is? Because if you don't... I guarantee you they're going to be using AI. Start with the why..." |
Important Timestamps for Key Segments
- [01:27] Introduction to Leidos and its unique AI challenges
- [02:31] Impact of generative AI in the defense sector
- [06:28] Definition and risks of Shadow AI
- [10:25] Building guardrails: the “speed bump” approach
- [14:51] The necessity of education in AI governance
- [17:37] Next-generation data classification and DLP with AI
- [26:29] Amnesty and positive reinforcement for shadow AI disclosure
- [29:40] Adversarial AI and the push toward explainability
- [36:35] Digital twins & graph neural networks as defensive strategy
- [39:03] Closing advice: begin with data visibility and ethical principles
Final Takeaways
- Shadow AI expands the risk landscape in all sectors, especially defense, by hiding ungoverned, potentially unethical use.
- Solutions should balance enablement and security: education, amnesty, and robust technical controls.
- Data visibility, granular tagging, and continuous, AI-driven monitoring are critical.
- The future of cyber defense lies in explainable, automated, and predictive AI—using the machine to defend against the machine.
- Start by knowing your data, then build a culture where ethical, responsible, and innovative AI use is actively taught, monitored, and celebrated.
