Podcast Summary: Governing Generative AI at Scale – From Model Risk to Systemic Control
Podcast: RSAC
Hosts: Tatiana Sanchez & Casey Zerkis
Guest: Varun Raj, Cloud and Engineering Executive
Date: April 7, 2026
Episode Theme & Purpose
This episode delves into the evolving challenges and strategies for governing generative AI at scale. The discussion centers on moving beyond traditional, static approaches to governance, advocating for dynamic, architecture-embedded controls that address not only model risk but also systemic risk. Guest Varun Raj shares insights on reframing AI governance as a runtime property, designing robust control planes, and ensuring organizations' AI systems are safe, trustworthy, and auditable in production environments.
Key Discussion Points & Insights
1. The New Paradigm of AI Governance
[00:29]
- AI governance fundamentally differs from traditional technology governance.
- Many organizations mistakenly treat AI governance as a compliance or documentation exercise.
- AI systems bring unique, behavioral risks that are dynamic and require real-time oversight.
“AI governance is more dynamic and it needs to move beyond just a checklist mindset.” – Casey Zerkis [00:29]
2. Runtime Governance for Dynamic Systems
[03:00]
- Traditional governance is static: policies, pre-deployment reviews, documentation.
- Generative AI is dynamic, interacting with new users/data constantly, and risk emerges during operation, not just development.
- Governance should be a runtime property—continuous, embedded, and enforced as part of the system architecture (see the sketch at the end of this section).
“Governance has to exist at runtime… embedded into the system itself through things like policy enforcement, access control… and full traceability.” – Varun Raj [03:26]
- Similar to cloud reliability models: don’t just test, but actively contain and monitor failures as they happen.
“Governance should not only approve AI systems before deployment; it should continuously govern how those systems behave while they operate.” – Varun Raj [04:24]
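To make the "runtime property" idea concrete, here is a minimal Python sketch (not from the episode) of what such a gate could look like: a policy engine evaluates every request at call time, blocked requests are contained rather than merely observed, and every decision is appended to an audit log for traceability. All names (PolicyEngine, GovernedModel, Decision) and the sample policy logic are illustrative assumptions.

```python
# Hypothetical sketch of a runtime governance gate: each request is evaluated
# against policy at the moment of execution, not only before deployment, and
# every decision is recorded for traceability. Names are illustrative.

import datetime
import uuid
from dataclasses import dataclass, field


@dataclass
class Decision:
    allowed: bool
    reason: str
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.datetime.utcnow().isoformat())


class PolicyEngine:
    """Evaluates each request against the currently active policies."""

    def __init__(self, blocked_topics: set[str]):
        self.blocked_topics = blocked_topics

    def evaluate(self, user: str, prompt: str) -> Decision:
        for topic in self.blocked_topics:
            if topic in prompt.lower():
                return Decision(False, f"policy violation: '{topic}' blocked for {user}")
        return Decision(True, "within policy")


class GovernedModel:
    """Wraps a model callable so no request bypasses runtime governance."""

    def __init__(self, model_fn, policy: PolicyEngine, audit_log: list):
        self.model_fn = model_fn
        self.policy = policy
        self.audit_log = audit_log  # full traceability of every decision

    def generate(self, user: str, prompt: str) -> str:
        decision = self.policy.evaluate(user, prompt)
        self.audit_log.append(decision)            # record before acting
        if not decision.allowed:
            return f"[blocked] {decision.reason}"  # contain, not just observe
        return self.model_fn(prompt)


# Usage: the same policy is enforced on every call while the system operates.
audit: list[Decision] = []
model = GovernedModel(lambda p: f"echo: {p}", PolicyEngine({"export controls"}), audit)
print(model.generate("analyst-1", "Summarize our export controls exposure"))
print(model.generate("analyst-1", "Summarize quarterly results"))
```

The point of the sketch is the placement of the check: governance sits in the call path itself, so it behaves like the cloud reliability pattern Raj describes, containing failures as they happen rather than only reviewing the system before release.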
3. Architectural Separation: Control Plane vs. Data Plane
[05:06]
- Governance controls are often inconsistently scattered across applications, leading to fragmented enforcement.
- A more effective design: separate control plane (makes governance decisions) from data plane (executes AI operations).
- The control plane handles identity verification, policy enforcement, and system-wide rules; the data plane is where AI models run (illustrated in the sketch at the end of this section).
“When you centralize governance in a control plane, you can apply least privilege principle continuously and consistently across your entire AI system.” – Varun Raj [06:05]
- This ensures models operate within defined, monitored boundaries, turning previously “black box” models into governed system components.
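Below is a minimal sketch of this control-plane / data-plane separation, written as illustrative Python rather than any specific product's API: the control plane verifies who is calling and issues a narrowly scoped, short-lived grant (least privilege), while the data plane refuses to execute any model or tool call that falls outside that grant. ControlPlane, DataPlane, and Grant are hypothetical names.

```python
# Hypothetical sketch of the control-plane / data-plane split. The control
# plane makes governance decisions and runs no models; the data plane executes
# AI operations only inside the boundary a grant defines. Names are assumed.

import time
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    principal: str
    allowed_tools: frozenset[str]   # least privilege: only what this caller needs
    expires_at: float


class ControlPlane:
    """Makes governance decisions: identity, policy, scope."""

    def __init__(self, entitlements: dict[str, frozenset[str]]):
        self.entitlements = entitlements

    def authorize(self, principal: str, ttl_seconds: int = 60) -> Grant:
        tools = self.entitlements.get(principal, frozenset())
        return Grant(principal, tools, time.time() + ttl_seconds)


class DataPlane:
    """Executes AI operations, but only within the granted boundary."""

    def __init__(self, tools: dict[str, callable]):
        self.tools = tools

    def invoke(self, grant: Grant, tool_name: str, payload: str) -> str:
        if time.time() > grant.expires_at:
            raise PermissionError("grant expired; re-authorize via control plane")
        if tool_name not in grant.allowed_tools:
            raise PermissionError(f"{grant.principal} is not entitled to {tool_name}")
        return self.tools[tool_name](payload)


# Usage: the model/tool call becomes a governed component, not a black box.
control = ControlPlane({"support-bot": frozenset({"summarize"})})
data = DataPlane({"summarize": lambda p: p[:40] + "...", "send_email": lambda p: "sent"})

grant = control.authorize("support-bot")
print(data.invoke(grant, "summarize", "Customer reports intermittent login failures"))
# data.invoke(grant, "send_email", "...")  # would raise: outside the granted boundary
```

The design choice mirrors the quote above: the model call can only happen inside a boundary the control plane defined, so the data plane cannot widen its own permissions and least privilege is applied consistently across every call.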
4. From Model Risk to Systemic Risk Thinking
[07:03]
- Prevailing governance frameworks stress model risk (e.g., bias, fairness, accuracy).
- Most operational failures actually arise from systemic risk—the ways models interact with infrastructure, data, and external tools/APIs.
- Even a well-performing model can cause harm if it is poorly integrated into an immature surrounding system.
“Aircraft safety doesn’t depend on the engine alone. It depends on the entire system—the navigation, the instrumentation, the monitoring and controls. AI systems require the same kind of thinking.” – Varun Raj [07:57]
- Governance must be built into the overall AI system: safety, observability, policy enforcement as core architectural features.
5. Enabling Verifiable, Auditable AI Governance
[08:56]
- Regulators and CISOs now expect tangible evidence, not just assurances of safety/compliance.
- Runtime governance and architectural controls help make organizational AI adoption both verifiable and auditable (see the sketch at the end of this section).
- Ensures transparency for legal, security, and engineering stakeholders.
“Architecting the systems that enforce governance at runtime will help CISOs to ensure that they don’t have the risk that regulators are concerned about.” – Varun Raj [09:34]
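One way runtime decisions can become regulator-facing evidence, sketched here under assumptions not stated in the episode: every governance event is written as a structured, hash-chained record, so auditors can re-verify both what was decided and that the log has not been altered after the fact. The field names and the chaining scheme are illustrative, not a reference to any specific standard.

```python
# Hypothetical sketch of verifiable audit evidence: each governance event is
# appended as a structured record that commits to the previous record's hash,
# so tampering breaks the chain. Field names and scheme are illustrative.

import hashlib
import json
import time


class AuditTrail:
    """Append-only log where each record commits to the one before it."""

    def __init__(self):
        self.records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> dict:
        entry = {"timestamp": time.time(), "prev_hash": self._last_hash, **event}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for entry in self.records:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


# Usage: evidence a CISO or regulator can independently re-verify,
# rather than an assurance they have to take on trust.
trail = AuditTrail()
trail.record({"principal": "support-bot", "action": "summarize", "decision": "allow"})
trail.record({"principal": "support-bot", "action": "send_email", "decision": "deny"})
print(trail.verify())  # True unless records were altered after the fact
```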
Notable Quotes & Memorable Moments
- “AI governance needs a similar shift [to cloud reliability]; what this leads to is what I often describe as a runtime governance model for AI system…” — Varun Raj [04:13]
- “If a well-trained model [is] placed inside an immature system, it can still produce serious failures.” — Varun Raj [08:18]
- “The model is no longer treated as an autonomous black box—it becomes a governed component within the controlled system architecture.” — Varun Raj [06:56]
Important Segment Timestamps
- [00:29]: Introduction to dynamic AI governance vs. traditional methods
- [03:00]: Explanation of runtime governance and its necessity
- [05:06]: Why separating control and data planes matters
- [07:03]: Differentiating model risk from systemic risk
- [08:56]: Regulatory expectations and enabling auditable AI systems
Takeaways for AI Leaders & Practitioners
- Treat governance as a living, always-on property of operational AI—not a static artifact.
- Architect systems so the control plane centrally manages policies, access, and enforcement.
- Evaluate and address systemic risk: the sum of all interactions across the AI ecosystem.
- Build governance into the fabric of the system to provide trustworthy evidence for security, regulatory, and business stakeholders.
