Transcript
A (0:05)
You're listening to the RSA Conference podcast, where the world talks security. Hello, listeners. Welcome to this edition of our RSAC podcast series. Thank you for tuning in. I'm Tatiana Sanchez.
B (0:21)
And I'm Casey Zerkis and we are
A (0:24)
your RSAC podcast hosts. Casey, what are we going to discuss today?
B (0:29)
Well, Tatiana, no surprise that AI is front of mind for everyone. And governance around AI, simply put, is just different from traditional governance. Moreover, it needs to be treated differently. And still, many organizations continue to approach it as this sort of documentation or compliance exercise, and we know that that's just not working. AI governance is more dynamic, and it needs to move beyond just a checklist mindset. So that's why I'm excited, and I know you are as well, to be joined today by Varun Raj, who's going to talk to us about how to reframe AI governance through a system design lens. He's going to offer a few tips: how organizations can treat governance as a runtime property, how separating AI control planes can create a more secure system, and how teams can shift from a model risk mindset to a systemic risk mindset. So are you ready to dive in, Tatiana?
A (1:37)
Yes, but before we get started, we want to remind our listeners that here at RSAC we host podcasts twice a month and we encourage you to subscribe, rate and review us on your preferred podcast app so that you can be notified when new tracks are posted. And now we would like to ask our guest to take a quick moment to introduce himself before we dive in. Varun.
C (1:57)
Thank you, Tatiana. And thank you, Casey. It's great to be here. I am a cloud and engineering executive working on large-scale platforms where cloud infrastructure, data systems, and machine learning come together to power real production environments. Much of my work focuses on how organizations move generative AI from experimentation into reliable systems that operate safely at scale. What we are learning as an industry is that AI introduces a different category of operational risk than traditional software. With most software, behavior is deterministic: you can review the code and test it. With AI systems, behavior emerges from the interaction between models, data, and the surrounding platform. So the real governance challenge is no longer simply "Is the model performing well?" The more important question is "Is the system governing the model behaving safely while it runs in production?" That shift from evaluating models to governing AI systems is where many organizations are focusing right now.
