Transcript
A (0:00)
Welcome to the IA on AI Podcast, part of the Audit Podcast network, where we bring you weekly updates on AI from the internal auditor's perspective.
B (0:09)
Here we go. From CIO.com: "AI isn't the risk. Not being able to explain it is." The article starts off with: AI won't break the enterprise by failing. It will break trust when leaders can't explain why it made a decision they're expected to defend. So practically, the way that we can fix this and help from the audit perspective is building a control layer that makes AI decisions traceable and explainable. Understanding who owns these, that's getting more difficult. This isn't like a "this is my instance of Copilot" or "this is my instance of ChatGPT" thing. This is more enterprise-wide type stuff. And similar to how we can't say "that's how we did it last year" or "same as last year."
A (0:52)
Sally.
B (0:54)
Um, similarly, we can't keep going, we being organizations as a whole. When there's an incident or a complaint or something like that, we can't just be like, oh, that's what the model decided. That's not going to be acceptable anymore, or at least not for very long. So we have to add controls in there now, and we have to keep adding controls.

The article goes on to say: when something goes wrong, the questions are simple and direct. Why did the system recommend this action? Why did it flag this person, this case, or this transaction? Why did an automated workflow trigger that decision? Who owns the outcome when AI is part of the chain? These are really good questions to start thinking about now: in the event something terrible happens, or even something minor, what's going to be the answer to these questions? That's one way to think about it. I like the way the author put this, though. He said, quote, that's why I think the most important emerging topic is not the next wave of AI capability, which is, like, all we ever hear about seemingly. It's the control layer that makes AI safe to adopt at scale. I was actually pretty excited. The word control is in here 2, 4, 6, 8... 9 times. That's a lot. This is probably one of the more just on-point, "hey, internal audit" articles (as opposed to maybe even the CIO, in this case) that I've read recently.

And a lot of this comes down to the idea of explainability. So basically, coming back to those questions we asked earlier and being able to answer them: something is going to happen, so we need to be able to answer those questions, and we need to have the controls in place. Ideally you would do it during a pilot, but that's probably unlikely, because it's going to be, well, it's a pilot, let's just get this thing out there. Then maybe before we go into production we can consider it, and I'm pretty okay with that also.

But when we think about the specific controls, it's what the article refers to as traceability by default: audit trails for every material AI output or decision. So, similar to any other system or data you have, you have them ranked: this one's high priority, this one's medium, this one's low, who even really cares about that one? Similarly, you have to have that in mind for each one of these AI systems. Log inputs, key features, retrieval sources, prompts, models, versions of those models, tool calls, approvals, downstream actions, timestamps. Basically, you need sufficient evidence to determine what happened and why when the AI doesn't do what it's supposed to do.
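[Editor's note: To make the traceability idea concrete, here is a minimal sketch of what a single audit-trail record for a material AI decision might capture, covering the fields mentioned in the episode (inputs, retrieval sources, prompts, model version, tool calls, approvals, downstream actions, timestamps). This is not from the article; all names, fields, and values are illustrative assumptions.]

```python
# A minimal sketch (assumptions, not a real system) of one traceability record
# for a material AI output or decision. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any
import json
import uuid


@dataclass
class AIDecisionRecord:
    """One audit-trail entry for a single AI output or decision."""
    system_name: str                  # which AI system produced the decision
    risk_tier: str                    # e.g. "high", "medium", "low"
    model_version: str                # exact model and version used
    prompt: str                       # prompt or instruction sent to the model
    inputs: dict[str, Any]            # key features / input data
    retrieval_sources: list[str]      # documents or records retrieved
    tool_calls: list[dict[str, Any]]  # any tools or APIs the model invoked
    output: str                       # what the model recommended or decided
    approved_by: str | None           # human approver, if any
    downstream_action: str | None     # what actually happened as a result
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to JSON for appending to an append-only audit log."""
        return json.dumps(self.__dict__, default=str)


# Hypothetical example: answering "why did it flag this transaction?"
record = AIDecisionRecord(
    system_name="transaction-screening",
    risk_tier="high",
    model_version="fraud-model-2024.06",
    prompt="Assess this transaction for fraud risk.",
    inputs={"amount": 9800, "country": "US", "customer_tenure_days": 14},
    retrieval_sources=["kyc/customer-1234.pdf"],
    tool_calls=[{"tool": "sanctions_check", "result": "clear"}],
    output="FLAG: amount just under reporting threshold, new customer",
    approved_by="analyst_jdoe",
    downstream_action="transaction held for manual review",
)
print(record.to_log_line())
```

[In practice a record like this would be written to tamper-evident storage and retained according to the system's risk ranking, so the "why did it flag this" questions discussed above can be answered after the fact.]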
