Podcast Summary
Podcast: The Audit Podcast
Episode Title: IA on AI – IA Isn't at Risk - Not Being Able to Explain It Is
Host: Trent Russell
Date: February 18, 2026
Episode Overview
This episode discusses the risks associated with artificial intelligence (AI) in the context of internal audit, with a core focus on the challenge of explainability. The host emphasizes that while the advancement of AI itself isn't the biggest risk, the inability to explain or trace AI-driven decisions is what could undermine trust and create problems for auditors and their organizations. The conversation draws from a CIO.com article, translating its insights for an internal audit audience.
Key Discussion Points & Insights
The Real Risk: Explainability, Not AI Functionality
- AI won't "break the enterprise" by simply failing. The real danger lies in "breaking trust when leaders can't explain why it made a decision they're expected to defend." (00:15)
- The major concern for internal auditors is not whether AI works, but whether decisions made by AI can be traced and explained.
The Need for a Robust Control Layer
- To address the explainability challenge, a "control layer" is necessary, making AI decisions traceable and explainable. (00:20)
- As enterprise-wide AI becomes mainstream, moving beyond individual use cases like "my instance of Copilot or ChatGPT," determining ownership and accountability for AI outputs grows more difficult. (00:30)
Shifting Accountability
- Organizations cannot continue to deflect responsibility by saying, "that's what the model decided," especially in the event of incidents or complaints. (00:54)
- Auditors and business leaders must have controls in place to answer essential questions (01:18; see the sketch after this list):
- Why did the system recommend this action?
- Why did it flag this person, case, or transaction?
- Why did the automated workflow trigger that decision?
- Who owns the outcome when AI is part of the process?
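A minimal sketch, in Python, of what a per-decision accountability record could look like so each of these questions has a documented answer. The class and field names (e.g., AIDecisionRecord, outcome_owner) and the sample values are hypothetical illustrations, not taken from the episode:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    """One accountability record per AI-assisted decision (illustrative only)."""
    decision_id: str
    recommendation: str   # what the system recommended
    rationale: str        # why it recommended this action
    flagged_subject: str  # which person, case, or transaction was flagged
    triggering_rule: str  # what caused the automated workflow to fire
    outcome_owner: str    # the named human accountable for the outcome
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example: a record an auditor could pull to answer "who owns this outcome?"
record = AIDecisionRecord(
    decision_id="TXN-2026-0042",
    recommendation="Hold payment pending review",
    rationale="Amount exceeded peer baseline by 4x; new vendor",
    flagged_subject="Transaction TXN-2026-0042",
    triggering_rule="payments.anomaly_hold_v3",
    outcome_owner="AP Manager (J. Doe)",
)
print(record.outcome_owner)
```

Keeping a record like this alongside each material AI decision gives the auditor a named owner and a documented rationale rather than "that's what the model decided."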
Explainability as the Central Issue
- The most important emerging topic is not the next wave of AI capability but "the control layer that makes AI safe to adopt at scale." (01:37)
- The article cited by the host uses the word "control" numerous times, underscoring the growing importance of controls for internal auditors.
Traceability and Audit Trails as Best Practices
- Internal audit must focus on traceability, establishing audit trails for every material AI output or decision (02:40). As illustrated in the sketch after this list, this includes:
- Logging inputs
- Key features
- Retrieval sources
- Prompts and models used (and their versions)
- Tool calls
- Approvals
- Downstream actions
- Timestamps
- The goal is to accumulate sufficient evidence to explain "what happened and why," especially when the AI doesn't behave as expected.
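A minimal logging sketch, in Python, showing how the items above could be captured as one append-only audit-trail entry per material AI output. The helper name log_ai_decision, the file name, and all field values are hypothetical, not from the episode:

```python
import json
from datetime import datetime, timezone


def log_ai_decision(trail_path: str, **entry) -> dict:
    """Append one audit-trail entry per material AI output (illustrative only)."""
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(trail_path, "a", encoding="utf-8") as trail:
        trail.write(json.dumps(entry) + "\n")  # append-only JSON lines
    return entry


# One entry covering the items listed above: inputs, key features, retrieval
# sources, prompt, model and version, tool calls, approvals, downstream actions.
log_ai_decision(
    "ai_audit_trail.jsonl",
    inputs={"claim_id": "C-1093", "amount": 4200},
    key_features=["amount_zscore", "vendor_age_days"],
    retrieval_sources=["policy_kb/payments_v7.md"],
    prompt="Assess payment risk for claim C-1093",
    model={"name": "enterprise-llm", "version": "2026-01"},
    tool_calls=[{"tool": "vendor_lookup", "result": "new vendor"}],
    approvals=["Claims Supervisor (on file)"],
    downstream_actions=["payment_hold"],
)
```

An append-only trail like this is what lets the auditor reconstruct what happened and why when the AI doesn't behave as expected.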
Proactive vs. Reactive Controls
- Ideally, traceability controls should be implemented during the pilot phase of AI systems, but organizations often defer such measures until production. (02:20)
- The recommendation is to at least consider traceability before any enterprise-level AI moves into full-scale use.
Notable Quotes & Memorable Moments
- On AI Accountability:
"AI won't break the enterprise by failing. It will break trust when leaders can't explain why it made a decision they're expected to defend."
— Host, quoting CIO.com (00:15)
- On Outdated Attitudes:
"We can't just be like, 'Oh, that's what the model decided,' like that's not going to be acceptable anymore—not for very long."
— Host (00:54)
- On the Shift in Focus:
"The most important emerging topic is not the next wave of AI capability … it's the control layer that makes AI safe to adopt at scale."
— Host, citing CIO.com (01:37)
- On the Purpose of Audit Trails:
"You need sufficient evidence to determine what happened and why, when the AI doesn't do what it's supposed to do."
— Host (02:53)
Important Timestamps
- 00:15 — The real risk is inability to explain AI decisions, not AI malfunction.
- 00:54 — Organizations must prepare for accountability and can't defer responsibility to the model.
- 01:18 — Essential questions auditors need to answer about AI actions.
- 01:37 — The shift in focus: Controls over AI capability.
- 02:40 — Best practices: What to log and track for traceability.
- 02:53 — The auditor's goal: accumulate evidence for explainability.
Tone & Style
The host adopts a grounded, practical approach—dispelling hype and emphasizing actionable controls over buzzwords or speculative AI trends. The language is direct but accessible, making the episode valuable for both experienced auditors and those newer to the field.
For internal auditors, this episode is a call to action: As your organization adopts AI, don’t just chase the latest capabilities—focus on the foundational controls that allow you to explain, trace, and defend every automated decision.
