Podcast Summary: The Audit Podcast
Episode: IA on AI – The AI Oversight Gap
Host: Trent Russell
Date: September 3, 2025
Episode Overview
This episode addresses the “AI Oversight Gap” within organizations, focusing on how insufficient governance and controls around AI tools create hidden risks and costs—especially in the audit context. Drawing on recent industry reports, trends, and audit best practices, the host (Trent Russell) examines the prevalence of unapproved (“shadow”) AI use, the challenges of instituting effective policies, and the vital role of internal audit in mitigating risks related to AI adoption.
Key Discussion Points and Insights
1. The Hidden Costs of Ungoverned AI Use
- IBM’s 2025 Data Breach Report ([00:11]):
- Nearly all AI-related security incidents originate from unauthorized or unmanaged (“shadow”) AI tools.
- 97% of organizations with AI security incidents lacked proper AI access controls.
- 63% of breached organizations had no governance policy for detecting or managing AI use.
- Definition and Terminology ([01:02]):
- The team debates the terminology, settling on "shadow AI" as opposed to "Bring Your Own AI" (BYOAI) for ease.
- Shadow AI is defined as employees using unapproved or unsanctioned AI tools in the workplace.
- Quote ([01:21]):
“The most striking finding is that 97% of organizations experiencing AI-related security incidents lacked proper AI access controls. Access controls—that’s like audit 101 stuff.”
— Host (Trent Russell)
- Host’s Recommendation ([01:47]):
- Every organization should implement an AI governance policy and committee.
- Lack of such policies increases—not decreases—risk; merely denying or “blocking” AI is insufficient.
2. Shadow AI in Organizational Spend and Behavior
- Real-World Example ([02:12]):
- Guest Michelle Velez, referenced by the host, noticed rising ChatGPT subscription expenses before her company had an approved internal AI tool. After a sanctioned tool was deployed, unauthorized subscriptions decreased but did not disappear entirely.
- Quote ([02:46]):
“Obviously, there’s still going to be shadow AI...even when you have those approved tools.”
— Host
- Actionable Advice ([03:29]):
- Auditors should document and raise findings if AI governance is absent.
- Maintain an “inventory” of AI tools discovered during audits and share with governance bodies.
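The "inventory" advice above can be sketched as a minimal data structure. The episode does not prescribe any tooling; the field names, the `unsanctioned()` filter, and the idea of populating it from findings (e.g., expense reviews like the ChatGPT subscription example) are illustrative assumptions:

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and workflow are assumptions,
# not something prescribed in the episode. Adapt to your GRC tooling.
@dataclass
class AIToolRecord:
    name: str
    business_unit: str
    sanctioned: bool
    data_shared: str      # e.g. "none", "internal", "confidential"
    discovered_in: str    # audit engagement where the tool surfaced

class AIToolInventory:
    """Running inventory of AI tools observed during audit work,
    suitable for sharing with an AI governance committee."""

    def __init__(self) -> None:
        self.records: list[AIToolRecord] = []

    def add(self, record: AIToolRecord) -> None:
        self.records.append(record)

    def unsanctioned(self) -> list[AIToolRecord]:
        # These are the candidates for a governance finding.
        return [r for r in self.records if not r.sanctioned]

inventory = AIToolInventory()
inventory.add(AIToolRecord("ChatGPT (personal account)", "Finance",
                           False, "confidential", "Q2 expense review"))
inventory.add(AIToolRecord("Approved enterprise assistant", "All",
                           True, "internal", "baseline"))
print(len(inventory.unsanctioned()))  # 1
```

Even a spreadsheet serves the same purpose; the point is a single, shareable record that governance bodies can act on.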
3. Public Accounting and Internal Audit’s Responsibility
- EisnerAmper Survey Data ([04:10]):
- 80% of employees had a net positive experience using AI, but only 36% of companies had a formal policy.
- Host speculates larger companies with internal audit departments likely have better maturity, but many are still underprepared.
- Quote ([04:57]):
“There are organizations we talk to...and they still go ‘we have nothing.’ Or it’s ‘Yeah, we kind of have something, but it’s not super specific and it’s pretty broad.’”
— Host
- Employee Use Patterns ([05:17]):
- Only 41% of employees report AI use to their manager; 60% rely on free platforms.
- 28% admit they’d use AI at work even if it were banned.
- Host doubts even this is fully accurate — actual shadow use may be even higher, especially among AI-literate employees.
4. Mitigating Shadow AI and Setting Expectations
- Education is Critical ([06:21]):
- Training employees on AI risks is emphasized as essential.
- Setting realistic expectations: AI tools support but do not fully automate complex functions like audit report generation.
- Quote ([07:34]):
“You’re going to need to be a little bit more involved on the engineering side...Definitely, I would highly recommend implementing—suggesting—an AI governance or AI risk training.”
— Host
- Practical Learning Resource ([08:12]):
- Host mentions a recent webinar showing how these tools work, advocating for education not just on risks but on practical functionality to reduce “AI anxiety.”
5. Hype, Vendor Claims, and AI Washing
- CFO Dive Article — Vendor Overselling ([09:02]):
- FTC suit against Air AI for defrauding clients with exaggerated claims of AI’s capabilities (“AI washing”).
- Caution to organizations: don’t be seduced by vendor promises that AI will “solve all your problems.”
- Quote ([09:43]):
“This is what is referred to as AI washing...Propping up this idea that AI is going to solve all your problems—buy our thing, you’ll be good to go.”
— Host
- Vendor Risk Management and Onboarding ([10:25]):
- Host cites a prediction: By 2026, 80% of vendors will incorporate AI, making vendor oversight essential.
- Organizations must add AI risk assessments and evidence of safeguards to the vendor onboarding checklist.
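The onboarding-checklist idea can be sketched as a handful of AI due-diligence items evaluated per vendor. The specific questions below are hypothetical examples, not a checklist from the episode; a "yes" means the safeguard was evidenced:

```python
# Hypothetical AI due-diligence items appended to a vendor onboarding
# checklist. A True response means the vendor evidenced the safeguard.
AI_DUE_DILIGENCE = {
    "ai_use_disclosed": "AI use in the delivered service is disclosed and documented",
    "model_providers_named": "Underlying model providers are named",
    "no_training_on_customer_data": "Customer data is excluded from model training",
    "ai_incident_process": "An AI incident response process exists",
    "claims_substantiated": "Marketing claims about AI capabilities are substantiated",
}

def open_items(responses: dict[str, bool]) -> list[str]:
    """Return checklist items that are unanswered or not evidenced."""
    return [q for key, q in AI_DUE_DILIGENCE.items()
            if not responses.get(key, False)]

responses = {
    "ai_use_disclosed": True,
    "model_providers_named": True,
    "no_training_on_customer_data": False,  # vendor could not evidence this
}
for question in open_items(responses):
    print("OPEN:", question)
```

The "claims_substantiated" item ties the checklist back to the AI-washing warning: vendor marketing claims should be verified, not taken on faith.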
- Internal Integrity Check ([11:02]):
- Reminder to assess one’s own organization’s marketing to ensure it doesn’t fall into the same overpromising or “AI washing.”
Notable Quotes and Memorable Moments
- On Access Controls:
“Access controls—that’s like audit 101 stuff.” ([01:21], Host)
- On Shadow AI Use:
“Even when you have those approved tools...you’re still going to have people using these shadow AI systems.” ([02:46], Host)
- On Reporting Gaps:
“28% admit they would use AI at work even if it were banned.” ([05:44], Host reading survey data)
- On Expectations vs. Reality:
“That’s not exactly how it works...Just being able to throw it into the tool…and it do it is not going to happen very often.” ([06:40], Host)
- On Vendor Claims and AI Washing:
“This is what is referred to as AI washing... And so it is propping up this idea that AI is going to solve all your problems, buy our thing, you’ll be good to go. When that’s really not the case.” ([09:43], Host)
Key Timestamps
- 00:11 — Introduces IBM data breach report findings and the term “shadow AI.”
- 01:21 — On AI security incidents and the lack of access controls.
- 02:12 — Story of Michelle Velez tracking AI tool spend and shadow AI usage.
- 04:10 — EisnerAmper survey data: policy gaps and employee behavior.
- 05:44 — Discussion on actual shadow AI prevalence and employee bypassing of policy.
- 06:21 — The need for robust AI risk and governance training.
- 08:12 — Mention of webinar providing hands-on education about AI tool functions.
- 09:02 — FTC suit over AI washing; warning against vendor overpromising.
- 10:25 — Vendor onboarding and necessity of AI due diligence.
- 11:02 — Internal marketing review to avoid AI overpromising.
Takeaways for Audit and Risk Professionals
- Inventories of AI use, robust governance policies, and employee education are critical in managing AI-driven risks.
- “Shadow AI” is pervasive, and absence of formal controls compounds exposure.
- Vendor risk management must now include AI-specific diligence.
- Internal audit remains essential in setting expectations and ensuring reality matches AI’s promises, both internally and for clients.
