CyberWire Daily – Special Edition:
"AI in the GRC: What's Real, What's Risky and What's Next"
Date: November 30, 2025
Episode Overview
This special edition of CyberWire Daily brings together industry leaders to explore how agentic artificial intelligence is reshaping the field of Governance, Risk, and Compliance (GRC). Hosted by Dave Bittner, the episode features panelists Kane McGladrey (CISO in Residence, Hyperproof), Alam Ali (SVP of Product Management, Hyperproof), and Matthew Cassidy (Partner, Grant Thornton Advisors). The conversation centers on practical applications, real risks, and future developments as organizations weigh the promise and perils of integrating AI into GRC workflows.
Key Discussion Points & Insights
1. Panelist Introductions and Perspectives on GRC and AI (02:11–05:52)
- Matthew Cassidy describes a career rooted in IT audit and governance, now focused on applying lessons from IT audit to emerging AI risks (data, business, and access).
- Kane McGladrey shares his transition from theater to cybersecurity, advising clients on measuring the value of security investments and improving audit efficiency.
- Alam Ali brings extensive software development and machine learning experience, emphasizing AI’s potential to automate and streamline GRC processes:
“With the latest tech we have significant opportunity to make real leapfrogs in how we save time, money and toil across the product.” (04:46)
2. Current Use Cases: Where AI Delivers Value in GRC (06:37–09:04)
- Consulting as an "Assistant":
Matt describes efforts to embed proprietary small language models across the firm, making institutional knowledge widely accessible: "We're really seeing it as kind of an assistant... no more sending RCMs via email... dropping it into a chat so that somebody can share it." (06:44)
- Control Automation:
Alam highlights demand for "end-to-end" automation of GRC controls, a recurring ask from nearly every customer.
- Continuous Monitoring & Evidence Gathering:
Kane stresses the shift from sampling and auditing at intervals to ongoing, AI-enabled evidence collection and risk analysis.
3. Emerging Risks and the Challenge of Trust (07:44–13:03)
- Expectation Management:
Matt points out the importance of clarifying what AI can and can't do, avoiding overpromising full automation or human-equivalent accuracy.
- Security and Access Risks:
Lessons from prior bot automation inform today's focus on data access and governance as AI tools gain power.
- Auditability and Objectivity:
Audit processes still demand that AI-collected evidence is objective, complete, and traceable to its source. "When we're using the AI, making sure that's auditable as well. Right. It seems a little recursive to a degree. But keeping those principles in mind... there's a good mesh there." – Matt (12:29)
4. Poll: The Current State of AI in GRC Adoption (14:16–17:26)
Audience Results:
- 30% piloting/testing AI-powered GRC solutions.
- 30% evaluating but not testing.
- 36% planning to explore in the next 12 months.
Panelist Reflections:
- Strong caution from regulated industries, balancing adoption desire against liabilities from AI errors.
- "…If an AI makes a mistake, the people who sign off on that mistake... they're going to be the people who are going to carry liability for it. And we owe it to them to not make those mistakes." – Kane (16:29)
5. AI Hallucinations & The Importance of Human Oversight (17:26–20:22)
- Hallucinations as a Barrier:
Dave likens current LLMs to a "tireless intern" but cautions, "I'm also not going to bet the company on an intern." Matt agrees: "There's always the trust, but verify." (19:55)
- Critical Role for References and Human Intervention:
Outputs must be referenceable; human review remains necessary, particularly in ambiguous or ethical decision-making.
6. Real-World Example: Automating Audit Evidence (20:23–24:35)
- Multi-Source Evidence Gathering:
Kane illustrates how AI can locate scattered policy documents or correlate complex change management evidence across systems, a major reduction in manual "toil" (a minimal sketch of such an evidence record follows this list).
- Strategic Value Over Drudgery:
Automation allows GRC teams to focus on improving processes rather than assembling basic evidence packages.
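To make the auditability requirement from earlier in the episode concrete, here is a minimal Python sketch (not Hyperproof's code; every system name and identifier is hypothetical) of an evidence record that stays traceable to its source: the control it supports, the system and reference it came from, a collection timestamp, and a content hash so later tampering is detectable.

```python
# Minimal sketch: evidence that carries its own provenance. Connector names
# and identifiers (jira, PROJ-88, etc.) are hypothetical, for illustration.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class EvidenceRecord:
    control_id: str      # control the evidence supports, e.g. "CM-3"
    source_system: str   # where it was collected (ticketing, VCS, CI, ...)
    source_ref: str      # stable pointer back to the origin record
    collected_at: str    # UTC timestamp of collection
    content: str         # raw evidence payload
    sha256: str          # content hash so tampering is detectable later


def collect(control_id: str, source_system: str,
            source_ref: str, content: str) -> EvidenceRecord:
    """Wrap raw evidence with the provenance an auditor needs to verify it."""
    return EvidenceRecord(
        control_id=control_id,
        source_system=source_system,
        source_ref=source_ref,
        collected_at=datetime.now(timezone.utc).isoformat(),
        content=content,
        sha256=hashlib.sha256(content.encode()).hexdigest(),
    )


# Correlate change-management evidence scattered across systems.
package = [
    collect("CM-3", "jira", "PROJ-88", "Change ticket approved by CAB"),
    collect("CM-3", "github", "pull/412", "Peer-reviewed merge commit"),
    collect("CM-3", "ci", "build/9913", "Passing test run before deploy"),
]
print(json.dumps([asdict(r) for r in package], indent=2))
```

Assembling the ticket, the peer-reviewed merge, and the passing build into one package is exactly the multi-source drudgery the panel wants AI to take over, while the provenance fields keep the result traceable for an auditor.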
7. Balancing Automation with Accountability (24:51–26:36, 32:22–37:16)
- AI as an Assistant, Not a Replacer:
Alam and Matt highlight the importance of properly calibrating expectations: AI speeds up well-defined tasks but does not eliminate human expertise or final accountability.
- Design Philosophy:
Hyperproof intentionally avoids "magic LLM" solutions. Instead, AI supports users by suggesting tests or actions, always requiring human approval before execution.
- Human-in-the-Loop:
“Let’s have a human approve everything. Let’s have a human say, yeah, actually go do that. So the AI suggests a test, it waits for permission.” – Kane (32:25)
- Audit Logging:
The audit trail logs human approvals for AI actions, keeping accountability transparent and inspectable; a sketch of this approval pattern follows below.
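A minimal sketch of the pattern Kane and Alam describe, assuming a hypothetical in-memory queue and log (illustrative only, not Hyperproof's implementation): the AI may only suggest an action, nothing runs until a named human approves it, and the approval itself lands in an append-only audit log.

```python
# Minimal sketch: AI suggests, a named human approves, the log records both.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class SuggestedAction:
    description: str              # e.g. "Run quarterly access-review test"
    suggested_by: str = "ai-agent"
    approved_by: Optional[str] = None


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event: str, actor: str, detail: str) -> None:
        # Append-only: entries are never mutated or removed.
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "actor": actor,
            "detail": detail,
        })


def approve_and_run(action: SuggestedAction, approver: str,
                    log: AuditLog) -> None:
    action.approved_by = approver
    log.record("approved", approver, action.description)
    # Only now does the action execute; execution is also logged.
    log.record("executed", action.suggested_by, action.description)


log = AuditLog()
suggestion = SuggestedAction("Run quarterly access-review test")
# The agent waits here; a real system would surface this in a review queue.
approve_and_run(suggestion, approver="jane.auditor", log=log)
print(log.entries)  # every AI action traces back to a named human approval
```

The key design choice is that `approve_and_run` is the only path to execution, so every executed action necessarily has a named approver in the log, matching Alam's description of the audit log at 34:58.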
8. Selecting AI Approaches: Agentic AI vs. Other Models (38:50–42:24)
- No Full Autonomy:
All panelists agree there is always a human in the middle.
- Agentic Models:
Alam explains their architecture, combining context, tools, and an LLM "brain", which allows flexible but controlled orchestration without training the AI on customer data, addressing key data privacy concerns (a toy sketch follows this list).
- ML and Context Constraints:
Machine-learned models help define the context supplied to agentic AI, improving scoping and minimizing LLM hallucinations.
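As a rough illustration of that shape (a toy sketch under stated assumptions: `call_llm` is a stand-in, and the tools are hypothetical), the LLM "brain" only chooses among a fixed tool registry and only sees context that was scoped up front; anything outside the registry is rejected.

```python
# Toy sketch: scoped context + fixed tool registry + LLM "brain" that routes.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "fetch_policy": lambda q: f"policy text matching '{q}'",
    "list_controls": lambda q: f"controls relevant to '{q}'",
}


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a tool choice for the demo."""
    return "fetch_policy"


def run_agent(task: str, context: str) -> str:
    # Context is scoped up front (e.g. by a classic ML classifier), which
    # narrows what the LLM sees and reduces room for hallucination.
    prompt = f"Task: {task}\nContext: {context}\nTools: {list(TOOLS)}"
    tool_name = call_llm(prompt)
    if tool_name not in TOOLS:
        raise ValueError("LLM may only invoke registered tools")
    return TOOLS[tool_name](task)


print(run_agent("find the access-control policy", context="access management"))
```

Because the model only routes among registered tools and receives customer data as request-time context rather than training material, this shape lines up with the data privacy stance discussed in section 9.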
9. Data Privacy and Security: The Top Concern (42:25–46:54)
- Poll Results:
The audience overwhelmingly cites data privacy and security as its chief fear.
- Panelist Agreement:
This aligns with software engineering priorities: build privacy and security into the architecture from the ground up; never tack them on later.
- Rationale:
“You don’t want to be the first mover… a case study… that’s a bad idea, don’t do this.” – Matt (44:25)
10. ROI of AI in GRC: Reducing Friction, Not Magic (46:54–49:51)
- Real Value:
AI reduces friction between SecOps, DevOps, and auditors by automating evidence collection, allowing teams to focus on higher-value work.
- Competitive Advantage:
Streamlined GRC programs can enable faster market entry and better compliance attestations.
11. Q&A Highlights: Human-in-the-Loop, Accountability, and Process Documentation (49:51–58:26)
- Human-in-the-Loop at Scale:
AI is designed to optimize, not remove, the role of humans, even as speed increases.
- Ensuring Accountability:
The human is always the responsible sign-off; policies and audit trails back this up. "There's always time for human review... there has to be a human to review certain things." – Matt (54:06)
- Challenge of Process Documentation:
Many organizations are stuck because processes are undocumented or siloed in people's heads. AI can help formalize and clarify them, but a documented baseline is required. "You have to have the process. The AI tool sets can and should… help you clarify, refine, and then establish your processes in the tool sets." – Alam (57:21)
Notable Quotes & Memorable Moments
- On AI as an intern:
"I've found it helpful just for myself to think of some of these LLMs as being a tireless intern… But I'm also not going to bet the company on an intern." – Dave Bittner (19:33)
- On the myth of fully autonomous AI:
"I do not believe that we should just simply go to an LLM and say, hey, what are all the controls and proofs I need today? And ta da. Magically, it's going to figure it out." – Alam (28:59)
- On AI accountability:
"If an AI makes a mistake, the people who sign off on that mistake, your CEO, your CFO and so forth, they're going to be the people who are going to carry liability for it." – Kane (16:29)
- On audit logs:
"In our design, we have an audit log... most of those logs, there's a human and with a name that said, yes, this human approved that thing to go happen." – Alam (34:58)
Timestamps for Important Segments
- Introductions & Context: 02:11–05:52
- Where AI is Used in GRC: 06:37–09:04
- Risks, Challenges, and Expectations: 07:44–13:03
- Poll Results & The State of AI in GRC: 14:16–17:26
- AI Hallucinations & Trust: 17:26–20:22
- Automating Complex Evidence Collection: 20:23–24:35
- Role of Human Oversight & Audit Logs: 24:51–37:16
- Choosing the Right AI Model/Architecture: 38:50–42:24
- Data Privacy/Security as Top Concern: 42:25–46:54
- ROI and Value of AI: 46:54–49:51
- Q&A: Human-in-the-loop, Accountability, and Documentation: 49:51–58:26
Final Thoughts
This episode offers a grounded, practical look at integrating AI into GRC, demystifying the roles it can play and highlighting the non-negotiable necessity of human oversight. The panelists caution against silver-bullet thinking, emphasizing that AI's current and near-term utility is as an accelerator for well-understood tasks, not a wholesale replacement for human expertise or accountability. Data privacy, security, and transparency underpin every responsible innovation in this space, an outlook echoed by panelists and audience alike.
For organizations exploring AI in GRC, the message is clear: approach deliberately, measure thoroughly, and always keep the human in the loop.
