Data Security Decoded
Episode: The Real Risks of Agentic AI in the Enterprise
Host: Caleb Tolan
Guest: Camille Stewart Gloster (CEO, CAS Strategies)
Date: February 17, 2026
Overview
In this episode, host Caleb Tolan explores the rapidly evolving landscape of agentic AI in enterprise security with Camille Stewart Gloster, an authority on AI, cybersecurity, digital trust, and risk. The conversation focuses on the practical challenges and real risks of deploying autonomous (agentic) AI systems, how identity has become the central attack surface, and why fundamental security disciplines—like governance and MFA—are more critical than ever. With audience questions, strategic recommendations, and actionable insights, the episode delivers a vendor-neutral, high-impact discussion for cybersecurity and IT professionals.
Main Discussion Points & Insights
1. The U.S. National Cybersecurity Strategy
[04:22 - 07:40]
- Offense in cyberspace is gaining attention, especially regarding U.S. responses to attacks on critical infrastructure.
- Camille warns against a narrow focus on offense and suggests using all national security tools, not just cyber operations.
"We don't always choose a cyber offensive attack as a reply to attacks on critical infrastructure… sometimes that's not the best way to elicit the response that we desire."
— Camille Stewart Gloster [04:22]
- Funding and workforce development are highlighted as potential gaps in the upcoming strategy.
- Coordinated investment is needed, especially as AI proliferation worsens software quality and security risk.
"The proliferation of AI tools and AI systems mean that we have a software quality problem that will exacerbate the cybersecurity issues that we are worried about."
— Camille Stewart Gloster [05:55]
2. Identity Is the New Attack Surface
[08:35 - 11:27]
- Traditional EDR focuses on malware and abnormal movement, but attackers now exploit identities—especially non-human ones.
- Ratio cited: non-human identities outnumber human ones 82 to 1.
- High-value targets now include AI systems aggregating enterprise knowledge—making shadow AI a prime risk.
"Identity is the attack surface now… not only worried about the identity of the people… but whether those are AI agents or they're APIs, IoT devices."
— Camille Stewart Gloster [08:38]
- Basic defenses like MFA are underutilized (fewer than 50% of organizations have implemented it), but conditional access and least privilege are also necessary.
- Attackers can now use compromised OAuth tokens to bypass MFA-protected systems.
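The defenses named in this segment, conditional access layered on top of MFA for both human and non-human identities, can be sketched as a simple policy check. This is a hypothetical illustration only: the identity types, signals, and thresholds below are assumptions for the sketch, not anything specified in the episode.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_type: str      # assumed categories: "human", "agent", "api", "iot"
    mfa_verified: bool      # human completed MFA for this session
    token_age_hours: float  # age of the presented OAuth token
    known_device: bool      # request came from a registered device

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'step_up', or 'deny' for one access request."""
    # Non-human identities cannot complete MFA, so this sketch leans on
    # short token lifetimes and device binding instead (illustrative limits).
    if request.identity_type != "human":
        if request.token_age_hours > 1 or not request.known_device:
            return "deny"
        return "allow"
    # Humans: a valid but stale or replayed OAuth token should not ride
    # past MFA; require step-up whenever a signal is weak.
    if not request.mfa_verified:
        return "step_up"
    if not request.known_device or request.token_age_hours > 8:
        return "step_up"
    return "allow"
```

The point of the sketch is that token possession alone never decides the outcome: context (identity type, token freshness, device) gates every request, which is what blunts the compromised-OAuth-token path.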
3. Listener Q&A: AI in Threat Detection and Response
[11:49 - 13:15]
- AI helps by reducing noise from large volumes of alerts and logs, enabling analysts to focus on high-value work.
- AI should be viewed as augmentation, not replacement.
"You should always think about AI systems as an augmentation to your team, not a replacement, particularly in security."
— Camille Stewart Gloster [12:38]
4. The Ethics of AI in Security
[13:24 - 15:19]
- Over-relying on AI or failing to address bias introduces ethical and strategic risks.
- AI ethics and AI security are deeply intertwined—not separate disciplines.
"AI ethics… are actually extremely important to AI security and securing an organization."
— Camille Stewart Gloster [14:30]
- Contextual oversight by humans remains essential, especially to avoid reinforcing bias in insider threat detection.
5. Preventing Data Integrity Compromises
[15:31 - 17:44]
- Organizations err by "shoving AI into an existing workflow"; process redesign is needed.
- Cross-functional teams and robust governance are critical.
"The biggest mistake that I see organizations make is think that you can just shove AI into an existing workflow or process. It is not a plug and play technology."
— Camille Stewart Gloster [15:31]
- Trust and safety disciplines must be integrated into core security operations.
6. Automation vs. Human-Led Threat Intelligence
[17:44 - 20:27]
- Threat modeling must adapt—emergent behaviors from AI systems shorten risk timelines.
- Continuous learning, adaptive detection, and broad observability are key to resilient, future-proof security.
"Your detection should be adaptive. It should adapt to the changing intel you're getting…"
— Camille Stewart Gloster [19:12]
- Provisioning access for agents poses unique challenges—temporary and context-aware access should be rethought.
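"Adaptive detection" in the sense Camille describes, detection that moves with the incoming signal rather than sitting on a static threshold, can be illustrated with a minimal sketch. The exponentially weighted baseline and the specific parameters here are assumptions for illustration, not a method discussed in the episode.

```python
import math

class AdaptiveDetector:
    """Flags anomalous scores against a baseline that keeps learning.

    Hypothetical sketch: an exponentially weighted mean and variance of
    recent telemetry scores; the alert threshold moves with the data,
    so detection adapts as conditions change instead of staying static.
    """
    def __init__(self, alpha: float = 0.1, k: float = 3.0):
        self.alpha = alpha  # weight given to the newest observation
        self.k = k          # std-devs above baseline that trigger an alert
        self.mean = 0.0
        self.var = 1.0

    def observe(self, score: float) -> bool:
        threshold = self.mean + self.k * math.sqrt(self.var)
        is_alert = score > threshold
        # Update the baseline even on alerts, so a persistent shift in
        # "normal" behavior is eventually absorbed rather than paged forever.
        delta = score - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return is_alert
```

The design choice worth noting: the baseline updates on every observation, which is the "continuous learning" half of the bullet above, while the moving threshold is the "adaptive detection" half.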
7. Strategic Adjustments for Agentic AI Risks
[21:49 - 23:21]
- Implement MFA and conditional OAuth policies.
- Adopt Zero Trust as a baseline assumption.
- Treat agents as identities, with potential for both expected and unexpected (emergent) actions.
- Practical guardrails: temporary access for agent-granted privileges; human escalation for longer-term access.
"Please be using Zero Trust if you're not already… think about your agents or your AI systems not as normal software, but as identities or employees that will be making decisions and moving through your organization…"
— Camille Stewart Gloster [22:09]
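The guardrail Camille describes, agent-granted access expiring within 24 hours and anything longer requiring a human approver, can be sketched as a small grant function. The `Grant` shape, function names, and scope strings are hypothetical; only the 24-hour expiry and human-escalation rules come from the episode.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

AGENT_TTL = timedelta(hours=24)  # agent-granted access auto-expires (per the episode)

@dataclass
class Grant:
    principal: str
    scope: str
    expires_at: Optional[datetime]  # None = standing access
    approved_by: str

def grant_access(principal: str, scope: str, requested_by_agent: bool,
                 human_approver: Optional[str] = None,
                 duration: Optional[timedelta] = None) -> Grant:
    """Issue an access grant under the guardrail above: anything an agent
    grants expires within 24 hours; anything longer needs a named human."""
    now = datetime.now(timezone.utc)
    if requested_by_agent:
        # Cap whatever the agent asked for at the 24-hour ceiling.
        ttl = min(duration or AGENT_TTL, AGENT_TTL)
        return Grant(principal, scope, now + ttl, approved_by="agent")
    if human_approver is None:
        raise PermissionError("longer-term access requires a human approver")
    expires = now + duration if duration else None
    return Grant(principal, scope, expires, approved_by=human_approver)
```

In practice this logic would live in the identity provider or access broker rather than application code, but the invariant is the same: no agent can mint standing access.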
8. Foundational Security Still Matters
[23:21 - 24:57]
- Even with innovative technologies, fundamentals like MFA, observability, and governance remain vital.
"A lot of times, it's going back to the basics. MFA, governance, observability, they all still matter so much today."
— Caleb Tolan [23:55]
- Rushed AI deployments without data hygiene, governance, and alignment with business value result in downstream risks.
Notable Quotes & Memorable Moments
- "The best way to address these attacks on critical infrastructure is not to limit ourselves just to cyber offensive attacks. We should really use all of the tools in the toolkits..."
— Camille Stewart Gloster [04:47]
- "AI systems and eventually AI agents as you deploy them can be helpful in doing some load reduction for your analysts so that you can get more out of them."
— Camille Stewart Gloster [12:20]
- "AI governance… and not thinking about security in the ways that you've done before… Trust and safety… must be part of how you think about holistic security."
— Camille Stewart Gloster [16:27]
- "Anything granted by an agent gets shut down 24 hours later... and anything that needs to be approved for a longer term, you will have to get done by a human."
— Camille Stewart Gloster [22:43]
- "The places where I see the most mistakes... is when you rush to try to deploy an AI system because it looks cool, but you haven't done the work to think about what value it is actually adding to you, how you organize your organization around it, and what guardrails you need to put in place."
— Camille Stewart Gloster [24:20]
Key Timestamps
- 04:22: Cyber offense and national cyber policy
- 05:35: What should be in the national cybersecurity budget
- 08:35: Identity is the new attack surface; non-human identities
- 11:49: How AI enhances threat detection/response
- 13:24: Ethical intersections of AI and security
- 15:31: AI integration & data integrity risks
- 17:44: Balancing automation and human-led threat intelligence
- 21:49: Top 3 strategic adjustments for agentic AI risks
- 24:08: Closing advice: Clean data, deliberate deployment, and strong governance
Final Takeaways
- Agentic AI introduces real, novel threats—such as the increased attack surface through machine and SaaS identities—yet fundamentals like MFA, Zero Trust, governance, and data hygiene are still the foundation for resilience.
- Treat AI agents as powerful internal actors: understand, observe, and constrain their access with temporary credentials and clear escalation to humans.
- Success hinges on cross-functional governance and a culture of continuous adaptation—technology alone isn't enough.
For cybersecurity pros, this episode delivers rich, actionable frameworks for balancing innovation with enduring security disciplines as AI continues to reshape enterprise risk.
