Cybersecurity Today
Episode: Agentic AI Security Is Broken and How To Fix It
Host: Jim Love
Guest: Ido Shlomo, Co-founder and CTO of Token Security
Date: February 21, 2026
Overview:
This episode dives deep into the security challenges posed by the rapid adoption of agentic AI—autonomous AI agents capable of taking real actions in corporate environments. Host Jim Love and guest Ido Shlomo (Token Security) discuss why current approaches to AI security have failed, how AI agents fundamentally break established security models, and what practical steps organizations can take to mitigate the risks. The conversation is candid, technical, and laced with real-world anecdotes and actionable advice.
Key Discussion Points & Insights
1. The State of Agentic AI Security
- Jim Love sets the stage: The industry is repeating past mistakes by bolting on security after-the-fact instead of building it in, especially with agentic AI.
- Memorable take: “It’s taken the already weak foundations of AI security and blown them up. Even the largest and supposedly most responsible AI companies are just throwing up their hands.” (01:00)
- Anthropic’s MCP vulnerability: Even leading companies are unable or unwilling to fix inherent agentic AI security flaws, instead shifting responsibility to users (01:20).
- Open-source agentic solutions (OpenClaw, Cowork) are spreading “like wildfire,” but are riddled with vulnerabilities—creators often ship them with little caution, and users adopt them recklessly (02:00).
2. Why Are AI Agents So Insecure?
- Ido’s Perspective:
- AI is like a new, highly privileged operating system now embedded in critical business processes (09:21).
- AI’s “input space” is the entirety of human language—it’s impossible to sanitize or control it algorithmically like traditional software (12:30).
- Analogy to human behavior: “People are saying that AI is like this savant genius with some cognitive problems.” (09:56)
- The Impossibility of Determinism:
- “When you work with such a human spirit, you need to communicate… it’s not controllable like people think… it’s very hard.” (11:15)
- Hallucinations are not bugs, but results of unpredictable logic chains AI follows (11:36).
3. The Dilemma of Permissions and Trust
- Dangerous Permissions:
- Real utility from agents requires giving them vast access—far more than would be allowed in traditional security models (13:45).
- Example: An IT helpdesk agent might have the power to spin up infrastructure or lock out users, while a customer service agent’s blast radius is smaller.
- Insight: “You need to understand what’s your risk appetite… It’s about managing blast radius.” (14:24–15:58)
- Jim’s Principle: “No matter what you do, something’s going to go wrong. The question is, how big will it be?” (15:58)
4. New Attack Surfaces & Insider Risk
- Claude Code and Real-World Exposure:
- 70% of the Fortune 100 are using Anthropic’s Claude Code, giving AI agents full developer-level permissions.
- Real risks involve access to secrets, environment files, APIs, and more. (18:17)
- “There is a real exposure to everything a developer identity touches through Claude Code…” (18:42)
- Multi-day, Unattended Agents:
- Agents now run tasks for days, making decisions with limited or no human oversight (19:28).
5. Practical Solutions: A New Security Paradigm
- Treat Agents as Identities:
- Need for a new identity and access management approach (“non-human identity” layer) that maps agent actions, credentials, and intent (20:16–22:17).
- Zero Trust for Agents: “Just because you have access doesn’t mean you should or that you should be able to do this.” (22:17)
- Analogy: Catching a clerk who issues 149 refunds—abuse isn’t always about access, but about intent and anomaly (22:54).
- Intent-Based Permission Management:
- Token Security’s approach: Map an agent’s reason for existence (“intent”) to the permissions it actually receives. This aims to balance power and control (23:23).
- “An agent breaks both [user and machine] identity models because it works at the speed and scale of machines, but has a human intent.” (23:45)
- Granular controls ensure agents only have access necessary for their specific purpose, reducing risk.
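The episode doesn't show Token Security's actual implementation, but the two ideas in this section—deriving permissions from an agent's declared intent, and flagging permitted-but-anomalous behavior (the refund-clerk example)—can be sketched loosely. All names, intents, and thresholds below are hypothetical:

```python
# Hypothetical sketch, NOT Token Security's product logic: permissions
# are derived from an agent's declared intent, and even permitted
# actions get flagged when their volume looks abusive.

INTENT_SCOPES = {
    # declared purpose -> actions it plausibly justifies (assumed values)
    "sales_prep": {"crm:read", "calendar:read"},
    "customer_service": {"crm:read", "refund:issue"},
}

ANOMALY_THRESHOLDS = {"refund:issue": 20}  # assumed per-agent limit

class IntentGuard:
    def __init__(self, agent_id: str, intent: str):
        self.agent_id = agent_id
        self.scopes = INTENT_SCOPES[intent]
        self.counts = {}  # per-action usage counts for anomaly checks

    def check(self, action: str) -> str:
        """Return 'deny', 'flag', or 'allow' for a requested action."""
        if action not in self.scopes:
            return "deny"  # outside the agent's reason for existence
        self.counts[action] = self.counts.get(action, 0) + 1
        limit = ANOMALY_THRESHOLDS.get(action)
        if limit is not None and self.counts[action] > limit:
            return "flag"  # permitted, but anomalous in volume
        return "allow"

guard = IntentGuard("agent-007", "customer_service")
print(guard.check("crm:delete"))    # denied: not justified by intent
print(guard.check("refund:issue"))  # allowed: within intent and threshold
```

The design choice mirrors the discussion: access alone isn't the test—the same action can be legitimate the first time and abusive the hundred-and-forty-ninth.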
6. Layered Technical Implementation
- Tracking Agents from Creation to Retirement:
- Every agent is identified from creation, credentialed, monitored, and eventually decommissioned (25:35–26:52).
- AI Defending AI:
- “It’s impossible to protect AI without AI.” (26:52)
- Using AI-driven graph architectures and mesh networks to manage complex identity and permission matrices in real time. (27:42)
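The creation-to-retirement lifecycle described above can be illustrated with a minimal registry sketch. This is an assumed design, not any specific product: agents are credentialed at creation, their activity is recorded, and idle agents are automatically decommissioned to prevent sprawl:

```python
# Hypothetical agent lifecycle registry (assumed design): every agent
# is identified and credentialed at creation, monitored via heartbeats,
# and decommissioned when it goes stale.

import secrets
import time

class AgentRegistry:
    def __init__(self, max_idle_seconds: float):
        self.max_idle = max_idle_seconds
        self._agents = {}  # agent_id -> {"credential", "last_seen", "active"}

    def register(self, agent_id: str) -> str:
        """Issue a unique credential at creation time."""
        credential = secrets.token_hex(16)
        self._agents[agent_id] = {
            "credential": credential,
            "last_seen": time.time(),
            "active": True,
        }
        return credential

    def heartbeat(self, agent_id: str) -> None:
        """Record activity so idle agents can be detected."""
        self._agents[agent_id]["last_seen"] = time.time()

    def sweep(self) -> list:
        """Decommission agents idle past the threshold; return their ids."""
        now = time.time()
        retired = []
        for agent_id, rec in self._agents.items():
            if rec["active"] and now - rec["last_seen"] > self.max_idle:
                rec["active"] = False
                rec["credential"] = None  # revoke on retirement
                retired.append(agent_id)
        return retired
```

A real deployment would back this with the graph and mesh architectures Ido mentions; the point of the sketch is only the lifecycle shape: no agent exists outside the inventory, and no credential outlives its agent.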
7. Real-World Example
- Use Case:
- A software company (500 employees) goes “AI native”—ends up with 1,500 agents, each with potentially excessive permissions (28:22).
- Problem: Agents designed for narrow tasks (e.g., sales prep) often get permission to delete CRM records or send emails—far more than their intended function (29:25).
8. Human Factors & Common Mistakes
- Sloppiness, Excitement, Dark Security Patterns:
- End users and organizations too often grant broad permissions for convenience (30:55).
- Consumer “computer use” agents connect to all aspects of a user’s life—personal and organizational—without adequate boundaries (30:55–31:59).
- Misconfiguration and naive security practices (“Open me first” files, storing unencrypted tokens) expose users to internet-scale risks (31:59–32:29).
- Admitting Security Lapses (Jim Love):
- “I installed an agent…on my Mac and…my stupidity, my laziness…We want this to work…” (33:46)
- Call for “Secure by Default”:
- Agents should be sandboxed with clear access menus. “Not all security must be after the fact.” (34:44)
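The sandboxed, secure-by-default posture Ido calls for can be sketched as a deny-by-default access menu. This is an illustrative sketch only (names and menu entries are invented): the agent starts with no connections, and each integration requires an explicit, visible opt-in:

```python
# Hypothetical "secure by default" sketch: an agent starts sandboxed
# with zero connections; each integration must be granted explicitly
# from a known menu. Connection names here are illustrative only.

AVAILABLE_CONNECTIONS = {"email", "calendar", "filesystem", "crm"}

class SandboxedAgent:
    def __init__(self, name: str):
        self.name = name
        self.granted = set()  # empty by default: nothing is reachable

    def grant(self, connection: str) -> None:
        """An explicit, per-connection opt-in rather than blanket access."""
        if connection not in AVAILABLE_CONNECTIONS:
            raise ValueError(f"unknown connection: {connection}")
        self.granted.add(connection)

    def connect(self, connection: str) -> bool:
        """Deny by default; only explicitly granted connections succeed."""
        return connection in self.granted

bot = SandboxedAgent("assistant")
print(bot.connect("email"))  # denied until granted
bot.grant("email")
print(bot.connect("email"))  # allowed after explicit opt-in
```

This inverts the pattern criticized in the episode, where agents are "by default tempting to connect to all of the details to our life": here, convenience requires a deliberate grant rather than security requiring a deliberate restriction.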
9. Immediate Advice for Security Professionals
- Ido’s Recommendations:
- Align security strategy with business, advocate for AI adoption but insist on centralized inventory and discovery of agents (35:53).
- Establish boundaries on what agents should access; monitor for agents crossing red lines; decommission unused agents to prevent sprawl.
- “A little bit like human identity, we need a governance process that starts with discovery and a safe creation of agents...and eventually secure decommissioning and retirement.” (36:53)
10. Where Is Agentic AI Headed Next?
- Emerging Trends:
- “Agent teams”: Agents spawning other agents to operate collaboratively or in task clusters (38:13).
- “Multi-day autonomous tasks”: Agents operating for days without human intervention, opening new vectors and complexities for oversight (38:13–39:29).
- Shifting Work Culture:
- Potential for agentic AI to offload “996” startup-style workloads from humans—echoing sci-fi visions like Tony Stark’s AI assistants (Iron Man) (40:10).
Notable Quotes & Moments
- On the fundamental insecurity of agentic AI:
- “AI is like this savant genius with some cognitive problems...academic-level knowledge, but with very deep amnesia; it very much wants to please and to confirm what you asked it to do…” —Ido Shlomo (09:56)
- On control illusions:
- “People are always talking about hallucinations. There are no hallucinations. There’s just a logic train you can’t follow.” —Jim Love (11:36)
- On treating agents as identities:
- “You need an identity layer that maps every agent, every sub-agent, every entity connection as a governed non-human identity.” —Ido Shlomo (21:40)
- On protecting AI with AI:
- “It’s impossible to protect AI without AI. Meaning that you need a product and a strategy that utilizes the power of AI to protect it.” —Ido Shlomo (26:52)
- On pragmatic guardrails:
- “Not all security must be after the fact...it should be sandboxed. It shouldn’t be by default tempting to connect to all of the details to our life.” —Ido Shlomo (34:44)
- On human security foibles:
- “I run a security podcast. People are after me all the time and they will get through and there’s nothing I can do about that.” —Jim Love (33:46)
- Tony Stark analogy:
- “…he invented Jarvis, his smart assistant. And Jarvis took care of a lot of things so Tony Stark could rest and enjoy his life. And then he invented the Iron Man autonomous suit...and enjoy a good life. And I think that’s what AI was invented for.” —Ido Shlomo (40:10)
Important Segment Timestamps
- Introduction & Industry Background: 00:00 – 03:22
- Agentic AI Insecurity Explained: 09:21 – 13:45
- Permissions & Risk Management: 13:45 – 17:44
- Anthropic Claude Code Case Study: 18:17 – 20:16
- Identity Layer & Zero Trust for Agents: 20:16 – 23:07
- Intent-Based Permissions: 23:23 – 26:52
- How to Manage Security in Practice: 28:22 – 36:53
- Where Agentic AI Goes Next: 38:13 – 40:53
- RSA Innovation Sandbox Discussion: 41:14 – 42:59
Tone & Language
Jim Love keeps the conversation relatable with analogies, humor, and personal admissions of security blunders. Ido is deeply technical yet enthusiastic, using vivid stories from gaming and national defense to ground cyber risks in reality. Both are pragmatic, stressing the inevitability of agentic AI and the futility of naïve restrictions.
Conclusion
This episode is essential listening for any cybersecurity professional grappling with the new realities of autonomous AI agents. It debunks myths about the controllability of these systems, argues for new identity- and intent-based models, and offers immediately actionable steps and frameworks for mitigation. With open acknowledgment of the community’s ongoing struggles and failings paired with practical optimism, it is both a warning and a guide for the next phase of AI security.
