Summary of CyberWire Daily: "When AI Gets a To-Do List. [Research Saturday]"
Release Date: May 3, 2025
Host: Dave Buettner
Guest: Shaked Rayner, Principal Security Researcher at CyberArk
Introduction
In this episode of CyberWire Daily, host Dave Buettner welcomes Shaked Rayner, Principal Security Researcher at CyberArk, to discuss the burgeoning field of agentic AI and its profound implications for cybersecurity. The conversation delves into the definition of agentic AI, its unique vulnerabilities, challenges in identity and access management, lifecycle management of AI agents, best practices for deployment, and future trends shaping cybersecurity strategies.
Understanding Agentic AI
Agentic AI refers to artificial intelligence systems that incorporate Large Language Models (LLMs) to autonomously make decisions regarding the control flow of programs. Unlike traditional LLMs or chatbots, agentic AI can perform actions in the real world based on the instructions it autonomously generates.
Shaked Rayner explains:
"Agentic AI is kind of a concept... it basically means any type of system, any type of code that uses LLM in sort of a way that allows the LLM to decide about the control flow of the program."
[01:27]
Security Implications of Agentic AI
Rayner highlights that while traditional security measures focus on preventing unauthorized access and ensuring system integrity, agentic AI introduces new layers of complexity. The ability of agentic AI to perform real-world actions amplifies the potential impact of any security breaches, making the security risks associated with such systems more severe compared to conventional AI applications.
Vulnerabilities in Agentic AI Systems
The research conducted by CyberArk meticulously maps out the threat landscape of agentic AI, categorizing vulnerabilities into two primary areas:
-
Traditional Access Attack Vectors: These include server-level attacks and other conventional threats that have been prevalent in information security for decades. Despite the advanced nature of agentic AI, these traditional vulnerabilities remain relevant and pose significant risks.
-
LLM-Based Attack Surfaces: This new category encompasses vulnerabilities unique to AI systems, such as prompt injections and model manipulations. These attacks can alter the behavior of the AI system, causing it to act in unintended and potentially harmful ways.
Rayner emphasizes:
"We really tried to illustrate and actually demonstrate practically and technically... how those attack vectors manifest in the Agent AI field."
[03:30]
Identity and Access Management Challenges
Agentic AI complicates traditional Identity and Access Management (IAM) frameworks due to the autonomous nature of AI agents. Determining the identity of these agents—whether they are users, machines, or bots—and assigning appropriate permissions presents a significant challenge. Current IAM standards are not fully equipped to handle the dynamic and autonomous actions of AI agents, raising concerns about how to securely grant and manage access tokens and permissions.
Rayner notes:
"Agentic AI is still a beast that we don't know very well as an industry... we need to understand what their identity should be and what access exactly they can have."
[05:30]
Risks of Overprivileging AI Agents and Mitigation Strategies
One of the critical risks associated with agentic AI is overprivileging AI agents—granting them excessive permissions that can be exploited if the AI is manipulated. For instance, an AI agent with broad access can be coerced into accessing multiple databases, even those the initiating user doesn't have access to, through techniques akin to social engineering.
Rayner explains:
"...LLMs can be manipulated... to perform actions that I wouldn't have access to in the beginning."
[06:42]
Mitigation Strategies:
- Principle of Least Privilege: Assign AI agents only the minimum permissions necessary for their functionality.
- Access Control: Carefully manage and restrict access tokens and permissions assigned to agents.
- Monitoring and Auditing: Continuously monitor AI agent activities to detect and respond to unauthorized actions.
Lifecycle Management of AI Agents
Managing the entire lifecycle of AI agents—from deployment to decommissioning—introduces unique challenges:
-
Trustworthiness of LLMs: Organizations must rely on trusted LLM providers, as the underlying models significantly influence AI behavior.
-
Configuration Monitoring: Changes in system prompts or configurations can dramatically alter AI behavior. Monitoring these configurations is essential to prevent unauthorized modifications.
-
Monitoring Agent Actions: Similar to employee oversight, AI agents' actions must be regularly monitored and audited to ensure they operate within defined parameters.
Rayner emphasizes the importance of lifecycle management:
"We need to make sure that we are aware of exactly what are the important parts in the code that we should monitor... the behavior of the agent can be dramatically changed just by changing the instructions."
[10:21]
Best Practices for Deploying Agentic AI
Based on their findings, Rayner offers several best practices for organizations deploying agentic AI:
-
Comprehensive Mapping: Identify and inventory all systems utilizing LLMs and agentic AI within the organization.
-
Never Trust the LLM: Treat outputs from LLMs with skepticism. Validate and sanitize all AI-generated information to prevent malicious exploitation.
-
Task Appropriation: Use AI agents only for tasks that cannot be effectively handled by traditional code. Limit the scope and autonomy of AI agents to reduce risk.
-
Least Privilege Principle: Assign the minimal necessary permissions to AI agents to perform their functions without exposing unnecessary access.
-
Credential Management: Implement robust management and monitoring of credentials assigned to AI agents to prevent unauthorized access.
-
Security Monitoring: Establish continuous monitoring and threat detection mechanisms specifically for AI agents to swiftly identify and mitigate potential compromises.
Rayner summarizes these practices:
"Map out all of the systems using LLMs... never treat your LLM as a security boundary... utilize the old least privilege principle... make sure to have security monitoring and threat detection and response for those agents."
[13:41]
Future Impacts of Agentic AI on Cybersecurity
Looking ahead, Rayner anticipates that the evolution of agentic AI will significantly impact cybersecurity strategies:
-
Rapid Technological Advancement: The pace at which agentic AI technology evolves poses a moving target for security measures, necessitating agile and adaptive security frameworks.
-
Emergence of Advanced Attack Vectors: As AI capabilities expand, so will the sophistication of attack techniques targeting AI systems, requiring innovative protective measures beyond current practices.
-
Unprecedented Functionality: Future agentic AI systems may possess functionalities that are currently unimaginable, further complicating the security landscape.
Rayner shares his foresight:
"The pace that this technology progresses is really, really fast... agentic AI would look entirely different than what it is now... security measures that we will have to create will be different than what we can think of now."
[17:05]
Conclusion
The discussion with Shaked Rayner underscores the critical need for robust security frameworks tailored to the unique challenges posed by agentic AI. As AI systems become more autonomous and integral to organizational operations, understanding their vulnerabilities and implementing best practices is paramount to safeguarding against increasingly sophisticated cyber threats. Organizations must stay vigilant, continuously adapt their security strategies, and foster a deep understanding of agentic AI to navigate the evolving cybersecurity landscape effectively.
For more insights and detailed analysis, listeners are encouraged to subscribe to CyberWire Daily and stay ahead in the ever-changing world of cybersecurity.
![When AI gets a to-do list. [Research Saturday] - CyberWire Daily cover](/_next/image?url=https%3A%2F%2Fmegaphone.imgix.net%2Fpodcasts%2F75dbfb68-277c-11f0-8d39-571f9cdb3b0c%2Fimage%2F95b72a93c2ffaf8ff900d662a9bd3735.png%3Fixlib%3Drails-4.3.1%26max-w%3D3000%26max-h%3D3000%26fit%3Dcrop%26auto%3Dformat%2Ccompress&w=1200&q=75)