AI + a16z Podcast | "What Is an AI Agent?"
Date: April 28, 2025
Host: Derek Harris
Guests: Guido Appenzeller, Mat Bornstein, Yoko Lee (a16z Partners)
Episode Overview
This episode dives deep into the elusive concept of AI agents. The hosts and guests debate how to define an "AI agent," distinguish agents from other AI applications, and unpack the significance and implications of agent technologies for the industry and end users. They also explore pricing, architectural questions, the impact of data silos, and what needs to happen for agents to become truly transformative.
Key Discussion Points & Insights
1. What Is an AI Agent? Definitions and Blurred Lines
- Spectrum of Definitions:
  - The term "agent" means different things depending on technical or marketing context.
  - On one end: a simple chatbot interface using an LLM can be called an agent.
  - On the other: "real" agents are imagined as close to AGI—persistent, independent, learning entities (01:42–03:04).
  - Most current agents are somewhere in between; "weekend demo" level, not decade-long problems yet (04:44–05:33).
- Notable Quotes:
  - "The cleanest definition I've seen of an agent is just something that does complex planning and... interacts with outside systems. The problem with that definition is all LLMs now do both of those things..." —Yoko Lee (04:44)
  - "I don't think anything we have are actually agents. And 'agent' itself may be a poorly defined and kind of overloaded term." —Yoko Lee (05:16)
- Underlying Theme:
  - The agent paradigm is both technically and linguistically fuzzy:
    - Definitions depend on context, intent, and level of system complexity.
    - Sometimes clever prompts with chat interfaces are called "agents"; in other cases, agents must plan, persist, and work independently.
2. Degrees and Elements of Agentic Behavior
- Agent vs. Copilot UI Models:
  - Copilot: user tightly in the loop, immediate feedback; often not called an agent (06:01).
  - Agent: more autonomous, possibly working in the background or for longer periods.
- Planning, Reasoning, Decision-Making:
  - Agents are typically defined as LLMs in a loop: making decisions, reasoning, determining when tasks are complete, using tools iteratively (06:41).
  - "It's a multi-step LLM chain with a decision tree." —Guido Appenzeller (09:13)
- Definitional Challenges:
  - It is difficult to systematize what makes something an "agent"; the distinction often blurs depending on system structure and intent.
  - "Just by that definition, isn't every chatbot effectively an agent then in this world?" —Yoko Lee (07:17)
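The "LLM in a loop" pattern the guests describe can be sketched in a few lines of Python. This is a toy illustration, not any particular framework: `fake_llm` and `get_weather` are hypothetical stand-ins for a real model call and a real tool.

```python
# Toy sketch of the "LLM in a loop" agent pattern: the model decides the next
# step, the runtime executes tools, and observations feed back into state.
# fake_llm and get_weather are hypothetical stand-ins, not a real API.

def fake_llm(state):
    """Stand-in for a model call: picks the next action from current state."""
    if "weather" not in state["facts"]:
        return {"action": "tool", "tool": "get_weather", "arg": "Berlin"}
    return {"action": "finish",
            "answer": f"It is {state['facts']['weather']} in Berlin."}

TOOLS = {
    # A toy tool; a real agent would call an external API here.
    "get_weather": lambda city: "sunny",
}

def run_agent(goal, max_steps=5):
    state = {"goal": goal, "facts": {}}
    for _ in range(max_steps):               # decide / act / observe, repeated
        decision = fake_llm(state)
        if decision["action"] == "finish":   # the model judges the task done
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["arg"])
        state["facts"]["weather"] = observation  # feed the result back in
    return None                              # gave up after max_steps
```

Swapping `fake_llm` for a real model call (and the toy tool table for real tools) gives the basic shape Appenzeller summarizes as "a multi-step LLM chain with a decision tree."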
3. Marketing, Productization & Pricing of Agents
- Agent as a Marketing Tool:
  - The concept of an "agent" is as much a marketing narrative as a technical construct, used to justify higher pricing by analogizing to human replacement (10:41).
  - "We can price the software that we're building much, much higher because this is an agent... The human worker makes, I don't know, $50,000 a year, and therefore this agent you can get for only $30,000." —Mat Bornstein (10:41)
- Reality Check:
  - Most fields are seeing augmentation by agents; few see full replacement.
  - AI "agents" tend to offload routine work and slow hiring, but they don't replace creative intent or fundamental human decision-making (11:56–14:27).
  - "I just don't know that AI kind of has what we would think of as decision making or intent." —Yoko Lee (14:02)
- Agent vs. Function vs. API Call:
  - At the lowest level, agents can be seen as orchestrations of multiple functions and LLMs—sometimes indistinguishable from classic software functions or API calls (14:50–16:09).
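The indistinguishability point can be shown in a small hypothetical sketch: both functions below share the same signature, and from the caller's side there is no way to tell which one hides an LLM chain. The function names and internals are invented for illustration.

```python
# Hypothetical sketch: same signature, two internals. From the caller's side,
# a classic function and an "agent" are indistinguishable.

def first_sentence_classic(text):
    """Classic deterministic code."""
    return text.split(". ")[0].rstrip(".") + "."

def first_sentence_agentic(text):
    """Could be an LLM chain internally; here it just delegates for the sketch.
    Imagine: prompt a model, loop over tools, return when done."""
    return first_sentence_classic(text)

# Identical interface, identical observable behavior:
for fn in (first_sentence_classic, first_sentence_agentic):
    assert fn("Agents are overloaded. So is the term.") == "Agents are overloaded."
```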
4. Infrastructure, Technical Architecture & Sharing
- Infrastructure Similarities:
  - Architecturally, SaaS applications and agents are built similarly: LLMs often run on specialized infrastructure, state management is external, and agent logic is lightweight orchestration (26:07–27:19).
- Functions as Building Blocks:
  - The ease of sharing AI functions (model weights, fine-tunes, etc.) is changing how reusable "functions" and software get built (16:09–17:26).
  - "The model itself takes up so much of that functionality in the function, and it's just a different kind of animal compared to normal code." —Yoko Lee (16:09)
- Human Function Analogy:
  - Playful comparison: if humans could be "called" as a function from code, they'd be akin to agents (17:45–18:42).
5. Pricing Models: Still Shaking Out
- Early-Stage Pricing:
  - New technology is priced based on perceived replacement or augmentation value, but competition and marginal-cost pressure will quickly reduce prices (19:13–20:24).
  - "In practice, I think most buyers are actually pretty sophisticated about what's going on under the hood... It's pretty simple stuff happening." —Yoko Lee (20:05)
- Value-Based vs. Usage Pricing:
  - Traditionally: per-seat pricing for human-used services; usage-based pricing for machine-to-machine services.
  - "For infra, a rule of thumb... is that if the service is used by a human, it's a per seat pricing and if it's a service, it's used by other machines, it's a usage based pricing. And I actually don't know where to put agents here." —Guido Appenzeller (20:48)
- Monopoly & Application-Layer Value:
  - A distinction is drawn to products like Pokemon Go, where unique value justifies a very high markup (23:42–25:43).
6. Data Silos, Walled Gardens & Access
- Data Access as Key Limiter:
  - The capabilities of AI agents depend heavily on access to tools and data.
  - Data silos (by companies protecting their assets or user engagement) hinder agent potential (29:54–31:02).
  - Example: Gmail's ad controversy; companies found ways to shield data from automated access (32:23).
- Adversarial Ecosystem:
  - A cat-and-mouse dynamic is evolving: companies may ramp up anti-agent measures (like captchas) to guard their platforms (31:48).
  - "There's also the opposite that could happen... more and more complex anti agent captchas trying to keep out the agents because they only want humans..." —Mat Bornstein (31:48)
- Potential Shifts:
  - As foundational model capabilities improve, the distinction between human and agent access may blur, changing these dynamics (33:23).
7. The Road Ahead: What Needs to Happen?
- What Would Make Agents "Game Changing"?
  - Agents must gain:
    - Secure authentication and access control
    - Effective data retention
    - Cooperative (or at least feasible) integration with consumer and enterprise platforms (33:46–34:41)
  - "The positive vision is that in two years we figured out how an agent working on my behalf can use most of the tools that I have access to." —Mat Bornstein (33:46)
- Multimodality as Unlock:
  - Moving beyond text: visual, interactive, and other modalities would unlock new agent use cases (34:41–35:19).
  - "Even for web browsing, it's like a very clunky experience... I will actually bet on multimodality... producing vector art, unlocking new things." —Guido Appenzeller (34:41)
- Normalization, Not Hype:
  - Agents becoming "normal technology," part of the fabric rather than an exotic outlier, is the ultimate goal (35:19–36:05).
  - "If we don't use the word agent two years from now or five years from now, I think that's a huge win." —Yoko Lee (35:19)
Notable Quotes & Memorable Moments
- "I just think it's really tough to define a system based on what someone says to it. These are by design unstructured inputs. These systems will accept literally anything." —Yoko Lee (07:51)
- "If you don't know how this thing works internally, a classic function and an agent become indistinguishable." —Mat Bornstein (15:42)
- "Traditionally for infra, a rule of thumb... if the service is used by a human, it's per seat pricing... if it's a service used by other machines, it's usage based. And I actually don't know where to put agents here." —Guido Appenzeller (20:48)
- "It's an application layer monopoly... for a different audience... willing to foot the bill for the value." —Guido Appenzeller (24:31)
- "I actually think the winners will be the specialists, not the foundational models... It's really up to humans and specialists of the next wave to come up with new data, new workflows, new aesthetics, to push that distribution." —Guido Appenzeller (27:55–29:25)
- "If we don't use the word agent two years from now or five years from now, I think that's a huge win." —Yoko Lee (35:19)
Timestamps for Important Segments
- Defining “Agent”: Blurred Lines and Contradictions (01:42–05:33)
- Agentic Behavior: Planning vs. Decision vs. UI (06:01–09:52)
- Agents as Marketing Narrative & Product (10:18–14:27)
- Differences Between Agent, Function, API (14:50–17:26)
- Infrastructure & Sharing Functions/New Paradigms (16:09–18:42)
- How to Price Agents? (19:13–22:55)
- Examples of Value-based Monopoly Pricing (23:42–25:43)
- Architectural Questions & Future of Agent Capabilities (26:07–29:25)
- Data Silos, Access, and Anti-Agent Barriers (29:54–33:34)
- Biggest Innovations Needed for Agents (33:46–36:05)
Conclusion
This episode offered a thoughtful, sometimes playful, and always sharp interrogation of what makes an AI agent, the difficulty of definition, and why the concept is at once powerful and overloaded. The dialogue covered practical differentiators, the signal vs. noise of “agent” as a marketing term, pricing realities, technical architectures, and the path from novelty to normalized infrastructure—including who will shape that path and what will slow it down.
Core Takeaway:
The term "AI agent" is useful, but currently slippery. Truly transformative agents will likely blend robust technical advances—across planning, decision-making, multimodality, data access, and workflow orchestration—with new business models and evolving human expectations for normal, mainstream technology.
