Transcript
Yoko Lee (0:00)
I have to ask the question here. How do you define an agent?
David Soriapara (0:06)
I'm not going to get into that. What do you think is an agent? How do you do that?
Yoko Lee (0:10)
I think it's a multi step LLM reasoning chain. It's very simple for me.
David Soriapara (0:14)
Okay, yeah, I can't get behind that. For me, "agent" is more about that word, "agency": something that does some form of autonomous orchestration, autonomous task solving. Usually anything that's a multi-step thing is for me already an agent. The moment it does two steps and reacts to the first step, it's basically an agent, because it now has some agency over what it's doing.
Podcast Host (0:39)
Welcome back to the a16z AI podcast. It's been a while, but here we are again with another great discussion about the fast-moving AI space. This time it's MCP, or Model Context Protocol, which has been a major topic of conversation this year as a means to open up new LLM use cases and agentic behaviors by connecting models to any number of new tools, datasets, and external applications. And here to talk about it are a16z infra partner Yoko Lee and Anthropic's David Soriapara, who created MCP along with his colleague Justin Spahr-Summers. Among other topics, Yoko and David discuss the MCP origin story, early and popular use cases, important work still to be done (for example, around authentication), and what is the right level of abstraction for carrying out certain types of workflows. It's an insightful and timely conversation that you'll hear after these disclosures. As a reminder, please note that the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.
David Soriapara (1:51)
So MCP is first and foremost an open protocol, and that alone does not say much yet. What it really tries to do is enable building AI applications in such a way that they can be extended by everyone who is not part of the original development team, through these MCP servers, and really bring the workflows you care about, the things that you want to do, to these AI applications. For that, it's a protocol that just defines how whatever you are building as a developer for that integration piece and these AI applications talk to each other. And that's really what it is. It's a very boring specification. But what it enables is hopefully, at least in my best-case scenario, something that looks like the current API ecosystem, but for LLM interactions, with some form of context providers or agents in any form or shape.
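[Editor's note: to make the "boring specification" concrete, here is a rough sketch of what the wire traffic looks like. MCP messages are JSON-RPC 2.0; the method name `tools/call` follows the MCP spec, while the tool name `get_weather` and its arguments are invented purely for illustration.]

```python
import json

# A client asking an MCP server to run a tool sends a JSON-RPC 2.0
# request with method "tools/call" (message shape per the MCP spec;
# the tool "get_weather" and its arguments are hypothetical).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Tokyo"},
    },
}

# A successful response echoes the request id and carries the tool's
# output as a list of content blocks (here, a single text block).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "22C, clear"}],
    },
}

# Serialize the request as it would travel over the transport
# (stdio or HTTP, depending on how the server is run).
wire = json.dumps(request)
print(wire)
```

Because the host application only has to speak this one framing, any server that implements it, written by anyone, plugs into any compliant client; that is the extension mechanism David describes.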
