The Analytics Power Hour – Episode 288:
Our LLM Suggested We Chat about MCP. Kinda' Meta, No?
Date: January 6, 2026
Hosts: Michael Helbling, Val Kroll, Tim Wilson
Special Guest: Sam Redfern (Staff Data Scientist, Canva)
Episode Overview
This episode dives deep into the emerging world of AI tool integration with a focus on Model Context Protocol (MCP), an open (sort of) standard that lets language models (LLMs) interface directly with organizational data and tools. The hosts, with guest Sam Redfern, explore what MCP is, where it came from, whether it’s a real "standard," and what possibilities and complications it brings for the future of analytics, AI, and organizational governance.
Key Discussion Points & Insights
1. What is MCP? (Model Context Protocol)
- MCP is a (somewhat open) standard proposed by Anthropic that lets LLMs access external tools and data sources, so models can "do" things, not just converse.
- Originated from work in frameworks like LangChain to give LLMs agentic capabilities (tools as "fingers" to interact with the world).
- “I kind of think about it as fingers in a sense…trying to give the large language model the ability to touch something, a bit of information, bring it closer to it, for it to understand.” – Sam Redfern [03:26]
- Early tool integrations (like “tools” and “functions” in OpenAI) had similar goals but lacked standardization.
2. How Does MCP Work?
- MCP servers expose “tools” — each with a name, description, and input/output schema — that LLMs can call in a programmable fashion (a minimal sketch follows this list).
- Context is critical: every action requires LLMs to know which tools are available, what they do, and how to use them.
- MCP servers typically run locally alongside the LLM client, which keeps access and permissioning simple; especially early on, this has been the default pattern for security reasons.
- Analogy: Like early APIs, but more dynamic and non-deterministic; Sam compares it to early days of XML for document exchange between apps.
- Quote: “We are at the XML stage of this development.” – Sam Redfern [13:26]
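To make those tool descriptors concrete, here is a minimal sketch of an MCP server exposing a single tool, written against the official Python MCP SDK's FastMCP helper. The server name, tool name, and canned return value are illustrative assumptions, not anything built on the show.

```python
# Minimal MCP server sketch (assumes the official Python MCP SDK: `pip install mcp`).
# The server name, tool name, and canned value below are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics-demo")

@mcp.tool()
def weekly_sessions(start_date: str, end_date: str) -> str:
    """Return total site sessions between two ISO dates (YYYY-MM-DD)."""
    # A real server would query a warehouse here; this stub returns a canned answer.
    return f"Sessions from {start_date} to {end_date}: 12,345"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, i.e., a locally run server
```

The function name, type hints, and docstring become the tool's name, input schema, and description, which is precisely the context the client hands the model so it knows what the tool does and how to call it.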
3. Standardization & Why MCP Exists
- MCP attempts to standardize how tools are described and exposed to LLMs, facilitating broader interoperability.
- Anthropic led the way due to earlier reinforcement-learning work for “tool-use” in LLMs.
- “Anyone could have come up with a standard. The core problem they were trying to solve is ‘how do you give the large language model a hand…’” – Sam Redfern [09:33]
- It’s not a formal standard like those from the W3C—more a useful, evolving convention.
4. APIs vs MCP: What's the Difference?
- MCPs act like APIs for LLMs, but with extra abstraction to handle the fuzziness/non-determinism of LLM input/output.
- “APIs is a great way of talking about it… but I actually think of MCP and where it’s at right now as more akin to digital document formats like XML.” – Sam Redfern [12:59]
- MCPs serve as connective glue between deterministic systems (e.g., code, APIs) and probabilistic ones (LLM outputs).
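As a rough illustration of that glue role, the hypothetical sketch below puts a tolerant boundary in front of deterministic code: it normalizes the fuzzy date strings an LLM might supply before running fixed logic. The `lookup_revenue` name, accepted formats, and stubbed result are all assumptions for illustration.

```python
# Hypothetical sketch: a tool boundary that tolerates fuzzy LLM input
# before calling deterministic code. Names and logic are illustrative only.
from datetime import date, datetime

def _coerce_date(value: str) -> date:
    """Accept a few date spellings an LLM might produce and normalize them."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y"):
        try:
            return datetime.strptime(value.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {value!r}")

def lookup_revenue(start: str, end: str) -> dict:
    """Tool entry point: validate probabilistic input, then run deterministic logic."""
    start_d, end_d = _coerce_date(start), _coerce_date(end)
    if end_d < start_d:
        start_d, end_d = end_d, start_d  # LLMs sometimes swap arguments
    # Deterministic core (stubbed): a real version would hit a warehouse or API.
    return {"start": start_d.isoformat(), "end": end_d.isoformat(), "revenue_usd": 0}

print(lookup_revenue("March 3, 2025", "2025-03-01"))
```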
5. Use Cases & Examples (from Canva and beyond)
- Internally at Canva: building custom MCP servers to interface LLMs with their internal datasets, automating workflow and visualization generation (with Altair, for example); a sketch of the visualization piece follows this list.
- Home labs: building personal MCP servers to streamline command-line tool use.
- Fun experimentation: Sam describes building an MCP-driven Battleship game where different agent harnesses compete against each other using LLM tool strategies.
- “What we’re trying to do is…take our staff members and…make them move faster and explore more in a shorter period…to get to a better end outcome.” – Sam Redfern [26:02]
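For the visualization use case, here is a hedged sketch of what an LLM-callable charting helper might look like: it takes tabular rows and returns an Altair bar-chart spec as Vega-Lite JSON. The function name, column names, and chart type are assumptions; the episode does not describe Canva's actual implementation.

```python
# Illustrative sketch only: an LLM-callable helper that turns rows into an
# Altair bar-chart spec. Column names and chart type are assumed, not from the show.
import altair as alt
import pandas as pd

def chart_from_rows(rows: list[dict], x: str, y: str) -> str:
    """Build a bar chart from tabular rows and return its Vega-Lite JSON spec."""
    df = pd.DataFrame(rows)
    chart = alt.Chart(df).mark_bar().encode(x=x, y=y)
    return chart.to_json()  # the agent harness can render or embed this spec

spec = chart_from_rows(
    [{"week": "2025-W01", "sessions": 1200}, {"week": "2025-W02", "sessions": 1500}],
    x="week",
    y="sessions",
)
```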
6. Standardization vs Customization: Vendor Tools or In-House?
- Vendors are building their own MCP connectors, but these can feel “fuzzy” unless your stack perfectly matches the vendor’s assumptions.
- Real power comes from customizing MCPs/harnesses to match the specific idiosyncrasies of your org’s data, stack, and workflow.
- “Most organizations I ever interact with, it’s sort of like a collage of different solutions and the money is in getting them to connect together.” – Sam Redfern [32:17]
7. Pitfalls and Concerns: Context Pollution & Governance
- Overloading LLMs with too many tools causes “context pollution”—the more options, the worse the output gets.
- “You can go and add 70 tools to an agent harness and you should do that to watch it not work because it’s very entertaining.” – Sam Redfern [34:02]
- Governance issues loom large when deploying MCPs organization-wide:
- Security: authentication isn’t baked in, and direct database or terminal access can be dangerous (see the scoping sketch after this section).
- Maintenance: Quick hacks can become critical infrastructure if not managed.
- “It would be very easy to embed MCPs into an organization and they’re not well thought out, they’re not well built…there’s an immense governance risk when you are able to do this stuff so quickly and roll it out.” – Tim Wilson [39:33]
- Most initial value will be in internal, tightly-scoped utility functions, not user-facing, open tools.
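One way to act on the terminal-access warning is to keep shell-style tools deliberately narrow. The sketch below allowlists a few read-only commands instead of exposing a raw shell; the specific command list and function name are assumptions, not guidance from the episode.

```python
# Illustrative sketch: a deliberately narrow "run command" tool.
# The allowlist below is an assumption; tailor it to what your workflow actually needs.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "head", "wc"}  # read-only utilities only

def run_command(command: str) -> str:
    """Run a shell command only if its executable is on the allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"Refused: '{parts[0] if parts else command}' is not on the allowlist."
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

print(run_command("rm -rf /"))  # -> Refused
print(run_command("ls"))        # -> directory listing
```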
8. The Future of MCP and Emerging Standards
- MCP is evolving, but won’t be the last iteration — new protocols like Zed’s ACP are on the horizon, aiming for better security and open governance.
- “It’s more like the QWERTY keyboard…we just kind of picked it because it was there first, not because it’s better…” – Sam Redfern [47:50]
- The ecosystem is expected to evolve quickly, with new standards and best practices emerging as adoption grows.
Notable Quotes & Memorable Moments
- On the non-determinism of the LLM world:
“It's the first time in software that we've had this amount of non-determinism to deal with.” – Sam Redfern [16:44]
- On learning to work with non-determinism:
“People who talk about the vibes of a model, there's some truth in it...getting a feel. And it's true with tool design.” – Sam Redfern [17:18]
- On the current experimental era:
“It puts the fun back into the early stages of programming...” – Sam Redfern [29:08]
- On the inevitability of new standards:
“…at some point that will have shifted to a point that it's got a new label and it's like, oh, remember, it was just MCPs.” – Tim Wilson [49:06]
- On security pitfalls:
“One of the downsides of giving the LLM access to your terminal command line is that it could just do, it could delete all the files…” – Sam Redfern [41:08]
- On evolving standards:
“Back to that XML example...we're going to be moving to something else. But I'm very bullish on the concept of tool use…giving large language models these fingers to do things.” – Sam Redfern [44:42]
Timeline: Key Segments & Timestamps
| Segment | Timestamp |
|------------------------------------------|-------------|
| Introduction & setup | 00:14–01:36 |
| MCP explained (history & purpose) | 02:17–05:22 |
| MCP’s tool “fingers” metaphor | 05:54–08:03 |
| Standardization needs & LLM tool use | 08:23–11:35 |
| APIs, XML, and LLMs: analogy discussion | 11:35–14:12 |
| Security, local servers, and examples | 15:08–20:51 |
| Use cases: Canva, home lab, Battleship | 24:26–29:39 |
| Standardization vs org customization | 30:27–32:34 |
| Context pollution and agent harnesses | 33:18–36:30 |
| Downside: governance & security risks | 39:18–45:54 |
| The future & analogies: XML, QWERTY | 46:08–49:06 |
| Host wrap-up & last calls | 51:31–56:33 |
| Outtakes, humor, and closing remarks | 56:33–End |
Takeaways for Analytics Pros
- MCP and similar protocols are ushering in a new paradigm:
LLMs can now act, execute, and interact with data/tools, not just generate text.
- Customization is key:
The most business value comes from tailoring these integrations to fit your organization's actual data and workflows.
- Governance matters:
Rapid experimentation is thrilling, but security and maintainability must not be afterthoughts.
- Standards are evolving—don't lock in yet:
MCP is an early attempt; expect rapid change, new protocols, and plenty of churn in the coming months/years.
Resources & Further Reading
- Zed's Agentic Engineering series: z.dev, see "Resources"
- OpenCode AI: OpenCode.AI
- LangChain: langchain.com
Memorable closing advice from Sam Redfern:
“One of the most important things here is to go get your hands dirty with these systems. They are just so much fun…and really you just spend some money on tokens and explore it.” [52:20]
Don’t forget: No matter which MCP you’re using—keep analyzing.
