Practical AI Podcast: "Orchestrating agents, APIs, and MCP servers"
Episode Date: April 14, 2025
Host: Daniel Whitenack (CEO, Prediction Guard)
Guest: Pawel Velar (Chief Technologist, EPAM Systems)
Episode Overview
This episode explores the practical realities and innovations behind orchestrating large language model (LLM) agents, APIs, and the emerging MCP (Model Context Protocol) servers. Host Daniel Whitenack is joined by Pawel Velar from EPAM Systems for a deep dive into real-world implementations, technical integration challenges, and the evolving dynamics of working with AI agents—focusing on EPAM’s open-source orchestration platform, Dial, and the broader impact of such technologies in enterprise environments.
Key Discussion Points & Insights
1. Introduction to EPAM & the Move to "AI First" (02:00–03:17)
- EPAM’s Background:
  EPAM is a global professional services firm operating in over 50 countries with 50,000+ employees, focused on building and operating software both for clients and for its own internal systems.
- Transition to AI-First:
  "Historically as a company, we've been running on software that we ourselves built... and that software today is very much AI first."
  — Pawel Velar (02:00)
- Philosophy:
  Core differentiating capabilities are built in-house, continuously iterated, and now tightly integrated with AI.
2. GenAI Orchestration and the Evolution of Dial (03:49–07:28)
- What is Dial?
  An open-source, conversational AI orchestration platform that acts as a "ChatGPT-like" interface for the enterprise, but with additional capabilities—streamlining multiple models, tools, APIs, and organizational workflows into a unified point of access.
- Features:
  - Model-Agnostic Load Balancing:
    Routes requests across multiple LLM deployments for scalability and resilience.
  - API and UI Orchestration:
    Receives richer payloads than just text—including interactive UI elements, forms, and task automation.
  - Application Hosting:
    Hosts and integrates AI agents and applications for company-wide access.
  - Centralized Logging & Analytics:
    A single entry point allows detailed logging and analysis of all prompts, enabling organizational insight.
  "Dial becomes this sort of center mass of how your company can build, implement, integrate AI into this single point of entry."
  — Pawel Velar (05:34)
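The model-agnostic load balancing described above can be sketched as a minimal round-robin router. This is a hypothetical illustration, not Dial's implementation: the `ModelRouter` class and deployment names are invented, and real routing would also handle health checks, rate limits, and failover.

```python
from itertools import cycle
from typing import Callable


class ModelRouter:
    """Round-robin router over interchangeable LLM deployments.

    Hypothetical sketch: each "deployment" is just a callable that
    takes a prompt and returns a completion.
    """

    def __init__(self, deployments: dict[str, Callable[[str], str]]):
        self._deployments = deployments
        self._names = cycle(deployments)  # endless rotation over deployment names

    def complete(self, prompt: str) -> str:
        name = next(self._names)  # pick the next deployment in rotation
        return self._deployments[name](prompt)


# Two fake "deployments" standing in for real LLM endpoints.
router = ModelRouter({
    "deployment-a": lambda p: f"[deployment-a] {p}",
    "deployment-b": lambda p: f"[deployment-b] {p}",
})
print(router.complete("hello"))  # successive calls alternate between deployments
```

Because callers only see `complete()`, individual deployments can be swapped or scaled without client changes, which is the point Pawel makes about Dial being a single point of entry.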
3. MCP Servers: A Universal Connector for LLM Tools (07:28–10:13)
- What is MCP?
  The Model Context Protocol connects LLMs to the world of software tools and services through a common protocol, much as HTTP/HTML connected users and software globally. Instead of hand-coding function calls between your LLM and every unique tool, MCP standardizes the interface.
  "MCP allows to connect the existing software world to LLMs..."
  — Pawel Velar (07:45)
- Plug-and-Play Tooling:
  MCP servers are emerging for all sorts of services (CRMs, file systems, IDEs), making LLM integration far more scalable and modular. Example: IntelliJ IDEA exposing itself as an MCP server for LLM-driven code editing.
- Rapid Adoption:
  "As far as I can tell, everybody's writing MCP servers...and those who talk to LLMs, they consume MCP servers."
  — Pawel Velar (10:02)
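At the wire level, MCP is JSON-RPC: clients discover a server's tools with `tools/list` and invoke them with `tools/call`. The sketch below mimics that request/response shape with an in-memory dispatcher; the tool registry and `handle` function are invented for illustration, and the real protocol adds request ids, error objects, capability negotiation, and transport framing (see the official MCP specification).

```python
import json

# Registry of tools this toy "server" exposes; names and handlers are
# illustrative, not part of any real MCP server.
TOOLS = {
    "read_file": {
        "description": "Read a file from the project",
        "handler": lambda args: f"contents of {args['path']}",
    },
}


def handle(request: str) -> str:
    """Dispatch a JSON-RPC-style request the way an MCP server would.

    Drastically simplified sketch: only the two tool methods, no ids,
    no error envelope.
    """
    req = json.loads(request)
    if req["method"] == "tools/list":
        # Advertise available tools so the LLM client can choose among them.
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        # Invoke one tool by name with the arguments the LLM supplied.
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        raise ValueError(f"unknown method {req['method']}")
    return json.dumps({"result": result})


print(handle('{"method": "tools/list"}'))
```

The standardization is the payoff: any client that speaks these two methods can use any server's tools, which is why "hand-coding function calls between your LLM and every unique tool" goes away.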
4. Practical Examples of Orchestrated Agents (10:36–12:34)
- Agentic Developer Example:
  AI Run’s Codemi (coding agent) integrates with Dial via an MCP server, enabling orchestration through a simple conversational interface. Tasks like reading codebases, generating architecture diagrams, and proposing code changes become accessible as “assistant” functions via Dial.
  "Dial as a generic front door... can now connect to all Codemi features...expose them as tools to an LLM and orchestrate them for me."
  — Pawel Velar (11:30)
5. Routing, Scaling, and the "Secret Sauce" of Orchestration (13:25–16:02)
- Managing Expanding Toolsets:
  As the number of available tools/assistants grows, effective routing and context management become key. Rather than exposing thousands of tools at once, group them contextually and enable hierarchical selection.
- Optimal Delegation to LLMs:
  The emerging best practice is to let LLMs orchestrate steps where possible, as they’re rapidly improving at managing complexity.
  "You're better off delegating to LLMs because they get better at it. But you don't expect it to just figure out from one prompt."
  — Pawel Velar (15:14)
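The "group tools contextually" idea above can be sketched as a two-stage router: first narrow the full catalog to one group, then expose only that group's tools to the model. Everything here is invented for illustration — the group names, the tools, and the keyword match that stands in for an LLM selection call.

```python
# Hypothetical tool catalog, grouped by domain so the model never sees
# thousands of flat tool names at once.
TOOL_GROUPS = {
    "crm":   ["lookup_account", "create_lead", "log_interaction"],
    "files": ["read_file", "write_file", "list_directory"],
}


def route(query: str, groups: dict[str, list[str]]) -> list[str]:
    """Stage 1 of hierarchical selection: pick one contextual group.

    A keyword match stands in for what would really be an LLM
    classification call; the short list it returns is all that gets
    exposed to the model in stage 2.
    """
    for group, tools in groups.items():
        if group in query.lower():
            return tools
    return []  # no group matched; expose no tools


print(route("Find the account in our CRM", TOOL_GROUPS))
```

The hierarchy keeps each LLM decision small: choosing among a handful of groups, then among a handful of tools, scales far better than scoring one enormous flat tool list per request.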
6. Real-World Experience & The Human Bottleneck (16:03–25:01)
- Case Study – Pitfalls & Learnings:
  - Over-specifying tasks for agents can lead to wasted effort and inefficiency (i.e., "over-prompting").
  - Iterative usage—allowing agents to do parts of a job, then re-evaluating—is more effective.
  - Major productivity gains are possible, but agent output still requires significant critical review and validation.
  "When I look at what agent produced for me I have no idea how it arrived at where I am. I need to reverse engineer."
  — Pawel Velar (20:33)
- Changing Workflow Dynamics:
  - Agents offload repetitive tasks, but users must context-switch more and manage idle waiting time—raising new productivity and cognitive-load questions.
  - The cadence of "think–prompt–review" replaces traditional focused work.
- Quote Highlight:
  "The percentage of time I spent critically thinking was much higher than normal. Percentage of time I spent doing boilerplate is much lower because the agents did this."
  — Pawel Velar (21:43)
7. Setting Up Dial & Organizational Integration (28:09–31:25)
- Dial is Not a Local Tool:
  Designed for centralized, organization-wide deployment rather than individual local installs; exposed as a web app or internal service.
- Example Use Case:
  "Talk to your data"—integrates with data warehouses/semantic layers to provide chat-driven analytics and visualization.
- How to Learn More:
  Visit epam-rail.com for documentation and examples.
8. Easy Wins and Ongoing Challenges in Orchestration (32:27–37:58)
- Easier Integrations:
  Simple data queries and API connections are technically straightforward; the challenge is making outputs relevant and user-friendly.
- Harder Cases:
  Achieving true utility—especially in analytics and complex orchestration—requires domain understanding, semantic modeling, and building higher-level abstractions over raw data/APIs. Limitations stem from schema complexity, legacy systems, and a lack of business-context metadata.
  "Technically can do it, but to be able to do this for all kinds of questions you can ask about our data, that's a much harder thing to do."
  — Pawel Velar (36:55)
9. The Future of AI Work: Hopes and Worries (39:09–41:56)
- Cautious Optimism:
  Pawel is less interested in anticipating breakthrough capabilities ("no new greatness") and more focused on the exponential pace of change outstripping humans' linear prediction abilities.
- Main Concerns:
  Barriers to entry for juniors, the need for strong fundamentals and systems thinking, and the evolving role of the engineer.
- Final Word:
  "I'm excited as an engineer, I like using all of this. I just don't know how it's going to reshape the industry and how it's going to change my work in years to come."
  — Pawel Velar (40:41)
Notable Quotes & Memorable Moments
- “Dial becomes this sort of center mass of how your company can build, implement, integrate AI into this single point of entry.” — Pawel Velar, 05:34
- “MCP allows to connect the existing software world to LLMs...” — Pawel Velar, 07:45
- “You're better off delegating to LLMs because they get better at it.” — Pawel Velar, 15:14
- “I am the bottleneck.” — Pawel Velar, 19:47
- “When I look at what agent produced for me I have no idea how it arrived at where I am.” — Pawel Velar, 20:33
- “Technically can do it... but to be able to do this for all kinds of questions you can ask about our data, that's a much harder thing to do.” — Pawel Velar, 36:55
- “Our ability to project into the future...is linear. So I am unlikely to properly anticipate and get ready for...what's to come. I am sure to be surprised.” — Pawel Velar, 39:31
Timestamps for Key Segments
| Time | Segment Description |
|-------------|----------------------------------------------------------------|
| 02:00–03:17 | EPAM’s history and “AI-first” internal evolution |
| 03:49–07:28 | GenAI orchestration, introduction to Dial platform |
| 07:28–10:13 | What is MCP? Real-world analogy and rapid adoption |
| 10:36–12:34 | Agentic assistants and orchestrating Codemi with Dial |
| 13:25–16:02 | The challenge of scaling/routing among many agents/tools |
| 16:03–25:01 | Human/agent collaboration pitfalls, productivity dynamics |
| 28:09–31:25 | Deploying Dial and "talk to your data" examples |
| 32:27–37:58 | Integration challenges, semantic layers, utility vs. tech |
| 39:09–41:56 | AI's exponential future, industry anxieties and excitement |
Tone & Language
Both Daniel and Pawel speak candidly from hands-on experience, blending technical detail with self-reflection and practical advice. The conversation is open, inquisitive, and at times philosophical about the impact of AI on productivity and the future of software engineering.
Summary Takeaways
- Enterprise AI orchestration is moving rapidly from “manual wiring” of LLMs and APIs toward standardized, scalable platforms (like Dial) and protocols (like MCP).
- Building truly useful AI agents requires more than just technical connectivity; context, semantics, and human workflows add layers of complexity.
- Human productivity with AI agents is increasing, but patterns of work, cognitive load, and organizational processes are being transformed—often in surprising ways.
- Anticipating the future in AI is hard; adaptability, strong fundamentals, and critical thinking remain as vital as ever.
For more on Dial and MCP-based orchestration, visit epam-rail.com.
