Podcast Summary
Podcast: Embracing Digital Transformation
Episode: #338 — Unlocking the Power of Agentic AI: A Beginner's Guide
Date: March 30, 2026
Host: Dr. Darren Pulsipher
Guest: Craig McLuckie, Co-founder of Heptio and now Founder & CEO of Stacklok
Episode Overview
This episode delves deep into "agentic AI," demystifying the buzzword and exploring what sets these next-generation AI systems apart from the previous waves of automation and machine learning. Host Dr. Darren Pulsipher sits down with Craig McLuckie, a prominent cloud and enterprise technology innovator (co-creator of Kubernetes), to discuss what agentic AI is, how it’s changing the way people and organizations operate, and the unique challenges and opportunities posed by these semi-autonomous systems.
Key Discussion Points & Insights
Craig McLuckie’s Background (Origin Story)
- [01:23] Craig’s journey from South Africa to Microsoft, to Google (where he co-created Google Compute Engine and Kubernetes), through to starting Heptio and Stacklok, focusing most recently on AI software security:
“No one was more surprised than I was that [Kubernetes] worked out quite the way it did… it was pretty clear we needed a very strong, robust open source community.” — Craig, [02:00]
What Is Agentic AI? Cutting Through the Hype
- [04:17] Definition, broken into its two elements:
- AI: Large language models (LLMs), transformer models—systems that “take data and turn that data into knowledge, reason about that knowledge and make decisions… a new kind of computer system that the world hasn’t seen before.”
- Agentic: Systems that operate in semi- or fully-autonomous ways based on parameters you give, enabling the extension of knowledge worker reach.
“It just changes all the rules. I have this new capability to turn data into knowledge, reason about it…and then make decisions.” — Craig, [04:38]
Concrete Example: Travel-Booking Agent
- [05:47–09:18]
- Dr. Pulsipher presents a travel scenario: instead of just returning recommendations, an AI travel agent books the trip end to end, following the user's preferences and policies and acting on events (such as new airline seat releases).
- Key Difference: “Synchronous” (you interact directly and wait for a result) vs. “asynchronous” (AI acts in background, triggered by events).
“For me, that would be a canonical example… you can just sit there in the background and watch this thing 24/7 and find the great seat.” — Craig, [08:17]
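The synchronous/asynchronous distinction Craig draws can be sketched in a few lines. This is a toy illustration, not a real API: `TravelAgent`, `SeatReleaseEvent`, and the $500 policy are all hypothetical names invented for the example.

```python
# Toy sketch of the episode's synchronous vs. asynchronous distinction.
# All names here (TravelAgent, SeatReleaseEvent) are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SeatReleaseEvent:
    flight: str
    seat: str
    price: float

class TravelAgent:
    def __init__(self, max_price: float):
        self.max_price = max_price  # a policy the user sets up front

    # Synchronous: the user asks directly and waits for a result.
    def recommend(self, route: str) -> str:
        return f"Recommended options for {route}"

    # Asynchronous: the agent reacts to background events, acting
    # within its policy without a human in the loop.
    def on_event(self, event: SeatReleaseEvent) -> Optional[str]:
        if event.price <= self.max_price:
            return f"Booked {event.seat} on {event.flight} at ${event.price:.2f}"
        return None  # outside policy: take no action

agent = TravelAgent(max_price=500.0)
print(agent.recommend("SFO->JFK"))
print(agent.on_event(SeatReleaseEvent("UA 123", "3A", 450.0)))
```

The "watch this thing 24/7" behavior from the quote lives in `on_event`: the agent is wired to an event stream and only acts when an event satisfies the policy it was given.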
Agentic AI Risks: Probabilistic & Stochastic Nature
- [09:18–13:46]
- Agentic AIs are not deterministic; they make probabilistic decisions, and sometimes they “just lose their mind and do something just silly.”
- New type of responsibility: you are accountable not just for the AI's output but also for its behavior.
“You’re accountable for its work product…but you’re also accountable for its behavior, and I don’t think people are quite realizing this.” — Craig, [09:53]
- Contrast with historical automation: Previous machines were deterministic; these agents aren’t, which challenges traditional risk management.
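The determinism contrast above can be made concrete with a toy example: a traditional automation always maps the same input to the same output, while an LLM-style system samples from a distribution, so repeated runs can differ. Real models sample tokens, not canned strings; the `temperature` weighting here is a simplified stand-in.

```python
# Toy contrast between deterministic automation and probabilistic
# agentic decision-making. The sampling scheme is illustrative only.
import random

def traditional_automation(x: int) -> int:
    # Deterministic: same input, same output, every time.
    return x * 2

def agentic_decision(options, temperature: float, seed=None) -> str:
    # Probabilistic: the "decision" is a sample, so repeated runs can
    # differ. Higher temperature flattens the preference ranking.
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) ** (1.0 / max(temperature, 1e-6))
               for rank in range(len(options))]
    return rng.choices(options, weights=weights, k=1)[0]

assert traditional_automation(21) == 42  # always 42
outcomes = {agentic_decision(["book", "wait", "cancel"], temperature=1.0)
            for _ in range(50)}
print(outcomes)  # typically more than one distinct outcome
```

This is why traditional risk management struggles here: testing one input/output pair tells you little when the same input can legitimately yield different actions.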
Humans vs. AI: On Trust, Decision-Making, and Accountability
- [13:44–18:41]
- Decision-making in both humans and AI involves probability, but humans are also subject to “chemicals,” emotion, and social programming.
- AI's behavior is shaped by training data and prompts, but lacks the biological “guardrails” of humans—making it essential to control randomness and semantic drift.
- AI development is still new and lacking “all of the rigor and patterns to really understand the boundaries and assess the controls.”
Security, Threats, and the "Permeable Membrane" Concept
- [18:41–21:37]
- Security is the paramount issue: Agentic AIs can act via APIs on real-world systems—making them targets and vectors for cybersecurity abuse.
- Craig’s analogy: Think of a “selectively permeable membrane” between traditional digital systems and agentic AI, controlling what flows in and out.
“The only thing that’s growing faster than agentic AI use is agentic AI exploitation on the security domain.” — Craig, [19:38]
- Notable metaphor: Accountability is like dog ownership—“if it bites someone, you’re on the hook for the behavior of the dog.” — Craig, [21:14]
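One way to read the "selectively permeable membrane" analogy in code is a policy layer that sits between the agent and real-world APIs, letting only explicitly allowed actions through. This is a minimal sketch of that idea; the tool names and the $500 rule are invented for illustration.

```python
# Illustrative "selectively permeable membrane": a policy gate between
# an agent and real-world systems. Only allow-listed (tool, argument)
# combinations pass through; everything else is blocked at the boundary.

ALLOWED_TOOLS = {
    "search_flights": lambda args: True,  # read-only: always allowed
    "book_flight": lambda args: args.get("price", float("inf")) <= 500,
}

def membrane(tool: str, args: dict) -> str:
    rule = ALLOWED_TOOLS.get(tool)
    if rule is None:
        return f"BLOCKED: '{tool}' is not on the allow-list"
    if not rule(args):
        return f"BLOCKED: '{tool}' call violates policy"
    return f"ALLOWED: '{tool}' forwarded to the real system"

print(membrane("search_flights", {}))
print(membrane("book_flight", {"price": 450}))
print(membrane("delete_database", {}))  # an exploit attempt stays outside
```

The point of the analogy is that the membrane is enforced outside the model: even if a prompt-injected agent asks for a dangerous action, the request never reaches the real system.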
Legal & Moral Accountability
- [21:37–24:42]
- Developers and users are responsible for their agents’ actions; the law is still unclear here, but the moral responsibility is not.
- Craig cites an example in which a scientific coding agent, left unchecked, behaved badly online; ultimately the creator bore responsibility.
“At the end of the day, you have this entity acting on your behalf whether you like it or not.” — Craig, [24:00]
Getting Started with Agentic AI
- [25:01–28:32]
- Craig’s advice: Start by picking a reputable tool (he recommends Anthropic’s Claude) and get hands-on experience.
- Recognize that these tools are sharp; hands-on learning is vital, but there’s no substitute for wisdom and engineering practice.
“Go get the subscription and start playing with it… there’s no substitute for just hands-on, experiential use.” — Craig, [25:55]
- Notable observation: Even non-engineers are succeeding with agentic AI in practice, but beware software engineering gaps—context engineering becomes crucial.
The Skills Gap, the Journeyman Problem, and Mindset Shifts
- [28:32–32:21]
- Dr. Pulsipher raises concern: Junior engineers aren't being hired—the “apprentice” rung may disappear, risking a huge skills gap.
- Craig: Agrees it’s a danger, but emphasizes that agentic AI means we need to shift from “individual contributor” to “early line manager”—context engineering is now the essential skill.
- Onboarding an agent is like onboarding a new employee: success depends on context, not code alone.
“You have to reimagine yourself no longer as an engineer, but as a manager…most of the work…is context engineering.” — Craig, [29:03]
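The onboarding analogy above can be sketched as code: "context engineering" is largely assembling role, policies, and worked examples into the context an agent operates from, much as you would brief a new hire. The function and field names here are hypothetical, purely to make the idea concrete.

```python
# Toy sketch of "context engineering": onboarding an agent the way you
# would onboard a new employee, by assembling role, policies, and
# examples into its working context. All names are illustrative.

def build_context(role, policies, examples):
    sections = [f"ROLE:\n{role}", "POLICIES:"]
    sections += [f"- {p}" for p in policies]
    sections.append("WORKED EXAMPLES:")
    sections += [f"- {e}" for e in examples]
    return "\n".join(sections)

context = build_context(
    role="Travel assistant for ACME Corp employees",
    policies=["Never exceed $500 per flight", "Prefer refundable fares"],
    examples=["Asked for SFO->JFK: propose 3 options, cheapest first"],
)
print(context)
```

The shift Craig describes is that the engineer's leverage moves from writing the code inside the agent to curating what goes into contexts like this one.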
Building & Maintaining Agentic Systems: From Experimentation to Production
- [32:21–37:08]
- Agentic AIs require more ongoing maintenance than traditional software—behavior can shift with any change in input or environment.
- The field is still at the “wild west” stage; many actors ignore basic software engineering principles (“I see a lot of stuff just floating around out there, just wild west.” — Dr. Pulsipher, [34:13]).
- Craig: Traditional approaches don’t guarantee success in this space; value creation comes from context engineering, not just code.
- Sometimes, “five interns” out-innovate experienced teams because they’re unencumbered by old habits.
The Coming Epoch: The Steam Engine of Today
- [37:31–38:46]
- Craig: This is an epoch-defining change, as significant as the steam engine or digital revolution.
“Everything about how we work will change over time. We’re only two years into this epoch or three years into the epoch transition. But I do think this is as profound a disruption as the steam engine.” — Craig, [38:00]
Craig McLuckie’s New Venture: Stacklok
- [38:46–40:38]
- Stacklok positions itself as the “yellow brick road”—the connective tissue between legacy systems and the agentic future, helping enterprises manage and secure this transition.
“If Anthropic and OpenAI and Google are describing the Emerald City from the Wizard of Oz, we’re really focused on the yellow brick road.” — Craig, [39:16]
- Reach out at stacklok.com or via LinkedIn.
Notable Quotes
- Agentic AI Defined: “It’s opening up this new role for people where you are able to extend your reach…much more broadly by setting up these systems that, when done properly, create real value in the world.” — Craig, [05:09]
- Probabilistic Nature: “These systems, by definition, are probabilistic. They only work because they're probabilistic. Like it's intrinsic to their nature.” — Craig, [11:06]
- On Security: “The only thing that’s growing faster than agentic AI use is agentic AI exploitation on the security domain.” — Craig, [19:38]
- On Responsibility: “At the end of the day, you have this entity that's acting on your behalf whether you like it or not… These things are very literal… It’s not just responsibility for work product, it’s also responsibility for behavior.” — Craig, [24:00 & 24:45]
- The Real Shift for Developers: “You have to reimagine yourself no longer as an engineer, but as a manager... most of the work that you're going to be doing... is context engineering.” — Craig, [29:03]
- On the Agentic Epoch: “I do think this is an epoch-defining technology... as profound a disruption as the steam engine.” — Craig, [38:00]
Timestamps for Key Segments
- [01:23] — Craig’s tech history & Kubernetes origin
- [04:17] — Defining agentic AI for non-marketers
- [05:47] — Real-world use-case: Agentic travel booking
- [09:18] — The risks and responsibilities of agentic AI
- [13:44] — Agentic AI vs. human decision making
- [18:41] — Security threats and permeable membranes
- [21:37] — Legal/moral accountability of agent creators
- [25:01] — How to get started with agentic AI: tools and mindset
- [28:32] — The software engineering skills chasm
- [32:21] — Building and maintaining robust agentic systems
- [37:31] — The coming epoch: a steam-engine-scale disruption
- [38:46] — What Stacklok does, and building safely for the future
Conclusion
This episode unpacks the meaning, promise, and immense challenges of agentic AI, pointing out that we're only in the earliest stages of understanding how these systems reshape work, risk, and organizational value. Craig McLuckie urges technologists and leaders to combine hands-on experimentation with renewed emphasis on responsible context engineering, accountability, and security—echoing both the thrilling opportunity and the heavy responsibility of building with AI that truly acts.
Contact Craig & Stacklok:
- stacklok.com
- LinkedIn: Craig McLuckie
