Podcast Summary:
MarTech Podcast™ – "The Year of Context Engineering"
Host: Benjamin Shapiro
Guest: Scott Brinker (Marketing Technology thought leader, "Godfather of MarTech")
Date: January 8, 2026
Episode Overview
This episode examines a crucial emerging theme for marketers in 2026: "context engineering." Host Benjamin Shapiro and guest Scott Brinker discuss how marketers must move beyond prompt engineering, the earlier focus of AI adoption in marketing, toward harnessing and curating the right context for large language models (LLMs) and agentic AI systems. The discussion covers best practices, pitfalls, and practical advice for balancing context in AI-powered processes, drawing on industry analogies and real-world martech experience.
Key Discussion Points & Insights
1. Setting the Stage: Yearly Trends in AI for Marketers
- Scott Brinker reflects on the evolution of AI in marketing:
- 2024 as "the year of prompt engineering"
- 2025 as "the year of agentic AI"
- 2026 positioned as "the year of context engineering"
(01:15)
2. What is Context Engineering?
- The guest compares the transition from SEO (Search Engine Optimization) to AEO (Answer Engine Optimization) with the shift from prompt engineering to context engineering.
- Definition:
- Context engineering is not just about crafting instructions (prompts) for AI; it also involves supplying it with the right data access and appropriate tools, especially as agentic AI (AI that can take actions) becomes mainstream.
- This involves bundling instructions, relevant data, and tool permissions together so the AI can work effectively and safely (see the sketch at the end of this section).
- Notable quote:
- "[Context engineering] is that art, that practice of saying, okay, now when I ask the AI to do something, I want to bundle up instructions. I want to bundle up access to the right data for what it might want to need to do. I want to like point it at the tools it can use and executing that. And that's what context engineering is all about."
(Scott Brinker, 02:38)
- "[Context engineering] is that art, that practice of saying, okay, now when I ask the AI to do something, I want to bundle up instructions. I want to bundle up access to the right data for what it might want to need to do. I want to like point it at the tools it can use and executing that. And that's what context engineering is all about."
3. The Diminishing Value of Prompts Alone
- Scott Brinker shares insights from Nicholas Holland (Head of AI at HubSpot), noting that LLMs have become good at deciphering intent even from poorly written prompts.
- Without proper context to filter and focus the AI's attention, outputs become generic or "dumbed down to the norm."
- Challenge: Determining the right scope and boundaries of context to provide (see the illustrative sketch below).
(03:08–04:01)
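A small illustration of this point, assuming a hypothetical compose_request helper (the function and the example fields are not from the episode): the instruction stays the same; what changes the output is the context packaged around it.

```python
# Illustrative only: the same instruction with and without scoped context.
def compose_request(instruction: str, context: dict | None = None) -> str:
    """Render the text payload an LLM request would carry."""
    if not context:
        return instruction                      # bare prompt: intent only
    lines = [instruction, "", "Context:"]
    lines += [f"- {key}: {value}" for key, value in context.items()]
    return "\n".join(lines)

generic = compose_request("Write a product announcement.")
scoped = compose_request(
    "Write a product announcement.",
    context={
        "audience": "existing enterprise customers",
        "voice": "plain-spoken, no hype",
        "source_material": "release notes for v4.2 only",
    },
)
print(generic)
print("---")
print(scoped)
```

Without the second payload's constraints, the model falls back to what it considers typical, which is the "dumbed down to the norm" behavior described above.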
4. The Human Analogy and Real Organizational Challenges
- The guest analogizes assigning work to humans versus AI: you must provide clear objectives, relevant data, and access to necessary tools in both cases.
- Pitfall: Most organizations fail not because of AI limitations, but because they lack clarity on the "job to be done" and the process itself.
- Many struggle to define the process or data required for a task, which translates into poor outcomes when automating with AI.
- Notable quote:
- "...if you really want to get value out of these things, you got to step back from the AI piece of it and say, okay, what's the job to be done? What is the specific task that I want this thing to accomplish? What information, what tools is it going to need to execute..."
(Scott Brinker, 04:11)
- "...if you really want to get value out of these things, you got to step back from the AI piece of it and say, okay, what's the job to be done? What is the specific task that I want this thing to accomplish? What information, what tools is it going to need to execute..."
5. The 'Goldilocks Problem': Not Too Much, Not Too Little Context
- The host, Benjamin Shapiro, describes challenges faced while building his team's internal podcast production infrastructure ("podcast OS"):
- Too much context (e.g., huge strategy documents, full transcripts, every LinkedIn post) overwhelms the AI and leads to failures or meaningless output.
- The task is not to simply provide more data, but to limit and curate context so that it is relevant and digestible (see the sketch at the end of this section).
- Notable quote:
- "The problem isn't how do I give more rich information and let the machines sort through it and figure it out, it's how do I limit the information so it's only what's relevant so then the LLM can digest it. And I think that that's the problem with context engineering. It's not just give the LLM everything. You actually have to find a balance and, you know, not too little, but also not too much. We have a little bit of the Goldilocks problem."
(Benjamin Shapiro, 05:47)
- "The problem isn't how do I give more rich information and let the machines sort through it and figure it out, it's how do I limit the information so it's only what's relevant so then the LLM can digest it. And I think that that's the problem with context engineering. It's not just give the LLM everything. You actually have to find a balance and, you know, not too little, but also not too much. We have a little bit of the Goldilocks problem."
- Scott Brinker wholeheartedly agrees and emphasizes the importance of balance:
"Balance, 100%. [Far] more eloquent than anything I could have said."
(Scott Brinker, 07:04)
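One way to picture the curation step described in this segment is a simple relevance-and-budget filter. This is a rough sketch under assumptions of its own: the word-overlap scoring and character budget below stand in for whatever retrieval, ranking, or summarization a real pipeline would use.

```python
# Illustrative only: keep chunks relevant to the task, within a size budget,
# rather than handing the model every document available.
def curate_context(task: str, chunks: list[str], budget_chars: int = 2000) -> list[str]:
    """Select the most task-relevant chunks that fit a rough size budget."""
    task_words = set(task.lower().split())

    def relevance(chunk: str) -> int:
        return len(task_words & set(chunk.lower().split()))

    selected, used = [], 0
    for chunk in sorted(chunks, key=relevance, reverse=True):
        if relevance(chunk) == 0:             # drop material with no bearing on the task
            continue
        if used + len(chunk) > budget_chars:  # "not too much": respect the size budget
            break
        selected.append(chunk)
        used += len(chunk)
    return selected

if __name__ == "__main__":
    chunks = [
        "Q4 strategy deck: forty pages on brand positioning and pricing.",
        "Episode transcript excerpt where the guest defines context engineering.",
        "Archive of five hundred LinkedIn posts on unrelated topics.",
    ]
    print(curate_context("summarize what the guest said about context engineering", chunks))
```

The balance discussed in the episode lives in the two checks: filter out what is irrelevant, and cap how much of the relevant material gets passed along.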
Notable Quotes & Memorable Moments
- "There's probably like a, a parallel here of what AEO is to SEO is a little bit what context engineering is to prompt engineering."
(Podcast Guest / Martech Expert, 01:40) - "Without context as a filter, everything gets dumbed down to the norm."
(Scott Brinker, 03:30) - "If you really want to get value out of these things, you got to step back from the AI piece of it and say, okay, what's the job to be done?"
(Podcast Guest / Martech Expert, 04:11) - "We have a little bit of the Goldilocks problem. It's like, not too little, but also not too much."
(Scott Brinker, 05:56)
Important Timestamps & Segments
- 01:15 – Framing 2026 as the "Year of Context Engineering"
- 01:40–03:08 – Defining context engineering and its connection to AI in marketing
- 03:08–04:01 – Limitations of prompt engineering and the growing need for curated, well-defined context
- 04:01–05:47 – Practical analogy: giving tasks to humans vs. AIs; why businesses struggle with AI implementation
- 05:47–07:04 – The Goldilocks challenge: achieving balance in contextual inputs for AI systems
Overall Takeaways
- The shift towards context engineering is critical for marketers leveraging AI in 2026.
- Success depends not on overwhelming AI with information, but on precisely curating and supplying the right data and tools for the job at hand.
- Many challenges attributed to AI are actually due to a lack of organizational clarity about processes and goals.
- The "Goldilocks problem"—finding not too little and not too much context—will be a defining martech obstacle in the year ahead.
