Episode Overview
Title: Ep 710: Context Engineering: How to Get Expert-Level Outputs From AI Chatbots
Podcast: Everyday AI Podcast – An AI and ChatGPT Podcast
Host: Jordan Wilson (Owner, Everyday AI)
Date: February 10, 2026
The episode dives deep into the evolving landscape of working with AI chatbots, focusing on the concept of "context engineering." Jordan explains why prompt engineering has become less relevant with modern AI advancements and details actionable frameworks, techniques, and best practices for getting expert-level outputs from AI chatbots by shifting your attention toward context engineering. The episode is part of the Start Here series, designed to equip professionals at any level with foundational—and advanced—AI literacy.
Key Discussion Points & Insights
1. The Shift from Prompt Engineering to Context Engineering
- Prompt Engineering's Decline:
Earlier success with chatbots was all about “how you talk” to the model—using the perfect phrase or “magic password.” As model capabilities and context windows grew, prompt engineering became less crucial.
- “You could kind of pull the best out of a model’s training data. … But if you don’t have the context, it doesn’t matter.” (05:50)
- Rise of Context Engineering:
With the growth of model memory, document upload capabilities, and dynamic data connections, what matters now is ensuring your AI has the right business and task-specific context.
- “An output that moves the needle is much more dependent on business context versus just wording something a certain way.” (01:01)
- Industry Timeline:
In 2025, Shopify CEO Tobi Lütke and OpenAI co-founder Andrej Karpathy popularized “context engineering,” and Anthropic published a defining post in September 2025.
- “This shift really started … probably mid-June of 2025.” (11:30)
2. Understanding the AI Context Window & Its Importance
- Analogy:
The context window is like a computer’s working memory or hard drive: give it too much and it starts dropping earlier material; give it too little and it works blindly.
- “The context window, without getting too technical, that’s like a hard drive.” (13:54)
- Why Projects Fail:
Per Intuition Labs (2025), roughly 40% of AI projects fail, and poor context is the #1 reason.
- “It’s not actually the model, it’s the model either not understanding what you want ... or it just doesn’t have the data that can be the differentiator.” (16:30)
3. Platforms & Features: Connecting Your Data
- Contemporary Models’ Capabilities:
Modern chatbots (ChatGPT, Claude, Gemini, Copilot) now allow businesses to connect dynamic data sources (docs, drives, calendars) with a few clicks, indexing them for faster access.
- App/Connector Integrations:
Each platform treats external data differently, and even within ChatGPT “apps,” there are four distinct ways to handle data.
- “Depending on what app you’re talking about … it maybe handles your data a little bit differently than it did before.” (22:00)
- Pro Advice:
Always understand permissions and platform-specific behaviors before plugging in confidential data.
4. The Six Building Blocks and Four Layers Framework
Six Building Blocks for Effective AI Context (29:02)
- Goal: What you need the AI to produce and for whom.
- Constraints: Boundaries, rules, format requirements.
- Reference Material: Approved facts, data, source docs.
- Examples: Good, bad, and contextual samples of desired output.
- Procedures: Step-by-step instructions for how AI should approach tasks.
- Evaluation Rubric: Scoring criteria so AI can assess and improve its responses.
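The six building blocks above can be sketched as a single reusable "context pack" that renders into a labeled prompt preamble. This is a minimal illustration, not anything from the episode; the class name, field names, and sample values are all hypothetical.

```python
# Hypothetical sketch: assembling the six building blocks into one
# reusable "context pack" string to paste ahead of a request.
# All field names and sample values are illustrative.

from dataclasses import dataclass


@dataclass
class ContextPack:
    goal: str                # what to produce, and for whom
    constraints: str         # boundaries, rules, format requirements
    reference_material: str  # approved facts, data, source docs
    examples: str            # good/bad samples of desired output
    procedures: str          # step-by-step approach instructions
    evaluation_rubric: str   # scoring criteria for self-assessment

    def render(self) -> str:
        """Render the six blocks as a labeled prompt preamble."""
        sections = [
            ("GOAL", self.goal),
            ("CONSTRAINTS", self.constraints),
            ("REFERENCE MATERIAL", self.reference_material),
            ("EXAMPLES", self.examples),
            ("PROCEDURES", self.procedures),
            ("EVALUATION RUBRIC", self.evaluation_rubric),
        ]
        return "\n\n".join(f"## {name}\n{body}" for name, body in sections)


pack = ContextPack(
    goal="Draft a one-page product update email for existing customers.",
    constraints="Under 300 words; plain language; no pricing claims.",
    reference_material="Release notes v2.4 (attached); brand voice guide.",
    examples="GOOD: last quarter's update email. BAD: the jargon-heavy v1.9 email.",
    procedures="1) Summarize changes. 2) Lead with benefit. 3) End with one CTA.",
    evaluation_rubric="Score 1-10 on clarity and voice; revise if any score < 8.",
)
print(pack.render())
```

Because the pack is a plain data structure, it can be versioned and reused across sessions, which is the point of treating context as a system rather than a one-off prompt.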
The Four Layers of Context Application (31:01)
- Personal: Your individual role, expertise, user settings.
- Team: Shared definitions, goals, project collaboration.
- Company: Brand, policies, product details, organizational knowledge.
- Market: Position in the competitive landscape, trends.
- “Not just about bringing the right folder in via a ChatGPT app. It’s also making sure you apply those at the different layers that a large language model needs.” (32:18)
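One way to picture applying the four layers is to concatenate them from broadest (market) to most specific (personal), so specific context arrives last and can refine the broader framing. This is a hypothetical sketch; the layer contents are invented for illustration.

```python
# Hypothetical sketch: applying context at the four layers (Personal,
# Team, Company, Market) by joining layer blocks broadest-first.
# Layer contents below are illustrative.

LAYERS = ("market", "company", "team", "personal")


def apply_layers(context: dict) -> str:
    """Join the four layers, broadest first, skipping any empty layer."""
    return "\n\n".join(
        f"[{layer.upper()} CONTEXT]\n{context[layer]}"
        for layer in LAYERS
        if context.get(layer)
    )


ctx = {
    "personal": "You are assisting a senior content marketer.",
    "team": "The team's Q3 goal is 20% newsletter growth.",
    "company": "Acme Corp sells project-management software; formal voice.",
    "market": "Competitors are discounting heavily this quarter.",
}
print(apply_layers(ctx))
```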
5. Practical Approaches to Building & Reusing Context
- Context Vaults/Skills:
Think of these as reusable, modular “folders” of company, team, and market context, modeled after the “skills” feature in Anthropic’s Claude. Build for roles first, then expand.
- “Invest in your AI session as if you’re training a new employee: provide context, conversation, and iteration.” (35:50)
- Implementation Tactics:
- Use copy-paste, doc uploads, or dynamic app integrations as appropriate (platform-dependent).
- For large documents, provide an index and process instructions.
- Routine testing and “human-in-the-loop” evaluation are always necessary, as models can be inconsistent.
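The large-document tactic above (provide an index plus process instructions) can be sketched as a small preprocessing step. This is a hypothetical illustration; the markdown-heading convention and the wording of the instructions are assumptions, not from the episode.

```python
# Hypothetical sketch: prefacing a large document with an index and
# explicit process instructions so the model knows what's inside
# before reading. Assumes markdown-style "#" headings.

def build_index(document: str) -> str:
    """Collect markdown-style headings into a numbered index."""
    headings = [
        line.lstrip("# ").strip()
        for line in document.splitlines()
        if line.startswith("#")
    ]
    return "\n".join(f"{i}. {h}" for i, h in enumerate(headings, start=1))


def wrap_with_instructions(document: str, task: str) -> str:
    """Prepend an index and process instructions to the document."""
    return (
        "INDEX OF SECTIONS:\n" + build_index(document) + "\n\n"
        "PROCESS: Read the index first, locate only the sections relevant "
        "to the task, and cite the section number for every claim.\n\n"
        f"TASK: {task}\n\nDOCUMENT:\n{document}"
    )


doc = "# Pricing\nDetails...\n# Support Policy\nDetails...\n# Roadmap\nDetails..."
print(wrap_with_instructions(doc, "Summarize our support policy."))
```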
6. Expert Techniques to Level Up AI Outputs (41:16)
Jordan recommends three time-tested techniques:
- Few-Shot Examples:
Give the AI clear illustrations of both good and bad outputs, especially for subjective or complex tasks.
- “Go back to that analogy … the very first time they hand in their first project … you’re probably going to go through and sit with them and say, ‘Hey, this is great because blank. This is incorrect because blank.’” (41:48)
- Rubric First:
Define evaluation standards in your prompt/context window before starting work. For example, for creative writing: “A 1 is boring, a 10 is wildly creative. Tell the model why.”
- “Give the model something it can gauge, or give it a temperature.” (43:10)
- Show, Don’t Tell:
Illustrate exactly how you want output formatted, rather than just describing the format.
- “If you want outputs formatted in a certain way … giving it examples of that is helpful.” (44:04)
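The three techniques combine naturally in one prompt: rubric first, then good and bad few-shot examples, then a literal format template to show rather than tell. The sketch below is a hypothetical illustration; the function name and sample content are invented.

```python
# Hypothetical sketch combining the three techniques: rubric-first,
# few-shot examples (good AND bad), and a literal output template
# ("show, don't tell"). Sample content is illustrative.

def build_prompt(task, good_examples, bad_examples, rubric, format_template):
    """Assemble the three techniques into one prompt, rubric first."""
    parts = [
        "RUBRIC (apply before answering):\n" + rubric,
        "GOOD EXAMPLES:\n" + "\n---\n".join(good_examples),
        "BAD EXAMPLES (avoid these patterns):\n" + "\n---\n".join(bad_examples),
        "OUTPUT FORMAT (copy this shape exactly):\n" + format_template,
        "TASK:\n" + task,
    ]
    return "\n\n".join(parts)


prompt = build_prompt(
    task="Write a tagline for a reusable water bottle.",
    good_examples=["Sip smarter. Waste less."],
    bad_examples=["We sell bottles that hold water."],
    rubric=(
        "Score 1-10 for creativity: 1 is boring, 10 is wildly creative. "
        "Only return taglines scoring 8+, and explain why."
    ),
    format_template="Tagline: <text>\nScore: <n>/10\nWhy: <one sentence>",
)
print(prompt)
```

Putting the rubric ahead of the task mirrors the "rubric first" advice: the model sees the evaluation standard before it generates anything.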
7. Final Advice and Key Takeaways (46:32)
- “Context is everything. Your data is the differentiator.”
The way you phrase a prompt is much less important than what information the model has access to.
- Stop starting with a question. Instead, start with a tiny context pack and extend from there. (46:58)
- Reuse what works. Build up skills, projects, or GPTs to save time and consistently deliver strong outcomes.
- Repeatable Systems = Expert Results.
“Expert level results come from a system you can repeat every time.” (47:59)
- Pro tip: Ask your AI what tasks you do repeatedly and how to build reusable context/skills around these.
Notable Quotes & Memorable Moments
- On context vs. prompt engineering:
“You can have the best prompt engineering skills in the world, but if you don’t have the context, it doesn’t matter.” (08:17)
- On what’s changed in AI:
“How you talk to a model is way less important now than having the correct context.” (40:23)
- On reusability & building systems:
“Expert level results come from a system that you can repeat every time.” (47:59)
- On what scares users off:
“Earlier on, 2023-2024, a lot of non-technical people were kind of scared off ... but it doesn’t matter anymore. ... You can talk to it just even like a lazy human.” (48:52)
- On asking for help:
“Never feel like you’re stupid ... by just saying to a large language model, ‘I’m not sure,’ because guess what: large language models are smarter than us.” (49:38)
Key Timestamps
| Timestamp | Segment |
|-----------|------------------------------------------------------------|
| 00:16 | Why prompt engineering is no longer the focal point |
| 08:17 | Context > prompt: The true differentiator emerges |
| 13:54 | Context window explained (AI memory/hard drive analogy) |
| 16:30 | Why AI projects fail: the importance of context |
| 22:00 | Connecting business data: apps, permissions, nuances |
| 29:02 | Six building blocks of AI context |
| 31:01 | Four layers of context: Personal, Team, Company, Market |
| 35:50 | Context vaults, modular skills, and real-world analogy |
| 41:16 | Three expert techniques: Few shots, rubric, show-don’t-tell |
| 46:32 | Final advice: context is everything, systems and reuse |
| 48:52 | Removing the barrier for non-technical users |
| 49:38 | Asking your AI to help define reusable context |
Summary Wrap-Up
Jordan Wilson’s episode is a practical masterclass on shifting from outdated prompt-tweaking to “context engineering”—assembling and layering the right information for your specific AI assistant and scenario. He demystifies the architecture of modern LLMs and platforms, provides a clear, repeatable framework for scalable results, and reassures listeners from all walks that the new world of AI is about data, context, and systems, not secret codewords.
This Start Here episode delivers foundational strategies, advanced techniques, memorable analogies, and actionable frameworks anyone can adopt—making it a must-listen for anyone seeking to truly leverage generative AI in 2026 and beyond.
