
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
B
Why does no one talk about prompt engineering anymore? I mean, if you rewind back like two years ago, you would have sworn that prompt engineering would be the world's most popular future job title. But that's obviously not the case. And the essential disappearance of that term is twofold. One, models are smarter, and it doesn't always matter the exact way we talk to them, as long as we get the message across. And two, an output that moves the needle is much more dependent on business context versus just wording something a certain way. Hence the resurgence of the term context engineering. But what does that even mean? And how can you understand the required inputs of context engineering to get better outputs out of a large language model? Well, if that's one of the things that you or your business is grappling with, then you're in luck, because on today's episode of our Start Here series, we're tackling context engineering and how to get expert-level outputs from AI chatbots. All right, I am excited for today's show. I hope you are too. If you're new here, welcome. This is the Everyday AI Start Here series. So after 700-plus episodes, one of the most common questions I get is, where do I start? So that's why we started the Start Here series. And this is actually volume seven of this exact series. So the Start Here series is the essential podcast series to both learn the AI basics and to double down on your knowledge. So if that's what you're trying to do, you're in luck. Make sure you go to StartHereSeries.com; that's going to redirect you and give you free access to our Inner Circle community. There you can not only take our context engineering course, called Prime Prompt Polish, for free, but also network with a bunch of other people. And you'll be redirected right to our Start Here series area, where you can go and listen to every single episode in this series, all right there at your fingertips.
All right, and if you missed our last episode of this series, we talked about how to train your team on AI and the seven steps to educate your organization on large language models. And the last step in there was making that step to go from operator to orchestrator. So that's where we kind of left you with the last step in our series. And that's where we're going to pick up, because actually, one of the biggest things that you can do to go from an operator, essentially someone pushing all the buttons, to an orchestrator, which is when AI starts to do the work for you, is having the right data and providing that data to the model in the right way. And that is the backbone of what context engineering is. It is the process a human goes through to make sure a large language model has the right context about not just you, your role, and what you're trying to accomplish, but maybe most importantly, your business and the competitive market. So this is the big differentiator, because two people can use the same prompt, going back to prompting and prompt engineering, and get wildly different answers. Whether it's in custom instructions, a GPT, a project, etc., or using ChatGPT apps or Claude connectors, or maybe you're using Google Gemini apps, you can put in the exact same prompt as someone sitting next to you. And if you have your context engineering 101 ducks in a row, your output will be light years better than that person who does not. And the difference isn't necessarily as difficult as it may sound, because the skill separating average AI users from expert-level ones is just providing the model the needed context and understanding how it works in different scenarios. So that's what we're going to cover today.
First, we'll talk about why the AI industry as a whole, and really just the business landscape, has shifted away from prompt engineering and toward context engineering. And I will say that that shift kind of happened and got popularized in late 2025. Then I'm going to lay out for you a six-part framework and also a four-layer system for structuring what your AI sees. And then I'm going to walk you through how to build reusable context vaults, or kind of skills, one and the same, and use platform features that are already at your fingertips. All right, let's get into it. So, RIP prompt engineering. All right, kind of. But here's the thing. If you think back to the early days of ChatGPT, or even technically before ChatGPT, when the GPT technology was available to dozens and eventually hundreds of other platforms before ChatGPT even came out, so much of what you were able to get out of a model was how you talked to it. And that was for a couple of reasons. Number one, the AI models themselves were a little more old school, and I'll talk about that here in a little bit. You couldn't always upload documents, and you couldn't always paste in a bunch of information either, because the models' context windows were smaller. So essentially it would always forget things very quickly. You couldn't, for the most part, upload documents, at least not very easily. So this really changed that differentiator of whether you were going to get a good output versus a bad output. It was really just how you talked to the model. If you used certain prompting techniques, you could kind of pull the best out of a model's training data. Because even if you think back to the way earlier days of large language models, they weren't connected to the Internet, they didn't have tool calling, right?
And for the most part you couldn't even upload files, which is why in the earlier days prompt engineering was actually really important, even within that data set. And if you go back and listen to the earlier episodes of our Start Here series, we talk about training data and everything that goes into it. But for the most part, in the earlier days of ChatGPT, when Gemini was Bard, in the early Copilot days, etc., it was really how you talked to the model, because the model had so many fewer capabilities, yet there was still a lot of information there. Even the early models like GPT-3 or GPT-3.5, or the first version of Google Gemini, even though we look back at those models now and we think, oh, they weren't very good, they were. You just really had to learn how to talk to them. Now you can go talk to any of today's smartest models, and you don't even have to put in a sentence that makes sense. Sometimes I find myself, if I'm not using voice dictation with models and I'm just typing, I've probably programmed myself to know the models are so smart that misspellings, or saying the wrong thing, and all these things, in the end it doesn't matter, because today's models are so incredibly smart at understanding what I'm trying to say. Especially when I have personalization enabled, memory enabled, all of these other things. The actual words that I'm telling a model don't mean a ton now. That's how it is, but it isn't how it used to be, because prompt engineering used to focus on saying things the exact right way. And if you did, what you could get was a step change different from what you could get if you didn't word something the exact way. It was like you almost had a password that no one else had.
And at the time, proper prompt engineering was an amazing skill to have, but right now it doesn't matter as much. You can have the best prompt engineering skills in the world, but if you don't have the context, it doesn't matter. And I think the industry really realized that the bottleneck was never about how we talked to the model; it was the information behind it. And I think that this shift really started to happen in probably mid-June of 2025. Two of the people that are kind of credited with popularizing this concept of context engineering were Shopify CEO Tobi Lütke, who called for the move from prompts to context, and then in the same month, OpenAI co-founder Andrej Karpathy kind of endorsed that term publicly. And then in September, Anthropic even published a dedicated blog post about moving away from this concept of prompt engineering. And I will just go ahead and say this, not as one of those I-told-you-sos, but we've been teaching this concept of context engineering, even though I didn't call it that; I believe we started teaching it in late 2023. So, yeah, I've done more than 200, probably like 210 now, live trainings on essentially prompting models. So even early on, I was teaching prompt engineering, but that really shifted in late 2023 and early 2024. We have our Prime Prompt Polish course, and if you have taken it in the last two and a half, three years, you know that we started to teach this concept called Refine Q. And essentially that is context engineering at its core. So it's not a new concept, because I've been teaching people this for a long time. But the terminology around context engineering, and it has really snowballed into more of a movement, has really picked up in the last year or so.
And that's because it's about designing the environment and not just the question. So an easy way to think about this is to think of the AI, whatever large language model you're using, as a processor. And the context window is its working memory. Okay. So the context window, without getting too technical, that's like a hard drive. In the same way, maybe you have a one-terabyte hard drive, let's just say. If your hard drive's full and you try to put in more information, fortunately a computer will stop you from doing that. A large language model will not. So once its quote-unquote hard drive gets full, it's just going to delete the first file that you ever uploaded. That's how a context window works with large language models, except you never really know when you are hitting that context window. So what this means is, well, your context becomes very important in understanding that working memory and how the large language models work specifically with your data. That's grasping the basics of context engineering. And it's a little different, it does get a little convoluted, depending on what large language model you're using. As an example, if you're using GPT-5.2 versus GPT-5.2 Thinking versus GPT-5.2 Pro via the API. So it is a little bit different, but essentially the concept of prompt engineering ends once you hit enter, and context engineering is an ongoing battle to make sure that your session with a model has access to and can understand your business data. And there was a study from Intuition Labs last year that said that 40% of AI projects fail, and one of the main reasons why comes from poor context.
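To make that hard-drive analogy concrete, here's a toy Python sketch of the behavior described above: once a conversation exceeds the context window, the oldest turns silently fall off. The token budget and word-count "tokenizer" here are illustrative assumptions, not how any specific model actually counts tokens.

```python
from collections import deque

class ContextWindow:
    """Toy model of an LLM context window that silently evicts old turns."""

    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.turns = deque()

    def _tokens(self, text):
        # Crude stand-in for real tokenization: one word = one token
        return len(text.split())

    def add(self, text):
        self.turns.append(text)
        # Evict the oldest turns until everything fits -- no warning,
        # just like the "full hard drive" behavior described above
        while sum(self._tokens(t) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def visible(self):
        return list(self.turns)

window = ContextWindow(max_tokens=10)
window.add("here are my brand guidelines in detail")  # 7 "tokens"
window.add("now write the launch email")              # 5 more -> over budget
print(window.visible())  # the brand guidelines turn is already gone
```

The point of the sketch is the failure mode: your carefully pasted context can disappear from the model's working memory without any error message, which is why keeping the key blocks inside the window matters.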
And it's not actually the model; it's the model either, number one, not understanding what you want out of it, or, number two, not having that data that can be the differentiator. So here's why it matters more than ever. Well, if you talk about the big three or the big four, so ChatGPT, Anthropic's Claude, Google Gemini, and Microsoft Copilot, it used to be fairly hard. Aside from Copilot; that's always been more straightforward if you understand the tech and the permissions landscape inside Microsoft Windows, but that's a whole other story. Let's look at the other three. With just ChatGPT, Claude, and Gemini, I would say in early 2025 it was actually kind of difficult to use your company's data. You could even say, oh well, Jordan, there have been these things like GPTs where you could upload documents and have a specialized version of ChatGPT that had access to those documents. Yeah, but have you ever tested it? Have you ever run the needle-in-a-haystack test on that GPT? People would just assume, oh, I'm going to upload a 500-page PDF into a GPT and then it knows everything about me and my company. No, absolutely not. That means you didn't really understand how these models tokenize that information or access that information. But now it's much easier, because these models by default can create searchable indexes of your files. So think of it this way: let's say you have a Mac, and you can go to your Mac Finder and the search bar there and search for a word that is within a PDF, and it's going to know, because it's indexed that file and it understands it. That obviously requires a certain level of compute that two years ago these models just didn't have, but now they do.
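To make that Finder-style "searchable index" analogy concrete, here's a toy inverted index in Python: it maps each word to the documents containing it, so a query can find files without rescanning their full text. The filenames and contents are made up for illustration, and real platforms index far more cleverly than this.

```python
def build_index(docs):
    """Build a toy inverted index: word -> set of document names."""
    index = {}
    for name, text in docs.items():
        # set() dedupes repeated words within one document
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(name)
    return index

docs = {
    "brand-guide.pdf": "our brand voice is warm and direct",
    "pricing.pdf": "enterprise pricing starts at contact sales",
}
index = build_index(docs)

# Lookups now hit the index instead of rescanning every file
print(sorted(index["brand"]))    # ['brand-guide.pdf']
print(sorted(index["pricing"]))  # ['pricing.pdf']
```

This is the basic reason a connected folder can answer "what do my brand guidelines say?" quickly: the work of reading the files happens once, at indexing time, not on every question.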
So within ChatGPT, as an example, they have things like projects, and they have things that were previously called connectors; now they're called apps. Claude, same thing. You can have project-specific memory; you can have these reusable skills that can take advantage of your business context. Same thing with Google Gemini. You can have these gems that connect live to your Google Drive, to your Gmail, to your calendar. So at various levels and in different ways, the three main players can, within a couple of clicks, connect to your, in many cases, dynamic business data. It's not always dynamic; in some cases it is. And it will create a searchable, live index of everything that you connect to it. All right, again, I have to throw out that same asterisk as I always do: always make sure you have permission to connect your company's data to a large language model, blah, blah, blah. But once you do, that is the first step in context engineering: making sure, number one, the model has access to the context or the data that it needs. But you also, maybe more importantly, need to understand how it works in each scenario. As an example, I would say most people, even if you listen to the show often, might not know that ChatGPT had these things called connectors. But oh wait, actually in December, right before the holiday season, they changed them all to apps. And a lot of people missed that. And with that comes, depending on what app you're talking about, well, maybe it handles your data a little bit differently than it did before. So you do have to, depending on the platform that you use, if you really want to understand context engineering well, understand how each of these different platforms connects to different data sources. Because it's not uniform. It's really not.
Even if you look at ChatGPT and its apps, there are four different ways it can look at your data and four different ways it can understand your data. In the same way, if you think of cloud storage, there are these different permissions and different ways to access data, and that does trickle down to large language models as well. So let's talk about what I'm calling these six building blocks of effective AI context. Because step one is, well, what I just covered: you have to make sure, depending on what large language model you're using, it has access to your data. But it's not just about data, and it's not just about telling a large language model, here's something about me. That's providing context, context clues: tell me more about yourself, what you're trying to accomplish. I like to break it down into these six different building blocks for building context within a context window. Okay. Once you're out of the context window, again, depending on how you're connecting your data, you might start to get poor results. So keep this in mind and keep these six building blocks in the context window of any given conversation. So, goal: that's what you need the AI to produce and for whom. Constraints: understanding the boundaries, rules, things to avoid, and format requirements. Reference material: that's the approved facts, data, and source documents to draw from. Examples: those are representative samples of the output you want. Then procedures: those are step-by-step instructions for how the AI model should approach the task. And then the evaluation rubric: that's grading criteria so the AI can assess its own output quality. So this isn't a perfect formula, and it is ever changing. But I think, for the most part, number one, you should just go take our free Prime Prompt Polish course and go through the whole thing.
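One way to keep those six building blocks together is to assemble them into a single context pack that travels at the top of a conversation. Here's a minimal Python sketch of that idea; the field names are just my own labels for the six blocks named above, and the sample content is placeholder text, not a standard API or template.

```python
# The six building blocks, in the order they'll appear in the pack
BLOCKS = ["goal", "constraints", "reference_material",
          "examples", "procedures", "evaluation_rubric"]

def build_context_pack(**blocks):
    """Join the six blocks into one pasteable context pack, and refuse
    to build a pack with any block missing."""
    missing = [b for b in BLOCKS if b not in blocks]
    if missing:
        raise ValueError(f"context pack is missing blocks: {missing}")
    sections = [f"## {name.replace('_', ' ').title()}\n{blocks[name]}"
                for name in BLOCKS]
    return "\n\n".join(sections)

pack = build_context_pack(
    goal="Draft a product launch email for existing customers.",
    constraints="Under 200 words; no discounts; match the brand voice doc.",
    reference_material="[paste approved product facts here]",
    examples="[paste one past email rated great and one rated poor]",
    procedures="1. Outline. 2. Draft. 3. Self-check against the rubric.",
    evaluation_rubric="Clarity 1-10, brand-voice fit 1-10, accuracy pass/fail.",
)
print(pack.splitlines()[0])  # first section header: ## Goal
```

The `raise` is the useful part: it forces you to notice which block you skipped before the model quietly fills the gap with a guess.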
I think Refine Q is another version of building these essential building blocks of context, but this is another kind of framework that I think works really well. Okay. Now, you don't just think of those six building blocks and say, okay, I'm good, let me get those things. Because you have to actually apply them in different layers. So the first layer is, well, personal. That's your own personal context. The second layer is your team. If you're on a small team, there might be a little less difference between that first layer and the second layer; if you're on a large team, there could be a huge difference. Then the third layer is obviously your company or your business. Those are things like your brand voice, your policies, your product details, etc. When we talk about layer one, that's your own personal role, your expertise. Layer two, that's kind of shared definitions, your project goals, conversations, like I said. Layer three, that's at the company level: brand voice, policies, et cetera. And then number four, that's your market: your position in a competitive market, industry insights, trends, et cetera. So it's not just about bringing the right folder in via a ChatGPT app. It's not just about those building blocks. It's also making sure that you apply those at the different layers that a large language model needs. And when you do that, you can probably imagine by now, oh my gosh, this sounds extremely time consuming. Yeah, it is. I always tell people, if you're thinking about AI as if it's an easy button, you're looking at it all the wrong way. The best analogy that I've probably ever taught, and it's from the very first, in early 2023, when I did my very first ChatGPT prompting course, back when prompt engineering actually was a thing, and it still holds true today:
When you are working with a large language model, you have to think of it as training a new employee. A new college grad, or someone that just transferred in, and they're a capable person, but their output is going to largely be dependent on how much context you share with them. Because if you just throw down a giant prompt, if you throw down a giant training manual and then say, all right, first assignment's due in an hour, they're going to fail. You have to go through and give them the context and the conversation and the iteration that they need. And one of the ways that you can think about this, because what I just laid out in terms of applying the context, okay, those six different building blocks across four different layers, that's a lot. Well, you have to think, in the same way that you would invest in an employee, you do it so that in the long run it is scalable, repeatable, reusable. So you can think of this as creating a context vault, or maybe a skill. That's another term, kind of created by Anthropic, but skills have been picked up by all the main players in the AI space. So think of a vault or a skill as kind of this folder of reusable context: these different procedures, rubrics, key facts that you might be able to reuse, or to use modularly within each other. So you can build a skill or a vault per role to start, then expand that as your team standardizes whatever process you're working on. And add the most important content to the vault first: how you do things, before that knowledge ever leaves. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on GenAI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands, or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on GenAI. So here's another thing people are always wondering: okay, do I just paste this in? Do I upload this in a Google Doc? Should I save this, as an example, if you're using Claude, as a skills markdown file and then use it throughout? And I'd hate to use an SEO answer, but it depends. It depends on a lot of things. It depends on what model you're using, it depends on the context window, it depends on whether you're using something inside of a project, inside of a GPT, inside of a gem, or in a kind of quote-unquote naked chat where it's not connected to one of those three things. And the answer, it does depend. But I think it's important to know that you can kind of make it work anyway, as long as you have the right understanding of how all these models work. So an easy example of that is to go back to GPTs, because a lot of people were under the assumption that you could just throw a large PDF in a GPT and then at any point, well, okay, it has all my knowledge. But people didn't understand that, hey, you would probably have to do more.
Number one, in the custom instructions, you would have to provide, especially for larger and longer documents, kind of an index for how the model should treat that really long document. Or if you uploaded a handful of documents, you would have to have some simple rules, like, hey, if this happens, then do that. If the user asks about marketing guidelines, you should check page five of document A and page nine of document B. So sometimes you can just upload static context, sometimes you can just do the copy-and-paste method, and sometimes, if a certain app or connector syncs dynamically, that can solve a lot of your issues. You always still have to do the good old human-in-the-loop stuff, and make sure that you're always testing these things and scoping them and measuring them and making sure the models are properly connecting and pulling out the context that's needed. Because the other thing is, well, models are always changing. Behavior can be extremely finicky in a large language model. Take it from someone that's done dozens, if not more than a hundred, live demos on this podcast: things can go terribly awry. You can run the exact same quote-unquote prompt, even if you have your context, your context engineering, all the exact same, and it can go in a different direction. That is the nature of generative AI: it is generative, it is non-deterministic. So you do have to still understand the different ways that you can bring that context in, whether it's pasted context, as long as you understand the context window. Again, you should always be looking at the chain of thought as well. And that's another big thing, too, when we talk about why prompt engineering is all but dead. Well, chain of thought was a very popular prompting technique, right?
And this was essentially a way that you could go through this process of walking a model through how a human would think, this is the chain of thought, or how a smart human would go about getting the proper answer. You would kind of think, okay, here's how a smart expert would go about getting an answer, and then you would have to deconstruct that and kind of reverse engineer it for a large language model. And that would be called a chain-of-thought prompting technique. But that was when we just had these quote-unquote old-school transformer models, that scenario that I painted for you earlier, when the models weren't connected to the Internet, when you couldn't upload files, when they weren't dynamically connected to your data, when they didn't have tool calling and all of these different functions. That's when this kind of chain-of-thought prompt engineering really mattered. But now this is the default of how today's thinking or hybrid models work. If you ever read the summarized chain of thought, you'll see: oh, they are essentially doing this prompt engineering stuff that was popular in 2023. By default, they're doing it under the hood. Which is why how you talk to a model is way less important now than having the correct context. So here are three techniques that I think can help you turn good context into expert-level outputs. All right? And these are technically still going back to prompt engineering basics. So yes, prompt engineering stacked with proper context is obviously going to give you the best results. So it's not that prompt engineering is dead; some of its core foundations live on. So, few-shot examples. Number one, you should still, even with all that context, give it examples of what's good and what's bad. That's what we teach in our Prime Prompt Polish.
That is the Polish portion: giving it that kind of multi-shot work. In the same way, think, you're training that employee, go back to that analogy that still works so well: the very first time they hand in their first project, their first assignment, their first deliverable, you're probably going to go through and sit with them and say, hey, this is great because blank. Hey, this is incorrect because blank. So, same thing with few-shot examples. The second technique is rubric first. So give whatever large language model you're working with grading criteria in the context window before you even start working. This is something, again, that we used to teach in our older PPP Pro course. We don't teach it as much anymore, because I don't think it's as useful as it once was, but I think it still works. I called it temperature, or I think I called it gauges. You need to give the model something it can gauge, or give it a temperature. And give it examples, too. So say, hey, let's just talk about creative writing. Write a sentence as plainly as possible, as boring as possible, and you say, this is a one on the creative scale. You write a sentence that's kind of creative; this is a five. Write the world's most creative sentence ever; that's a ten. But tell the model that, tell it why. And then there you go, you've just created a rubric. So then at any point you can say, hey, let's do this as a three. Hey, let's do this as an eight. And that can help you think of, not just creative writing, that's just an easy one to think about, but there are so many different gauges or temperatures that you can put into a model. And then last but not least, show, don't tell. So sometimes you just need to paste the exact format you want instead of describing it.
I think the combination of describing what you want and then also giving it examples, that's just piling on, or doubling up, on our first technique, which is few-shot examples. But if you combine that with the kind of Show, Don't Tell, I think that's a great way, especially if you want outputs formatted in a certain way, or if you want outputs to always include XYZ. Just giving it examples of that is helpful, but then also give it the exact formula. So I would always do multiple versions of this, but the Show, Don't Tell is extremely important. All right, so we've covered a lot in this Start Here series. But let me wrap by saying this: context is everything. I've always said your data is the differentiator. Using AI doesn't matter; it doesn't. People always think, like, oh, we're using AI, so we're ahead of the curve. No, you're not. And prompt engineering doesn't matter as much anymore, because now these models by default are doing a lot of that heavy lifting that really paid off in 2023 and early 2024. So you have to stop thinking about talking to the model a certain way. And actually, especially if you are non-technical, especially if you're not a heavy AI user, that's actually a good thing. Because I think earlier on, in 2023 and 2024, a lot of non-technical people were kind of scared off of using large language models, because earlier on it was, this prompt engineering, it's everything, it's everything. And people were like, oh, I'm not an engineer. It doesn't matter anymore. You don't have to understand chain of thought or any of these other prompting techniques anymore. You can talk to it even like a lazy human, because if you have the context side down, how you talk to the model is not as important.
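To make the rubric-first "gauge" technique from above concrete, here's a small Python sketch that builds that kind of anchored-scale prompt: label a one, a five, and a ten with example sentences, then ask for a target level by number. The example sentences, the scale, and the function name are all illustrative, not a fixed recipe.

```python
# Anchor points for a 1-10 creativity gauge, with a labeled example at
# the bottom, the middle, and the top of the scale
GAUGE = {
    1: "The meeting is at 3 PM. (plain, purely factual)",
    5: "Our 3 PM meeting is where this week's plans take shape. (mildly creative)",
    10: "At 3 PM we gather to turn scattered sparks into a bonfire of a plan. (maximally creative)",
}

def rubric_prompt(task, level):
    """Build a prompt that states the gauge, then requests a level."""
    anchors = "\n".join(f"- Level {k}: {v}" for k, v in sorted(GAUGE.items()))
    return (
        "Use this creativity gauge (1 = plainest, 10 = most creative):\n"
        f"{anchors}\n\n"
        f"Now: {task}\nWrite it at creativity level {level}."
    )

prompt = rubric_prompt("announce our product launch", level=3)
print(prompt.splitlines()[-1])  # Write it at creativity level 3.
```

Once the gauge lives in the context window, "let's do this as a three" or "as an eight" means the same thing every time, which is exactly what a rubric is for.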
But let me just give you three last pieces of advice to wrap up everything we talked about. So, stop starting with a question. Instead, start with a tiny context pack and extend on it from there. Next, reuse what works. Whether you want to create skills, create GPTs, create projects, it doesn't matter, but you need to reuse what works, because you do need to put much more in on the front end, on the context side, than you think. You might need more and more context. As long as you understand and don't exceed the context window, context will not bite you in the butt. It will bite you in the butt if you don't provide enough context. But you need to be smart and think in a scalable fashion, and you need to reuse what works. And then last but not least, remember that expert-level results come from a system that you can repeat every time. It's all about this: if you want the expert-level outputs, there's a good chance, if you're doing this the right way, that you're wasting time by not reusing it. Here's an extra pro tip, especially if you're using, as an example, well, any of the models: if you have personalization and memory on, go through and ask the model, what are the things that I'm using you for that are most repetitive? What are some systems that I can build that I can reuse? You'll probably be surprised. Especially since there have been some recent updates in the last, I'd say, three to four months with the big three players there, improving the memory and remembering your past chat conversations. It's actually really good now. When it first started to debut, like 15, 18 months ago, it wasn't that good. But I'll say in the last three months, these AI chatbots, essentially their memory and what they know about you is really good, and they're able to recall past conversations. So just ask: what am I wasting the most time on, doing every single day?
What are some skills I should be building? What are some projects I should be building? What GPTs should I be using? Never feel stupid for telling a large language model, I'm not sure, because guess what? Large language models can pick up the little pieces that humans tend to miss, even things you may miss about yourself. So a great way to fill in that context is simply to ask a large language model. There's a popular trend right now of creating a skills markdown file or a role markdown file. Ask any of the models: hey, based on everything you know about me, help me build out some of these blocks, which you can then use modularly and take with you. I actually went through this process a month or two ago in all of the models, because chances are I use them for different things; they each have their own strengths. Then I combined the results into one big skills file, role file, and so on, and now I have building blocks I can use whenever I need them. All right, so that is how you get expert-level results. I hope this version of the Start Here series was helpful. If it was, and you haven't already, go to starthereseries.com. That will give you free access to our context engineering course, Prime Prompt Polish, which has been taken by more than 15,000 business leaders, plus free access to our inner circle community. You can also go to the Start Here series space inside the community and listen to and catch up on all of our Start Here series episodes in one easy place. All right. I hope this was helpful. Thank you for tuning in.
I hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
Title: Ep 710: Context Engineering: How to Get Expert-Level Outputs From AI Chatbots
Podcast: Everyday AI Podcast – An AI and ChatGPT Podcast
Host: Jordan Wilson (Owner, Everyday AI)
Date: February 10, 2026
The episode dives deep into the evolving landscape of working with AI chatbots, focusing on the concept of "context engineering." Jordan explains why prompt engineering has become less relevant with modern AI advancements and details actionable frameworks, techniques, and best practices for getting expert-level outputs from AI chatbots by shifting your attention toward context engineering. The episode is part of the Start Here series, designed to equip professionals at any level with foundational—and advanced—AI literacy.
Jordan recommends three time-tested techniques: few-shot examples, rubrics, and show-don't-tell.
| Timestamp | Segment |
|-----------|---------|
| 00:16 | Why prompt engineering is no longer the focal point |
| 08:17 | Context > prompt: The true differentiator emerges |
| 13:54 | Context window explained (AI memory/hard drive analogy) |
| 16:30 | Why AI projects fail: the importance of context |
| 22:00 | Connecting business data: apps, permissions, nuances |
| 29:02 | Six building blocks of AI context |
| 31:01 | Four layers of context: Personal, Team, Company, Market |
| 35:50 | Context vaults, modular skills, and real-world analogy |
| 41:16 | Three expert techniques: Few shots, rubric, show-don't-tell |
| 46:32 | Final advice: context is everything, systems and reuse |
| 48:52 | Removing the barrier for non-technical users |
| 49:38 | Asking your AI to help define reusable context |
Jordan Wilson’s episode is a practical masterclass on shifting from outdated prompt-tweaking to “context engineering”—assembling and layering the right information for your specific AI assistant and scenario. He demystifies the architecture of modern LLMs and platforms, provides a clear, repeatable framework for scalable results, and reassures listeners from all walks of life that the new world of AI is about data, context, and systems, not secret codewords.
This Start Here episode delivers foundational strategies, advanced techniques, memorable analogies, and actionable frameworks anyone can adopt—making it a must-listen for anyone seeking to truly leverage generative AI in 2026 and beyond.