AI Explored Podcast
Episode: “Rethinking Prompting: Getting AI to Work for You”
Host: Michael Stelzner
Guest: Jordan Wilson, Founder of Everyday AI
Date: February 10, 2026
Episode Overview
In this episode of AI Explored, Michael Stelzner welcomes AI strategist Jordan Wilson to challenge conventional wisdom around AI prompting. They dive into context engineering — a sophisticated, step-by-step methodology that unlocks the full potential of AI. The discussion emphasizes moving beyond simple prompting towards collaborating with AI as a thought partner, using Jordan’s Prime-Prompt-Polish framework to achieve scalable, reliable results for marketers, creators, and business owners.
Key Discussion Points & Insights
1. Jordan Wilson’s Journey Into AI (02:21-04:15)
- Jordan started as a journalist and marketer. He foresaw that technology would eventually make certain marketing skills obsolete.
- Encountered GPT-based tools before ChatGPT and quickly realized their professional potential:
- “When you start to really understand and use this technology, it’s just as good as the room full of us human professionals.” (03:22, Jordan Wilson)
- Decided to pivot his career towards AI, ultimately founding Everyday AI to help others adapt.
2. The New Reality of AI Tools in Business (04:38-05:36)
- Jordan spends his days immersed in AI, teaching and experimenting non-stop, yet he still struggles to keep pace with the field’s rapid evolution.
- AI now permeates every digital tool, making fluency with these systems essential for modern marketers.
3. Misconceptions About Prompting (06:28-08:13)
- Prompts are not magic bullets: Many users expect instant value but use AI like a smarter Google, which is a waste of its capabilities.
- “It’s like you have a Ferrari, but you just got the Ferrari to shield yourself from the rain... If all you’re doing is prompting, going in there and you know, trying to find one prompt, that’s like having a sports car and not driving it.” (07:00, Jordan Wilson)
- Using AI effectively requires more than isolated prompts; it’s about leveraging context and iterative collaboration.
4. The Power—and Uncomfortable Truth—of Frontier Models (08:36-10:13; 10:41-12:38)
- To succeed with AI, users must accept that large language models (LLMs) can outperform seasoned professionals in many tasks.
- “First, if you really want to get the most out of a model, you have to go through that uncomfortable realization that, yeah, this thing’s better than me.” (08:55, Jordan Wilson)
- Effective use comes from a symbiotic relationship: the model makes you better, and you make the model better through thorough context sharing.
5. Choosing the Right Model for the Task (12:38-17:34)
- Not all AI models are equal; results depend on matching the most appropriate model to the task at hand.
- “It might do very well in Claude Opus 4.5 and it might do absolutely terribly in GPT 5.2... you have to understand how these models work and use the right model or mode at the right time.” (15:20, Jordan Wilson)
- Experimentation is key, as each AI system has unique strengths and limitations. Relying on AI defaults could yield subpar outcomes.
6. Introducing the Prime-Prompt-Polish Framework (17:34-19:31; 19:39-23:50)
- Context engineering is the foundation for effective AI collaboration.
- Jordan’s Prime-Prompt-Polish approach:
- Prime: Comprehensive context setting before asking for output
- Prompt: Once primed, issue a clear request for the desired output
- Polish: Iterate and refine the AI’s results with feedback and examples
The “RefineQ” Method for Priming (21:52-23:50)
- R: Assign a Role to the AI (e.g., marketing strategist)
- E: Provide clear Examples of input and expected output
- F/I: Fetch relevant external Information and Insights
- N: Narrate audience details or problem context
- E: State Expectations clearly (what a good answer looks like)
- Q: Explicitly instruct the AI to ask clarifying Questions before responding
- “All priming is... a conversation as if you had a consultant from a big four consulting company working with you on whatever that project is.” (21:21, Jordan Wilson)
- An essential mindset shift: Don’t ask for an output immediately; instead, ensure the model fully understands the context via multifaceted instructions and back-and-forth Q&A.
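The RefineQ checklist lends itself to a reusable template. Here is a minimal sketch of a priming-prompt builder; the function name, parameter names, section wording, and the example values are illustrative assumptions, not Jordan Wilson’s exact templates.

```python
# Minimal sketch: assembling a RefineQ priming prompt from its six parts.
def build_priming_prompt(role, examples, information, narrative, expectations):
    sections = [
        f"Role: Act as {role}.",                                         # R
        f"Examples of input and expected output:\n{examples}",           # E
        f"Relevant information and insights:\n{information}",            # F/I
        f"Audience and problem context:\n{narrative}",                   # N
        f"What a good answer looks like:\n{expectations}",               # E
        "Before responding, ask me any clarifying questions you need.",  # Q
    ]
    return "\n\n".join(sections)

# Illustrative usage with made-up inputs:
priming = build_priming_prompt(
    role="a senior marketing strategist",
    examples="Input: product launch brief -> Output: three positioning angles",
    information="Most of our leads come from the weekly newsletter",
    narrative="Audience: small-business owners new to email marketing",
    expectations="Concrete, step-by-step advice with no generic filler",
)
print(priming)
```

Note that the Q step comes last on purpose: the session stays a conversation, with the model asking questions before it produces anything.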
7. Memory Recall Before Prompting (31:30-34:09)
- LLMs have limited context windows and lose detail as a session progresses.
- Before prompting, ask the AI to recall and summarize key context, ensuring it doesn’t forget earlier, critical inputs.
- “The language that I usually use is: ‘Please recall every single important fact determination that may be helpful as if you were giving instructions… to a large language model that had no history or understanding of this conversation.’” (32:14, Jordan Wilson)
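This recall step can be sketched as a small two-call helper. `ask_model` is a hypothetical callable (messages in, reply text out), and the role/content dicts follow the common chat-message convention rather than any specific vendor’s SDK; the recall wording stays close to Jordan’s phrasing from the episode.

```python
# Wording adapted from Jordan Wilson's recall prompt in this episode.
RECALL_REQUEST = (
    "Please recall every single important fact or determination that may be "
    "helpful, as if you were giving instructions to a large language model "
    "that had no history or understanding of this conversation."
)

def recall_then_prompt(ask_model, conversation, next_prompt):
    # Step 1: have the model restate the key context gathered so far.
    summary = ask_model(conversation + [{"role": "user", "content": RECALL_REQUEST}])
    # Step 2: issue the real request with that summary attached, so critical
    # earlier inputs survive even if they fall out of the context window.
    return ask_model([
        {"role": "user", "content": f"Context summary:\n{summary}"},
        {"role": "user", "content": next_prompt},
    ])
```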
8. The Polishing Phase (36:13-38:28)
- After the initial output, the model is further improved through feedback: highlighting what’s good, what’s not, and why.
- Use the “input, output, good/bad, why” structure to guide the AI towards more refined, contextually accurate results.
- Example: Correcting model suggestions based on personal or business context so future outputs are more tailored.
- “[Polishing] is the difference between having a kind of untrained intern for you to work along with versus someone that knows everything in your head.” (37:20, Jordan Wilson)
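A polishing turn can be drafted as a simple feedback template built on that “input, output, good/bad, why” structure. The exact wording below is an illustrative assumption, not a quoted template from the episode.

```python
# Sketch of one polishing-phase feedback message.
def polish_feedback(input_text, output_text, verdict, why):
    return (
        f"Input I gave you:\n{input_text}\n\n"
        f"Output you produced:\n{output_text}\n\n"
        f"Verdict: {verdict}\n"
        f"Why: {why}\n\n"
        "Apply this feedback to all future outputs in this conversation."
    )

# Illustrative usage with made-up content:
msg = polish_feedback(
    input_text="Draft a welcome email for new newsletter subscribers",
    output_text="Hi there! Thanks for subscribing...",
    verdict="bad",
    why="Too generic; our brand voice is direct and slightly irreverent",
)
print(msg)
```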
9. Limitations of AI Memory and the Need for Manual Maintenance (38:59-44:13)
- Built-in AI memory can degrade or change due to model updates and lack of transparency.
- It’s critical to maintain your own offline copies of instruction sets and prompts; regularly test and update them as LLMs evolve.
- “You never want to be over reliant on certain features that can change under the hood without you knowing about it.” (39:09, Jordan Wilson)
- Assign a person to review and update prompts and outputs to keep pace with shifting AI capabilities.
- “If you’re not properly scoping and testing things, and if you’re instead in a set and forget mindset, it can be bad.” (41:19, Jordan Wilson)
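Keeping those offline copies can be as simple as dated text files you control. The sketch below is one illustrative way to do it; the directory layout and function names are assumptions, not anything prescribed in the episode.

```python
from datetime import date
from pathlib import Path

def save_prompt_version(library_dir, name, text):
    """Write a prompt to its own dated file so earlier versions are kept."""
    folder = Path(library_dir) / name
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{date.today().isoformat()}.txt"
    path.write_text(text, encoding="utf-8")
    return path

def latest_prompt(library_dir, name):
    """Load the most recent saved version of a prompt."""
    versions = sorted((Path(library_dir) / name).glob("*.txt"))
    return versions[-1].read_text(encoding="utf-8")
```

A weekly review then becomes concrete: re-run each saved prompt against the current model and compare the output to what it produced before.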
10. The Superhuman Advantage and Ongoing Responsibility (44:13-45:52)
- While the process is complex, it grants individuals and small teams remarkable capabilities if regularly maintained and iterated upon.
- Those who systematize and actively refine AI workflows will outpace competitors who rely on “set and forget.”
- “If you want to retain your enhancement, you have to do maintenance and it’s just not a set and forget kind of thing... If we don’t have a system like what Jordan is talking about, we are eventually going to be outpaced by someone who’s keeping up.” (44:13-44:55, Michael Stelzner)
Notable Quotes & Memorable Moments
- On Prompting Like Google Search:
  “The absolute worst way to use it is if you’re, you know, just trying to get a quick answer or a quick, better written blog post... It’s like having a sports car and not driving it.” (07:00, Jordan Wilson)
- On AI Being Better Than Experts:
  “This thing’s a better writer than me if you know how to use it correctly.” (09:03, Jordan Wilson)
- On Model Selection:
  “It might do very well in Claude Opus 4.5 and it might do absolutely terribly in GPT 5.2... you have to understand how these models work and use the right model or mode at the right time.” (15:20, Jordan Wilson)
- On Keeping Models Up-to-Date:
  “You never want to be over reliant on certain features that can change under the hood without you knowing about it.” (39:09, Jordan Wilson)
- On Ongoing Process:
  “You need to have someone that’s going in there at least once a week, going through your different use cases, scoping everything, looking at the chain of thought... you need to constantly be looking under the hood and updating what’s working.” (42:34, Jordan Wilson)
Timestamps for Key Segments
- [02:21] Jordan’s Origin Story with AI
- [06:28] Misconceptions About Prompting
- [08:36] Benefits of Treating AI as Coworker/Thought Partner
- [12:38] Importance of Model Selection
- [17:34] Context Engineering and the Prime-Prompt-Polish Framework Introduced
- [21:50] Detailed Walkthrough of RefineQ Priming
- [31:30] Memory Recall Step
- [34:46] The Prompting Phase Simplified
- [36:13] Polishing Phase Example
- [38:59] Challenges with AI Memory & the Case for Manual Maintenance
- [44:13] Why Maintaining Your AI Systems Matters
Takeaways for Listeners
- AI is only as powerful as the context and clarity you give it—move from “one-shot” prompts to a collaborative, training-based approach.
- Use the Prime-Prompt-Polish framework (especially the exhaustive “prime” phase) to set up powerful, repeatable AI “employees” for your business.
- Don’t rely on default models or features—choose the right tool for the job, maintain detailed instructions, and regularly update as the landscape shifts.
- The payoff is a scalable, superhuman marketing operation, if you’re willing to put in the strategic effort.
Further Resources
- Jordan Wilson’s resources—including the Everyday AI podcast and live shows: youreverydayai.com
- For episode notes and more, visit: socialmediaexaminer.com/aipod
