
Loading summary
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
One of the most challenging things about AI that no one talks about is not being able to talk about AI effectively. And I think that there's three main reasons for this. Number one, the tech changes faster than literally anyone can keep up with, even someone that talks about AI every day, like myself. Two, companies don't train their employees. And three, it's kind of the summation of those parts. There's a huge gap between what people know about AI and what the models are capable of and can actually accomplish. And so much of this can be solved by talking about it. But therein lies the problem. AI is a mystery wrapped up in ever changing jargon. So we're kicking off volume two of our Start Here series where we're going to karate chop the AI jargon in the jugular and break down the basics of AI language in lingo that every business leader needs to know in 2026. All right, I'm excited. Let's get into it. Welcome to our Start Here series and Everyday AI. So this brand new series is our essential beginner and advanced AI overviews to help you launch your year strong. So if one of your big goals is to double down or to just better learn AI, this new Start Here series is for you. All right, so if you haven't already, please go to Start Here series dot com. Yeah, new URL. So there you're going to be forwarded straight to a signup page to get free access to our inner circle community. And not only will you get instant access to our Prime Prompt Polish Chat GPT prompt engineering course, but you will also get access to the Start Here series. So whether you're listening to this in mid January or sometime in March, we're going to be putting the recap of every single Start Here series episode inside the community. Inside the community there so you can listen to them all in order. All right? And if you are brand new, brand new, make sure you go listen to the first series of this, which was episode 691 or volume one in the start here series, which was generative AI how it works and why it matters in 2026 more than ever. All right, if you're new everyday AI, it's a daily live stream, podcast and free daily newsletter. But I've done like 700 episodes and the most common question I get is where do I start? That's the whole reason that we're doing this series. So let's cut the jargon down, shall we? We're gonna get straight into it, so don't worry. This isn't gonna be a, a super boring run through of random vocabulary words. One thing that I, I've realized that I kind of have a knack for over the years is almost being a, a translator, right? And like literally there is, there's an instance I was working with a pretty big company, right, doing like $20 billion in, in revenue and I was sitting with their C suite and I was sitting with their, their head of AI and it seemed like sometimes these two sides weren't speaking the same language. And that's because there's so much jargon and because like I said, kind of that capabilities and understanding gap and that's really learned to do by doing this show over the last three years, is connecting those two. So whether you are that highly technical person, you know, who understands all these terms, hopefully this will help you understand where the non technical business leaders are coming from. And for our non technical business leaders, hopefully this serves as a nice foundational look at the language and lingo you need to know to, you know, talk about it. Let's start, start where we do. AI is not going anywhere. It's here to stay. All right, we went over that in the first episode, but it's worth repeating. Chat GPT, when it launched in November 2022, it was literally world breaking technology. No one had ever seen anything like it before. Not just its capabilities, but also the sheer numbers, right? They had 100 million users within two months after launch. And ChatGPT now has nearly 900 million weekly active users. I think officially still at the 800 million weekly active users. And 40% of Americans use generative AI in just two years. For comparison, it took the Internet like five times as long to get that number of people actually using it. And I think honestly chat GPT's overwhelming popularity is one of the reasons that we have this huge capability divide, right? Do you know, as an example, you can give a single prompt to a large language model like Claude or ChatGPT, I think are probably the two best at it right now. And they can go research the web, they can, you know, personalize things for you. They can grab images, they can put together spreadsheets that you can actually download. PowerPoints that look really good, right? All in one prompt, right? Most people don't know that, right? Unless you listen to the show, then you probably do. Most people think AI is just a chatbot it's not. It is something that is completing economically valuable work at a rate higher than human experts. And recent benchmarks show that, according to blind studies. So essentially, yesterday's expert is today's beginner. And that is how quickly things change. Because your technical teams now are probably just spitting out things like tokens and, you know, MCP and RAG and vector databases and Ralph Wiggins. Right? Like it's this misunderstood jargon that just drives waste. It stalls pilots and the vocabulary shifts faster than anyone can track. To get it right, you require the right words. All right, there is a reason why this is volume two in our Start Here series. Because, number one, we set the foundation. We said, here is generative AI. Here's why it's not going away, here's why you need to invest in it. And the next thing is you have to be able to communicate effectively about it. All right? Because they require the right words. You can't evaluate vendors or manage risk alone. Right. You need to bring your technical teams together with your people leading change management. You know, with your hr, with your marketing, with your sales. You all need to be able to speak the same language. And you know this is the new business terms, right? You know MCPs and. And you know A2, as are the new KPIs. They are. So you have to be able to translate AI phrases into things that move the needle for your company. This is ultimately, at least when we look at most large language models. And I'm assuming if you're listening to the show, you know, you're either using them on the front end or the back end or both. But this is all it comes down to. There's a prompt, there's an action, and there's an outcome. All right, so humans prompt a large language model. Hopefully, you're using our prime prompt Polish context engineering way. Right? So Again, go to starthereseries.com get free access to our new and updated PPP course that teaches you how to get the most out of a large language model. But it starts with human prompting a large language model. Then the model will kind of go on with the task. Uh, you know, it'll summarize search or draft, and the model will read the prompt inside of its context window. And it works within the confines of its architecture. Right. So much of the outcome is not even decided by the prompt. It's decided by you using the right model, the right mode for the right reason, for the right task. Right. But essentially, the model reads the prompt the human puts inside of its context window. It you know, people, you need to understand things like context window, you need to understand tokens, and we're going to be going over that today. Then the model retrieves whatever data that it needs to be a helpful assistant and respond to your query, right? It's now models are agentic by nature, right? Agency. They make decisions on their own to accomplish a goal which includes using different tools in their arsenal, retrieving data on their own, you know, exploring multiple pathways and going back and saying, oh, this wasn't the right one. And then they output a result. And then the humans hopefully observe what the model did. They trace the paths that it took, they verify the information and then they use the outcomes. That's essentially right. If you're like, well, how does AI even work? There you go, right? From prompt to action to outcome. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their their employees on how to use gen AI. So whether you're looking for chat, GPT training for thousands, or just need help building your front end AI strategy, you can partner with us too. Just like some of the biggest companies in the world do. Go to your everydayai.com partner to get in contact with our team or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on GEni. And this is the foundation for every AI term out there. All right, and, and we did cover this last one. So this is the, the 30 second refresher from volume one in this series. So generative AI systems, they generate something, right? Whether that's text, images, code, video, music, right? Generative AI is not just chatbots, it is now everything. Large language models which fall under kind of the generative AI umbrella, they generate responses by predicting the next word or token in a sequence. And they do that whether they're using the GPT technology, which is what a lot of today's, most of today's large language models are built on top of. Some of the image models use Diffusion technology, which is essentially they kind of fill in the dots where large language mod use more kind of Next Next Word or next Token prediction. And most current AI products that you use, right, they're just wrapped around these other products, right? So essentially you have three big AI labs that dominate, right? You have Anthropic, you have Google, you have OpenAI and their models, Anthropic, Claude OpenAI's ChatGPT models. So the GPT models, and then you have the Google Gemini models, and you're like, oh, wait, what about Microsoft Copilot? Well, they did just start making some of their own models, but for the most part, Copilot just uses the OpenAI models. They use a little bit of Anthropics models as well inside of their Microsoft. Microsoft Copilot360 or 365. Sorry, yeah, left out. That was Microsoft Xbox 360. So the, you know, a lot of this other software that you use, most people don't know this, right? They're not out there creating their own models. They're not competing with OpenAI for the most part. You know, if you're using any AI product and there's literally hundreds of thousands of AI products out there, right? Pieces of software, for the most part, they're using one of those three. Probably there's other smaller AI labs, but for the most part, those are the three that dominate. And essentially these models that we use, they don't technically understand our words, right? The human who inputs the words, those words get translated to tokens. And the model itself, it doesn't think in words, it thinks and produces in tokens and then converts it back. So there's almost like a. You know, it's kind of like when you go into a new country, you have to convert your money, and then when you leave, you got to convert it back. It's the same thing, right? When I'm giving something to a large language model, it's con. It's converting that to tokens, right? Which helps it truly understand what I'm trying to say. Because it doesn't understand words, it converts everything to tokens. Right? Which is why sometimes how you communicate with a model in the same way of how you communicate with your teammates, to get the most out of AI, how you communicate with the model is very important. Right. An example that I've given before, the word just. Right. Context matters. The word just can be tokenized by a model. Just the single word, just seven different ways the last time I checked. Right. Something can be just. You can use just As a qualifier, I just want to talk about, right? There's like so many different meanings for words in the same way that when you're young and you're learning, whatever language that you learn, you learn, oh, different words can mean different things, and it's the words surrounding those words that give context and help us understand. So large language models, kind of the same thing, but instead of the words around, it's the tokens around. So let's talk about those tokens and context windows. This is important. Tokens are essentially text chunks, right? So they're about four characters each. So a word could be anywhere from one to, you know, up to a handful of tokens. And they're used to measure input and output length. So when you have a conversation with a large language model, like I said, everything gets put into tokens. And I'd like to think of, you know, an easy way to think about it is a hard drive, right? When you. Except let's think of a hard drive worked a different way, right? Right now if you're trying to download a file and your hard drives full, it just says, oh, it's full. A context window is like a hard drive, but the difference is it's automatically going to keep working. And instead of saying, hey, the context window is full, it's just going to kick out the first thing that you put in, right? And a lot of times the first thing that you put into a large language model is the most important, right? It's saying, hey, here's the most important information. Here's what I want you to do for me. Then if you keep working with that, you know, keeps spinning tokens back and forth with that large language model, eventually the hard drive is going to get full. Except it's not going to say hard drive's full. It's just going to forget the first things you said. That's a context window. And then parameters. Parameters are important to understand about large language models as well. Essentially, these are the neural network connections of a model. The more parameters it has, the more power, right? You can think of parameters kind of like horsepower of a car, right? There's more capability of these models that have more parameters, but there's higher computational costs in resource usage, right? So large, large language models, I think, are actually getting smaller. A lot of times companies don't say how many parameters there are, but, you know, kind of the last widely reported one we knew about was the GPT4 architecture from OpenAI that had about 2 trillion, reportedly 2 trillion parameters. So it's a lot I've always said for a long time large language models are going to get smaller. And in the long run, I think most people are going to be using more small language models. We've even seen that a ton in the last week or two. A lot of specialized models from the big companies, right. For things like translation, medical, finance. Right. The big AI labs are starting to put out very, very small models. I do think the future, you know, there's going to be, you know, OpenAI will have thousands or tens of thousands of models and they'll be spitting them out faster than we can even keep up with. Right. But anyways, parameters is capability. Now let's talk about some dorky things, all right? And if you are a beginner, forgive me, because for our more intermediate people or people who just need a reminder, we really have to understand data, right? So rag is probably a business term that you should know. And I think rag, traditional rag as we know it is going to very much change, right? I'm not going to say rag is going to go away, but rag is retrieval augmented generation. And that's where a model can retrieve relevant documents and then it grounds its responses using that retrieved content. Right. So when, as an example, you know, before ChatGPT came out, the GPT technology was available and many businesses used it and then they essentially used retrieval augmented generation to kind of wrap OpenAI's responses in their business data. So embeddings are also something important to know. So these embeddings are kind of like group of data or turns text into vector embeddings, allowing for efficient similarity search in a vector database. All is a little bit more difficult to talk about. Right. I'll probably in, in the Start Here series in our community there, I'll probably put a couple graphics, but it's, it's a little hard to explain on a podcast without doing a deep dive. Right. But essentially text is split into smaller segments or chunks before the embeddings happen. And then the chunk size and the overlap settings control how it's done. So good chunking splits information logically, so answers include the right context and details. And bad chunking breaks up context or combines unrelated info leading to wrong or incomplete answers. So this is pretty, this is much more relevant for companies that are building with ap, well, building with large language models on the back end, right. So when I say back end and front end, front end, if you're using a large language model on the front end, you're going to chatgpt.com and you're using that ChatGPT interface, that's a front end user or Claude AI, right? A back end means that you maybe have a team of developers or even now non technical people are obviously using large language models on the back end, but you're using the company's API and so essentially you're using their technology. And when you're doing that a lot of times then you might be working with rag, you know, these vector databases, chunking, all of that. So if you're a front end user, you don't have to worry about that as much. But you should know what RAG is, right? RAG is what makes those queries kind of as they go off to the large language model makes them much more accurate. It reduces the likelihood and the prevalence of hallucinations when you are first interjecting your company's data, your secret sauce, how your company thinks, right? All that important stuff that helps new hires and you know, people that have been at a company for a very long time make good decisions. It also helps large language models make good decisions. And large language models are obviously changing. You know, they started as a helpful assistant in that little chat window and there is nothing else they can really do now. Models are agentic by nature. All right, I like to say we have our old school transformer models and then we have our newer reasoning models. Even though these reasoning models are still built for the most part on the transformer technology, the generated pre trained Transformer GVT technology. Right. I like to say we had more of these, you know, old school transformer helpful assistant chatbots. Now we have more autonomous workers. Right. These models now specifically, you know, anything from Claude in the 4.5 family, so Opus 4.5, Sonnet 4.5, Haiku 4.5 and then with Google Gemini, all the Gemini's new models, you know, are hybrid or can think, right. So they can either give you that kind of next token prediction or they can think about it. Right. They can be agentic in nature or sometimes they think if they need to think about it or they just think and they're like, all right, I'm going to spit on an answ. And then with OpenAI's models, for the most part, aside from the default one, right. But you know, make sure, that's why I always say if you're using a model, make sure you're using a model that thinks because these are models that can call tools, right? They can decide, hey, this user is asking me about something that might be a little old in my training data or the data that the model is built on top of might be. Might give old or outdated or wrong answers. So an agentic model can decide I'm going to call a tool. And it can trigger functions like search, you know, going through your email, letting the agentic model interact with external services, right? You have these things called connectors, which, you know, it's kind of like rag, mini, right? Retrieval, metageneration, but a very simple version, right? It used to take a lot of smart engineers and usually 6, 7, 8 figures to get rag. Now you can get a very simple version in a couple of clicks. You know, most large language models on the front end call those something like connectors or apps, right? So you can connect things like your, like your email, your calendar, your, you know, your SharePoint or your Google Drive, your OneDrive, things like that, Salesforce, Slack, right? And then all of that invaluable like, like all of that data that your company needs and uses that you're like, man, two years ago it was so hard to, to, to work with that. Now it's there almost automatically, right? With these connectors, you also have mcp. This is the Model Context protocol. And this is a technology that was first invented and popularized by Anthropic, but the rest of the industry has adopted it, which is great. You know, one thing about the AI industry, like, it's weird, they all kind of work together, right? So even as an example, I was just talking about connectors. If, if you're a Google organization and you use Gmail and Google Drive, you can technically use those connectors inside OpenAI's products. You can use it inside Microsoft products, inside Claude's products. So in the same way that MCP kind of this, it's kind of like a, you know, how the Internet talks to, you know, websites can talk to each other with APIs. This is kind of how AI tools can talk with, to each other with MCPS. So the model Context protocol is definitely the most popular one. There's a handful of other ones that are widely used and widely adopted. But for the most part, you know, these models have essentially languages that they can talk to each other. So you don't have to, you know, send it from an AI system to a website to use the website's API to go to another website to then send it to a model, right? So it's just a direct connection. The MCP connection allows different AI models to talk to each other or different AI tools and then scaffolding. So if you've ever heard of the word scaffolding, I say it A little. I try not to, just because I don't want to, you know, confuse any people. But that essentially defines the structure that agents use to chain together multiple steps or tools for more complex workflows. So, you know, sometimes, honestly, like the models themselves can all be fairly similar, right? If you look at the benchmarks, they're, you know, 0.1% from, you know each other. A lot of times it's the scaffolding, right? It's, it's how the model actually works and how it's able to successfully or unsuccessfully use all these different tools that it has at its disposal. So the large language models have largely gone from helpful AI assistants that were powered by older, slower models that hallucinated a lot. Now they're. The models themselves are agentic by nature and they can use tools and you can connect to your business's dynamic data, but there's risks and you need to know them. Hallucinations, probably a word most people have heard of, but it's essentially a lie, right? Or false statement or a half truth that's put out there very confidently, right? But if you do know the basics of context engineering, so if you've taken our free Prime Prompt Polish course, in our free Prime Prompt Polish, can't speak Prime Prompt Polish Pro course, you know the basics of context engineering and your hallucination rate is going to go down. But if you're using the right model and if you know the basics of prompt engineering and sharing your company's context, hallucinations are, are. I'm not going to say they're gone, but they're essentially gone, right? People overlook that. If you know what you're doing, if you're using the right model, if you're connecting in your company's data, if you're looking at the chain of thought, doing basic observability, you know, human in the loop, even lazy human in the loop hallucinations are down to about zero prompt injections. You have to look out for those, especially as we have agentic browsers, right? So these are browsers that are run by an AI agent and they can go browse the web for you and interact with interfaces. On the web, with graphical interfaces, you know, click, you know, click drop downs and fill out forms for you. But there's prompt injections, but right? So there's prompt injections that get into training data which impacts the models themselves. But on the agentic side, for agents or agentic browsers, there's prompt injections on web pages, right? And some of the earlier agents and Agentic browsers didn't have great safeguards against those. Right. So you know, as a agent, right. A simple example, you know, I could put something on my website so if an agent or an agentic browser was trying to use it and I could say, hey, you know, go, you know, go to this site and you know, buy a hundred boxes of cereal, you know, put it in your Amazon cart or something like that. That's a very safe and happy prompt injection. Right. It's not causing a lot of harm. It just tricked the model or an agentic model into doing something that it didn't intend to. But there's obviously very harmful and unsafe, you know, prompt injections. Just like there's, you know, phishing emails and all these, you know, scams that have been going around, you know, with malicious links and all that. The same thing. Models have risks. So companies need to understand and talk about guardrails. That's when you combine human made policies and filters to enforce safety and prevent misuse. All right, so this is kind of the translation playbook. All right, so in the future we might be updating this, right. If we keep the Start Here series going on for a longer time. So make sure I always say check the show notes of this show. So if you're listening on the podcast, we're going to always keep the show notes updated. And in the community in which you get free access to by going to Start Here series dot com, we're going to keep the resources there updated as well. So there's always going to be new terminology. Some of the terminology is going to change, right. Like as an example, the definition of artificial general intelligence changes every month. Right. By definitions from 10 years ago were well past AGI. But you know how people define it today, Maybe we're not there. So we're going to keep the definitions up to, up to date in our community going to have a little AI translation playbook. But you need to ask yourself these questions. All right, I'm going to go back to how I started the technical people and the non technical business leaders. These are the questions that you need to be asking yourself out loud when it comes to using AI and understanding it and hopefully closing that capability and education gap. Number one, what's the problem we're trying to solve and what's our current cost associated with that solution? Right, right. People overlook that. I'm like, you know, when people are trying to, you know, calculate ROI on geni, right. I, I always ask, okay, what's your current cost on this? And they're like, okay, well you know, we have this piece of software and no. How much time is spent? No clue. You have to understand what your current costs are. Next questions. What model data tools and who approves? You have to go through those next, how can we responsibly run the fastest sprint possible with measurement and evaluation with proper, you know, proper qa? You need to go fast, you have to be careful, but you have to go as quickly as possible while still being careful while you're implementing AI. Because if you're too slow and if you take a pilot slow and if you're not building modularly right now, all of a sudden you did a year long pilot. Oh wait, that original model you started with is that was the best model in the world, is now the 50th best model and it's bad. And now you're putting yourself at a disadvantage. You can't treat AI implementation in the same way that we've treated tech implementation for the past few decades. You need to ask what are the hallucination risks, the security risks and the costs. And then last but not least, you always need to require receipts. The observability, traceability, expert driven loops. You don't need human, lazy human in the loop logs, defining the scope and getting your evaluations first. You have to understand what you're going after to be able to measure it and go and you know, understand the evaluations in the process. All right, so now that's it. You now have the foundation for the rest of the start here series. All right, we went over what the heck is generative AI? How do large language models work and why it's the future of work. And now you have the language and the lingo down. All right, so what I want you to do, sit down with your people. You need to get a 30 day plan. You need to be able to sprint. You need to understand how you're going to deal with mistakes. And then later you can go into the rag, deep dive, roi, security scale, all this stuff. But you have to set here's where we are, here's where we want to go, figure out what's going to happen in between. But you have to keep up to date with the language and the lingo, otherwise your plan is going to fall apart. All right, one way it's not going to fall apart. Go listen to episode 628. Very much so related to this episode. So that is what's the best large language model for your team? Seven steps to evaluate and create ROI for AI. All right, I hope this was helpful. Now you know the AI without the jargon. The AI language every business leader needs to live by in 2026. All right, if this was helpful, please go to start here series.com that's going to take you and give you free access to our inner circle community where not only can you go and check all of our Start Here series on the web, right? Or if on the mobile app, if that's what you want to do, but you're going to be signed up for our free daily newsletter as well and you can go take our free Prime Prompt Polish Context Engineering course. All right, thank you for tuning in. I hope to see you back tomorrow and every day for more Everyday AI. Thanks all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit your everydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
This episode kicks off Volume 2 of the "Start Here" series, focusing on breaking down the complex jargon of AI into practical, essential language for business leaders. Host Jordan Wilson draws from his extensive industry experience to highlight the importance of bridging the gap between technical teams and non-technical executives. The goal: Enable everyone—not just data scientists—to confidently communicate about, implement, and benefit from AI in the workplace.
“AI is a mystery wrapped up in ever changing jargon.” (01:47)
The “Prompt-Action-Outcome” Framework:
Importance of using the right model, for the right purpose, and understanding terms like context window and tokens.
“So much of the outcome is not even decided by the prompt. It's decided by you using the right model, the right mode for the right reason, for the right task.” (10:58)
| Term | Definition | Business Relevance | |-----------|--------------------------------------------------------------------------------------|---------------------------------------------| | RAG (Retrieval Augmented Generation) | Model grounds its response with relevant fetched data—better accuracy, less hallucination. | Vital for company-specific deployments. | | Embeddings & Vector Databases | Numeric representations for similarity search and matching; enables models to "find" relevant info fast and accurately. | Powers advanced internal search and document handling. | | Chunking | Breaking text into logical segments before they’re embedded, crucial for context and accuracy. | Impacts the quality of answers. | | Back-end vs. Front-end AI | Using LLMs via developer APIs (back-end) vs. consumer interfaces (front-end, e.g., ChatGPT page). | Non-technical vs. technical user experience. | | Connectors | Simple, click-to-integrate business data sources like email or Slack into AI workflows. | Drastically simplifies integrations. | | MCP (Model Context Protocol) | Standard for models/tools to communicate—industry-wide, facilitates interoperability. | Fast, safe deployment across stack. | | Scaffolding | The workflow structure for chaining multi-step/model processes. | Key for automation and autonomy. | | Agentic Models | Newer LLMs that can reason, make decisions, use tools, and interact with external systems. | Higher autonomy, broader use cases. | | Hallucinations | Confidently wrong answers by AI; can be mitigated by context engineering and the right design. | Business risk; requires ongoing vigilance. | | Prompt Injections | Malicious or unexpected instructions embedded in inputs/web pages. | Major security concern as AI agents gain web browsing. | | Guardrails | Human and algorithmic policies to enforce safety and prevent abuses. | Mandatory for enterprise scale AI adoption. |
Bringing technical and non-technical teams together:
Key Questions for Effective Adoption: (29:06)
This episode equips business leaders and teams with the foundational AI language and translation skills needed to strategically engage with AI—today and in the rapidly changing landscape of 2026.