
A
This is the Everyday AI show, the Everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
You know what most companies pitching AI agents aren't actually doing? Deploying AI agents. Sage just did the opposite. At Sage Future, they expanded their AWS collaboration to put real AI agents into real finance workflows built on Amazon Bedrock AgentCore and available in the AWS Marketplace. So that means enterprise grade AI for your finance team without the IT headache. Check it out at sage.com.
C
In the past 24 hours, two of the biggest players in the AI space, OpenAI and Google, both launched updated versions of their simple drag and drop agents that work for you, with your context, around the clock. And that got me thinking of the common features versus benefits methodology when it comes to marketing. If you've never heard of it, it's pretty simple. Features describe the technical facts or the specs of what a product does, and, well, the benefits explain the personal value the human gets from using said product. And in AI we've seen a similar features versus benefits narrative take shape over the past few years. The feature: large language models are smarter and faster than humans. The benefit: when used correctly, humans can be more productive. But the feature side has completely exploded over the past two months, and the benefit side is still being written. Stick with me here. So as large language models have become agentic by default, seemingly overnight, and as capable as humans, there's a new benefit paradigm for AI agents that has flown completely under the radar. And I know that this is the next big trend coming. It doesn't have a name, but I'm going to go ahead and name it now and explain the concept. I'm calling this Scheduled Agentic Context Carry, or SACC, and I think the companies taking the time to understand and iterate on this new concept now are going to be the ones crushing their year end goals and KPIs in quarter four. So let's unwind this kind of new concept together, because I think understanding this now is one of the most important investments you can make on your AI journey this year. So let's start there with our Start Here series. If you're new here, welcome to Everyday AI, and this is our Start Here series. But let's first start with the big picture of what's going on. Anthropic, OpenAI, Microsoft and Perplexity have all shipped scheduled agents just this spring, and most business leaders are still using AI just like a chatbot.
I'm going to go in and I'm going to, technically, reactively ask an AI chatbot for something and probably have to re-explain a lot and waste a lot of time. But there's been a quiet workflow pattern emerging beneath every one of these recent product launches. And that's why we're talking about this new concept of scheduled agentic context carry. So on today's show, that's exactly what we're going to go over, and here's what you're going to learn. You're going to learn what agentic context carry actually means, why you've probably never heard of it, but why it absolutely matters. You're going to know how this hidden workflow bridges chatbots and the fully autonomous AI future. You're going to understand the timing of all these things coming together, and specifically these now huge 1 million token context windows that have quietly changed everything this spring. And you're going to know the exact three steps to deploy this pattern inside of your business today. All right, let's get started. Welcome to Everyday AI. My name is Jordan Wilson and this is the Start Here series. So after 750 plus podcasts, I never had an answer when someone was like, I'm new, where do I start? Well, you start here. Our Start Here series is an ongoing effort to help business leaders both better understand trending and emerging concepts, but also for those who are brand new to get caught up. So I'd recommend you start with episode one of the Start Here series and listen in order. But go ahead and listen to this one and then you can go backtrack. But make sure you go to starthereseries.com because, well, it's going to make it much easier to do that. That's going to give you free access to our inner circle community.
Right now there's no other way for the general public to sign up except starthereseries.com, and then inside the Start Here series space, there will be an updated Spotify playlist where you can go listen to all of the Start Here series very easily, in order, in a dedicated playlist. All right, and if you missed our last Start Here series episode, we were in volume 20, so this is volume 21. We talked about AI change management that works: five moves the top 5% make. All right, but let's get into this concept of agentic context carry. And y'all, every single major AI lab and the big third party players launched something. So the big four, right? That's Anthropic, Microsoft, Google, OpenAI, and then even Perplexity and, you know, OpenClaw technically fall under this category, but literally everyone launched something, and it's all very timely. So I did mention just the past 24 hours, with big announcements from Google Gemini at their Cloud Next conference and then with OpenAI's new agents that we're going to talk about here in a minute. But also Claude Code routines, right, that can bring automated scheduled agent runs to your desktop. So it is kind of like maybe this OpenClaw movement that happened, you know, really in February and March of this year actually kind of forced the hand of all of the big companies to say, okay, it seems like, to oversimplify it, we need an AI agent that can run on a cron, right? Run on a schedule, where someone can go in and say, hey, AI agent, at this time every single day, I want this to happen. So we had the Claude Code routines, which I absolutely love, that run on your desktop. Similarly, OpenAI in their Codex platform just added scheduled work and persistent memory two days after that Claude Code routines announcement in mid April. And then we also just got wind that Copilot Cowork officially launched in Frontier. So you have essentially all these scheduled agent platforms slash co-working platforms, right?
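[Editor's note] To make the "run on a cron" idea concrete, here is a toy Python sketch of just the scheduling half: nothing vendor-specific, and the function names (`next_run`, `due_runs`) are made up for illustration. It answers the one question every scheduled agent platform has to answer: given a daily trigger time, which runs fall inside a window?

```python
from datetime import datetime, timedelta

def next_run(after: datetime, hour: int, minute: int = 0) -> datetime:
    """Next daily trigger at hour:minute strictly after `after`."""
    candidate = after.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=1)
    return candidate

def due_runs(start: datetime, end: datetime, hour: int) -> list[datetime]:
    """All daily triggers that fall inside the window (start, end]."""
    runs, t = [], next_run(start, hour)
    while t <= end:
        runs.append(t)
        t = next_run(t, hour)
    return runs

# An agent scheduled for 2:00 AM daily: how many runs land in a 3-day window?
window = due_runs(datetime(2026, 4, 20, 12, 0), datetime(2026, 4, 23, 12, 0), hour=2)
print(len(window))  # 3 runs: April 21, 22, and 23 at 02:00
```

Real platforms add trigger-based runs on top of this (e.g. "when a certain email arrives"), but the time cadence piece really is this simple underneath.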
So like Claude Cowork is the big one. Microsoft Copilot uses essentially the Claude Cowork technology because they are an investor in Anthropic. So you have those kind of two pieces coming together: you have these co-working kind of elements that allow for scheduling and bring all your context, and then you have these scheduled agents, and it's all literally exploded out of nowhere. And although, you know, this may technically be a more timely episode with all of these things happening now, the reason why I'm doing it in the Start Here series, whether you are listening to this in April or you're listening to it, I don't know, in the year 2027, is because I think that this is going to be a noticeable pivot in how the enterprise starts to interface with AI agents. Because here's the reality, right? We've been hearing since probably late 2024 that, oh, it's the year of AI agents, and it didn't happen, and in 2025 it didn't happen. But I think we've now come to that realization in 2026 in a certain way, because we've noticed that the fully autonomous AI agents, where you just give them a goal and then they go off and run on their own, are not as reliable as we'd like, mainly because of safety concerns, guardrails, et cetera. So I think that this new kind of co-working, scheduled agents is the stepping stone to where we will ultimately be when we have, you know, more of like, oh my gosh, this is artificial general intelligence, we have AGI, because I give an agent a goal and it doesn't need me for anything, right? We're not there yet. So we are in this in-between phase. I don't know if this phase is going to last for a couple of quarters or a couple of years, I'm not sure. But it has definitely taken shape so quickly over the last few weeks. And that's led to, again, this feature versus benefit question. Because when I think of traditional large language models, right, essentially once companies understood their utility, the immediate benefit was, oh, more time, productivity, right?
We can do more, you know, do more or save time. But what about for the actual agents, right? I think when we thought about AI in the feature versus benefits kind of paradigm, we thought about the benefit on the human, but what about the benefit on the AI system? Because as they start to get agentic and more human-like in how they can work, well, they start to benefit as well. And that benefit is the agentic context carry that we're talking about today. And this is huge. And then, like I said, just in the past 24 hours, right, we had Google launch their Gemini enterprise agent platform and OpenAI launched their workspace agents inside of ChatGPT. So we're going to be going over that a little bit more on tomorrow's show, and FYI, we do kind of go over some good examples of this context carry on yesterday's show on Codex. All right, so if you missed that one, yeah, I'll plug both of these shows, so make sure to go listen to the Codex kind of Super App preview, episode 762, and then make sure to join us tomorrow for more on these two recent launches. But here's a little bit on what the ChatGPT scheduled agents can do, just so we can kind of set the stage for why this context carry is extremely important. All right, so here's how OpenAI says it in their recently released blog post. They say build once, scale across your team, right? Create an agent once, share it with your team, work that runs itself. So you can run agents on schedules to handle recurring tasks and then keep the work moving across tools, right? So the agents use your tools to gather information and take action without needing step by step guidance. So now, yeah, I had to do a little wind-up here, because I wanted everyone to really understand how big this is and how quickly it's happening before I really unwrapped this kind of concept that I coined, right, of agentic context carry. Yeah, it's so new. Even I, you know, don't just want to say "SACC," right?
But scheduled agentic context carry. So this means, right, I want to break down each of the four words and how they work together. So scheduled obviously means that the agent wakes up or runs on a cadence, not only when prompted. So that can be a time cadence, like we just talked about in the ChatGPT agents or, as an example, in Claude routines, or it can be a trigger, right? When you get a certain type of email, then an agent is going to run. All right, that's what scheduled means. Next, context carry. That means your memory, either your personal memory and preferences, your company's data, all of those things. Dynamic data pipelines, tool access, all those things that persist between runs. All right? And that is the big piece there. That is the context and the carry. And obviously these agentic models all by default are able to do this, right? So the models themselves, they can call tools, right? They can, you know, call on these connected apps, on these MCP servers, where you can bring in, you know, thousands of different apps that you use. But the actual carry, that's what's important, because as we've gotten these new context windows, which I'm going to get into a little bit more, that's what makes this all possible: the ability now for an agent to go out and learn something, right, about you or your company without you having to teach it. So let's just say you have a scheduled agent, right? You give it information about your company, your company's goals. Maybe you're looking to acquire, you know, a new client or a new customer. But the industry, whatever industry you're working in, is moving fast. Let's say you have an agent that goes out, you know, every Sunday night and pulls the industry's most recent white papers, industry news, et cetera. And, oh, all of a sudden, when you didn't know it, it found out that you have a, you know, huge new potential buyer moving into your state that wasn't there before, right?
And the reason that this can happen is because it's able to carry the context with you. All of those documents that you share, preferences, the memory of your, you know, recent chats, but also that it can run in essentially the same context window over and over. So not only can it carry in the context that you give it, according to your information, but also the persistent memory of that actual conversation. So it's going to know, right? If you have a run that goes every single day, it is going to carry that trend line with it. So it does start to turn into, oh, you know, it's like when you hire a junior employee. After a couple, you know, days on the job, they kind of start to get it. After a couple of weeks on the job, you're like, okay, it's picking up momentum. The same thing. And that's, I think, why this is a very exciting time in AI. And this, I think, is that bridge between, you know, the simple chatbots and the fully autonomous agent. Because as much as every, you know, OpenClaw aficionado wants you to believe, we are not yet at the point where we have true autonomy in agents, right? Where you give them a goal and they can safely go execute that goal without constant human intervention. Is it possible? Sure. Right. If you have a very well defined goal, if you have strict guardrails, and if you are using it in a narrow capacity. I don't think we have autonomous general agents. I think we have autonomous narrow agents that can do one very, very simple task, or a goal, if you give it something very, very specific, right? But what happens if the guardrails are changed? What happens if the industry has changed? What happens if your data is corrupted? Right. An autonomous agent would in theory be able to figure those things out. We don't have that right now. And I think that this agentic context carry is that stepping stone that's going to help us get there. I've also, you know, talked about this a little bit before.
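[Editor's note] The "carry" half of the concept can be sketched in a few lines: each scheduled run loads what prior runs learned, does its work on top of that, and persists the new findings for the next run. This is a deliberately simplified Python mock-up, not any platform's actual API; the `agent_memory.json` file and `run_scheduled_agent` name are invented for illustration, since real products (Claude routines, ChatGPT scheduled agents) manage this persistence internally.

```python
import json
from pathlib import Path

MEMORY = Path("agent_memory.json")  # hypothetical file-backed store, for illustration only

def load_context() -> list[dict]:
    """Everything prior runs learned; empty on the very first run."""
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

def run_scheduled_agent(new_findings: str) -> str:
    """One scheduled run: read carried context, work, persist what was learned."""
    context = load_context()
    # The working 'prompt' a run sees includes everything carried from prior runs.
    prompt = "\n".join(entry["finding"] for entry in context) + "\n" + new_findings
    context.append({"run": len(context) + 1, "finding": new_findings})
    MEMORY.write_text(json.dumps(context, indent=2))
    return prompt

run_scheduled_agent("Mon: two new industry white papers published.")
tuesday = run_scheduled_agent("Tue: a large buyer is relocating into our state.")
print("Mon:" in tuesday and "Tue:" in tuesday)  # True: Tuesday's run carries Monday's context
```

That last line is the whole point: the Tuesday run sees the Monday trend line without a human re-explaining anything.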
Previously I had called it kind of like, you know, the human AI duct tape. It's all those intermediate steps in between that a human had to do. Right. If you run something in, you know, deep research inside ChatGPT, well, now I have to copy that, I have to go put it in a doc, and then I have to upload it to this project folder, as an example. That is where this context carry and the larger context windows start to erase all of that manual, you know, human AI duct tape, those steps that us humans working with multiple AI systems would have to continually make. Because now these agents also have write ability, right? Whereas before, you know, three to six months ago, they didn't have the ability to write to your Google Docs, they didn't have the ability to send Gmails, right? Now they do, right? If you give them the permissions, and if you're feeling, you know, spicy and you want to roll the dice. But that agentic context carry layers the schedule and the memory over that. But no one is really talking about this. I don't know, maybe I'm too dorky and excited about where we are. But the reality is, I think that there's been this... quick pause.
B
Want to know where AI ROI actually lives? It's not the flashy demo, it's the work nobody wants to talk about. At Sage Future, Sage tackled the painful side of finance system rollouts: implementation, financial data migration, chart of accounts mapping, configuration, the work that stalls real transformation projects. Sage is using applied AI to make that faster, more accurate and auditable, with humans staying in control. That's responsible AI doing real work that actually moves time to value. If your finance team is staring down a migration, check out Sage at sage.com.
C
narrative right now. And I'm actually going to call out a recent tweet I saw on X. What do you call a tweet on X anymore? I don't know. This is why I call it Twitter. You can't verb X. But there was a recent viral tweet out on Twitter, right? So ChatGPT had a recent integration with Starbucks, right? Someone said, oh, you know, why do all of these apps exist? It doesn't make sense, right? Because it's going to take me, you know, two minutes at the absolute fastest to make an order on this ChatGPT Starbucks app integration. I'm just using this as an example. Throw in any, you know, business app or connector that you're using inside Gemini, Copilot, Claude or ChatGPT. But this kind of viral incident with Starbucks, right, it's like, okay, well, it takes two minutes to order via the ChatGPT app, but if I go into the actual Starbucks app, I can do it in 20 seconds, right? So this is pointless, right? But I think people are missing the point, because it's not just about one app. It's not just about ChatGPT interfacing with one app. Because in the new Agent Builder, right, as an example, the brand new Agent Builder, you can go create an agent all by hand. I'm literally clicking around as I do this now. You can connect 20 apps, right? Your Gmail, your Slack, your Notion, your Teams, your Outlook email, your Google Calendar, Google Drive, whatever, right? MCP servers, you can do all those things. You can connect agent skills, you can upload files, you can manage the memory. Right. So it's not just about, oh my gosh, you know, using a single app to do a task is so much slower than it is to just do it individually in that platform or on that website. That's not what it is. It's about eliminating that human AI duct tape. It's about, you know, the 30 small human steps in between that are required. That is the context carry.
Us humans have been the ones carrying the context, because AI agents didn't have the ability, they didn't have the tools to do that. Now they do. That's the thing. Yes, I can much more quickly go open my Gmail, read an email and respond to it than a connection in Gemini, ChatGPT, Claude, etc. can, right? But what about when there's a Google Doc that goes with it? I have to look at my calendar. Oh, there's actually three or four different emails, right? Oh, there's that file in my Drive. There's a Slack conversation about that. Now all of a sudden, yes, it might be quicker to do all of those small tasks individually in those apps or on those websites, but when you have to carry the context yourself, manually, as the human, that's where the time starts to add up, right? This has essentially been the mundane nature of knowledge work in front of a computer over the past 20 years, you know, as SaaS and applications have exploded. But that's what we do. And that is where the true benefit of agentic context carry comes in, because us humans no longer have to do the duct tape and have to remember and have to bring that context from app A to app B to app C to storage D to messaging platform E and F. The agent does it all for us in one swoop. So yes, it might take you five times as long to accomplish a goal inside of an AI agent, but that's not counting the human error, the human lookup, the human retrieval that has to happen every single step of the way in between. That's the big unlock here, y'all. But also, I don't know, I start to forget things fairly quickly. Maybe it's just me. Like, I literally use, you know, this concept of agentic context carry all the time, right? I was actually walking to my office, right? Sometimes I record, you know, from my kind of home office. Sometimes I, you know, record from my actual office. And, you know, I'm working on a cool partnership here with a group.
Sage. And I had a couple different email threads with travel, there were some Google Docs, there were all these things, and I'm like, ah, which... you know, and I have multiple email accounts, right, certain forms go different places, and I'm like, my gosh, this is going to take me a long time. Instead, right, just use, in this instance, Claude. It went and carried that context. But what about when you can schedule those things, right? And say, hey, every day at, you know, 2:00am I want you to go through my email, my calendar, Notion, Slack, all of these things. Yeah, it might take the agent longer to do that than if you were to do it yourself, but it's going to do it on its own schedule and it's going to carry the context from app to app. So that's where the new breakthrough comes. It's the capabilities that have made the cross-app technology possible. So here's where the unlock and the timing all come into play. Right. I love Venn diagrams. Right. This is where the capabilities and the technology and the need have all overlapped with this perfect timing. So this is, you know, if you think of like cowork or agentic scheduling, the features, and then the context window all coming together and exploding at the same time. Right. So Claude, Anthropic, has really led the way on this. So now they have that 1 million token context window by default. Right. Codex is a little bit, you know, behind, although there is an experimental 1 million token context window in the command line interface. But in the app I believe it's 258,000 tokens. So what does that mean, right, if you're not too technical? That just means, right, the free version of ChatGPT, last time I checked, I didn't check the free version in a while, but let's just say in 2025 it was about an 8,000 token context window, right? So now you're looking at 1 million. So do the math there. Or, you know, going to 250,000 essentially.
Now AI models and AI agents can remember things over a much longer period of time, right? Whereas before they essentially had very short term memory. You would, you know, especially if you were on a free plan or, you know, early in 2024, 2025, AI models forgot things very quickly, especially when it came to handling your data. So if you upload a file, you're working with it, right, maybe updating a job description and doing some research on recent law changes to make sure that your, you know, job description reflects those or something like that, right? And it's going well, and all of a sudden it's, oh, wait, it's done, right? That's because it ran over the context window. Context windows used to be very small, but now as they become bigger and bigger and bigger, right, essentially you're working with an AI model that has a bigger brain that's able to carry the conversation for longer. So now, you know, this kind of trending concept that's happening, it's not just going in and working with one app or one connector, right? It's bringing in all of the different tech stack that you have to use on a daily basis, that your company has to use on a daily basis, and eliminating all of those manual steps in between. Because the reality is, just like a large language model, us humans, we have a context window as well. How much time do you spend, right? Even pre-AI, it was obviously way worse. But sometimes you spend just as much time trying to either track, remember, or find certain information, where it lives within your kind of SaaS database, as it actually takes to create that new business value once you do find it, or reply to a certain email or to finish a certain deck or a project or fill out a spreadsheet, right? Sometimes you spend as much time just trying to retrieve that information. So that's where the multiple apps, the big context window, and the new agent capabilities, those three things coming together, come into play.
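[Editor's note] "Do the math there" is worth actually doing. Here is the back-of-envelope arithmetic behind the episode's context window numbers, using the common rough heuristic of about 0.75 words per token (this ratio is an assumption; it varies by tokenizer and language).

```python
# Rule of thumb: ~0.75 English words per token (varies by tokenizer; an estimate only).
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    return int(tokens * WORDS_PER_TOKEN)

old_free_tier = 8_000       # the ~8K windows of early free-tier chatbots mentioned above
new_default = 1_000_000     # the 1M-token default window discussed above

print(approx_words(old_free_tier))      # 6000 words: roughly a long memo
print(approx_words(new_default))        # 750000 words: weeks of accumulated working memory
print(new_default // old_free_tier)     # 125: that many times more room to carry context
```

So the jump isn't incremental: it's two orders of magnitude more room, which is why daily scheduled runs can live in one thread for weeks without "oh, wait, it's done."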
So now that you know it's here, and you know that this, I think, is the intermediate stepping stone until we have those fully autonomous agents, you need to take advantage of this scheduled agentic context carry, SACC, right? Here's how. Three steps. Ready? I'm gonna go quick. Step one: connect your live data sources and your preferences first. Make sure to do this. You know, I do have to, you know, put up a normal disclaimer, right? The responsible AI person I am, right? I'm a business owner. I decide if this is safe for my organization. But you need to do the same, right? You shouldn't be doing this with shadow AI tools. You need to be doing this with appropriate, approved tools. So make sure you go through the proper channels. But let's just say you have, you know, Claude approved or you have ChatGPT approved, whatever it is, right, and you have these connectors or apps approved as well. All right? So you need to authorize those live connectors to, as an example, your email, your calendar, your Slack, your Drive, all of those important things, your CRM, right? It's huge. Then you need to understand how each system's computer use and access works, and then ensure your custom instructions and memory are updated accordingly. So first you have to get your data sources, your preferences and your memory in line. Because when you talk about context carry, well, context is the base, right? We did an earlier show in the Start Here series on the importance of context engineering. So make sure you go back and listen to that one as well. And then also, all these platforms support MCPs, so even if your app of choice, whatever you're using, doesn't have an app connection to ChatGPT or doesn't have an app connection to Claude, well, chances are you can just use an MCP server and get that cooked up right away. So that's step one. Step two: you need to context stuff in a dedicated memory thread. Here's a little...
I wouldn't necessarily call this a cheat code per se, and this is much different, right? I've obviously talked about and taught, you know, this concept of prime, prompt, polish, you know, the basics of prompt engineering 101. The context window doesn't throw away those best practices, but it does kind of change what can get done. So here's a little cheat sheet for you for listening to this episode now for 26 minutes. Running these systems now, the context windows are enormous. You can work in it, in theory, for a very long time; it depends on what tools you're calling. But, you know, in Claude Code, as an example, you know, running your routines all in the same thread, a daily schedule, it's going to hold, right? I have some that run every single day that started when it first came out, you know, two-ish weeks ago, and they're not even close to hitting the context window, right? So at a million tokens, the agent can hold just weeks of regular usage, working memory, at once. So here's what I like to do. Connect everything to one thread, right? These all work a little differently, right? You know, Codex works a little bit differently than Claude Code, works a little bit differently than these brand new, you know, agent builders essentially that we got from ChatGPT and from Google Gemini. But essentially, if you do connect things on a thread by thread or a folder by folder basis, have one where you just context stuff, right? Put all your context in there at once, and then that can be your daily driver. Because the good thing is then also, if you need to take it into a different direction, you can just fork that thread at that point, right? So you at least have this unified base you can use every single day, right? Have it be the one that brings you to your morning triage, the one for your most common day to day tasks, but not necessarily ones that are specifically project based and require a lot of different directional feedback, etc. Right.
So from that you can iterate on the reasoning until it matches your standard. That's the big thing. Step two is technically context stuff and iterate as well. Well, iterate actually gets a little bit more into step three. Sorry, I jumped ahead of myself. So step two: context stuff in a dedicated memory thread. And then step three: iterate with chain of thought. If you listen to the show at all, you know how important this is. You saw this in my little demo that I did yesterday on Codex, right? You need to understand how these models work, because they are generative, they are not deterministic. They're going to work slightly differently each time. So you can't just run something once, right? Especially as the capabilities become greater and greater. Right. I see a common mistake: a lot of people will run something once and they're like, oh yeah, this is great, let's put it out in production. Okay, well, that could be dangerous, especially if you're doing something public facing or client facing. You probably wouldn't want to do that just yet, right? Because there's always going to be edge cases. You can run the same scheduled run every single day for seven days, and two of the days it might call the tool that you didn't want it to, or maybe it's not calling a tool that you are telling it to. So you really do have to review the chain of thought and iterate. So what that means: most systems give you some level of observability and traceability as they go. So if they're scheduled, you can go in, and usually, you know, it might say, oh, thought for one hour. You can click that and then see every single tool, every single step, and kind of get how that scheduled run or that cowork session works. And you can kind of trace it the same way, right, when you think of like math, right, where you had to show your work.
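[Editor's note] The "review the chain of thought" step can be sketched as a simple allowlist audit over each run's tool-call trace. This is a toy Python illustration, not any platform's real trace format; the tool names like `gmail.read` and the `audit_run` helper are invented for the example.

```python
# Hypothetical tools you actually approved for this scheduled agent.
ALLOWED_TOOLS = {"gmail.read", "calendar.read", "drive.search"}

def audit_run(trace: list[str]) -> list[str]:
    """Return every tool call in a run's trace that falls outside the allowlist."""
    return [tool for tool in trace if tool not in ALLOWED_TOOLS]

# Seven daily runs of the "same" scheduled agent; two of them drift.
week = [
    ["gmail.read", "calendar.read"],
    ["gmail.read", "gmail.send"],     # unexpected write action
    ["drive.search"],
    ["gmail.read"],
    ["calendar.read", "slack.post"],  # unexpected write action
    ["gmail.read", "drive.search"],
    ["calendar.read"],
]
flagged = [day for day, trace in enumerate(week, 1) if audit_run(trace)]
print(flagged)  # [2, 5] -> iterate on the prompt and guardrails before going to production
```

That's the episode's point in miniature: the runs are generative, not deterministic, so you check the work across many runs, not one, before saving the routine.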
I don't understand, like, the new Common Core math stuff, right? But back in my day, right, we just, I don't know, wrote down the numbers in a column, right? But you had to show your work. So you should always be checking the work of your, you know, co-working run or your scheduled agent's task and then iterating on it. You need to refine the prompt, make it better, and then once it is kind of, quote unquote, ready for production and you've built those guardrails in place, that's when you can save it as a routine or a scheduled automation. Then, once refined, that is that hidden workflow. So it is those three steps that really allow that agentic context carry. Again, fast: step one, connect your live data sources and preferences first. Step two, context stuff in a dedicated memory thread. And then step three, iterate with chain of thought reasoning before you put it out into production. But then schedule that thing and take advantage of this stepping stone that I think is going to be huge. And the time is now. So like I said before, this is not the final destination. I think that truly autonomous agents with persistent memory are the next big deal. But that could be far off. Who knows, I mean, maybe we'll have that, you know, next month, but it could still be another year, two years or more until we actually see autonomous agents where you can give them a goal and they don't really require much else. This is the now, or the next. So understand and really push this agentic context carry. The leaders pulling ahead this quarter are building scheduled context on autopilot, not just, you know, better one-off prompts, not just, you know, sharing skills within your organization. That's no longer enough, right, to really be pushing in your space. So pick one recurring task this week, take it through those three steps, and then deploy your first kind of hidden workflow inside of it. That's taking advantage of scheduled agentic context carry. I hope this was helpful, y'all.
If it was, please go to starthereseries.com. That's going to take you straight to a signup form to get access to our community for free, the Everyday AI Inner Circle. And then in the Start Here series space, you can go find every single Start Here series podcast, read every single Start Here series newsletter, all in one space, and connect and network with others who are doing the same. All right, I hope this was helpful. Thanks for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
B
Every AI vendor in your inbox is promising to revolutionize finance. Most of them have never set foot inside a controller's office. Sage has. At Sage Future, they doubled down on AI you can trust for finance: AI that gives your team confidence, with results they can explain and verify; control, so people stay in charge of the decisions that matter; and accountability, so every action can be traced. Plus a faster way off legacy desktop tools and into the cloud alongside AWS. That's how you actually adopt AI in mission-critical work. Check out sage.com for more.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.
Host: Jordan Wilson
Date: April 23, 2026
Series: Start Here Series Vol 22
In this highly practical episode, host Jordan Wilson introduces and breaks down a new, crucial concept for leveraging AI at work: Scheduled Agentic Context Carry (SACC). As enterprise AI adoption shifts from basic chatbots to agent-based automation, Jordan explains how SACC is the bridge between simple chatbot queries and fully autonomous, always-on AI agents. He outlines why every company needs to build this workflow today to stay competitive, and offers an actionable three-step guide to get started.
“Most business leaders are still using AI just like a chatbot. I’m going to go in and I’m going to technically reactively ask an AI chatbot for something and probably have to re-explain a lot and waste a lot of time. But there’s been a quiet workflow pattern emerging beneath every one of these recent product launches.”
— Jordan Wilson (05:20)
“I’m calling this Scheduled Agentic Context Carry or SACC, and I think the companies taking the time to understand and iterate on this new concept now are going to be the ones crushing their year end goals and KPIs in quarter four.”
— Jordan Wilson (04:18)
“I think that this new kind of co-working scheduled agents is the stepping stone to where we will, ultimately, when we have... artificial general intelligence ... because as much as every, you know, OpenClaw aficionado wants you to believe, we are not yet at the point where we have true autonomy in agents, right?...This agentic context carry is that stepping stone that’s going to help us get there.”
— Jordan Wilson (11:55)
“It’s about eliminating that human AI duct tape. It’s about the 30 small human steps in between that are required. That is the context carry. Us humans have been the one carrying the context because AI agents didn’t have the ability ... Now they do.”
— Jordan Wilson (18:30)
“This is where it’s kind of the capabilities and the technology and the need have all overlapped with this perfect timing.”
— Jordan Wilson (26:45)
“You need to understand how these models work because they are generative—they are not deterministic ... You should always be checking the work of your co-working run, of your scheduled agent’s task and then iterating it. You need to refine the prompt, make it better and then ... you can save it as a routine or a scheduled automation. That is that hidden workflow.”
— Jordan Wilson (32:05)
On value of agentic context carry:
“That’s where the true benefit of now agentic context carry ... because us humans no longer have to do the duct tape and have to remember and have to bring that context from app A to app B to app C to storage D to messaging platform E and F. The agent does it all for us in one swoop.”
— Jordan Wilson (20:49)
On human ability vs. context windows:
“Sometimes you spend just as much time trying to either track, remember, or find certain information...as it actually takes to create that new business value once you do find it, or reply to a certain email...”
— Jordan Wilson (28:25)
On what to do next:
“Pick one recurring task this week, take it through those three steps, and then deploy your first kind of hidden workflow inside of it that’s taking advantage of scheduled agentic context carry. I hope this was helpful.”
— Jordan Wilson (33:25)
For a deeper dive — including context engineering tips and more hands-on guides — join the Everyday AI "Start Here Series" community at starthereseries.com.