
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
Here's a very obvious trend. As we gear up for 2026, almost everything's going to get agentified, right? Your standard procedures are going to become agents, traditional processes, agents, features are now just called agents, and what used to be called automations will be agentified as well. But here's the thing that's important. When we think about old-school automations, right, those that have been around for a decade or more, if you didn't have the right data or if it wasn't configured correctly, that workflow probably would just fail, and the output, well, you wouldn't get one. But when we start to make the transition from automation to actual AI agents, that agent, even if something's not right, the data, something else, it's still probably going to give you an output. It might straight up lie or guess. So how do you get around that, and how do you make sure that as you bring more and more of your business processes into agentic workflows or build AI agents, they're not guessing, and that they're giving you outputs that you can trust and actually use to grow your business? All right, we're going to be tackling that and a lot more on today's episode of Everyday AI. What's going on, y'all!
B
If you're new here, welcome. My name's Jordan, I'm the host, and we do this every day. This is your daily, unedited, unscripted livestream, podcast, and free daily newsletter helping everyday business leaders like you and me not just keep up with all of the AI craziness, but make sense of it to grow our companies and our careers. So if that's you, it starts here with the podcast, but make sure you go to our website, youreverydayai.com, and sign up for the free daily newsletter. We're going to be recapping the highlights from today's show and a whole lot more. If you want the AI news from today, make sure you go read it in our newsletter. All right, enough of me chit-chatting, y'all. I'm excited for today's guest, so please help me welcome to the show Ed Macosky, the Chief Product and Technology Officer at Boomi. Ed, thank you so much for joining the Everyday AI Show.
C
Thanks for having me, Jordan. Excited to be here.
B
All right, me too, Ed. Before we get started, tell everyone a little bit about your background and what you all do at Boomi.
C
Happy to. So I am the Chief Product and Technology Officer here at Boomi, which means I run all of product and engineering for Boomi. And who is Boomi? We are an integration, automation, and AI activation platform. So we spend every day working with our customers on automating their businesses, either through traditional workflows or now through AI agents, and on activating AI within the business. A little bit more about me: I've been an integrator now for, gosh, 24 years. So I've spent my entire career thinking about integration and automation within a business. And with the rise of AI and agent technologies, there's a renewed sense of, you know, pride and excitement around how we can apply these within the business, and that's what we do every day.
B
And I love talking to people with a deep background in tech, specifically in automations, right? I myself have loved, you know, the marketing automation sector for a long time. I've always experimented with different, you know, workflows. So it's obviously been exciting, you know, over the last two years as we're able to add a little bit of agency to traditional, you know, automations that didn't have it. Can you just maybe catch us up on where the space is at today? You know, are most, you know, enterprise businesses still using, you know, old-school automations? Are they using agentic workflows? Are they using just agents? A little mix? What are you seeing in your role?
C
It's a little bit of both, actually. And I think we will see both deterministic and non-deterministic, or agentic, workflows in the enterprise forever. The reality is there are certain use cases where you're just going to want the performance of a fixed workflow in your business where that applies. You want to be structured, on the rails. And the way I think about it, agentic capabilities actually help you augment and expand more and more capabilities. There's some overlap, where we do have agentic workloads that are overtaking some of the traditional workflows to help with maintenance and to make these capabilities more robust. And I think that's where businesses are today. They've turned the corner from AI pilots, and yes, individual productivity with AI is still a thing, but in terms of automating the business, we're starting to see some real results, to the tune of saving millions in these team workflows and whatnot that we're identifying with our customers.
B
Yeah, I think agentifying and agentification, right, it's almost like the new buzzword, you know, where it was agents maybe a year ago. But can you kind of walk us through some of the pros and cons? Right. I'm sure that many companies out there, you know, have been piloting different AI solutions over the past year or year and a half, and they maybe have some, you know, automations that have been the backbone of their business for maybe a decade or more. As companies move from old-school automation to newer-school agentification, or just AI agents out there accomplishing tasks on their own, what are some of the pros and the cons of converting those older processes to agents?
C
Certainly. The pro is that when you convert some of these workflows into agent-based flows, the humans within your organization can take their hands off the wheel a little bit. I like to think about it like driving, you know, these self-driving cars where you've got someone sitting there at the wheel, but getting more to the point of, you know, a Waymo that is fully self-driving. We see that happening. So that's the pro. You can get a lot more out of your workforce. You can let the humans do more and provide more value in the business versus these mundane kinds of driving things where they click a button every day, which is what traditional workflows bring. The con is you have to build these agents and these new agentic automations on a strong foundation of data, because bad data in is worse data out in terms of the activity that you would expect from agents. Right. Whether it be the decisions agents are making or the ways in which they're wielding tools and actually activating change or updates within your business, you have to be very, very careful. So that is a big con, but we're here to make that better.
B
Yeah, I love that. Bad data in, worse data out. That's a great takeaway already. You know, one thing that I even remember, you know, personally, 10-plus years ago, working with, whether it's RPA software or, you know, marketing automation software, I would spend a lot of time building these automations, and if one thing wasn't right, right, maybe there's, you know, a period instead of a comma or something like that, the automation wouldn't run, because it was overly rigid and, you know, obviously deterministic. Can you walk us through, maybe for our more non-technical audience, right, what does this mean specifically when agents have agency to maybe make their own decisions, and what does that do for the importance of our data?
C
Yeah, it makes it super easy. So, you know, agentic technologies like Boomi, for example, make it very easy to not have to maintain and define these rigid if-then-else rules, where if you're missing a semicolon or you've got a space in there or something, they go bad, or new rules get introduced and data changes, and that can throw off these rigid workflows. AI agents, and using agents to help make these decisions in a workflow, which could be an AI agent wrapped with a traditional workflow if you'd like, makes things much, much easier and less brittle when implementing them. And then they run, and they can make decisions on your behalf. So you can do things like just defining your policies within the normal documents that you normally write for your employees, and let the agents read the policies like your employees would, and the agents can adapt and make those decisions in a workflow. That is all great. The downside, or what could happen when you do this, is that now you could take your eye off of managing those policies. You would be forced to in an old fixed workflow, because you'd have to go to some human who would write the code to say, I'm going to add five lines of if-then-else because we added a new policy. It's just shifted: today, you need to make sure that your policies are well defined and you're maintaining your internal wikis or your documents or your PDFs or the things that you have, which you should be doing for normal good business hygiene anyway. But as long as you live and work in the systems you have defined, and you have your agents pointed to those, you should be fine.
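[Editor's note] The contrast Ed describes, rigid if-then-else rules versus decisions driven by a maintained policy document, might be sketched roughly as follows. This is purely an illustrative sketch, not Boomi's implementation; the policy text, function names, and the toy category lookup standing in for an LLM are all invented:

```python
# Old-school deterministic rule: brittle -- any input the coder did not
# anticipate produces an error, and new policies require a code change.
def rigid_approval(expense):
    if expense["category"] == "meals" and expense["amount"] <= 50:
        return "approved"
    elif expense["category"] == "travel" and expense["amount"] <= 500:
        return "approved"
    else:
        return "error: no matching rule"

# Agent-style approach: behavior derives from a policy document the
# business maintains anyway. (A toy lookup stands in for a language
# model reading the policy; editing the document changes behavior
# with no code change.)
POLICY_DOC = """
Meals are reimbursable up to $50 per day.
Travel is reimbursable up to $500 per trip.
Anything else needs manager review.
"""

def policy_driven_approval(expense, policy=POLICY_DOC):
    limits = {"meals": 50, "travel": 500}  # a real agent extracts these from `policy`
    limit = limits.get(expense["category"])
    if limit is None:
        return "escalate to human"  # unknown cases go to a person, not an error
    return "approved" if expense["amount"] <= limit else "escalate to human"
```

The key difference: the rigid version fails closed with an error when the world changes, while the policy-driven version degrades to a human handoff.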
B
Could you maybe give us an example? You know, maybe one of those tried-and-true automations that may have been more deterministic in years past, and now companies have converted those older automations, agentified them into, you know, either agentic workflows or AI agents. Can you maybe give us an example of, hey, here's what that process looked like pre-generative-AI-powered agents, and here's what it looks like now with them?
C
Yeah, there's a lot of them, particularly around the finance area and different things in the business. But my favorite one is expense reports. I think it's one that everybody can kind of relate to who works in a business where you have to file expenses, whether you be the person filing the expenses or the manager or leader in the business who is approving the expenses. A traditional expense report workflow is: I submit some expenses, and I have a whole bunch of rules that someone has defined that tell me error or not as I'm entering expenses. And then once I click submit and I finally get through all the errors, which I probably faked half of them to get through, it goes to my manager, and then he or she has to approve and do the same thing from the other side. Now there is frustration on both sides, for the person who submits and the person who then has to approve. And then, well, A, there are a lot of humans on the back end coding all of those rules in some way, shape, or form, or defining them. Then there are humans on both sides doing those activities to either submit or approve them. In an agentic process, you can make life easier on all three parties. For the person who's submitting, and we are doing this, you can have a conversational agent to make it easier to just give the information, and set the tone with the humans filing the expense: here's what you need, here's the only stuff I need, make it really easy. Expense report taken, created. Then all of the rules that would be part of that become much less rigid, much more open, and these agents can then, using large language models or small language models, go out and read all these rules and dynamically adapt and adjust. And then on the approver side, these agents can also learn and only engage approvers when necessary.
So if you are submitting expenses to me, Jordan, and I'm approving the $100 bottle of wine every time, even if it's not part of the policy, the agent will learn: Ed will approve Jordan's hundred-dollar bottle of wine. But if it's a $20,000 bottle of wine, the agent will know: wait, Ed's never approved this. It's outside of the policy. That's when I'm going to engage the human to come into the loop to make that call and approval. So you can kind of get, you know, the best of all worlds by being adaptive and not so rigid in this process. But there's also the foundation of data underneath: maintaining these rules, understanding them, maintaining who Jordan's boss is so I know who to submit this to, and so on. Those are all data quality things underneath that people have, you know, unfortunately in the last number of years in an enterprise, not really paid much attention to, because humans have been a crutch. Humans have come in and said, oh, that data is bad, but I know I can approve this, and, you know, I can't automate that because of X, Y, and Z. That's why data quality is becoming more and more critical for these agentic flows.
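[Editor's note] The learn-from-approvals behavior Ed describes, auto-approving what a human has consistently approved before and escalating anything outsized or novel, might be sketched like this. The class, the tolerance parameter, and the simple max-of-history heuristic are invented for illustration; a real product would use far richer signals:

```python
from collections import defaultdict

class ApprovalAgent:
    """Toy agent that mimics past human approvals and escalates anomalies."""

    def __init__(self, tolerance=1.5):
        # (submitter, item) -> list of amounts a human has approved
        self.history = defaultdict(list)
        self.tolerance = tolerance  # how far past the observed norm we allow

    def record_human_approval(self, submitter, item, amount):
        self.history[(submitter, item)].append(amount)

    def decide(self, submitter, item, amount):
        past = self.history[(submitter, item)]
        if past and amount <= max(past) * self.tolerance:
            return "auto-approve"      # matches what the human has approved before
        return "escalate to human"     # novel or outsized: human in the loop

agent = ApprovalAgent()
agent.record_human_approval("jordan", "wine", 100)  # Ed keeps approving this
print(agent.decide("jordan", "wine", 100))     # within the learned pattern
print(agent.decide("jordan", "wine", 20000))   # far outside it: escalate
```

Note that unseen (submitter, item) pairs always escalate, so the agent fails toward human review rather than guessing.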
B
And I want to dig into that a little bit more. And also, there goes my tactic of, you know, expensing things at 10x the price, that goes out the window. But, you know, one thing, and I'm not too big of a dork to admit this, I enjoy reading summarized chains of thought, right? I love, you know, working with the latest models, whether that's, you know, Gemini 3 or GPT-5.1 Pro or the newer Claude 4.5 Opus, right? And I love looking to see, you know, how they take whatever inputs and then what they do and what they think through, right? And one thing I've really noticed is, well, obviously models can be bold, and they can sometimes be a little too confident, right? Especially when you throw a ton of data at them all at the same time. And sometimes even just one wrong assumption, maybe the model isn't even wrong, but maybe it just assumed something that's not fully accurate, and it might take that and run with it and go in a completely different direction than you may want, right? And unless you're a dork like me, looking at the chain of thought or, you know, really paying attention to what your agents are doing, you might miss that. So with that in mind, right, and as these models become more and more agentic by default and more and more capable, talk a little bit about what even the average everyday business leader needs to do when it comes to working with data inside of large language models.
C
Well, there are a couple of things, because I see this like a sandwich, based on what you said. At the foundation, the business needs to continue implementing tools and make sure that the underlying data is as pristine as humanly, or technically, possible, and not overlook that, and put true data governance capabilities in place within your organization. And then you need to make sure that your AI agents have a true security layer, so that agents are accessing the proper data sets and not overstepping bounds there. That's another way of protecting against an agent getting some level of data and then going and stepping over some boundaries. That is the foundation. So to your point, the littlest bit of data that's off can have bad results. I said this earlier: bad data in, worse data out, in terms of the actions. The top layer of the sandwich, as I see it, is investing in AI governance tools that can oversee AI agents and what they're doing. There are tools in market, you know, observability and telemetry tools in market today that you would think about for governing and having visibility into your business operations or IT, etc. We have actually brought to market an AI Agent Control Tower, and what that does is monitor the behavior of agents that are running in your business, as another backstop for these types of things. So agents are off and running. They don't even have to be Boomi agents; they could be Agentforce agents in Salesforce, they could be AWS Bedrock agents, a combination with Boomi agents, all orchestrating together, and we will monitor those to look for anomalies. So it's using another agentic layer to oversee the agents that are running, so that if you see some anomalous behavior, like, hey, agents have been doing A, B, and C, but this one looks like it may have stepped out of bounds,
it will at least alert the business user: hey, this agent did this, or you might want to check on this, or do you want me to stop this agent, to help you with that governance. Because as a business user, trying to manage these thousands or millions of agents in your organization without using AI to manage them as well is going to be nearly impossible. So those are the types of solutions and things I think folks should look out for.
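[Editor's note] The "control tower" pattern Ed describes, an oversight layer watching agent activity for behavior outside an established baseline and flagging it to a human, might look roughly like this. The log structure and the simple frequency heuristic are invented for illustration; a commercial product would use much richer anomaly signals:

```python
from collections import Counter

def monitor(action_log, new_action, min_seen=3):
    """Flag an agent action that we have rarely or never seen before."""
    # Baseline: how often has each (agent, action) pair occurred historically?
    baseline = Counter((a["agent"], a["action"]) for a in action_log)
    key = (new_action["agent"], new_action["action"])
    if baseline[key] < min_seen:
        # Out-of-pattern behavior: alert the human rather than silently allow.
        return (f"ALERT: {new_action['agent']} performed unusual action "
                f"'{new_action['action']}'")
    return "ok"

# Fifty routine approvals establish the baseline.
log = [{"agent": "expense-bot", "action": "approve"} for _ in range(50)]
print(monitor(log, {"agent": "expense-bot", "action": "approve"}))        # routine
print(monitor(log, {"agent": "expense-bot", "action": "delete_ledger"}))  # flagged
```

The design point is the same one Ed makes: at thousands of agents, humans can only review the exceptions, so the oversight layer's job is to surface them.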
B
You kind of led into the next question that I naturally had, which is: I feel observability and traceability are maybe a little bit easier at smaller scopes when it comes to AI agents, right? When there's maybe one AI agent that you completely understand, it's a little bit easier to look under the hood, so to speak. But as we start going into these multi-agent workflows and multi-agent orchestration, how does the kind of compounding impact of good, clean, and accurate data play out when, yeah, now all of a sudden you might have agents handing things off to each other without a human in between?
C
Yes. So, even worse data out. So it's important to double down on the governance of your data in that aspect, and to double down on having observability in aggregate over these agents working together: what they're doing, what data they're passing, looking for anomalies. Without shamelessly plugging the Boomi technology the whole time, there are other control tower capabilities out there. The different hyperscalers are providing visibility from a control tower perspective to get insights into what the agents are doing in their ecosystems. And not just hyperscalers; application providers are providing these too. A value that Boomi adds is helping to aggregate them all together, so that you can see, okay, an end-to-end series of agents orchestrating across ecosystems, which isn't a big focus right now. A big focus is agent-to-agent (A2A) and communication protocols across ecosystems, but not yet governance. That's where we're going next: governance across ecosystems, and making sure that what agents are doing together is sanctioned and is what the business wants to do. But when you think about it, there are two levels of your data when you have multi-ecosystem orchestration. There's the data that resides within the system that the agent sits on top of. So Agentforce on top of Salesforce data; then you can have a ServiceNow agent on top of ServiceNow data. If that data is not in sync and of quality on both sides, you can have disaster. Those agents can talk past each other and not be sharing data. You need to come up with your strategy for, you know, a foundation of data that is kept in sync, or you're leveraging a multi-agent system that sits on top of the same data set, depending on your organization. I know I'm getting a little technical here, but depending on your organization and your maturity, you're going to want to think about those different things.
Most companies are probably working with agents within their siloed ecosystems versus on a common data set. But based on the use case, you're going to want to think about how you, you know, architect that within your business.
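[Editor's note] The multi-ecosystem risk Ed describes, two agents sitting on two systems of record whose shared data has drifted apart, can be made concrete with a small reconciliation sketch. This is illustrative only; the record shapes and field names are invented, and a real integration platform would do far more than field-by-field comparison:

```python
def find_drift(system_a_records, system_b_records, key="account_id"):
    """Report records that disagree between two systems of record."""
    a = {r[key]: r for r in system_a_records}
    b = {r[key]: r for r in system_b_records}
    drift = []
    # Shared records: compare every field both systems carry.
    for k in a.keys() & b.keys():
        fields = (a[k].keys() & b[k].keys()) - {key}
        diffs = {f: (a[k][f], b[k][f]) for f in fields if a[k][f] != b[k][f]}
        if diffs:
            drift.append((k, diffs))
    # Records present in only one system are also drift worth flagging.
    drift += [(k, "missing in system B") for k in a.keys() - b.keys()]
    drift += [(k, "missing in system A") for k in b.keys() - a.keys()]
    return drift

# Example: the same account looks different in each system.
sales = [{"account_id": 1, "owner": "ed"}, {"account_id": 2, "owner": "jo"}]
service = [{"account_id": 1, "owner": "edward"}]
print(find_drift(sales, service))
```

Running such a check before agents act on either data set surfaces exactly the "talking past each other" failure mode, rather than letting two agents each proceed on a different version of the truth.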
B
Yeah. And I think it's, you know, well past a foregone conclusion now that you need high-quality data in order to fuel agents or agentic workflows. Yet, I mean, I've talked to hundreds of people on this show, and, you know, I'd say dozens of Fortune 500 executives, who even themselves, whether they admitted it on the air or not, don't always feel the most confident about their data. So how should those types of business leaders be looking at this problem? Right, because data is always evolving. We're getting more and more of it. It's coming in different formats. Right. Especially as some of it's being formatted specifically for large language models, and maybe humans don't jive well with that. Right. But how can those organizations that feel maybe iffy about their data game still use AI agents, or should they not?
C
Yeah, I'm going to try to give some very, very practical advice here, because this is what we're seeing every day. So we're in the fastest innovation life cycle we've ever seen in IT, frankly. Like, every day. I mean, your podcast is showcasing a lot of this. It's happening every single day. And what's happening is we have this new wave of technology emerging around AI and a lot of people being very innovative there, and so on and so forth. But those folks may or may not have all of the context that an enterprise or a business would have around their data and workflows and all those other things that we talked about. So the very practical thing to do, and it sounds dead simple, but I'm not seeing enough of it: pull your AI innovators together with your business owners, those that own the data, and get them talking and working together. A lot of what we're doing is pulling line-of-business owners that own these data sets together with the AI innovators and facilitating discussions on what they can accomplish together. And focus, very practically, on not boiling the ocean of all of the data in your whole organization. Look at the data sets that you need for automating your business or the outcomes that you're looking for, and really double down on those. You'll be much more cost-effective than with some large data project that you want to do across your organization. Focus on the data sets that are going to matter for your agentic workflows. Start small and iterate from there. Prove value very quickly, and then build on top of the value you've already seen in previous cycles.
B
All right, that was good. I mean, Ed, you took my last question out of my mouth, because I think that's some great kind of parting words for business leaders. But I'll go with this here. You know, I think, at this point, rolling into the end of the fourth quarter, a lot of CEOs and business owners are, you know, looking back on their 2025 AI investments and, you know, trying to look at ROI and plan for 2026 and beyond. What would your advice be, specifically when it comes to measuring the return on agents, and then also the return on investing in your data? Because, at least for me, I've seen a lot of companies, you know, putting way more resources into their data in 2025 than they had in maybe the last half decade combined, because of this tight correlation between data quality and agentic outputs. So kind of a two-pronged question there: where should they be focusing for showing ROI, and then also for investing in their data processes?
C
Great question. And I would flip the script entirely in terms of the way you look at defining a project and its outcome. So in the last couple of years, from AI to data projects, people have focused on AI technologies or data technologies and tried solving those problems, or addressing opportunities in the business, which has led to what I always refer to as science projects. People go off, they get fascinated with the technologies, they get fascinated with the data itself, etc. But their outcome is that there is clean data. Well, clean data by itself doesn't mean anything, and this really cool AI thing doesn't mean anything. Flip the script. Start with discrete projects where you define the business process or thing you want to optimize and what that ROI is, and work backwards. So from there you say, okay, what are the outcomes that I want, and what are the returns that I expect from those outcomes? Then apply the technology and the data projects to those. Then you can't go wrong. You're consistently focused on the outcomes that you're seeking, versus working on a solution hunting for a problem. I would just flip the script.
B
I love that. Great, great way to end today's show, by flipping the script and starting with the outcomes and building back from there. Love to hear it. Ed, thank you so much for taking time out of your day to join the Everyday AI Show. We really appreciate it.
C
Thank you. It's great being here.
B
All right. And if you missed anything that Ed said, don't worry. We're going to be recapping it all in today's newsletter. So if you haven't already, go to youreverydayai.com and sign up for that free daily newsletter. If you're listening on the podcast, appreciate your support as always. Make sure that you please subscribe, like, and leave us a rating on Spotify or Apple Podcasts. Thanks for tuning in today. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
Date: December 11, 2025
Host: Jordan Wilson
Guest: Ed Macosky, Chief Product & Technology Officer at Boomi
In this episode, Jordan Wilson interviews Ed Macosky (“Ed”), Chief Product and Technology Officer at Boomi, to explore the transition from traditional automation to AI agents in business workflows. The discussion centers on why high-quality data is more critical than ever, how agentic (AI-powered) systems differ fundamentally from deterministic scripts, and the practical steps business leaders should take to harness these new capabilities while maintaining trust and ROI.
Agentification is everywhere:
As we move into 2026, everything that could be automated is becoming “agentified”—standard processes, features, and even classic automations are being reimagined as autonomous agents (00:16, Jordan).
Old automations vs. new agents:
Traditional automations fail when data or configuration is off—no output is produced. In contrast, AI agents will still produce an output, even if it's based on flawed data—which can lead to unexpected or even made-up results (00:16, Jordan).
Hybrid approach:
“It's a little bit of both, actually.... We will see both deterministic and non-deterministic, or agentic, workflows in the enterprise forever.” (04:08, Ed)
Some processes require strict, rule-based automation; others benefit from agentic flexibility. Many organizations are integrating both for effective business outcomes.
Significant cost savings:
Real business impact is emerging: team-based agentic workflows save companies millions of dollars, moving beyond pilot projects to real automation at scale (04:08, Ed).
Data dependency:
“You have to build these agents...on a strong foundation of data. Bad data in is worse data out.” (06:01, Ed)
Data quality is no longer a “nice to have”—agents will act on whatever data they’re given, amplifying errors or inconsistencies.
Need for data governance and maintenance:
Internal policies, organizational charts, and resources must be maintained as meticulously as the agentic systems that use them (09:38, Ed).
Quote:
“Those are all data quality things underneath...in the last number of years in an enterprise, haven’t really paid much attention to because humans have been a crutch...That’s why data quality is becoming more and more critical for these agentic flows.”
— Ed, (12:14)
Quote:
“When you have multi-ecosystem orchestration, there’s the data that resides within the system that the agent is on top of....If that data is not in sync and of quality on both sides, you can have disaster.”
— Ed, (18:39)
Quote:
“Flip the script and start with the outcomes, building back from there. Love to hear it.”
— Jordan, (24:46)
“Agentic capabilities actually help you augment and expand more and more capabilities...there are certain use cases where you want the performance of a fixed workflow...this is where businesses are today.”
– Ed, 04:08
“Bad data in, worse data out.”
– Ed, 06:18, repeated throughout
“I love looking to see how models take whatever inputs and what they do and what they think through...sometimes even just one wrong assumption...they might take that and run with it.”
– Jordan, 13:03
“Trying to manage these thousands or millions of agents in your organization without using AI to manage them...is going to be nearly impossible.”
– Ed, 15:42
“Focus on the data sets that are going to matter for your agentic workflows. Start small and iterate from there.”
– Ed, 21:46
“Flip the script. Start with discrete projects where you want to define your ROI and...work backwards.”
– Ed, 23:31
| Segment Topic | Speaker (Min:Sec) |
|------------------------------------------------|--------------------|
| Automation → Agents trend/risks | Jordan (00:16) |
| Differences in enterprise adoption | Ed (04:08) |
| Pros & cons of agentification | Ed (06:01) |
| Data importance for agentic systems | Ed (06:54, 09:38) |
| Expense report workflow transformation | Ed (10:10–13:03) |
| LLM “guessing” & chain of thought | Jordan (13:03) |
| Governance & Agent Control Towers | Ed (14:35, 16:41) |
| Multi-agent orchestration complexities | Ed (17:42–19:49) |
| Advice for leaders with “iffy” data | Ed (20:50) |
| Measuring ROI, flipping the script | Ed (23:31) |
| Wrap-up / final insights | Jordan (24:46) |
For further reading and more AI news, Jordan suggests subscribing to the show’s free daily newsletter at youreverydayai.com.