
Marcus
This episode is made possible by Connective Media by United Airlines. Connective Media by United Airlines is redefining traveler media with a world-first omnichannel network, from in-flight to online and in-app experiences, with best-in-class tech helping brands engage travelers where it matters most. Are you ready to make an impact? Of course you are. Discover more at connectivemedia.com. That's Konnective with a K. Hey, gang. It's Friday, April 18th. Dan, Jacob, and listeners, welcome to Behind the Numbers, an EMARKETER video podcast made possible by Connective Media by United Airlines. I'm Marcus, and today we'll be discussing what's ahead for AI. Joining me for that conversation, we have two people. Let's meet them right now. We start with our VP of Gen AI, based in New York. It's Dan Van Dyke.
Dan Van Dyke
Thanks for having me, Marcus.
Marcus
Yes, sir. Of course. We're also joined by our technology analyst on the other coast, out in California, in the Bay. It's Jacob Bourne.
Jacob Bourne
Thanks for having me, Marcus.
Marcus
Absolutely, absolutely. All right, so today's facts. Right before we hit record, Dan said, "I've got one for you." I wasted three hours today on mine, but what I said to Dan is, we're going to compete. So Jacob's going to ref. Dan, you can go first.
Dan Van Dyke
Okay. So today I learned that sloths, my favorite animals, will only use the bathroom, I'm gonna say, broadly once a week. They'll come down from their trees and excrete one third of their body weight. One third. And they take the time to dig a hole and then bury it, and they risk their lives in the process, and nobody knows why. So that is my fact of the day.
Jacob Bourne
Interesting. Slow metabolism, I guess.
Dan Van Dyke
Yeah, I guess so, Marcus, I think.
Jacob Bourne
This is the second time that sloths have come up in.
Marcus
It is really? Yeah, I think. Yeah. I think before I was talking about how they can hold their breath for 40 minutes.
Dan Van Dyke
What?
Marcus
I don't know why you would need that. I mean, maybe for little deep sea diving in the Great Barrier Reef.
Jacob Bourne
Is that true?
Marcus
Underwater for 40 minutes? Yeah. A shocking amount. I think it's one of the longest of any mammal, if not the longest. How did this come about, Dan?
Dan Van Dyke
I was just on Reddit.
Marcus
Okay. That's how it happens. All right, cool. That is a good one. I've got one for you, Jacob. Don't be swayed, but I did invite you on the show. Okay, so Denmark holds the Guinness World Record for the oldest continuous use of a national flag: since 1625, they've been using the same flag. So I went down a rabbit hole of flags, because I'm cool. A lot of flags look very similar, I found out, which is another fact about flags. Chad and Romania's are literally identical. Romania's flag came first, by 100 years. All right, Chad, so you stole theirs. Senegal and Mali have the same one, but Senegal's has a little star in the middle. Indonesia and Monaco both have two horizontal stripes, red over white, but their dimensions differ. New Zealand and Australia are the same, but the stars are different colors. Venezuela, Ecuador, and Colombia are yellow, blue, and red horizontal bars, but with different emblems in the middle. Two more for you. Luxembourg and the Netherlands are red, white, and blue lines, but the blues are slightly different shades. And Slovenia, Russia, and Slovakia are all white, blue, and red horizontal bars, but with different coats of arms.
Dan Van Dyke
Two reactions. One, that's like 42 facts. And two, it's remarkable that you could say all that without stuttering. I'm very in awe.
Marcus
Well, I practice, practice.
Dan Van Dyke
Who is the winner?
Marcus
Come on, Jacob.
Jacob Bourne
You know, I mean, the sloth is always. Anything about animals is memorable.
Marcus
Ah, you make good points. Of course, he was.
Jacob Bourne
Yours made me think something for the first time, which is that, oh, well, we don't really think outside the box much with flags, do we?
Marcus
Yeah, not at all.
Jacob Bourne
Not a whole lot of creativity there.
Marcus
Not at all. No. And they're all grouped together. I guess regions change, but the flags stay very, quite similar based on whereabouts the countries that came up with them are. But yeah, not a ton of creativity. Dan absolutely wins, of course. It wasn't even close. Anyway, today's real topic: the dawn of AI agents, and also the AI-native company. All right, gents, everyone's talking about AI agents. Barely anyone knows what they are, right? Isabelle Bousquette of the Wall Street Journal notes that AI agents are broadly understood to be systems that can take some action on behalf of humans, like buying groceries or making restaurant reservations. But in some cases, the question of what constitutes an action is blurry. Dan, I'll start with you. What is an AI agent?
Dan Van Dyke
Yeah, so I actually had an education on this recently. I was talking to a vendor, one of those AI-native companies, getting a demo from them, and I used the term agents wrong, and the person on the other end of the phone, or Zoom, politely told me that really there's a whole spectrum: from chatbots like ChatGPT, to workflows that are rigidly orchestrated but a little bit more robust than a chatbot, to what fits the term agent in the classical sense. So agent means an AI-based tool that can take action based on a predefined task, with autonomy, and use tools. That's kind of what defines an agent. But Jacob, you've actually written on the subject, so I'm curious if that gels with your definition.
Jacob Bourne
It does. I mean, first of all, it's a buzzword at this point. And so your story, Dan, is kind of relevant, because these are technical terms that become commercialized and become part of the consumer marketplace, and then they take on new meaning. But in distinguishing between gen AI chatbots, or gen AI tools, and agents, I think it's really about the level of autonomy. With a chatbot, you have to prompt it for every small task. An AI agent can take an action without that need for step-by-step prompting. So it can kind of do things in the background that you didn't necessarily tell it to, but it's all geared towards a goal that you want, essentially.
Marcus
Okay, so, I mean, he said you said it wrong, Dan. Maybe just different. It feels like if you ask 100 people, you get 101 responses, even if you are talking technical terms. One from Tom Coshow, senior analyst at Gartner: does the AI agent make a decision, and does it take action? Software needs to reason itself and make decisions based on contextual knowledge to be a true agent. And there's another quote here from Robert Blumofe, CTO at Akamai Technologies, who said many use cases today resemble assistive agents rather than autonomous agents, requiring direction from a human user before taking action and narrowly focused on individual use cases. He does say an assistive agent is a bit of an oxymoron, because an agent is supposed to do it for you. But what do you think of those variations of definitions?
Dan Van Dyke
I think it reflects the fact that the goalposts are shifting for what constitutes an agent. For now, it's kind of like, what's next? The threshold is defined by the level of autonomy and the access to tools, but the capabilities of the baseline, like what you can get within ChatGPT, really resemble a lot of the characteristics you were describing, Marcus, in that ChatGPT can decide to search the web based off the request that you ask it. It can invoke different tools, like image generation. Does that constitute an agent? And so, as nice as it would be to come up with a crisp and specific definition for what constitutes an agent, it is a murky term, and the definitions are changing over time.
Marcus
Yeah. Jacob, there are levels to this, right? And I'm surprised. I mean, with autonomous driving, which I reference quite a lot, there are six levels, from zero to five, of autonomous cars. I'm surprised that agents don't have something similar, because Belle Lin, who writes for the Journal, was saying AI agents can perform simple tasks like ordering office supplies; eventually some enterprises want to get them to handle financial transactions and hiring new workers. But that's quite a variation in difficulty.
Jacob Bourne
Yeah, I mean, I think that's a great analogy you're making there, or a comparison anyway, with autonomous vehicles. I think the difference here is that autonomous vehicles are doing a very specific task: drive your car. With AI in general, it's potentially anything that a human could do. At least that's the vision with artificial general intelligence. I think what this really highlights is that there's a bit of a disconnect between the vision for the AI sector, the AI companies building this, and where the technology currently is. The vision is boundless automation, essentially artificial general intelligence that can do anything a human can do. I think that's the vision, but it's far from getting there. And so a lot of these terms become sort of incremental steps towards the ultimate goal. If we think about the initial agents that launched, like OpenAI's Tasks, for example: very limited automation, very limited capabilities, but we're still calling them agents. They're really incremental steps towards where we'll eventually get, which is agents that can handle very complex tasks that a human would do. And really, it means people giving up a lot of that micro decision-making to AI that's operating fully in the background.
Marcus
It's quite ironic, actually. Zoe Weinberg, venture capital investor, was saying it's ironic to see a term that started out describing human agency being used to talk about its opposite: technology that operates with little to no human oversight. Dan, we were talking before this recording about this quote from Erin Griffith of the New York Times. She says, after AI agents comes agentic AI. How are they different?
Dan Van Dyke
I don't know if I agree that it's what comes after.
Marcus
Oh, interesting. Okay.
Dan Van Dyke
At least according to my definition. And, you know, we've already touched on how subjective and non-uniform those are. The way that I define agentic AI is as an umbrella term that encompasses agents in the true sense of the word and a middle ground. Jacob, you were talking about Tasks; OpenAI has also released Operator, which can browse the Internet on a user's behalf and do things like attempt to book a flight, and Deep Research, which can write a research report by browsing tons of sources. Those are true agents. But there's a middle ground that's sort of above what ChatGPT can do but below the capabilities of a true agent. An example of that would be: I'm building a lot of workflows to assist our research team in gathering content from Feedly, curating it, and writing what are known as research blog posts, which is an internal tool. And although that's tons of large language models strung together, and although there's a high degree of prompting and complexity in this workflow, I wouldn't call it an agent. I would say it fits within the realm of agentic AI. But to your question of what's next: I would say multi-agent workflows are the thing that's next. And what that means is, think Deep Research meets Operator: a report gets written that ends up triggering 10 Operators to go out and accomplish 10 different tasks, all in service of a user's request. It's like starting to build toward an organization, all working in unison towards a common goal.
Jacob Bourne
Yeah, yeah, I 100% agree with that. I think it's really about these different AIs, or different types of AI, with different skills coming together to accomplish more. I think the next step is also AI agents that can anticipate users' needs, so you hardly need to do any prompting: it knows what you're going to need in the future and is already working on it in the background. I think, though, for daily purposes, we're already seeing that these terms get used interchangeably. And so I imagine that, again, the deeper meaning, the technical meaning, probably will get lost eventually.
Marcus
Yeah, yeah. I liked what you said, Dan, about this kind of umbrella term. Dr. Andrew Ng, prominent AI researcher, is saying there's a gray zone, and agentic is an umbrella term encompassing tech that isn't strictly an agent but has agent-like qualities. We talked about some of these agents; at least one, I think. Dan, you mentioned Operator from OpenAI, and maybe Jacob mentioned OpenAI Tasks. Who else do we have? What are some other examples of popular AI agents at the moment?
Jacob Bourne
I mean, there's all kinds from most of the tech giants and leading AI companies. Amazon has its Bedrock Agents through the cloud. Google has its Vertex AI Agent Builder, and also Agentspace, which it just announced now has coding capabilities, so autonomous coding. Dan mentioned Operator from OpenAI. Oracle has a clinical AI agent for healthcare. Nvidia has its Agentic AI Blueprints, which allow organizations to create their own custom agents. There's Microsoft; Salesforce has Agentforce; and the list goes on and on. There are also more industry-specific agentic platforms as well.
Marcus
Are they interoperable, Jacob? Can they speak to each other? I mean, Dan was talking about a multi-agent world. Is that within the umbrella of Google, within the umbrella, the ecosystem, of Amazon? Or do they talk to each other across companies?
Jacob Bourne
Well, I think that's part of the vision, and I think they're working towards interoperability, but I wouldn't say that we're quite there yet.
Dan Van Dyke
Two recent steps have brought us closer, and I guess recent is a stretch for the first one. First, the introduction of MCP. MCP stands for Model Context Protocol, which was released by Anthropic, the creator of Claude. Model Context Protocol is simply a way for agents to be able to access tools: think accessing GitHub repos, or databases, or Zapier for automations. It's a simple, very elegant approach that is becoming sort of the mainstream accepted standard to connect AI to all these other things that exist on the open Internet, or even the local files on a user's computer if they give it access. And then secondly, the new A2A protocol, which was released by Google, aims to complement the capabilities of MCP by allowing agents to communicate with other agents in a common sort of language. So that vision of interoperability is starting to come a little more into focus. But the reality is quite fragmented, as Jacob was painting a picture of, where everybody wants to be the de facto home for agents. I think we're inevitably headed towards consolidation as a provider starts to emerge at the forefront, but for the moment it's just getting increasingly crowded, competitive, and fragmented.
Marcus
Yeah, so a lot of options out there in terms of different agents that you can choose. And OpenAI, the artificial intelligence company, released a platform that lets companies create their own AI bots for completing tasks such as customer service and financial analysis; Belle Lin of the Wall Street Journal is noting this. Dan, we talked before, I think it was last week, and you said to me that part of this conversation, maybe a part that's not being discussed as much, is that AI agents are hard to build. Right? What did you mean by that?
Dan Van Dyke
I mean, they're very easy to build, period. You could spin together a prototype within a couple of hours if you're technical enough, and it'll be really impressive. Once you try to push that into production, to fulfill a real need that you have in an organization, or build something that would be client-facing, is where you start to encounter difficulties. And that's why the eval process is actually the most crucial part in measuring the efficacy of agents. Where a lot of people get hung up is they'll realize that for a particular task, what they really need is 95% accuracy to meet the baseline that they have with people. An AI agent maybe right out of the box will get to 80% accuracy, but that last 15% is actually 80% of the effort. And so what I was describing, with some of the workflows that I've strung together to assist our research team, actually turned into a very protracted process of figuring out evaluations, pushing new iterations out to the research team, having them come back to me and realize, oh yeah, I didn't ask for this feature and it's actually crucial, and then doing that again and again. It's through no fault of the research team; you just don't know what to ask for until you're actually deploying these in the real world and seeing where they fail. So it's a much more difficult process than it looks like on its face to take something from very promising POC to something that's actually in production and starting to create value. Which is not to say it's impossible. In fact, the thing I was describing for the research team, they're really positive about it; it's really useful right now. But I'm already looking into new capabilities that would make it even more useful. So it's definitely a journey, and it's easy to get sucked up in the hype and think that it's going to be fast. It isn't.
Jacob Bourne
Yeah, yeah. Just to add to that, I mean, I think we all know about the issue with chatbots hallucinating, as documented with lots of examples. The risk there is that you have something problematic in a chatbot, an output that's erroneous or problematic in some other way. But when you have AI agents that are potentially making transactions online, if they get stuff wrong, then the stakes are a bit higher. And so I think that makes it difficult on the technical level, in terms of putting in safeguards to reduce the likelihood of that happening, but also just deploying it commercially, knowing that that risk is there.
Marcus
Yeah, yeah. This is not something that happens overnight. And Greg Shoemaker, Deco Group senior VP of ops and AI, had a good quote, saying companies should approach agents less as a tech deployment and more as the development of digital workers that need to be onboarded and trained. Dan, you mentioned something I thought was interesting, to the effect that you have to have some kind of technical understanding of how these things work. I'm wondering if that's part of the problem: that this is hard. OpenAI was even saying that to use its AI agent-building platform, enterprise developers still need to have a comprehensive technical background. So how proficient do you have to be with AI to be able to build one of these agents, and build it right, to your point, Dan?
Dan Van Dyke
Well, I've been covering the AI space for maybe eight or so years, but primarily from a financial services lens, as formerly the head of financial services research within EMARKETER. About two and a half years ago, with the advent of ChatGPT, if I'm getting that timeline right, I started to focus more and more of my workday on AI, building PoCs and applications, and that has transitioned into it becoming my full-time focus. Over the course of that amount of time, say two and a half years, I've gotten to the point where I feel proficient enough that, yes, I could build a PoC; yes, I could do evals that would help get something into production. In fact, I've done those things. But it did take years, and that time is spent figuring out things like how to set up a GitHub account, and the importance of not hard-coding, you know, environment variables into repos that you push into production. All these arcane terms have real-world consequences if you're talking about an application you're building and putting out into the world; otherwise it becomes a mess of spaghetti, quickly attacked by hackers, and you become a cautionary tale. So I think it is well put that there is a learning curve that you still have to overcome, but that learning curve is rapidly dropping as tools like Claude 3.7 become more effective helpers. That's led to the emergence of something called vibe coding, which is somebody like me just describing: here's what I want. I'm proficient enough that I can say, here's the platform I want you to use, here's what I want you to avoid. I can guide it every now and then, but a lot of it is: I get errors, and then I'm saying, help me fix this error, and I'm going to feed you documentation, which helps. But I don't want to overstate it. It's quite often frustrating, mind-numbing work, and hopefully increasingly less so in time.
Marcus
But it's a great example of how someone in-house can learn it. You know, you don't have to hire externally someone who studied it, got a PhD in IT, and has been at a company for 20 years. Someone in-house has an understanding of internal processes and what the company needs, and a relationship with the people at the company as well. And so there's an argument to be made that maybe that is better, perhaps. Jacob?
Jacob Bourne
Right. Yeah. Just a note, too, that things are changing. Just yesterday, Google Cloud announced its new no-code Agent Designer, which launched specifically to tackle this problem of how non-technical people can take advantage of developing their own agents. And so I think this is something we're going to see more of to meet that need.
Marcus
So let's end with this, gents: AI agent adoption. It seems as though it's been extremely limited so far. I have one data point, from Mr. Coshow, who I mentioned earlier, from Gartner. He was saying just 6% of 3,400 people in a recent Gartner webinar on the subject said that their company had deployed agents. Just 6%. There's an argument to be made: look, you've deployed one. There's also an argument to be made: yes, but did you deploy it well? How advanced is it? Dan, you were saying you can do it, but they're hard to do right. Dan, I'll start with you for this one. What do you see the next couple of months looking like for agent deployment? I'd say, well, we're only in April, so maybe I should just say 2025, because in a few months, as I was saying before the show, it's Christmas.
Dan Van Dyke
I think by the end of the year, you'll probably get into the low tens, up to 20%, adoption if you rerun that same study, if I had to guess.
Marcus
Yeah.
Dan Van Dyke
And that will be as a result of more companies releasing agentic platforms, so that the developer workforce, who are eager to build these tools, can go out and build on permissioned, secure platforms that they're already using. Additionally, you'll start to see a trickle of folks internally. I'm thinking about very advanced AI users that we have within EMARKETER, like Henry Powderly, for instance, going out and starting to pick up skills and build their own tools. So I think we'll see a convergence as both groups start to build more agents, and I'm excited to see that continue to grow into 2026.
Jacob Bourne
Yeah, I agree with Dan's forecast there. And Marcus, I think that 6% number does seem low, especially since there was other data that indicated adoption was higher. I think the issue here goes back to what we were saying about what constitutes an AI agent. The lower number points to the fact that adoption of true agents is very low, but there is definitely a lot more adoption of AI assistants that are getting called agents, which pushes the adoption data up a bit. So we're going to continue to see this question of, okay, are you actually using an agent or not? But as the technology gets better and we achieve a higher level of automation, I think it'll become more clear over time.
Dan Van Dyke
Yeah.
Marcus
Dan mentioned Henry. Henry Powderly was on with Gadjo Sevilla; they both talked about using AI at work in a two-part episode, or series if you will. I think those episodes came out March 31 and April 4, so check those out. That's all we have time for for today's episode, unfortunately. Thank you so much to my guests for hanging out with me today. Thank you first to Jacob.
Jacob Bourne
Thanks for having me today, Marcus.
Marcus
Yes, sir. Thank you. Of course to Dan.
Dan Van Dyke
Thank you.
Marcus
Absolutely. Thank you to the whole editing crew: Victoria, John, Lance, and Danny. Stuart runs the team, and Sophie does our social media. Thanks to everyone for listening in to Behind the Numbers, an EMARKETER video podcast made possible by Connective Media by United Airlines. We'll be back on Monday. Happiest of weekends.
Podcast: Behind the Numbers: an EMARKETER Podcast
Host: Marcus
Guests: Dan Van Dyke (VP of Gen AI, New York) and Jacob Bourne (Technology Analyst, California)
Release Date: April 18, 2025
In the April 18, 2025 episode of Behind the Numbers, host Marcus engages with EMARKETER’s VP of Gen AI, Dan Van Dyke, and technology analyst Jacob Bourne to delve into the future of Artificial Intelligence (AI). The discussion centers around the evolving landscape of AI agents, their definitions, applications, challenges, and the anticipated trajectory of their adoption in various industries.
The episode kicks off with a light-hearted exchange of “facts of the day” between Dan and Jacob:
Dan Van Dyke shares an intriguing fact about sloths:
"[Dan, 01:25]"
"Sloths will only use the bathroom once a week, excreting one-third of their body weight, and they risk their lives by coming down from trees to bury their waste."
Jacob Bourne responds humorously:
"[Jacob, 01:53]"
"Interesting. Slow metabolism, I guess."
Marcus adds his own fact about national flags, noting the striking similarities between various countries’ flags and the lack of creativity in their designs. This includes examples like Chad and Romania’s nearly identical flags and the similarities between Indonesia and Monaco’s flags.
The segment concludes with laughter over the competitive nature of sharing facts, highlighting the engaging and personable dynamic between the hosts and guests.
The conversation transitions to the core topic: AI agents. Marcus references Isabelle Bousquette's definition from the Wall Street Journal, highlighting that AI agents are systems capable of performing actions on behalf of humans, such as purchasing groceries or making reservations. However, the definition remains vague in some contexts.
Dan Van Dyke elaborates on his understanding:
"[Dan, 04:52]"
"An AI agent is a tool that can take action based on a predefined task with autonomy and use tools. There's a spectrum from basic chatbots like ChatGPT to more robust, classical agents."
Jacob Bourne adds to this by distinguishing between general AI chatbots and AI agents:
"[Jacob, 05:51]"
"It's about the level of autonomy. While chatbots require prompting for every task, AI agents can act independently towards a goal without step-by-step instructions."
Marcus brings in additional perspectives to the discussion, citing Tom Coshow, senior analyst at Gartner, and Robert Blumofe, CTO at Akamai Technologies, who view AI agents as either assistive or autonomous based on their decision-making capabilities and the need for human direction.
Dan Van Dyke acknowledges the evolving definitions:
"[Dan, 07:25]"
"The term 'agent' is murky and evolving. For instance, ChatGPT’s ability to search the web or generate images blurs the lines of what defines an agent."
Jacob Bourne likens the situation to autonomous vehicles, suggesting that unlike the well-defined levels of autonomy in cars, AI agents lack a standardized classification system:
"[Jacob, 08:47]"
"There's a disconnect between the limitless vision of AI and the current technological capabilities. Terms like 'agent' are incremental steps towards more complex, human-like AI functionalities."
The guests discuss various existing AI agents across major tech companies:
Amazon: Bedrock agents through the cloud.
Google: Vertex AI Agent Builder and Agentspace, with autonomous coding capabilities.
OpenAI: Operator for browsing and booking tasks.
Oracle: Clinical AI agent for healthcare.
Nvidia: Agentic AI Blueprints for custom agent creation.
Salesforce: Agentforce, among others.
Jacob Bourne emphasizes the diversity:
"[Jacob, 13:49]"
"From Amazon to Google, and beyond, numerous platforms are emerging. Industry-specific agents are also gaining traction."
Addressing the interoperability of these agents, the discussion highlights efforts to create standardized protocols:
Dan Van Dyke introduces the Model Context Protocol (MCP) by Anthropic and the A2A Protocol by Google:
"[Dan, 14:55]"
"MCP allows agents to access tools like GitHub or Zapier, while the A2A Protocol enables agents to communicate in a common language. However, the ecosystem remains fragmented with ongoing consolidation anticipated."
Jacob Bourne concurs:
"[Jacob, 14:37]"
"Interoperability is part of the vision, but it's not fully realized yet. The market is crowded and competitive."
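The division of labor described in this section (MCP connecting an agent to tools, A2A letting agents hand work to each other) can be sketched roughly in code. This is a hypothetical illustration only: the class and method names below are invented for the sketch and are not the real MCP or A2A APIs.

```python
# Hypothetical sketch of the two interoperability layers discussed above:
# a tool registry an agent can call into (the MCP role) and a plain
# message envelope agents use to delegate work to each other (the A2A role).

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ToolRegistry:
    """Stands in for an MCP server: exposes named tools to an agent."""
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, arg: str) -> str:
        return self.tools[name](arg)


@dataclass
class AgentMessage:
    """Stands in for an A2A envelope: one agent delegating a task to another."""
    sender: str
    recipient: str
    task: str


def research_agent(msg: AgentMessage, registry: ToolRegistry) -> str:
    # The "research" agent uses a registered tool, then reports back.
    summary = registry.call("search", msg.task)
    return f"{msg.recipient} -> {msg.sender}: {summary}"


registry = ToolRegistry()
registry.register("search", lambda q: f"3 sources found for '{q}'")

reply = research_agent(
    AgentMessage(sender="planner", recipient="researcher", task="agent adoption"),
    registry,
)
print(reply)
```

In the real protocols, the tool registry would live behind an MCP server and the message would be a structured A2A task object, but the separation of concerns is the same: one standard for reaching tools, another for agent-to-agent communication.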
The conversation shifts to the practical challenges of developing and deploying AI agents:
Dan Van Dyke discusses the complexity beyond prototyping:
"[Dan, 17:07]"
"Building a proof of concept is easy, but deploying agents to fulfill real organizational needs requires rigorous evaluation. Achieving high accuracy and handling unforeseen tasks involves a protracted process of iteration and testing."
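The evaluation loop described in Dan's quote can be pictured as a simple harness: score the agent against labeled cases and compare the pass rate to the human baseline before promoting anything to production. This is a minimal, hypothetical sketch; the agent here is a stand-in function, not a real model call.

```python
# Minimal eval-harness sketch: measure an agent's accuracy on labeled
# cases and check it against the human baseline it must match.

from typing import Callable, List, Tuple

HUMAN_BASELINE = 0.95  # accuracy the agent must reach to replace the manual process


def run_evals(agent: Callable[[str], str], cases: List[Tuple[str, str]]) -> float:
    """Return the fraction of labeled cases the agent answers correctly."""
    passed = sum(1 for prompt, expected in cases if agent(prompt) == expected)
    return passed / len(cases)


# Toy agent and eval set, purely for illustration.
toy_agent = lambda prompt: prompt.upper()
eval_cases = [("ok", "OK"), ("ship", "SHIP"), ("fail", "nope"), ("go", "GO")]

accuracy = run_evals(toy_agent, eval_cases)
print(f"accuracy={accuracy:.2f}, ready={accuracy >= HUMAN_BASELINE}")
```

With a real agent, the eval set would be the team's curated cases, and each failure feeds the next iteration of the protracted loop Dan describes.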
Jacob Bourne highlights the heightened risks:
"[Jacob, 19:16]"
"AI agents that perform transactions carry higher stakes. Ensuring accuracy to prevent errors is critical, making deployment more challenging compared to simple chatbots."
Marcus introduces Greg Shoemaker’s perspective on treating AI agents as digital workers:
"[Marcus, 19:58]"
"Companies should approach agents as digital workers that need onboarding and training rather than mere tech deployments."
Dan Van Dyke reflects on the technical proficiency required:
"[Dan, 20:52]"
"Building and maintaining AI agents demands significant technical expertise. While the learning curve is decreasing with better tools, deploying reliable agents is still a challenging and time-consuming process."
Jacob Bourne notes advancements like Google Cloud’s no-code Agent Designer:
"[Jacob, 23:25]"
"Tools are emerging to lower the technical barriers, allowing non-technical users to develop their own agents."
The final segment assesses the current adoption of AI agents and forecasts future trends:
Marcus cites a Gartner webinar statistic where only 6% of 3,400 participants reported deploying AI agents:
"[Marcus, 23:49]"
"This low adoption rate might reflect the nascent stage of true agent deployment."
Dan Van Dyke predicts gradual increases:
"[Dan, 24:48]"
"By the end of 2025, adoption could rise to 10-20% as more platforms emerge and internal experts begin building agents."
Jacob Bourne adds nuance to the statistics:
"[Jacob, 25:46]"
"True agent adoption is low, but AI assistants labeled as agents are more common. As technology improves, clearer distinctions and higher adoption rates are expected."
The episode concludes with Marcus thanking Dan and Jacob for their insights and referencing related episodes featuring Henry Powderly and Gadjo Sevilla discussing AI at work. The guests express optimism about the growth and maturation of AI agents, anticipating increased adoption and technological advancements in the coming years.
Notable Quotes:
Dan Van Dyke [01:25]:
"Sloths will only use the bathroom once a week, excreting one-third of their body weight, and they risk their lives by coming down from trees to bury their waste."
Dan Van Dyke [04:52]:
"An AI agent is a tool that can take action based on a predefined task with autonomy and use tools."
Jacob Bourne [05:51]:
"It's about the level of autonomy. AI agents can act independently towards a goal without step-by-step instructions."
Dan Van Dyke [07:25]:
"The term 'agent' is murky and evolving. For instance, ChatGPT’s ability to search the web or generate images blurs the lines of what defines an agent."
Jacob Bourne [08:47]:
"There's a disconnect between the limitless vision of AI and the current technological capabilities."
Dan Van Dyke [14:55]:
"MCP allows agents to access tools like GitHub or Zapier, while the A2A Protocol enables agents to communicate in a common language."
Dan Van Dyke [17:07]:
"Deploying AI agents to fulfill real organizational needs requires rigorous evaluation and a protracted process of iteration and testing."