
B
Hello and welcome to the NVIDIA AI Podcast. I'm your host, Noah Kravitz. Agentic AI is on the tips of everyone's tongues right now, it seems. But what is it? What makes agentic AI so exciting? And what should AI leaders, like CIOs and IT execs, be thinking about when designing an agentic AI system for an enterprise? Here to break it down for us live from GTC 2025 is Bartley Richardson. Bartley is Senior Director of Engineering and AI Infrastructure here at NVIDIA, where he leads agentic AI and cybersecurity AI. Previously, Bartley was a technical lead on multiple DARPA projects. He holds a PhD in Computer Science and Engineering with a focus on AI, and he's here right now. Bartley, thanks for taking time out of a busy GTC week to join the podcast.
A
Thanks, Noah. I really appreciate it. That's a great intro, by the way. I'm going to bring you along with me everywhere I go and you can do that intro every time.
B
Anything I can do for the cause. Happy to do it. So maybe we can start with, and I'll put you on the spot a bit, you don't have to define things, but talk about agentic AI: what it is, and why it's so exciting in this context of enterprise leaders.
A
Yeah, I feel like what we're really good at in the tech industry, and I'll include us at NVIDIA in that, and everybody, is making things very complicated. That's one of our primary concentrations, right? And when I talk with people about agents and agentic AI, the word I really want to use is automation. But automation, you just get half of it out of your mouth and people fall asleep, right? So we call it agents and agentic AI. And really what it is, it is that next level of automation. Think about the evolution of automation and how we were doing things manually, even if you go back to factories, right? These, I don't want to say mundane, but I obviously just said it, these kind of everyday repeatable tasks, we come up with better ways to do them. For example, I grew up on a farm in the middle of Ohio and we would dig post holes ourselves with these terrible tools. We have technology that does this for us now, right? Agents are very similar. It's just that instead of working in the dirt in the middle of Ohio, they're working on massive petabyte-scale data silos or information repositories. They take that data, churn on it a little bit, do the mundane task, and then instead of returning data that the human has to go back through and look at, they give it context, synthesize it together with other types of data, and make it a little more actionable, a little easier for the human to digest.
B
Right. Can we talk about reasoning?
A
Yeah.
B
Is there agentic reasoning? Is it a thing now?
A
It's a thing, right? I think we're going to make it a thing, right? It's definitely a thing. Reasoning is another one of these terms that gets bandied about quite a bit. And really what you have is a different type of model, almost, where you have, I'll say, a traditional LLM or an old-school LLM. I don't know what that means, two years ago, five years ago. You have these old-school LLMs and they're really good at this kind of token prediction. They can do the thing where, okay, I have a sentence, complete the sentence, and all of these types of things, image tasks too. Reasoning models are kind of the next level, and they've been trained and tuned in a very specific way to think. Almost like thinking out loud. If you look at their text and look at their structure, when you've given them a task, they go through: oh, I could do this, I could do this, I could do this. And it's kind of like when you're brainstorming with your colleagues or even your family, right? You're like, oh, we could do this, we could do this other thing. Reasoning models have that type of feel to them, right? And so you put them in a place in a system where you're like, hey, I don't really know exactly what I want to do, but I kind of want to make a plan, here are my loose guidelines. Reasoning models can very easily explore that space for you and get it to: here are some options.
B
So maybe we can dive into the technology a little bit and you can kind of go through some of the components involved. There's different Nvidia technologies here and just talk a little bit about each one. What they are, what they do, rather, why they're important.
A
Yeah. And I think the way you set that up is really great, because it is this collection of technologies, right? When we talk about an agentic system, it's not this, oh, I have this completely 100% brand-new thing that we're trying to do.
B
Get this agent name.
A
Exactly right, exactly. You know, you tell people it takes years to be an overnight success, right? And so it takes years to do what we're doing in agents and agentic systems right now. And it starts with something like data ingest, right? Data ingest and being able to retrieve that information. So if you look at things like NeMo Retriever, right, that's a great place to start. We have a capability there. Now, NeMo Retriever doesn't just ingest text, right, like unstructured text. We can ingest multimodal documents. So everyone's favorite document type, the PDF, right? Everyone loves it. Great document, you can do everything with it.
B
Yeah.
A
If you had told me 10 years ago that I'd still be working in PDFs, right? I don't know. But anyway, here we are. And a PDF can have all of these different modalities in it. It can have pictures, it can have structured images, which are like tables and graphs and charts, it can have unstructured images, and all of these types of things, and extracting that information and keeping the context matters. I'll give you a really simple example. Let's say you have a graph, a bar graph, a really simple bar graph. If you were to just give that to a really basic computer vision parsing model, it would come back and be like, yep, I understand it, I have five rectangles.
B
Okay.
A
It's like, no, you don't. Because just having five rectangles doesn't make that interesting.
B
No.
A
Right. So it's the relationship of the five rectangles to each other, it's the caption that goes underneath it. Is there text somewhere else in this document, not even on that page, that explains that chart? So when we talk about the Retriever and the ingestion process, we have all these different models, these NIMs, that go into the pipeline, and then we stitch them together to make this type of parsing possible. We have models that detect bounding boxes of images. We have models that work specifically on text. We have models that look at captioning, and we put all this together. There are three, four NIMs that go into this one process. And what I love about Retriever, and you're going to see the theme, right, is that it's built for enterprise quality. If you look at PDFs, if you're the average person, you have a couple, you're doing research or something like that. You have 10 PDFs, you upload them, you don't care how long it takes. It takes five minutes, whatever, you come back to it. If you're an enterprise, right, you know how many PDFs we generate at NVIDIA. I think we could run the world on the PDFs that companies generate. And you need speed, you need consistency, and you need accuracy. So if you look at something like Retriever, we get 15 times the throughput of other leading systems. With a really complicated PDF, we can go 10 pages per second on a single GPU instance and extract all of this. It's really cool. And 50% fewer errors compared to other systems. So if I think about agents, it all starts with that part: getting the data from a human source into an embedded kind of source.
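The stitching Bartley describes can be sketched in a few lines. This is purely illustrative, not the actual NeMo Retriever API: each "model" stage is a stub, and the interesting part is the context-linking pass that attaches same-page captions and text to a chart so it is more than "five rectangles".

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the real pipeline runs separate NIM models for
# bounding-box detection, text extraction, and captioning. Here each region
# arrives pre-typed so the stitching logic is visible.

@dataclass
class Element:
    kind: str                 # "text", "chart", or "caption"
    content: str
    page: int
    context: list = field(default_factory=list)   # linked explanatory text

def ingest(pages):
    """pages: list of pages, each a list of (kind, content) regions."""
    elements = [
        Element(kind, content, pageno)
        for pageno, regions in enumerate(pages)
        for kind, content in regions
    ]
    # Context-linking pass: a bare chart is useless on its own, so attach
    # the same-page captions and text that might explain it.
    for el in elements:
        if el.kind == "chart":
            el.context = [
                e.content for e in elements
                if e.page == el.page and e.kind in ("caption", "text")
            ]
    return elements

doc = [
    [("text", "Q3 revenue grew 15%."),
     ("chart", "bar chart: five rectangles"),
     ("caption", "Figure 1: revenue by region")],
]
chart = ingest(doc)[1]
```

After ingestion, `chart.context` carries both the caption and the nearby text, which is the information an embedding model would need to make the chart retrievable.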
B
Right, right. Should we talk about Agent Ops tools?
A
Yeah, Agent Ops tools. So I like the progression you're going through right here. I've got my data, right? I've got it in there, we're pretty good at that. Agent Ops tools are another way of saying this is kind of a flywheel, right? So you've got your system, and we'll talk a little more about the system in a bit, and you might have your models. And what Agent Ops tools are really good at is fine-tuning, honing, and making that system even more efficient. So, for example, through successive iterations of fine-tuning with Agent Ops tools, you can get a 10x reduction in your model size by fine-tuning it down, distilling it a little bit, which obviously translates to either speed, or it translates to energy, or it translates to money. These are currencies you can exchange depending on what you care about. And almost a 4% increase in accuracy by doing this fine-tuning. And the really cool thing about Agent Ops tools is they just kind of sit there, right? You hook them in: you've got your inputs, you've got your outputs, you hook them into your outputs, they feed back into your inputs, and they'll prompt the human every once in a while, like, hey, what do you think about this? And you know how you go to any tool and you give it the thumbs up or the thumbs down, right? There's that. But what's even better is you can give it freeform text, not just did I like this or did I not like this. You give feedback: this wasn't quite right, whatever. It keeps that context and then uses that to steer in a different direction. And so it's useful in a lot of ways, model size reduction and all that. But then, once you've ingested all this data and you're using this model, it's a way to push or steer this model, or suite of models, a little bit towards your particular use.
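The flywheel loop described above can be sketched minimally. All names here are hypothetical, not an NVIDIA API: outputs are shown to a human, thumbs and freeform notes are stored, and the accumulated negative feedback becomes steering hints for the next run (or, later, fine-tuning examples).

```python
# Hypothetical sketch of the feedback flywheel; not a real Agent Ops API.

class FeedbackFlywheel:
    def __init__(self):
        self.records = []          # (output, thumbs_up, freeform note)

    def record(self, output, thumbs_up, note=""):
        """Store a thumbs-up/down rating plus optional freeform feedback."""
        self.records.append((output, thumbs_up, note))

    def steering_context(self, last_n=5):
        """Freeform notes from recent negative feedback, fed back into the
        inputs as steering hints for the next generation."""
        notes = [note for _, up, note in self.records[-last_n:]
                 if not up and note]
        return "\n".join(f"Avoid: {n}" for n in notes)

fw = FeedbackFlywheel()
fw.record("Summary v1", thumbs_up=False, note="too high-level")
fw.record("Summary v2", thumbs_up=True)
hints = fw.steering_context()
```

The design point is that the freeform note ("too high-level") carries far more steering signal than the binary thumb alone, which is exactly the distinction made in the conversation.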
B
If I'm jumping ahead to something that you're going to cover, tell me and I'll back off. But you mentioned accuracy a couple of times. Accuracy is obviously a big, big thing. Do the reasoning models themselves improve accuracy? And I know it's not like there's one kind of reasoning model that fits everything, but I'm just wondering, since you've mentioned it, is that part of what the reasoning model is meant to do, or is it just kind of a happy side effect?
A
It can, in a system, right? And certainly if you look at reasoning models, if we talk about NVIDIA's model, right, Llama Nemotron Reason Super. I might have messed that up. Llama Nemotron Super.
B
Nemotron Super. With reasoning. Super.
A
With reasoning. Yes, exactly. If you look at that model specifically, it does have higher accuracy than other reasoning models, right? Where I think, or where we see, this being advantageous is as part of a larger agentic system where I've got this reasoning model, and there are good tasks for reasoning models, and, I'm just going to say it, there are bad tasks for reasoning models, right? Just like there are good uses for a fork and there are really bad uses for a fork. It's the same for a reasoning model. And with our reasoning model, with Llama Nemotron Reason, one of the great things about it is you can turn reasoning on or off. So you've got a single model that operates as a reasoning model, or it can operate as just a regular, non-reasoning model, right? And so when we talk about accuracy, one of the really nice things about reasoning models is the ability to iterate with them. I'll give you an example. You're in a deep researcher type environment and you're like, hey, this is the topic I want. I might upload a few PDFs, right? I give it the structure: I want it to have an intro, a conclusion, keep it pretty light, you give it all of that. A reasoning model can go and say, okay, I'm going to really quickly generate a lot of ideas, a lot of questions that I want to pursue, and I'm going to present them to you, the human. Like: this is what I'm going to do, here are seven tasks or something like that. Something that can stick in the human's brain, right? Not a hundred things. And the human can go through and be like, oh, that one looks good. That one, I want you to tweak it slightly, right? I want it about this topic, not that topic. That one looks like it's going to be too high-level, can you take it more towards a 12th-grade level?
And really, in my mind, it's the improved accuracy that we do see a little bit in the reasoning models, but it's also the human-model connection. It becomes a coworker. These agentic systems become the coworker, where you can guide them, fine-tune them, talk with them, and interact with them, so you end up with the report that you want once it has done all of the deep researching. Now you're working in minutes. And, I'm going to date myself if I take us way back, but if you remember the early Internet, we used to talk about things like load times, the user's perceived load time, and we would do things to improve that. We now have this ability with the reasoning model: you can interact with it early, so when you get to the final or draft report, you haven't spent five minutes on something you totally...
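The on/off toggle mentioned above can be sketched as a prompt-construction helper. The released Llama Nemotron models document switching modes via a system prompt; the exact strings used here ("detailed thinking on"/"detailed thinking off") are an assumption to verify against the model card for your specific model.

```python
# Sketch of the reasoning on/off toggle; check the model card for the
# exact system-prompt strings your Nemotron variant expects.

def build_messages(user_prompt, reasoning=True):
    """Build a chat message list, toggling reasoning via the system prompt."""
    mode = "detailed thinking on" if reasoning else "detailed thinking off"
    return [
        {"role": "system", "content": mode},
        {"role": "user", "content": user_prompt},
    ]

# Reasoning on for open-ended planning, off for a cheap direct answer.
plan_msgs = build_messages("Draft 7 research questions about agentic AI.")
fact_msgs = build_messages("What year was GTC 2025?", reasoning=False)
```

The point of the single-model toggle is operational: the planning step above benefits from exploratory thinking, while the factual lookup does not need to pay the extra tokens.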
B
Right.
A
Didn't want. So not only is the actual measured accuracy up, but so is the perceived accuracy from the human, the perceived efficiency, because you were involved in the process early on.
B
Yeah, fantastic. I feel like that's a great segue into talking about the enterprise: enterprise IT leaders, whoever's designing and developing the AI applications. What are some of the things that should be considered by a tech leader or CIO when they're designing an agentic...
A
AI system in particular? We have a couple hours, right? Yeah. I think the biggest one is this: when I talk with anybody in the enterprise, they ask, okay, so what is the agentic system we should use? And I say, I'm going to answer your question, but let me ask you a parallel question: tell me, what's the one piece of software you use in your IT system? And they're like, what do you mean, one piece of software? We have 15 different vendors and we do all this kind of stuff. I'm like, correct, right? So you have to think about agentic systems the same way. You're going to get some from your vendors, whoever that is. You're going to have an application, you're going to have your CRM, you're going to have all these different things for your developers, and they're going to put their own agents in those, and you're going to work with them. Some are going to be homegrown by you, because everyone's moving really fast in this space, and you have your own enterprise data with your own sources. If I can give a very specific NVIDIA example that I tell everyone: we have this thing called NVBugs Pro AI Search, and I like it for two reasons. One, it uses AI agents, and two, the name just rolls right off the tongue.
B
Totally right?
A
At least when you say it. NVBugs Pro AI Search, that one just sticks in my brain, unlike the order of Super and Reason, right? But the thing I like about this is that our great IT department, who are much more engineers than the average IT department, created this. They created it very early on in LangChain's existence, then they modified it a little bit. Now fast forward six months, and we want to use it not only within NVBugs Pro, but we want to hook it up to coding repos and forums and CRM systems and all of that. And what that means is that now you're in this similar situation where I might have various different vendors plus stuff that I've grown myself. So when I talk with enterprises, I say: what you're really looking for is to look at it in that same context. It's not, I'm going to buy one piece of software, or I'm going to write one piece of software. You're going to have all these agents working together. And the trick is, how do you let them all come together, mesh together in a somewhat seamless way for your employees? So when I log into our systems, or if you log into our systems, it's context-aware. It gives me the information that I need and helps me do my job. Look at it in the traditional IT deployment sense.
B
Right, right. How does that differ from what enterprise IT departments and developers were doing pre-LLM, pre-agentic AI? Because to me, being a little bit on the outside thinking about it, it sounds sort of like an app store: you've got different vendors, CRM systems have their own apps, everything's got its own place where you can go grab apps. Are we moving towards that for agents?
A
You know, I mean, I think so, yeah. I mean there definitely will be.
B
I know they're out there already.
A
Yeah.
B
You know.
A
Yeah, right. Everyone has their own, and that's fine. The difference, I'd say, is that when we had all these apps in, I guess, the before times, we were in data silos.
B
Yeah.
A
And now it could be possible we're in these agentic silos, right? A little bit, where I have data here and I interact with that agent to talk with it. The big difference, though, is we're in a situation now where we're not always going to have API-to-API access. I don't necessarily have to have a developer script to encode that. I can have a CRM agent over here talking to a Confluence wiki agent over there, and they are communicating not via an API that a developer set up; they're communicating via human language, right? With each other.
B
Right.
A
And that kind of eases the connection points, but it raises some things you have to look at, right? How are we monitoring these systems? How are we observing these systems? That is one of the key differences: you're not just going to have API-to-API specs. You're going to have some of that, and then you're going to have these agents that are just communicating among themselves on your behalf.
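The agent-to-agent exchange described here can be sketched as two stubbed agents passing free text back and forth, with the transcript kept for the monitoring concern Bartley raises. Everything here is illustrative; real agents would wrap LLM calls rather than fixed replies.

```python
# Minimal sketch: agents talk in natural language, not a fixed API schema,
# and every message is logged so the exchange stays observable.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond    # text -> text; stubbed here

def converse(a, b, opening, turns=2):
    """Alternate messages between two agents, returning the message log."""
    log, msg, agents = [], opening, [a, b]
    for i in range(turns):
        responder = agents[i % 2]
        msg = responder.respond(msg)
        log.append((responder.name, msg))
    return log

crm = Agent("crm", lambda t: "Top ticket this week: PDF ingest fails on charts.")
wiki = Agent("wiki", lambda t: "Filed under Known Issues; runbook linked.")

log = converse(crm, wiki, "What are customers reporting?")
```

Because the "protocol" is just text, adding a third agent from a different vendor needs no schema change; the cost, as noted in the conversation, is that monitoring the log becomes the integration surface.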
B
Right. What kind of complexities does that raise, or perhaps not raise, when it comes to data security? Obviously this all runs on data, like you said at the beginning, right? Ingest the data. So if you've got different agents, and they're maybe from different vendors, how do they talk to each other in a secure manner?
A
It's not 100% solved yet, right? But where it's headed is this idea that we call context-based security. If you go back in history, security really started with, let's say, firewalls, right? And what was the motion? I'm going to put everything in this circle and I'm going to put my hands around it, and there, I secured it. And then we did this thing where, I don't know if you remember it, we moved to the cloud, or I think we did, right? We moved to the cloud and then it was like, oh, I can't put my hands around it anymore. Now I'm doing application-based security, because I have pieces everywhere. And now we're in this motion where there's cloud, there's on-prem, we still have all of that. But when I am accessing a system, versus maybe our CFO accessing that same system, we are looking at it in a different context, right? And the information we want out of it is different. So we're moving to this context-based type of security, where you not only have to understand the person and the credentials and do all that stuff that you are already doing, or supposedly doing, right? RBAC should be there. If it's not there, we have a more basic problem.
B
Right.
A
But you have to look at the context in which the question is being asked. Look at the things around it, look at what pieces of information are coming with it. Do pieces of security analysis before that question is answered, and then do an analysis before you return the information to the user, looking at that context. So I don't want to say it complicates things, but it does add maybe 10% new stuff that we weren't doing before on top of roughly 90% regular security practice.
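The two-layer check described here, credentials first, then context, can be sketched as a small policy function. The roles, resources, and context rules below are invented placeholders, not any real security product's model.

```python
# Hedged sketch of context-based security: step 1 is classic RBAC
# ("may this role see this resource at all"), step 2 looks at the
# context around the request before anything is returned.

ROLE_ACCESS = {
    "cfo": {"financials", "forecasts"},
    "engineer": {"bug_reports"},
}

def allow(role, resource, context):
    # Step 1: plain role-based access control.
    if resource not in ROLE_ACCESS.get(role, set()):
        return False
    # Step 2: the context of the question, not just the credentials.
    if context.get("destination") == "external" and resource == "forecasts":
        return False
    if context.get("bulk_export") and not context.get("approved"):
        return False
    return True

ok = allow("cfo", "forecasts", {"destination": "internal"})
blocked = allow("cfo", "forecasts", {"destination": "external"})
```

The same credentialed user passes one request and fails the other; only the surrounding context differs, which is the "10% new stuff" layered on top of the existing 90%.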
B
I'm speaking with Bartley Richardson. Bartley is the Senior Director of Engineering and AI Infrastructure at NVIDIA, and we've been talking agents, agentic AI; a little bit of cybersecurity got snuck in there, but that's Bartley's domain. He leads agentic AI and cybersecurity AI here at NVIDIA, and we're talking about agentic AI and the enterprise. To shift gears a little bit and talk about business needs when designing one of these systems: are there best practices, or things that you've seen work well, when it comes to making sure that you're designing with the business care-abouts in mind, and then going back, rechecking, and keeping that North Star?
A
Well, I think, yeah, some of it is what we've already talked about, right? Building on these strong foundations of not just tooling and ingest, but models. There's all of that, obviously. One thing that we haven't talked about explicitly yet is this notion of traceability and observability and profiling.
B
Right. You mentioned briefly.
A
Yeah. And you have to imagine, going back to this kind of distributed system: there are all these agents, they're connected with other things, they're talking in various modalities faster than a human can understand, from different vendors and even different agentic framework providers, right? Someone in your business wrote this thing in LangChain, and someone wrote this thing in CrewAI, and someone wrote this other thing. There are all these different pieces.
B
Yeah.
A
So how do you have a holistic traceability and observability platform across all that? That becomes a little challenging, and it's why we made this new thing called AgentIQ. And it's not an agentic framework. There are plenty of those. I haven't checked; we've been talking for a few minutes, so there are probably 10 more that popped up while we were talking, right? We don't need another one of those.
B
Are frameworks the new prompt engineer?
A
Yes, exactly. But it's not one of those. Rather, it's a really simple, as-much-as-necessary, as-little-as-possible way to get all of these frameworks and tools and everything to work together and be observable from the same point.
B
Right.
A
Being a CS person and coming from an engineering background, the way I tell people is: again, we're really good at complicating things. We've got agents and tools and all these things. I'm like, no: everything's a function call. And so what AgentIQ does is it lets you use the frameworks that you were using, and you still develop in those frameworks. Like I was saying, you probably already have stuff; don't rewrite that code. Let's just hook it up to the other things that you have, but let's do that in a way where we develop everything to a function call. And that lets you say, oh, I've got this agent pipeline here and now I want to add a capability to it. I can add it on, or I can wrap it in something else, so now I have an agent inside of an agent. It allows this nesting kind of capability. And what it gets you is this really cool traceability.
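The "everything is a function call" idea can be sketched in plain Python: wrap each framework's agent or tool as a callable, compose callables into pipelines, and a whole pipeline becomes just another callable that can be nested inside a bigger one. AgentIQ's real registration API will differ; these wrappers are illustrative.

```python
# Illustrative sketch of agents-as-function-calls and nesting; not the
# AgentIQ API. Every component, from any framework, reduces to str -> str.

def as_function(component):
    """Wrap any framework's agent/tool so the pipeline sees a plain callable."""
    return lambda query: component(query)

def make_pipeline(*fns):
    """Chain callables; the result is itself a callable (i.e., an agent)."""
    def run(query):
        for fn in fns:
            query = fn(query)
        return query
    return run

search_tool = as_function(lambda q: q + " | searched")
summarizer  = as_function(lambda q: q + " | summarized")

inner_agent = make_pipeline(search_tool, summarizer)        # an agent...
outer_agent = make_pipeline(as_function(inner_agent),       # ...inside an agent
                            as_function(lambda q: q + " | reviewed"))

result = outer_agent("report on PDFs")
```

Because the inner agent has the same shape as any tool, wrapping or rearranging components never requires rewriting them, which is the point being made about reusing existing framework code.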
B
Yeah.
A
So you can go into every tool call, every LLM call, every chain of tool calls. You can look at the input tokens, the output tokens, the time it took to do the tool call, the sequence of actions of the tool call. And we have customers that are already seeing this: they look at the timing chart and optimize their tool-calling chains, and they get a 15x speedup through their pipeline, right? Or they'll get a 5x improvement in accuracy by moving things around. And that type of information is one of the reasons that something like AgentIQ exists.
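The per-call records described here can be sketched as a tracing decorator: each tool call logs its name, rough token counts, and wall-clock duration, producing the raw material for a timing chart. Counting tokens by whitespace split is a deliberate placeholder for a real tokenizer.

```python
import time
from functools import wraps

# Illustrative per-call tracing; real observability tooling would export
# these records rather than append to a module-level list.

TRACE = []

def traced(name):
    def decorate(fn):
        @wraps(fn)
        def wrapper(text):
            start = time.perf_counter()
            out = fn(text)
            TRACE.append({
                "tool": name,
                "input_tokens": len(text.split()),    # placeholder tokenizer
                "output_tokens": len(out.split()),
                "seconds": time.perf_counter() - start,
            })
            return out
        return wrapper
    return decorate

@traced("retrieve")
def retrieve(q):
    return q + " plus retrieved context"

@traced("answer")
def answer(q):
    return "final answer"

answer(retrieve("what changed in Q3"))
```

Sorting `TRACE` by `seconds` immediately shows which step dominates a chain, which is the kind of view customers use to reorder their tool calls.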
B
That's fantastic. We've got a few minutes left. Is there an area that we haven't covered that you want to go into in particular?
A
Yeah, I mean, the thing I would end on, if we're talking about agents, is that I feel like as an industry we've been talking about them for a really long time, but we're pretty new to them and their actual usability, right? And to how we're getting them into, like we said, enterprise scenarios. I think what's incredibly interesting is these use cases that we have, and some of them we have here in booth demos at GTC this year, so I'm sure afterwards we'll have those where people can consume them as well. We have one that automates a lot of what happens with feature requests: what issues are customers having with my product, all the way through writing a PRD for that, a product requirements doc, all the way to assigning who might be the best engineers for doing this. I think the coolest thing I've seen as part of that is we use a reasoning model to say: here are some issues your customers are talking about on forums, or here are tickets they filed. Here's a set of brainstorming questions that I reasoned through; go have a meeting. And then the humans get in the room, they have the meeting, they talk about it, they might draw some diagrams and that kind of stuff, and that goes back into the system. So think about this as a human in the loop feeding back into the system. What comes out the other side? A fully formed PRD, where we've taken the language and the action items from the actual Teams call, we've taken pictures of the whiteboard and it produces the diagrams, and then it gives you a PRD. If that gets you 75, 80% of the way there, that's fantastic.
B
That's great.
A
Because, you know, I'm sure you do your fair share of writing, right? The hardest part for me about writing is that blank page.
B
That blank page, totally right.
A
And if I can get something that's 80% of the way there? Great. And that's the point I would leave on: agentic systems and models will make mistakes, right? They will never be 100% accurate. I would challenge you to find anything that's 100% accurate. But the way to think about them is: look, the human will be in the loop, and if it gets you 60, 70, 80% of the way there, that's amazing.
B
Yeah, I mean, that's 70% of work you don't have to do. It's there.
A
Exactly. And that's the part that I think is incredibly compelling. We should focus on accuracy, and we should always try to make things better, but I never want to lose sight of the 70% that we did get, and how we're going to continue to make it better. Of course.
B
Absolutely. All right, Bartley, before we let you go, a little change of pace. But in your own daily routine, your own daily work, are there AI powered tools that you're using every day that you know you can recommend them? It can just be a category specific tool. What are you using that you like?
A
Yeah. I'll preface this by saying I have no official affiliation with any of these. Some of them I pay a lot for, right? But no official affiliation. Of course, like a lot of people, I use AI-powered search engines, even something like a Perplexity, right? I use that a lot. I use ChatGPT, the reasoning model in ChatGPT, quite a bit; that one's fantastic if I'm trying to research things, or if I just have something that I want to think about a little bit more. Of course, that was before our reasoning model came out and before AI-Q, which is our deep research, was out there. So those are kind of the obvious ones. I don't get to code as much; my teams don't let me code as much as they used to, and they shouldn't. That's smart of them. But I love Cursor as a coding tool, and even just for making diagrams, it's really good at that. The amount of times that I just hit tab and I'm like, wow, get out of my brain, right? I just hit tab. The other one that I use a lot is called Napkin. Napkin AI.
B
Okay.
A
And that's the one that I don't know if a lot of people have heard of.
B
Yeah, I haven't heard of that one.
A
Yeah, it's in beta right now. It's free. Again, I have no affiliation with them other than I love their tools. What you can do is either just type freeform text, or do a Perplexity style and it'll research things for you. And it will take, let's say, a process diagram or a complex tree or whatever, and it turns that into the associated diagram and infographic for you. And what I love about it is it gives you kind of pre-formatted options to start from. Do you want this to be a funnel? Is this a process tree? Is it a cycle? All these different color options. You're like, yeah, I want that. And then you can go in and change anything: you can move the text, you can change the text, you can move the shapes and all that. So, talking about the 70% again: I would toil away at these in PowerPoint for so long, right? But now, and it's getting better, I can just describe it to Napkin, like the back of a napkin, and it produces this great SVG or PNG diagram.
B
Right?
A
And so it's fantastic. It's awesome.
B
Fantastic. Bartley, for folks listening who want to find out more about all the things you've been talking about, all the work you and Nvidia is doing with the enterprise, where's a good place to go online or places to get started?
A
Yeah, a great place to go is build.nvidia.com, right? That's a great place to get started. You can see all of our models and all of our blueprints, which are kind of like reference examples. So go to build.nvidia.com; ai.nvidia.com takes you to a similar place, right? And if you're interested in this new thing we were just talking about, it's called AgentIQ, a play on "agentic", right? It's AgentIQ.
B
Oh, yeah, okay.
A
Right. Yeah, it was clever. But if you're interested in that, it's open source software on GitHub, at github.com/NVIDIA/AgentIQ. But it'll also all be linked off of build.nvidia.com.
B
Bartley Richardson, thank you so much for taking the time out. Best of luck this week, have a great show, and in all you're doing. Maybe we can catch up and do it again down the line, and see if we're still talking about PDFs in a year or two.
A
Oh my goodness. Noah. Yes, we'd love to. Would love to.
Date: May 28, 2025
Guest: Bartley Richardson, Senior Director of Engineering and AI Infrastructure, NVIDIA
Host: Noah Kravitz
This episode explores the rapidly evolving world of Agentic AI, with Bartley Richardson demystifying what agentic systems are, how they build upon automation, and their potential for transformative impact in enterprise environments. Richardson offers insights into the technologies powering agentic AI, addresses operational and security challenges, and shares practical considerations for leaders designing these systems. Real-world examples and forward-looking advice make this a practical, engaging primer for anyone interested in next-generation enterprise AI.
On the essence of Agentic AI:
"Automation, you just get half of it out of your mouth and people fall asleep, right? So we call it agency and agency AI… it is that next level of automation."
— Bartley Richardson (01:14)
On reasoning models as versatile collaborators:
"They've been trained and tuned in a very specific way to think. Almost like think out loud… Reasoning models have that type of feel to them."
— Bartley Richardson (03:16)
On enterprise architecture:
"You're going to have all these agents working together. And the trick is, how do you let them all come together, mesh together in a somewhat seamless way for your employees?"
— Bartley Richardson (14:43)
On security evolution:
"We're moving to this context-based type of security where you not only have to understand the person and the credentials… but you have to look at the context in which the question is being asked."
— Bartley Richardson (17:01–18:17)
On practical value:
"Agentic systems and models will make mistakes… but if it gets you 60, 70, 80% of the way there, that's amazing."
— Bartley Richardson (23:40–24:06)
The ‘blank page’ dilemma:
"The hardest part for me about writing is that blank page. And if I can get something that's 80% of the way there. Great."
— Bartley Richardson (23:38–23:40)
Agentic AI, though hyped, is still a new frontier in terms of everyday usability, especially at enterprise scale. It’s not about perfection, but about dramatically reducing the burden of work, placing humans meaningfully “in the loop,” and building on a strong foundation of adaptability, security, and observability.
“We should focus on accuracy and we should always try to make things better. But I never want to lose sight of like, look at the 70% that we did. And how are we going to continue to make it better?” (24:09)