
Aible’s Arijit Sengupta joins Logan Lawler to reveal how agents and Dell tech can get your enterprise from prototype to AI value quickly and securely.
B
Welcome to Reshaping Workflows with Dell Pro Precision and Nvidia, where innovation meets real-world impact in high-performance computing.
A
Welcome back to another episode of Reshaping Workflows with Dell Pro Precision and Nvidia RTX Pro GPUs. I'm your host, Logan Lawler. Today, the topic of the hour is a pretty important one: AI agents. You've heard the term, and maybe you understand it. It's all the rage. We've moved beyond generative AI into AI agents. So we're here to talk about agents, and the cool part about today is that agentic AI can be really complicated. We have Arijit, the founder and CEO of Aible, who's going to talk to us about their platform and how agents can be deployed very, very quickly to drive business value way faster than you're expecting. So with that, Arijit, thanks for joining us. Tell everyone a little bit about you and your background, introduce yourself, and then we'll jump right into it.
B
Well, as you mentioned, I'm the founder and CEO of Aible. Before this, I started something called BeyondCore, which became Salesforce Einstein Discovery. I ended up writing a book called AI Is a Waste of Money, based on the thousand AI projects I had done at that time. It actually laid out why AI projects fail, and the reasons haven't changed. It's typically a disconnect between the business users, the end users, the data science teams, and IT. Everybody says AI is a team sport, but unless you're starting with the end user and the job to be done in mind, you're not going to get value from AI. That book led to the Harvard Business School bringing me back in to co-create and co-teach the first AI course in the MBA program at the business school, very much focused on how to get business value out of AI, and I've been doing that for a long time.
A
Awesome. Humble flex. You know, "Harvard brought me back," no big deal.
B
Oh, I also have the Stanford part too.
A
Oh yeah, you know, Stanford, Harvard. Where's Yale, man? Yeah, I love that.
B
I have MIT. I didn't get Merck.
A
Dude, we're done. We are done. This episode is over. So, okay, humble flex. Obviously you know what you're doing; that's why we brought you on as a guest. So, Aible itself, for everyone: a lot of people watching this episode will be familiar with you, because you've worked with Dell on the server side for a while, and now you're working on the client side too. We'll talk about that in a second. But, you know, 30 seconds to one minute: what is Aible? What do you all provide?
B
It's this fundamental thing: I am able to. Our basic thesis was that AI made by the few for the many will be very disempowering. So how can we enable every business user to make their own AI agents in their own way, train them to meet their specific needs for their specific use case, and do it in a matter of minutes? That kind of empowering of people is the crucial part. For example, in the state of Nebraska we did a hackathon with the CIO and the CDO participating, plus a bunch of interns and agency staff. It was like 30 people, and they built 222 agents in 90 minutes. Most of them had never seen Aible before they were trained in that session. So you change the world from "I'm going to build one agent in six months and spend a huge amount of mindshare and complexity on it" to "how can I go in and very quickly build a bunch of prototypes, play around with them, iterate?" Baptist Health once talked about how they did a hundred iterations in five days. Imagine an enterprise AI agent where you do a hundred iterations in five days. That's how the game has to change. If you're thinking onesie-twosie, "I'm going to build one agent," you're not going to get value, because the failure rate is too high. If you go in for a thousand agents and you get a 1% success rate, a 5% success rate, man, that's a lot of value you're going to get very quickly.
A
I get it; it makes a lot of sense. And I actually had a guest on where we were talking about this from a very different angle. We were talking about AI and how a lot of AI projects fail because the business doesn't necessarily talk to the end user, and there's no one-size-fits-all approach. The question I have for you is: I think it's something like 90% of AI projects that fail, whatever the number is, it keeps changing, right? How is Aible so successful at not being in that 90% failure rate?
B
Oh no, we fail a lot. Here's the thing: no free lunch. What we are saying is fail really fast. If you're going to fail, fail in a day, because then it's not failure, it's just learning, right? We fail more than 90% of the time. The difference is we are getting so many shots on target that the failures don't feel like failure; we're getting so much more success. See, a 90% failure rate is a huge problem if you have a bow and arrow, where you have to get the wind exactly right and you have to aim exactly right; everything has to be perfect. If you have a machine gun, what do you do? You shoot, you miss, you adjust toward the target, adjust, adjust, and you hit the target. Right? That's what we are focused on.
A
Yeah, that makes sense. So, from your perspective, and this is broad: what are some of the top mistakes that enterprises are making, and what does Aible do to help them avoid those? Obviously the very fast iteration, but what else in your platform enables them to avoid the mistakes enterprises make when developing either agentic AI or any sort of AI workflow?
B
You probably know the old saying: don't start from the tech, start from what the user wants. In AI, it's gone the other way, and it's become a problem. What has happened is that business users are often used to what they see in ChatGPT and tools like that, right? So they bring a consumer-tech mindset to the problem. Now, that consumer tech works fine, because most of the information you're talking about there is on the Internet. When you get to the enterprise, that data is not on the Internet, or better not be on the Internet, right? The processes are unique, the terminologies are unique, and that's where a lot of these failure points happen. So what we did is we went in and looked at, at this point, tens of thousands of projects by industry, and we found the easiest success points. One of our favorites is what we call "what changed" analytics around, say, retail: you're going into your sales data and asking what has changed over time. The reason that is very powerful is that a human being might go draw 50 charts, not more than that, right? But an agent can go in and look at 10 million charts, do it completely automatically, and actually do it without creating hallucination, because you use deterministic tools to do the calculations at scale. We have done things like 5 billion rows of data and 10 million variable combinations for a compute cost of less than 10 bucks, because we designed the thing the right way. Okay? So you take something like this where a lot of value is created very quickly, and the chance of failure is low, because we automatically clean the data and we automatically introspect the data. There is no manual setup. We have really dialed that in over seven years of doing this at Aible. And then the customer gets to value very quickly. Once they get to value, we say, okay, you want something a little more complicated? Here's another template. Go try that.
You have built four, five, six agents. Now let's stitch them together. Say you have an order-to-cash process. Well, you did an inventory change check agent, you did a counterparty credit risk agent, you did a shipping agent. Guess what? The agents talk to each other, and now you have an order-to-cash agent. But you've got to start with these successes. And what we find is that starting from the templates gets them started from safe places, places where they have a high probability of success with a low probability of failure, and where they can get to value very, very quickly.
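The "what changed" pattern described above, sweeping every variable combination with deterministic math so the language model only narrates results, can be sketched in a few lines. This is an illustrative toy, not Aible's implementation; the function name and data layout are invented for the example.

```python
from itertools import combinations
from collections import defaultdict

def what_changed(rows, dims, metric, period_key):
    """Deterministically score the metric change for every dimension-value combo.

    rows: list of dicts; dims: dimension column names; metric: numeric column;
    period_key: column holding 'prev' or 'curr'. No LLM touches the math.
    """
    totals = defaultdict(lambda: [0.0, 0.0])  # combo -> [prev_total, curr_total]
    for r in rows:
        for k in range(1, len(dims) + 1):
            for dim_set in combinations(dims, k):
                combo = tuple((d, r[d]) for d in dim_set)
                idx = 0 if r[period_key] == "prev" else 1
                totals[combo][idx] += r[metric]
    # Rank combos by magnitude of change; ties keep insertion order.
    changes = [(combo, curr - prev) for combo, (prev, curr) in totals.items()]
    return sorted(changes, key=lambda c: abs(c[1]), reverse=True)

rows = [
    {"region": "East", "product": "A", "period": "prev", "sales": 100},
    {"region": "East", "product": "A", "period": "curr", "sales": 40},
    {"region": "West", "product": "B", "period": "prev", "sales": 50},
    {"region": "West", "product": "B", "period": "curr", "sales": 55},
]
top = what_changed(rows, ["region", "product"], "sales", "period")
print(top[0])  # the East/A drop of 60 surfaces first, deterministically
```

A real system would run this over millions of combinations and then hand only the ranked findings to a language model for summarization, which is why the arithmetic can't hallucinate.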
A
I love that, and it makes total sense, right? The next question is kind of a tag-along to that. Aible basically touts that you can get to business impact, or business value, within 30 days, with, I'm not going to call them playbooks, that's not the right word, but with agents that are very focused on one key problem in a set industry or vertical. But what do you do beyond that to ensure the impact? Because, and I'm drawing on my Dell corporate experience here, every business thinks they're special and different. Even though two businesses might have the exact same process, one's pulling data from Salesforce, another from Oracle, another from somewhere else, or it lives on SharePoint, who knows, right? So even though the workflow might be the same, how does Aible work with disparate sources of data to quickly stitch all that together? Because that just takes time. That is a huge undertaking.
B
One important thing is that the answer to these kinds of questions is never one thing, because if it were, everybody else would have done it already. Let's take a few specific examples that you brought up. One is that everybody's data is different, right? The variable names, the structure, et cetera. What a lot of companies end up doing is a lot of setup before they start an AI project: this is my data dictionary, this is how it's configured, this, that, and the other. There's just a lot of stuff they do at the beginning. What we did is we came up with a statistical analysis approach where we look at huge volumes of data. You can literally point it at every table in Snowflake, every table in BigQuery. It goes in, does an automatic profile, and then figures out which variables actually have a lot of signal in them, like, if I draw charts, these variables have something informative. And then we make it very easy for the human to go in and fix the names of the, let's say, six or seven variables they care about. The generative AI tries first, and it gets it right most of the time, but if it got it wrong, as the human, you can go edit it. What we did there is, instead of starting with a human-first process, we started with an AI-first process, did a bunch of work, presented it to the human, and said, hey, did I get it right? Oh, I got these two or three things wrong. Boom, boom, boom. Let's go. If you're doing data cleansing, Aible will actually point out data quality problems, which happen quite often. We analyze it and come in and say, hey, you have this variable missing a lot of the time, and when it's missing, this other thing is happening. The human looks at it and says, oh my God, I know what is going on here, I'm going to fix it. But it's a very focused data cleansing. Going to somebody and saying "clean your data" is like saying "have your veggies." Great. But the point is, when is it clean enough?
What's the definition of clean enough? So in Aible, we immediately show them our data quality score. Is this good enough? And if it's not, we are going to tell them exactly where they need to clean the data, very, very precisely. And then they go do that. Another one is when you have to post-train models. By the way, language models in the enterprise need to be post-trained. I'm just being very blunt about this, because these language models haven't seen the unique data of the enterprise. But today, if you try to do post-training, it's an incredibly complicated process, and the toughest part is data collection. The data scientists literally go to the business users and beg them for Excel spreadsheets of bad answers and good answers. Who's going to do that for you? Nobody, right? So what we did in Aible is we bootstrapped it. Aible gives you an answer; we call it an intern model. When you first start out with Aible, you get the intern model, and you go in and say, no, you got it wrong. When I say territory, I mean sales territory, and that's actually in this variable. Or I go in and say, oh, you got this wrong; in our company we always do the credit check before we look at an invoice, right? These things are going to be unique to the process, but you give feedback to the AI and tell it what it got wrong, and then we set it up so it immediately retries. You don't do prompt augmentation yourself; Aible automatically figures out how to translate your feedback into prompt augmentation, few-shot learning, some kind of real-time adjustment. Maybe we change the settings of the model, maybe we change the settings on the vector database, but then we come up with a better answer and we ask, is this better? Sometimes the human says yes; sometimes the human says, no, you still have this problem. Boom, do it again. But within three iterations we get to very, very high accuracy. And the moment the human says yes, this is correct, we save it off as post-training data.
And the moment we get a hundred examples, we post-train. So post-training to us is an ongoing process; it's not something you do once in a blue moon. And the data collection is just part of using Aible. That's the whole idea: how do you get humans and AIs working together? And that's the core of "I am able."
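The loop just described, keep only human-approved answers and trigger post-training once a batch accumulates, could be sketched roughly like this. The class and method names are hypothetical, not Aible's API, and the batch size of 100 from the conversation is shrunk to 2 for the demo.

```python
class FeedbackCollector:
    """Sketch of a feedback-to-post-training loop (illustrative names only)."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.examples = []          # accepted (prompt, answer) pairs
        self.trained_batches = []   # batches already sent off for post-training

    def record(self, prompt, answer, accepted):
        # Only answers the human approved become post-training data.
        if accepted:
            self.examples.append((prompt, answer))
        if len(self.examples) >= self.batch_size:
            # Post-training is an ongoing process: ship the batch, start fresh.
            self.trained_batches.append(self.examples)
            self.examples = []
            return "post-train triggered"
        return "collecting"

coll = FeedbackCollector(batch_size=2)
coll.record("What is my territory?", "wrong guess", accepted=False)
coll.record("What is my territory?", "sales territory (col: terr_id)", accepted=True)
status = coll.record("Credit check order?", "credit check before invoice", accepted=True)
print(status)  # "post-train triggered" once 2 approved examples accumulate
```

The key design point mirrored here is that rejected answers never enter the training set; they only drive the immediate retry, while accepted ones quietly build the next post-training batch.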
A
It's very interesting. You said you didn't start with the humans first, but in a sense you did, by keeping them in the process, right? Because as I think about it, I'm like, okay, this all sounds great, but, exactly like your examples, the credit check first, or territory: for me as a regional sales director, territory means states, but to me as a rep, territory means a city or a certain county or whatever you want to call it, right? And I love that, because I think one of the biggest things that gets missed with AI is the art of getting from the data to the people consuming whatever the agent is saying. I'm not going to say they're the experts, but they know their domain, right? And they know what they need. Being able to give them that on an almost hyper-personalized level, where you and I could be in the same sales department, but I'm a manager and you're something else, and we could get different things, is amazing. It's insane.
B
We call that AI with humility, by the way. What happens with a lot of AI nowadays, especially the models that have been distilled, is that they don't take instruction well, because they think they know the answer; the distilling process has given them a lot of positive validation, so the AI thinks it really knows what it's doing. We put in a lot of effort to make sure our AI is coachable and very adjustable, and then we post-train. But when we are post-training, we still retain that original intern model, so that if new data comes in, I can go back and post-train again. You've got to have this mindset that AI is not magic. A lot of people miss that point because they come in from the consumer world, see some really cool demo, and think, oh, this is going to work in the enterprise. No, it won't, because that AI has never seen your unique use case, your unique data, your unique way of doing business. And if all businesses worked the same way, we wouldn't need enterprise tech in general.
A
Exactly. So let's talk a little bit about tech. I know that you've worked with Dell quite a bit. We won't talk about the Dell Pro Max with GB10 yet; we'll come to that in a second.
B
Oh, I love the Dell Pro Max with GB10.
A
I know you do. That's why you're here, because that's what I love too. But let's start with, well, one thing I want to call out: we are going to talk about local client deployment, but before we do that, you've been working with the Dell ISG team, so think PowerEdge servers, for a while. And I've seen on your website that, obviously, we love it when you deploy on Dell servers in a data center, but you also have different deployment options, right? I think you work with Google, you work with Amazon, you work with other OEMs. Before we get into that: from a hardware standpoint, is the majority of the use cases you see enterprise-level data center or cloud, serving many, many concurrent users? Is that where you start?
B
So what happens is, and this is changing for us thanks to the Spark and the GB10 with you guys: when we first started, we started with the largest enterprises, so most of our customers were Fortune 500. And the reality of any Fortune 500 customer is they always have more than one cloud. So we supported all three clouds. They always have technology from different companies like Dell, right? They always have edge use cases, and they always need something done on the desktop. One of the mistakes most startups were making in this space is that they started with a cloud-first mentality, which is fine, but remember that the vast majority of enterprise data, even today, is not in the cloud. So we started with an enterprise mindset, always jobs to be done, always focused on what the customer needs, right? From very early days we were on all three clouds, which was a very hard thing to do, because these clouds don't work the same way; the way serverless works in Google is very different from the way it works in Azure, for example. So we had to figure that out. Then, when we started working with you guys, what was really important was: what happens when we have different kinds of processors? What happens if I have a CPU and GPU on the same server? What happens if they're on one piece of silicon? You actually have to optimize differently. And this is where working with you guys has been fun; you really get into the deep details of that, how the software is orchestrated and exactly how we're going to do support. That's been the fun part of working with Dell: getting that intimate on getting these servers optimized for what we are
A
doing, which I love. Now let's talk about what I love more: the Dell Pro Max with GB10. You've been working with Nvidia, I don't know if you're an Inception partner, but from a technology-partner standpoint, you started with the Spark. So it was kind of a jump, right, from traditional enterprise? The Dell Pro Max with GB10 is small. Well, actually, I do have one over here, but I'm not going to get it out. A small device, about the size of a box of Kleenex, with very high VRAM. Really an AI development box, an AI-development-on-the-go type of kit. So when that came out, well, Jensen announced it, and it came out in October. I've got to ask: given where your business model was, what made you think, hey, this might be great? Why would we do this? I'm just curious, because it's so different, right?
B
There is a philosophical answer and there is a pragmatic answer. I'm going to give you the philosophical answer first. In tech, we always bounce between centralization and decentralization. This has been true for the entire history of tech, but ever since the cloud, we have been very, very focused on centralization. The decentralization move hasn't happened for a while, much longer than it normally takes. I had this thesis that AI is eventually going to be very personal, that this idea of AI agentic memory is becoming very important. Imagine in the future: I would have a device on me that has my agent memory, that has all my context. That thing will talk to different devices, different compute surfaces. I walk into a building and it picks up the local compute to place the workload, but the information specific to me stays in something very personal. That was one of the original theses of this "I am able" idea, right? So when we saw the Spark for the first time, when I was at CES, when Jensen actually showed it for the first time, I thought, oh my God, this might be it. This is getting there, right? It's powerful enough that I can do real work. And the pragmatic part was that we kept hearing from customers: I can't move this data to the cloud to try out an AI use case. I think I will get value from it, but I can't move it to the cloud to test. And when I ask for access to this data, people tell me, show me what the value of doing this would be. So I have a chicken-and-egg problem: how do I get started? What the Spark does is let me put the data, air-gapped, on the desktop, right? The Dell Pro Max with GB10, or the Nvidia DGX Spark. I'm running it air-gapped on my desktop. I put the data in, I evaluate it, I build some agents, and I literally carry that box, walk it into my CFO's office, plug it in, and say, hey, let me show you how I can save you $10 million.
Can I please have permission to take this data to the cloud so I can do it at scale? That agent can't scale on its own, because that box is too small, right? You need to take it to a different scale, and that scaling would be all of your PowerEdge stack. Dell has that whole stack: how big do you want to go? I can help you go as big as you want, or if you want to go to the cloud, you can go to the cloud. But that starting point, that chicken-and-egg problem of how do I confirm this data is good enough for AI without moving the data to the cloud, I think that's held back way too many AI projects. Two days after the Spark actually came out was Gartner Symposium, and we had the Spark on the desk there. People like the CIO of a major US state came in, loaded up his own data, and tried it. He was like, look, this has been my biggest problem: air-gapped security. I have to keep control of my data. I need to know where it's running. I need controlled cost, no surprises, because everything is running on this device; whatever I do, at most I'm paying for some energy, my power bill will go up, but nothing else. Nothing crazy. And once I build something, I need to be able to push it out to where I will actually use it. Once he got that value prop, he was one of the first customers buying Aible on the Spark right after that. And he's live in production already. That's the difference. People who have been stuck for a long time can suddenly get unstuck. They can prove the value of the agent and then say, give me the money to scale it.
A
I love it, because that aligns with a lot of the purchasing patterns I've seen, right? Not for Aible specifically, but for the Dell Pro Max with GB10. Yeah, there have been a few orders of a hundred here, a thousand there, but it's largely onesie-twosies, even from big companies. Because you're right, it's exactly as you describe: you're never going to get permission to take a bunch of data to the cloud right away, and you're never going to get access to all this data. But to be able to do that locally makes a whole lot of sense.
B
And the one thing we are doing differently is that, officially, the Nvidia perspective on this originally was that it was for developers, right? We have CFOs doing this, we have healthcare researchers doing this, we have operations managers doing this. Because our point of view has always been: if we have automated it and templatized it, anybody can start building their agents. The prototyping angle is still real, because you can't scale it beyond a certain point. But you prototype, and then you push to the right environment for scale.
A
Makes sense. We'll talk a little bit about your favorite agents and workflows by industry in a second, but you're right, and I think the valuable point is: AI is probably not hard for you, because you've got MIT, Stanford, Yale, Oxford, but this is about making AI easy and accessible, right? Because developers, unfortunately, this country doesn't have a whole lot of them. I mean, we're getting more, but we don't have a whole lot. And I guess the question is, how did you see the vision? Because I've seen a lot of tools, not platforms like yours necessarily, but ones that are more open, where you have a lot more customizability and things like that. And the other part of this question is: what technical level does it take? Seriously, my dad, 75, boomer of all boomers, still sends emails in all caps; it's ridiculous. Could my dad legitimately go into Aible and, if the data were there, legitimately run an agent?
B
If he still retains his curiosity, he can. When we first launched Aible, this was 2018, we started the company, and in 2019, before it went GA, we were going to the Gartner Summit to launch it as GA. In those days, we did a hackathon at UC Berkeley where we took high school kids, history majors, and MBAs, gave them 90 minutes of training on Aible, in those days it needed 90 minutes of training, and then put them up against expert data scientists using their favorite tools. The high school kids beat the expert data scientists. My entire team was terrified. They were like, you haven't gone GA yet. What are you doing? Why are we doing this at UC Berkeley? Why are we doing it with high school kids? Can we make this a little fairer? But the point is, if you really believe in this idea that tech only breaks through when anyone can use it: the shift in the Internet happened when anybody could make a website. The shift in social happened when anybody could be a creator. If tech only takes off when anybody can do it, then your definition of "anybody" has to be very different. Nobody says their tech is hard to use, so going around saying my tech is easy is useless. Saying I've had four public hackathons at UC Berkeley, all four years, and the high school kids using Aible beat the expert data scientists: that's saying something.
A
You know what we should do? This is completely random: my daughter's in sixth grade, so I've tried to really educate her. But I would love, just out of morbid curiosity, to do something at a middle school with sixth and seventh graders, just to see if they could do it.
B
Yeah, I have an 8-year-old. I actually have her use it, I kid you not. Right. But it's important. In fact, when we did it a long time back, I forget how old my co-founder's kid was. I think he was like 7 or 8 years old at the time, right? He must have been, because he's not yet in college, or he's just barely in college now. And he was testing it out for us, because we wanted to first see whether people could actually use it before we gave it to a bunch of high schoolers.
A
That's amazing. I love that. So, okay, let's jump in here with the time that we have left. If you go to your website, which is aible.com, A-I-B-L-E dot com, you can go to the case studies, and you break them down by use case and by industry, right? So you've got CPG, manufacturing, healthcare, education, food and beverage, construction, distribution, retail, legal, transportation; the list goes on and on. From those, I'd love to hear, from your perspective, maybe one or two of the agents, pick whatever industry you want, that had a profound impact, where when you launched it, you were like, oh my goodness, this saved a billion dollars, or this saved ten, whatever the amount is. I'd love to make it real with a couple of stories so people can understand the scale and the power of Aible and Dell Technologies, whether on the server side or the GB10 side.
B
Let's first start with one where I feel like we have impacted lives.
A
Let's do it.
B
This is at one of the largest hospital systems. What happens is, people don't show up for appointments sometimes, and then there are people who have appointments six months out. So we built an agent that would figure out which people were not going to show, because if they don't show, the surgeon's time is wasted and the equipment time is wasted. That's a lot of money, by the way, a huge amount of money. But if you can figure out who's not going to show, and who would move their appointment forward, like, who are the right people to offer an earlier appointment, then you can have a chat interface go to three, four, five people and say, would you like to move your appointment forward? Yes, you do? Okay, let me reschedule you. It does it completely automatically, because a human can't go and contact all those people; it doesn't really work. Of course, it made a lot of money, because equipment utilization goes up. But my favorite part was people coming in and saying, you know what? I had a scan scheduled three months out, and this thing asked me, do I want it just two days from now? I rescheduled, and they found something, and it would have been disastrous if it had been found three months later. It would have been too late for me. I'm getting goosebumps telling you this story, because I actually interviewed this person, and coincidentally, she was an employee of that hospital system. She was like, look, normally I would never tell people to use this kind of story, but it changed my life, literally. And that's what is important about these agents: think about the impact you can have on people's lives by doing humanly impossible things. One thing that we always try to do: we are not out there saying, how can I take a manual process and automate it? We always think in terms of what is humanly impossible.
No human could sit there, look at all the appointments, figure out which are not going to show, and figure out which ones could move forward. It's just humanly not possible. Now let's go to a very pragmatic place where things are changing a lot. Anytime things change, you need agents, because human beings cannot find unknown unknowns. We are good at finding the same patterns we know, and we look for them; we'll draw the same charts in Tableau over and over again. But when it's an unknown unknown, when things are changing, it's very hard. In fact, when President Trump announced his tariffs, I was joking that this is the best sales story for us, because everyone's world got completely messed up, right? Think about your supply chain disruption; look at your purchase patterns. What we were doing was looking at every single purchase pattern across all locations and finding the underlying patterns in what is shifting. Very, very specific patterns: for these items in these geographies, patterns have changed. That becomes something no human being could have found, along with the economic impact of something like that, like, how has my business changed? And then you can do infinite drill-down. So let's say I show you that your business has changed for a certain kind of SKU. You drill into it; then it does a "what changed" on just that SKU and says, okay, these stores are behaving differently. Then you drill into that, and so on. That infinite ability to keep analyzing, completely automatically, millions of variable combinations, and then summarizing it using language models, while confirming that the math is all correct. We are not using the language model for the math. We double-check it to make sure our analysis is 100% accurate.
A
This is amazing. I hate to give people compliments on the webinar, but honestly, it's just so smart. It's smart, it works, and, well, it's not easy, but it's easier than anything else.
B
Yeah, it's easy. Just start with a template. Depending on the template you choose, it asks you very different questions, right? If you're trying to analyze claims, it's going to ask you very different questions than if you're trying to do a retail what-changed analysis. It's very context specific, so you answer between one and five questions, and it's off to the races. That's it. It comes with sample data, so you can play around and see what the experience would be, and you can look at the sample data to say, this is the kind of data I need. We have really put in a lot of effort over the years to make it simple. We have a saying: simple is hard. So we do simple.
A
It can be simple; the hard part is making it simple. So, Arijit, this has been a fantastic conversation. As we close out the episode, let me give you a moment in case there's anything else you want to cover.
B
Let's talk about why GB10 is the best thing for agents. We didn't go there.
A
Well, you kind of did, because it basically validated my whole business strategy around this: it's a prototype device. Get it in people's hands; it's easy, simple, quick. Prototype on it, prove out your point, and then scale it up.
B
But here is the thing people don't understand. If you go onto a GB10 and just ask a chat question, it'll be kind of slow. Agents, though, are perfect for the GB10, because of how it's designed. We run all of the agent coordination, all the agent tools, and all the agent memory on the 20 cores of the Grace CPU; we run all the models on the Blackwell GPU; and all of it sits in one shared pool of memory. Everything is synchronous, nothing is moving out of that memory, so the end-to-end agent is super fast, even if the model itself is faster on the cloud. This is where you have to start thinking about agent inference versus model inference. Model inference will be faster in the cloud the more resources you throw at it. Agent inference will be faster depending on whether you have thought through your architecture, and the Grace Blackwell is perfect for it. Secondly, the GB10 does very well when you have high prefill: a lot of input and a little bit of output, which is exactly the agent workload. It doesn't do as well with a little bit of input and a lot of output; there it'll be much slower. Finally, and this is going to become very important now that people are waking up to agents talking to agents and getting excited: we have had fleets of agents talking to each other, running long term, for a while now. One of the things we found was that concurrency was very important, because we end up running multiple agents against the same models, so we need to be able to push up the concurrency, meaning how many requests the GPU is processing at a given time. The GB10 is very good at that. As you push up concurrency, it performs really, really well. So if you're going to have many, many agents running on that box, or when you get to the server, running on one of your PowerEdge devices, for example, having that kind of architecture really makes a difference. For agents, the Spark is uniquely well suited.
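The concurrency behavior described above is easy to probe yourself. The sketch below fires the same small request at increasing concurrency levels against a local OpenAI-compatible endpoint and reports aggregate throughput. The URL, model name, and prompt are assumptions for illustration, not anything Aible or Dell ships:

```python
# Hypothetical sketch: watch aggregate throughput as concurrency rises
# against a local OpenAI-compatible server. URL and model name are assumed.
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

def one_request() -> None:
    """Send a single small chat-completion request and wait for the reply."""
    payload = {
        "model": "local-model",  # assumption: whatever the box is serving
        "messages": [{"role": "user", "content": "Summarize agent inference."}],
        "max_tokens": 64,
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()

def throughput(n_requests: int, elapsed_s: float) -> float:
    """Completed requests per second."""
    return n_requests / elapsed_s

def run_benchmark() -> None:
    """Time batches of identical requests at rising concurrency levels."""
    for concurrency in (1, 2, 4, 8):
        n = concurrency * 4  # four requests per worker
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            futures = [pool.submit(one_request) for _ in range(n)]
            for f in futures:
                f.result()  # propagate any errors, wait for completion
        elapsed = time.perf_counter() - start
        print(f"concurrency={concurrency}: {throughput(n, elapsed):.2f} req/s")
```

With a model served locally, call `run_benchmark()`; if the claim in the episode holds, requests per second should keep climbing as concurrency rises rather than flattening immediately.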
A
See, I did not know that. I knew that traditionally, the more VRAM you throw at a model, the faster it'll go, generally. But I did not realize that that was not necessarily the case with agentic AI. I kind of figured it was the same, and that's so interesting.
B
You can do a very fun test for yourself. Give it a hundred input tokens and a thousand output tokens, then give it a thousand input tokens and a hundred output tokens. You'll see what I'm talking about immediately: it crushes it when you give it high prefill. It's perfect. I'm a hundred percent sure Nvidia was designing it for agents. I've just not found a research paper or anything where they explicitly say that. But it's just too perfect for agents.
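The A/B test described here can be scripted in a few lines. Everything in this sketch is an assumption for illustration: the endpoint URL, the model name, and the crude "one word per token" proxy used to size the prompt:

```python
# Hypothetical sketch of the prefill-vs-decode test: time a prompt-heavy
# request against an output-heavy one on a local OpenAI-compatible endpoint.
# URL, model name, and the word-count token proxy are all assumptions.
import json
import time
import urllib.request

def build_request(prompt_tokens: int, max_output_tokens: int) -> dict:
    """Build a chat payload whose prompt is roughly prompt_tokens long."""
    prompt = " ".join(["word"] * prompt_tokens)  # crude token-count proxy
    return {
        "model": "local-model",  # assumption: whatever the box is serving
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_output_tokens,
    }

def timed_call(url: str, payload: dict) -> float:
    """POST the payload and return elapsed wall-clock seconds."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    urllib.request.urlopen(req).read()
    return time.perf_counter() - start

def compare(url: str = "http://localhost:8000/v1/chat/completions") -> None:
    """Run both request shapes and print the timings side by side."""
    prefill_heavy = build_request(1000, 100)  # lots in, little out: agent-like
    decode_heavy = build_request(100, 1000)   # little in, lots out: chat-like
    print("prefill-heavy:", timed_call(url, prefill_heavy), "s")
    print("decode-heavy:", timed_call(url, decode_heavy), "s")
```

With a model served locally, call `compare()`; the expectation from the episode is that the prefill-heavy shape finishes disproportionately faster on hardware like the GB10.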
A
I wouldn't be surprised. I don't have any data or any statement to back that up, but thinking about it launching last year at GTC, in the year of the agent, it all kind of makes sense, to be honest. So that's fantastic, I love it. Now that we've closed out on Blackwell, let me give you 30 seconds to a minute. Pretend someone just tuned in at this point: what is the elevator pitch for Aible? What do you want them to walk away with?
B
Get started today. Get your data onto an air-gapped GB10 at the edge. Analyze your data, get the agent proven. You can try a hundred agents on your own in a matter of a week. Build a hundred agents, find five that are useful, walk to your exec, and say, I have created this value; do you want it at scale? Take the analysis paralysis out of the picture. Just get started and build stuff. It's secure, it's cheap, it's not that expensive to run, and there are no surprises. Just get going.
A
Exactly. Like Wayne Gretzky said, you miss 100% of the shots you don't take. So this was great, Arijit, one of my favorite episodes. I learned a ton. So where can people find you? I'll put the link to the website at the bottom, but if people are interested in contacting you or the sales team, is the best way to go to the website and hit the schedule-a-demo button? What's the best way for them to reach out?
B
Just go to aible.com and schedule a demo. In general, we respond back pretty quickly. It's just been an exciting collaboration with Dell. You guys are doing a lot more ramp-up of the sales efforts now, and we're doing some fun stuff together at the Gartner Symposium, the Gartner Data & Analytics Summit, and then at GTC. So people should just come and check out live at these events what is possible.
A
We're going to have a whole host of Aible agents running on some new mobile workstations that I can't even talk about yet, at GTC, so come check out the activation outside GTC. With that, Arijit, really appreciate the time. I personally loved this episode. I think it was fantastic to show that AI doesn't necessarily have to be complicated when you're working with the right partner and you have the right things in place. Yes, there will always be stumbling blocks; you'll fail. But if you can fail very quickly, like the machine gun example, you fail quickly, learn from it, and you can drive a ton of business value. So at the end of the day, get your feet in the race and start running. With that, Logan Lawler wrapping up Reshaping Workflows with Dell Pro Precision and Nvidia RTX Pro GPUs, and we'll catch you on the next one. This podcast was produced in partnership with Amaze Media Labs.
Podcast: Reshaping Workflows with Dell Pro Precision and NVIDIA RTX PRO GPUs
Host: Logan Lawler
Guest: Arijit Sengupta (Founder & CEO, Aible)
Date: March 3, 2026
This episode explores how Aible, in partnership with Dell and NVIDIA, is transforming the deployment and impact of AI agents in business environments. Host Logan Lawler and guest Arijit Sengupta discuss demystifying agentic AI, the speed of iteration, and the breakthrough of the Dell Pro Max GB10 device. The conversation covers Aible's philosophy of democratizing AI, accelerating realization of business value, and leveraging flexible, powerful hardware. Real-world case studies further bring to life how AI agents can revolutionize enterprise workflows, from healthcare to supply chain.
Key topics:
- Fail Fast Philosophy
- AI with Humility
- Democratization Example
- Healthcare Transformation
- Hardware Revolution
Summary prepared for listeners interested in the strategy, technology, and real-world impact of modern AI agent workflows using Dell and NVIDIA innovations.