
Discover how Dell’s GB10 Super Chip is reshaping AI workflows, from data science to education and enterprise innovation.
A
Welcome to Reshaping Workflows with Dell Pro Max PCs and Nvidia, where innovation meets real-world impact in high-performance computing.
B
Welcome back to Reshaping Workflows with Dell Pro Max and Nvidia RTX GPUs. And boy, do we have a doozy today. If you were following along at GTC, there was a huge, well, bombshell is probably not the right word, but a very exciting announcement. And the day is finally here for us to talk about the Dell Pro Max with GB10. And this is a very special episode to me. I've kind of bled a little bit of GB10 through all of the prep and the planning, and we're here. We are here. So I couldn't think of anyone else that I would rather have on this episode from Nvidia to talk all about GB10, the ins and the outs, the use cases. So I have Marie Breedlove. Marie, thanks for joining us.
A
Oh, thanks for having me. I'm so excited that we get a chance to talk about the Dell GB10 and what you guys are doing with it. It's so great for our customers, and it's just great to get a chance to talk with you, Logan. I mean, we get to watch this podcast and get all this great information. So thank you.
B
Of course, of course. So let's start with a little bit of background. Obviously you work at Nvidia. Take a few seconds, talk a little bit about your role at Nvidia and anything else you want to share, you know, career, background, passion, whatever, and then we'll get right into the questions.
A
Okay, great. So my name is Marie Breedlove. I lead global sales for the workstation business. So if you think about the Dell Pro Max brand and how it's taken over the workstation lineup, that's really where our GPUs sit. So I'm really passionate about the workstation business. I've been in the business for over 20 years, and helping customers be productive, get their workflows done, and deliver the products their companies are bringing to market is really what drives me every day.
B
Let's start kind of at the very top. So, you know, at GTC, during Jensen's keynote, he had a huge announcement where he was dropping the AI superchip, which there is the Nvidia version, and then there are other OEMs, much like Dell and others, that are selling their flavor of it. And the Nvidia flavor is the Founders Edition, known as Spark, very similar to the GB10. So the first question that I want to ask is for traditional workstation customers: you have your Dell Pro Max Tower, you have separate RAM, you have a separate GPU, you have a separate processor. Typically it's running on Windows. The GB10 architecture is completely different. Let's start high level, let's talk about a few of the differences between a traditional workstation and the GB10.
A
Okay, so like you said, with the traditional workstation you're going to have these discrete components in most cases, and you're going to be using a lot of your productivity apps, right? Think of those as the tools for getting your work done. And then when you think about the GB10, right, this is a Grace Blackwell superchip. So think about having an AI supercomputer at your desk. Grace is based on Arm, and then the Blackwell architecture is melded together with it, and you're sharing 128 gigs of VRAM inside of the superchip. So if you are a data scientist or an MLOps person, or you're doing AI development, you can access maybe a 200 billion parameter model off of this super small, think the size of a coffee cup or smaller, AI superchip at your desktop. The cool thing is you can use it with or without your Dell Pro Max. So think of having this super sexy 16 inch Dell Pro Max laptop on your desk, and you're connected to your Dell GB10, and you're working on your data science, your data analytics, using this to get your fine tuning done on, say, a 70 billion parameter model, and then doing all of your productivity applications. We can't get away from our email and our spreadsheets and our PowerPoint, but we want to be able to get our jobs done. So you can either use it with or without your laptop: you can make it your dedicated PC, your dedicated AI supercomputer, or you can use it along with your super sexy silver 16 inch Dell Pro Max laptop.
B
I love that. So you're right, and we're going to come back to the standalone versus the companion device, because I think that is a very compelling use case. But I want to touch on something that you said. I know there's a lot of excitement in the market, and at the end of the day we want people to buy this, but I want the right people to buy it, the people with the right use case. So like Marie said, the traditional workstation customer is very ISV focused, independent software vendors. Think of the traditional industries that use workstations: M&E, engineering, medical, banking, finance. They're running very specialized applications, traditionally on Windows, on discrete components that are separated out. The GB10, the Grace Blackwell superchip, is very different: an integrated Arm design, running Linux. So Marie, let's not say who, but what personas or what job roles is the GB10 designed for? Who are the people that should be buying this or looking at this as a solution to their current workflows from a hardware standpoint?
A
So we really are targeting our developers, our AI developers. So maybe you're building or fine-tuning a large language model, maybe you're developing robotics, maybe you're developing VSS, a video search and summarization application. Maybe you're doing edge computing and looking at those different types of data analytics. Maybe you're a data scientist, you're a researcher, maybe you're in college and you're going to school. I mean, think about the price point of this, right? You're in college and you're learning all of these new ways to get your jobs done and help the industry get their jobs done. Those are really where we're focused. We're focused on higher education, research, and development for building these types of applications.
B
Okay. And that is correct. So not that I'm going to dissuade you from buying the device, but there's nothing worse than getting something you're super excited about, opening it up, and going, hey, this won't do what I want. So, very transparently, this runs Linux, and it is very different than Windows. I am a Windows guy, and I have been until I joined this role, and I'm starting to work with Ubuntu and, you know, Linux. So it is a learning curve, it is very different. And a lot of these traditional applications that, you know, an ISV would run, like Autodesk or AutoCAD or any of those, don't really work with this. So think AI developer, data scientist, software engineer. When we say AI, I'm not talking about installing ComfyUI and generating a few images, I'm talking about the actual building: data cleansing, tagging, you know, model weighting, quantization. The stuff that you might ultimately see on Hugging Face, done and ready for you, the people that put that together. So those are kind of the people that are looking at this. But you made a really good point, and I want to come back to it: you said a 200 billion parameter model on this box. So 128 gigs, I mean, it is a lot, don't get me wrong, but comparatively, a model that size is amazing. How is the GB10, the Grace Blackwell chip, able to hold a model that size and be able to fine tune it, where before that was really, you know, reserved for almost a server with multiple GPUs?
A
Well, one is that you're able to share that memory across, so you're not, you know, you're only as fast as your slowest component inside of any PC, but the bandwidth between the CPU and the GPU, because of the interconnect there, really allows you to move that model faster. The other thing is it's quantized down to FP4, floating point four, so we're able to get these bigger models onto smaller platforms. The other cool thing, and I know we haven't talked about it yet, is we use the ConnectX-7 NIC on the GB10, and there is a cable that I know Dell will offer, and you can stack two of them together. You can connect two of them together to get to a 400 billion parameter model. So now you can have this double stack of two connected together, and then either tunnel into it and make it your primary device, or tunnel into it with the Dell Pro Max. So you're able to go up to two with a 400 billion parameter model. So depending on what you're working on, the one might be great and you're working on this 200 billion, or you're fine tuning a 7 billion parameter model, but you can also move up the scale as well.
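For a sense of the math behind those numbers, here is a rough, weights-only sketch (real workloads also need memory for activations and KV cache) of why FP4 lets a 200 billion parameter model fit:

```python
# Back-of-the-envelope weight memory for a model at different precisions.
# Illustrative only; actual runtime memory is higher than weights alone.
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

for bits in (16, 8, 4):
    print(f"200B parameters @ {bits}-bit: ~{weight_memory_gb(200, bits):.0f} GB")
# ~400 GB at FP16, ~200 GB at FP8, ~100 GB at FP4 -- which is why a 200B model
# quantized to FP4 can fit within the GB10's 128 GB of unified memory, and two
# linked units (~256 GB combined) can reach into the ~400B range.
```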
B
I love that. So you kind of teed it up perfectly. So let's dive into, you kind of mentioned it before, the standalone device, and then we'll get into the ConnectX-7 card and stacking these. So this is what I think is really unique about the Dell Pro Max GB10, right? One, it can be a standalone device like a traditional workstation or laptop. You can hook a monitor up, it has HDMI, you can have a keyboard and mouse. It can be a standalone device. But what I think is super duper interesting, and it's not just for those using a Dell Pro Max system or a Dell system, it could be someone using a Mac, for example, or an Apple product, is you can use this as an augmentation device. So you have whatever you have, laptop, desktop, it doesn't matter. It doesn't have enough GPU or compute to be able to run the model or fine tune a model. Instead of going out and saying, hey, I need to replace a whole workstation, you add a GB10 and run it as a connected device to offload tasks. So first question is, how does the connection between, let's just say a Dell Pro Max laptop, or, you know, any laptop, and the GB10 work? How does that function? What does a customer need to be able to do that?
A
So you can either tunnel in or direct connect it, right? If you're using it as a standalone device, you can put a keyboard and mouse and a monitor on it. Or if you're using it as a companion device, you can tunnel into it and, you know, work it that way. And then the cool thing is, even if you're using a device that maybe doesn't have access to the Nvidia ecosystem, the GB10 comes with DGX OS, which is, you know, a Linux OS, but then you get access to all of the great libraries that Nvidia has put out there, right? So whether it's the CUDA-X libraries that help you accelerate your data science workflows, or TensorRT to accelerate your AI development, there's all of this other goodness that's already in there. Whether you're going out to build.nvidia.com to download some models, we provide models that you can go download and test as a developer, or you can look at recipes like an Nvidia Blueprint, we have all of this great goodness that maybe you don't have access to in the ecosystem that you're in, like you said, a MacBook ecosystem. With this, you're able to access that great ecosystem of AI development, and the GB10 has that available with it.
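A minimal sketch of that companion-device pattern, assuming plain SSH access to the box under DGX OS; the hostname, user, and remote commands below are placeholders, not product defaults:

```python
# Drive a GB10 remotely from a laptop over SSH using the Fabric library.
# "dev@gb10.local" and the remote script names are illustrative assumptions.
from fabric import Connection

gb10 = Connection("dev@gb10.local")
gb10.run("nvidia-smi")                                    # sanity-check the GPU is visible
gb10.run("docker ps")                                     # see which containers are running
# Kick off a long job on the GB10 and detach, so the laptop can go back to email and slides.
gb10.run("nohup python finetune.py > run.log 2>&1 &", pty=False)
gb10.get("run.log", local="gb10_run.log")                 # pull logs back to the laptop later
```

The same division of labor works from macOS or Windows: the laptop stays on productivity apps while the Linux box carries the heavy compute.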
B
Which is amazing, because I'll tell you, I'm not a developer, so I'm not going to pretend to be what I'm not. But I do tinker, I do demos. Marie, we just got off a call about an upcoming event, well, at the time we recorded this, we were coming up on an upcoming event, and it's like, hey, Logan, can you do this? And I'm like, yeah, well, depends on whether you get the unit. Anyway, the moral of the story is I can do enough, but I am not a developer. I'm not fine tuning models, even though I know how to. I'm not a data scientist. But one of the hardest things for me, going as a Windows advocate into, you know, kind of a data scientist role, is you're using Linux, you're managing libraries, you're managing dependencies and, you know, Docker. All of that is great, but that is just a complex thing that you have to do, and it is hard. But to be able to have this device where you can manage that independently of your main device, I can't tell you how game changing this will be, and how many people who want to have the skills, but maybe don't have the traditional data science or AI developer skills because they're Windows people, will now be able to have them and be brought into the fold purely because of that interplay as a connected device, which I think is awesome.
A
Yeah. And it's good to note that the AI developers and the data scientists are used to working in Linux, so when you get one of these, it's an environment that you're already used to. But then, because you can make it your companion device, those other applications, the presentation applications, spreadsheet applications that you kind of have to use to get your business job done, you're still able to run alongside it with a companion device.
B
It's so amazing. So let's talk about the stacking, you know, whether it is a standalone device or a companion device, we can stack two via, I think it's the ConnectX-7, kind of a smart networking card. So how do those function? And you might not know the answer, but how do those function? Because right now with a traditional RTX, an Nvidia RTX Pro Blackwell GPU, you can't necessarily piece and part that out or, you know, hook two of them together, much like you can with some of your server products through NVLink. These Grace Blackwell chips, are they, I'm using air quotes, but are they in theory NVLinked, where they're operating kind of as the same unit, or are they operating as two independent units, although they are connected together?
A
So when you connect them together, it'll see it as one, as one unit. Yeah.
B
So cool. It just automatically is connected.
A
Yeah. Well, I'm sure there's probably something you need to do to make that happen. But in talking with our engineers and doing a bunch of customer presentations, you'll see it as the 400 billion parameter model you'll be able to access.
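The conversation doesn't cover the exact software stack for the dual-unit setup, but as a hedged sketch, the usual way two networked machines act on one model is a two-node distributed job along these lines (the hostnames, launch command, and framework choice here are assumptions, not the documented configuration):

```python
# infer.py -- minimal two-node PyTorch distributed sketch (assumed stack).
# Launched on each GB10 with something like:
#   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
#            --rdzv_backend=c10d --rdzv_endpoint=gb10-a.local:29500 infer.py
import torch.distributed as dist

dist.init_process_group(backend="nccl")        # NCCL traffic rides the ConnectX-7 link
rank, world = dist.get_rank(), dist.get_world_size()
print(f"node {rank} of {world} is up")
# In practice an inference or fine-tuning framework shards the model's layers or
# tensors across the two ranks, so the combined ~256 GB of unified memory can
# hold a model in the ~400B parameter class at FP4.
dist.destroy_process_group()
```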
B
It's massive. I mean, it's absolutely huge. I want to call something out: the traditional data scientist, developer, software engineer, machine learning engineer, this is made for you. But where I think there's going to be a lot of opportunity, you know, I talk with a lot of partners and a lot of people out in the field, and I've had a lot of those types, you know, data scientists, whatever, reach out and ask questions about it. But where I've seen real interest is in higher education. Because when I went to college, there was not a data science course, there were not electives around neural network engineering, there was not an AI development class. But I know a professor at Georgia Tech who teaches that, and it's an elective as an undergrad. You get no credit for it, it counts nothing towards graduation, but the class is packed. And when we were talking, he said the biggest problem he has is people want to learn, but other than one or two people, they don't have a GPU. And if they do have a GPU, it is usually not a pro GPU like an RTX Pro; it's something for gaming, which there's nothing wrong with, you can make it work, but it's not really the designed use case. And some of them are running this off their CPU, and it's taking forever. So I've seen education as probably one of the biggest things. And, I don't have one here, but think of the size, like a box of Kleenex, it's actually probably smaller, and you can put it into your bag like a laptop. Have you seen kind of the same interest from higher education and schools, or is this just Logan seeing crazy things?
A
We see a lot of interest from higher education, because now, like you said, each student can have their own supercomputer at their desk, right? And you can extrapolate that into industry as well, right? We have these data scientists and AI developers who traditionally have been developing in the cloud, right? But then you wait for your model to come down, so the egress time takes a long time. So to be able to have your supercomputer at your desk really helps out. And then you can play around with your model, you can fine tune your model and do everything you need to do before you put it back into production, right? And so we see this not only in education, because in education, students getting time on the servers, you know, that's why you can't do it with a whole class, because for a whole class you have to guarantee time for the students to have access to a large amount of space on a server. So now you can think about having your AI supercomputer right at your desk; it's a companion device. Maybe you keep it in your dorm room and you can access it, or when you go home for the summer, you take it with you. You're not, you know, lugging those big workstations around as you go. But it really allows access for anybody who needs it, who's time constrained or resource constrained, right? These are valuable resources, right? When you think about the AI factory, that's a valuable resource, and we want to make sure that it's balanced, so that people are getting the work done and doing the development here, and then utilizing the AI factories and their valuable resources for the production or deployment of the work that these folks are doing for us.
B
Exactly. I 100% agree. And let's touch on the AI factory. But I also want to call out a point that I want to make sure is touched upon here, which is that traditionally, we at Dell, you know, yes, we talk to a lot of end users, we have a lot of customers and partners, etc., but mostly who is doing the purchasing is, I think, the ITDM, the IT decision maker, whether that be the VP or CIO of IT or whoever it is. And generally they have kind of your stock configs, right? Like, hey, for example, if you're like me, in a role at Dell, I get a laptop. I mean, I get a Dell Pro Max Plus, and I basically get to choose the 15 or the 13, and I travel a lot, so I do the 13, and it's just, here it is, that's it. And it's just easy, you know, it's easy to order, it's scalable, I get it. But you're going to have data scientists and machine learning engineers that are going to come to you and are probably going to ask about this product. And think of this product, much like when we talk about the AI factory, which we'll get to in a second, as a true desk-side-to-data-center device that, when you're running the Nvidia architecture we'll talk about here in a second, allows you, with, you know, Nvidia AI Enterprise, to really seamlessly manage that. So what tips would you give, Marie? If you've got, for example, an employee who's like, I really want the GB10, but I don't know what to tell my ITDM, what would you have them say to their ITDM to ultimately get them interested in purchasing the system for them?
A
So I can tell you what an ITDM told me at GTC. I'm not going to give you any names, any company names. But at GTC, if you were there in March, it was wild, right? So in the Nvidia booth we had our Founders Edition version, and people were stopping to take pictures of it, and I was hosting an ITDM through our Nvidia booth, and he had to take a picture because his guys back in the office were going nuts. Like, you know, we had reservations, right, where you could get in line to reserve a unit, and he was reserving them, two at a time, for each one of his guys. And so he goes, I have to get a picture for my guys so that they know that I've seen it. And so we took a picture, and then I asked him, you know, what do you think about this? What do you think about the price point? Now, this is for two of them, the double-stack version. And he said, oh, it's a no brainer. He's like, my guys have to sign up for time on a valuable device, and this is worth two months of that. So, you know, if you think about having it for 12 months, 24 months, 36 months, within two months it's paid for that time. So I think that's, you know, when you're able to have access and get product, remember that this is about helping people get their jobs done faster. It's providing productivity in the hands of people who need this productivity, right? When you're in the workstation space, you really get used to people who are developing the products or developing the services or developing applications that add to the bottom line of the company. Helping them get there faster is a real boost to the company.
B
I agree, and well said. And it really is, you know, like you said, with anything in life there are trade-offs, and yes, you're trading X for Y, but you're right, the benefits far outweigh any of the trade-offs, right? You can allow people to be creative. And you know what I love is the ability to experiment. You think of an amazing idea, you leave for the day, hey, turn it on, let it run. You wake up, you get up the next day, go to work, it's done, it's ready to go. Versus, let me fill out this form to request the compute resources in the data center. Oh wait, they're not available. It really allows the creativity and the work and the productivity to get done. And I love that analogy, because it's true. It allows you to get done quicker, and when you look at the cost, it makes total sense. Makes total sense.
A
And then, with the architecture set up that way, as you're doing this development, when you deploy to your AI factory, it's seamless, right? Sometimes when you're working on developing an application or developing something, and you're doing the paperwork, I need 10 hours on a server to do this, when you get there, you realize that you didn't have the resource at your desktop to scale. And this resource, this AI supercomputer, allows you to scale up, so you're really working on what you need to get done, which I love.
B
And so we've talked about AI factories. Let's take this in two parts, right? You've mentioned it before, the GB10 is a very different device. We're talking about, you know, what it kind of comes installed with, right? Like, I mean, from an Nvidia standpoint. So we have CUDA-X, different libraries, managing CUDA dependencies, all types of stuff. Can we talk about the stuff that, you know, when you open up the GB10, what are some of the things from Nvidia, from a software standpoint, that come ready to go or give you the ability to get going very quickly? What are some of those things, and why do they matter?
A
So you're going to have the Docker configuration, so you're able to set up your containers and get going faster. You're going to have all of your CUDA libraries. And if you think about the way that people roll out an AI application or development, data prep is at the very beginning of it. It's not the cool app at the very end, but all of this data that our enterprises, that everybody, has out there really needs to be cleaned and worked on. With some of these applications, you know, like a CUDA library, you can use cuDF for pandas and Polars, and you add one line before the code and it's getting you going faster, really, to accelerate that part of it. The next part is the fine tuning. So you're going to get TensorRT in here, so you're able to fine tune and access the fine-tuning portion of it. You have access to build.nvidia.com, and you have access to that today; there are some downloadable ones that you have to go out and get, but if you haven't gone to build.nvidia.com, I'd go out there and play around with models, look at the Blueprints that are out there. You have access to development on NIMs. And so what does that mean? Right? So I threw out NIMs, Blueprints, and models, right? And NeMo, right? So when you think about developing an agentic AI application, right, that means, so we went from GPT, where it was predictive, contextually predictive, and that was awesome, right? It'd finish your sentences, and everybody uses it every day; it really changed the way we worked. Now if you think about agentic AI, it's taking it to the next level, where it's actually taking an action, right?
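The "one line before the code" that Marie mentions maps to cuDF's pandas accelerator mode; a minimal sketch, with a made-up CSV and column names for illustration:

```python
# Turn on cuDF's pandas accelerator before importing pandas; supported pandas
# operations then run on the GPU and transparently fall back to CPU otherwise.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # same pandas API as before, now GPU-accelerated

df = pd.read_csv("sensor_logs.csv")                        # hypothetical data-prep input
clean = df.dropna().groupby("device_id")["reading"].mean() # typical cleaning/aggregation step
print(clean.head())
```

In a notebook the equivalent is typically the %load_ext cudf.pandas magic, or running an existing script unchanged with python -m cudf.pandas script.py.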
B
A predefined action, by the way.
A
A predefined action.
B
It's not going out of bounds yet. That's not now.
A
No, no, no. And so, like, if I think about how I plan vacations, right, maybe I have an agentic AI that helps me plan vacations, where I ask it, please plan a vacation to Kauai, because that sounds like a lot of fun, that fits within my school district schedule and my work schedule, and then goes to the islands that have fun things to do with kids. So it'll go back out, check my calendar, check the school calendar, check the kids' calendar, you know, all of these different things, and come back. And you can either set it up to, like, purchase it, set it up, put it on the calendar, let it go. I'm probably a little more of a control freak, where I want to see everything before it goes out and does it, so I can say, you know, let me know what my options are. And that's pretty hard to set up, right? That's going to require multiple models, multiple accesses to different databases, like my calendar database, my school calendar database, my family calendar. You're going to look at the flight schedules, the travel schedule, you know, travel...
B
...advisor for cool things to do. All types of stuff.
A
Yeah, all types of things. And so a Blueprint, when you go out to build.nvidia.com, I would probably start with the Blueprints so you can see some of the things that are out there. And think of a Blueprint as like a recipe. Some of us really like cooking, and so we go down to the farmer's market, and this is not me, by the way, we go down to the farmer's market, we buy all the right vegetables, we smell them, we buy the right bread, we set up the thing, we create our own recipes. And then some of us like looking in a cookbook and getting the recipe there. Some of us like meal prep delivered to our house, and you just throw it in the thing. And some of us go to the restaurant, right? All of these are different ways that you can build an AI application. So Blueprints help kind of give you a framework, a recipe that you can use. And when you're using that recipe, you're going to use things like Nvidia NeMo, right? Because this is going to help you with developing your data flywheel, right? Because once you build an AI application, you can't just go, okay, well, I built it once. It has to keep learning from itself, and you have to give it guardrails, right? You do not want it to act on its own, right? You want it to stay within your language, right? So maybe the application you're building for your company is going to be reflective of your culture, right? Like the kid toys, you want it to talk, you know, you want it to have kid language; you don't want it to go outside of that. So you're going to build some guardrails in there. And so you're going to be curating the data, you're going to be testing it, you're going to be putting in guardrails, and you're going to go around and around until you have it right, and then you're going to deploy it, right? And NeMo will help with that kind of flywheel in order to get things out the door quickly. And then a NIM, which is an Nvidia NIM microservice: think about all these models that are coming out, right? They come out all the time. And how do you stick a large model in your agentic AI? You might have five or six of them. Think of this as a containerized version of the model that's optimized to run in your environment, specifically on the AI factory or the GB10, or even on an RTX 6000 inside of your Dell workstation, right? So I think it's, you know, a cool way to go out and play with these models, and then also see what you can and cannot do, you know, what's available, and then also give you some ideas on what you want to work on, right? Whether it's a digital human or, you know, a supply chain optimization or video surveillance, right? So there are a lot of different Blueprints and models that you can play around with out on build.nvidia.com, and you can access that all through the GB10.
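As a hedged sketch of what "a containerized version of the model" looks like from the calling side: NIM microservices generally expose an OpenAI-compatible HTTP API once the container is running, so a client can talk to them like any hosted model. The hostname, port, model name, and prompt below are illustrative assumptions, not a specific deployment:

```python
# Query a locally running NIM microservice on the GB10 from any machine on the network.
# Assumes a NIM container is already up and listening on port 8000 (placeholder values).
from openai import OpenAI

client = OpenAI(base_url="http://gb10.local:8000/v1", api_key="not-needed-for-local-nim")
resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # whichever model this NIM container serves
    messages=[{"role": "user", "content": "Draft three guardrail rules for a kids' toy assistant."}],
)
print(resp.choices[0].message.content)
```

Because the endpoint speaks the same API wherever the container runs, the calling code stays largely the same whether the model sits on a GB10, an RTX 6000 workstation, or in the AI factory.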
B
And all free, which is fantastic. I love the recipe analogy, the cook versus the meal prep versus, hey, I'm going to the restaurant. And I'll be honest, I've played around with a lot of the different Blueprints, and some are, I'll be honest, more complicated than others. Some are very straightforward; it's like, here, follow the script, it's very easy. But then, you know, there are things like, even when you get a meal delivery service, it might say, hey, you need to brown the meat. And we all know, because we're humans, maybe not if you're vegetarian, but if you've eaten meat before, you know what it means to brown the meat. But if you've never cooked before, my daughter would not know what brown the meat means, because she's never done it. And that's the thing: sometimes there are those little gotchas. But the other thing I love about the build site is these are very common Blueprints, so I just go to Reddit, I go to other places, you know, Nvidia support, different channels, and you can get the answer to the little thing you're missing. And that's how I learned where AI was going, using different Blueprints, different SDKs, playing with NeMo Retriever, and being able to do cool things like that. So I highly love that. Last question, and then we'll wrap it up, because we've talked a lot about the GB10. We've talked about the CUDA architecture, CUDA, ML, TensorRT, all of these Blueprints, NIMs. But Nvidia AI Enterprise: if you're buying this as a sole device, you know, just for you as an independent contractor, you're not working for a large enterprise or any company, you could probably tune out at this point. But if you have an AI factory, where you have a data center, you have servers, you've got kind of the whole kit and caboodle, Nvidia AI Enterprise is the connective tissue and layer that, from a software orchestration standpoint, connects the development that's happening on the GB10 or Dell Pro Max workstation, pushes it up to the server for deployment, and back down for fine tuning, however it needs to be done. And when I say that, I mean the GB10 does not come with Nvidia AI Enterprise; however, it very quickly connects. If you already have a subscription, you just, you know, add it on and pay the license fee. But what is the true value, now that the GB10 has the horsepower that it does, of putting that into the Dell AI Factory with Nvidia, with Nvidia AI Enterprise? What is the value that companies are going to realize?
A
Well, I think when you think about Nvidia AI Enterprise, when you're on this, it kind of sounds corny, but when you're on this AI journey, Nvidia AI Enterprise really allows you to have this backup, right, the support that you need to help you get through the process. Because, like you said, in some cases some of those Blueprints are really easy to do and some of them are a lot more complex, and you're working on various different components of building your AI applications and moving them out into the factory. This is a very complex task, and it's something that we do with our customers all the time. And so you get access to the Nvidia experts to help you along the way, and you get the support that you need in order to get those things done. So if you have a question about, hey, I'm working with NeMo and I have a couple of questions here, then you're able to get that, whether you go through Dell or Nvidia; you want to make sure that you're getting the support that you need. A lot of times enterprises don't like DIY, right? None of us want to have, like, rogue IT going on, right? Because IT is there to help support the end users. We don't want to have DIY projects going on that are not being supported, because these are valuable resources; we're helping our valuable employees get the products that they're delivering out to market. And Nvidia AI Enterprise just allows you to have support and a way to get these things through the process, and access to all the goodness that Nvidia puts out there. We're a full stack company, and so it's not just about the GPU, right? It's about those libraries, about the SDKs, it's about, you know, getting things out into the market, right? There are models that we put out there. Maybe you're doing genomics and you need access to a BioNeMo model, right? There are models that you're able to access, and it's just really helping you get your jobs done faster.
B
I love it. And you're right, Nvidia, if you think they're a GPU company, yeah, that's not really the case. It never really has been; that ship sailed a long time ago. So Marie, this has been a fantastic conversation. What I like to do at this part is take about 30 seconds and pretend someone has just joined the episode. You have 30 seconds to recap what you feel they need to walk away with from watching this episode. Give us about 30 seconds, and then we'll wrap up the episode.
A
Great. So I think if you're watching this episode, you'll get a great indication of the GB10, the Grace Blackwell superchip that you can use at your desktop to help get your AI development, fine tuning, and data science workflows done faster.
B
I love it. You're right. And with that, Marie, take a second, tell everyone where they can find you, like on LinkedIn. You mentioned build.nvidia.com; if there's anywhere else you think they should reference, we'll make sure to put it in the show notes, but we'd love to hear it for the audience now, and then we'll close out.
A
Hey, to get started on your AI journey, I would say to go out to build.nvidia.com. And I am only on one social network, which is LinkedIn, so you can find me, Marie Breedlove, on LinkedIn. I do have other socials, but I don't pay attention to them, I don't really monitor them, so please don't reach out to me there.
B
At least you're honest about it. Like, hey, don't reach out to me there, you know, I love that. That's so amazing. That's so funny. Well, Marie, you've been a fantastic guest, and I know that you're traveling right now and carving out time for us; I really, really appreciate it. So, you know, it kind of comes to the end. The big thing is, go right now, wherever you're listening to this episode, just go to Dell.com, type in Dell Pro Max with GB10, and learn about it, research it. You can learn all about it on the product page. And if you're a software developer, a data scientist, you know, a machine learning engineer, a neural network designer, an app developer, really give the GB10 a hard look. Think about, you know, are you Windows based? Are you Linux based? Think about your workflow. If you're a student taking a machine learning class in college, or a neural network class, or, you know, an algorithm class, think about your workflow. If it fits, it is a great device that you can run standalone, connect as a companion device, or even stack two of for twice the power, as kind of a 400 billion parameter model fine-tuning piece. So I think it's going to change the game. I'm very excited about it. I'm excited that once it's launched, I'll be able to get my hands on it and actually play with it. But with that, you know, this is Reshaping Workflows with Dell Pro Max and Nvidia RTX GPUs. Until next time, keep your workflows running local, and we'll see you on the next one. Do what you want.
A
Do what you want.
B
This podcast was produced in partnership with Amaze Media Labs.
Episode: Twice the Power, Half the Space: Dell Pro Max GB10 in Action
Host: Logan Lawler
Guest: Marie Breedlove (Global Sales Lead, Workstation Business, NVIDIA)
Date: October 23, 2025
This episode spotlights the launch of the Dell Pro Max workstation powered by NVIDIA’s GB10 Grace Blackwell Superchip—a compact AI supercomputer designed to redefine the possibilities of high-performance desktop computing. Host Logan Lawler and guest Marie Breedlove dive into the technical innovations, target users, and transformative workflows enabled by GB10, with a special emphasis on real-world applications, ease of integration, and the impact on education, research, and enterprise environments.
Timestamps: [02:04]–[04:43]
"Think about having an AI supercomputer at your desk... sharing 128 gigs of VRAM inside the superchip. If you're a data scientist, you can access maybe a 200-billion parameter model off of this super small device." – Marie [02:56]
Timestamps: [05:52]–[08:15]
"When we say AI, I'm not talking about installing comfy UI and generating a few images; I'm talking about the actual building—data cleansing, tagging, model weighting, quantitation... the people that put that together." – Logan [06:43]
Timestamps: [08:15]–[10:43]
Timestamps: [10:43]–[13:33]
"To be able to have this device where you can manage that [AI workflow] independently off your main device...I can't tell you how game-changing this will be." – Logan [11:56]
Timestamps: [13:33]–[14:40]
"When you connect them together, it'll see it as one unit...you’ll be able to access the 400 billion parameter model." – Marie [14:18]
Timestamps: [14:40]–[17:56]
"The biggest problem I have is people want to learn [AI] but, other than one or two people, they don’t have a GPU...so I’ve seen education as probably one of the biggest things." – Logan [14:40]
Timestamps: [17:56]–[21:16]
"My guys have to sign up for time on a valuable device, and this is worth two months of that [shared resource]. Within two months it’s paid for." – Marie [19:27]
Timestamps: [22:37]–[28:51]
“Think of the NVIDIA Blueprint as a recipe—some of us like cooking from scratch, others use meal kits or go out to restaurants. Blueprints are your shortcut to building AI models.” – Marie [26:08]
Timestamps: [28:51]–[33:08]
“None of us want to have rogue IT going on...We need to support valuable employees to get the products that they're delivering out to market. NVIDIA AI Enterprise just allows you to have support...to help you get your jobs done faster.” – Marie [31:15]
[33:40]
Recommended for: AI/software developers, data scientists, ML/AI students, higher education, enterprise IT decision-makers considering on-premises AI and research acceleration.