
A
Is when that underlying data changes. How do I know it's changed without having to go back and check that it's changed?
B
Ah, you know, you know, you hit something big there. Yeah, go ahead, Lynn.
C
Here's my advertorial. If you want to do agentic AI and you might have any idea why a couple agents thought they were pulling the same root data and didn't, this is what you need to be able to figure that out.
B
Welcome to Embracing Digital Transformation, where we investigate effective change leveraging people, process, and technology. This is Darren Pulsipher, chief solution architect, author, and most importantly, your host. In this episode, From Hype to Impact: Building Scalable AI Solutions for the Enterprise, with AI experts Lynn Comp from Intel and Russell Fishman from NetApp. Lynn, Russell, welcome to the show.
A
Hey, Darren.
B
Hey. Everyone that listens to my show knows a couple things about it. First, I talk too much, so I'm going to have you guys talk a lot today. And second, I never have anyone on the show that isn't a superhero. Every superhero has a background story, so we'll start with Russell. I'll give Lynn a little more time to come up with her superhero background story. So, Russell, what's your background story?
A
What have I let myself in for here? I'm worried now.
B
Yeah, I know.
A
Oh, we just want my. You want my background story?
B
Yeah, yeah, give it to me, Russell.
A
Come on, give it to me. I live in the New York metro area. And you can hear, you can tell that from the accent, right? Native Bronx accent. You hear that? Yeah, yeah, yeah.
B
It sounds really. You sound just like. Just like.
A
Yeah, exactly. Exactly. No, so I grew up in northwest London in the UK and had an interesting sort of career. I started off life as an economist, because of course that's how you get into tech. You start off by being an economist. But I got introduced to the tech industry originally working for a company called EDS. Remember EDS? Ross Perot.
B
Oh, yeah.
A
Did that for about a decade. Lived all over the world. I had really great opportunities as a youngster to do that, and spent a couple of years in the Middle East. That was kind of interesting. Ended up back in the US, then did a stint with Cisco Systems, and then came to NetApp. I've been at NetApp, actually, it's kind of scary, a decade now. You don't come into these things thinking you're going to be anywhere for a decade, but there you go. So I've got an odd background. I've got the tech side of things, but I'm really interested in the business side of things. What I do at NetApp is run our global solutions product management team, essentially. That's how we connect products with customers, which seems fairly obvious that you'd always want to do, but it isn't always done. I've been personally responsible for our AI solutions business for the last four and a half years, and it's been a roller coaster. I think for all of us it's been exciting. It feels like we're in a completely new phase of it right now, and it gets me out of bed every morning excited to find what new crazy stuff is happening, and, generally, showing my wife new AI things. That's a standing thing in the house, right? So when ChatGPT came out, well, she's a big Law and Order fan. And I was cheating: having been in the industry for a while, I knew ChatGPT had a lot of content to work off of, lots of episodes. So I asked it to come up with a new, unique episode. Now, my wife has seen every episode of Law and Order. So I said, have you seen this one? And she's like, no. Have you seen this one? No, I haven't seen this one. And she's like, oh Russell, you're so lucky. You like that?
B
This is gonna be bad.
A
Listen, there was one as recently as yesterday, and if people saw this, this was, at the time of recording, a couple of days ago: Google's IP filters for their video generation service accidentally went offline. So there was a four-hour window where you could get it to generate video with real characters in it, right? And people were doing that with all sorts of crazy characters. There was one with Spider-Man hanging around New York City doing all sorts of New York City type stuff. And I showed my wife: what do you think of this? She's like, oh yeah, yeah. I said, that's AI. She's like, oh no, we're in trouble here, aren't we?
B
I'm like, oh yeah, we are.
A
Absolutely in trouble here. Right? So, yeah, that's my background. Very inquisitive, excited about tech, but the realities of making it real for customers is where my head's at.
B
That's awesome, Russell. All right, Lynn, how are you going to help Russell?
C
So that one's gonna be hard. I'm gonna tell you a story that very few people have ever heard. So I've been in the industry long enough that when I was pregnant with, I think, my second son, two boys, I had to travel to Japan. And this was when mobile telephony was going crazy. And because I was unable to drink, I ended up having to do karaoke. And so.
A
Oh, no.
B
Really?
C
Yep. And so here's where the superhero Clark Kent switcheroo comes in. I studied voice all the way through college. I was either going to be a lyric soprano doing opera, or I was going to be an electrical engineer in the computer industry. And it turns out opera singers don't eat very well at all, so I didn't really want to do the peanut butter and jelly sandwich thing. Anyway, I had to do karaoke, and when I got home, I had more cash than when I left. What did you do? Well, it turned out that the karaoke bar, which was owned by my customer, had a tradition: everybody took up a cash collection, and whoever won the night got all the cash. So there you go.
B
Lynn, have you met Sarah Music at Intel? Because we both worked at Intel. Sarah Music is an opera singer and electrical engineer.
A
Her name is Sarah Music.
C
How ironic.
A
Her surname is.
C
I know.
B
Seriously, Sarah Music. That is her name. That is her born name.
A
Fantastic. Brilliant.
C
Okay.
B
She sings for the Charlotte opera in North Carolina, I think.
C
Oh, wow.
B
And she's been on my show a couple times. So you are. I can't say you're the first opera singer I've had on the show. You're the second. Ah, right. So classically trained, right?
C
Yes. Yeah. I was the only electrical engineer that did a full recital, even though I was a non-music major. I was a music minor.
B
That is so awesome. What a wonderful story. So, hey, today we want to talk specifically about private Gen AI. This is a big deal for me, because I see way too many organizations out there that don't know what to do with Gen AI, and all their intellectual property is just going like crazy to the public Gen AIs. In government, it's a huge problem, but government doesn't know what to do about it, because, well, I'm going to say the nasty, dirty word: Nvidia told me I have to spend $50 million to buy a bunch of GPU clusters in order to do generative AI, which the three of us know is a total lie. The good news is I can now actually go out and build this with a RAG. I can stand up my own stuff. So that's the problem space. I want to hear this story of how Intel and NetApp got together to make this happen. You want to go first, or Russell? Yeah.
A
You know, I'll give you my two cents' worth, though I'm sure Lynn's got more. By the way, her story was way better than mine, just to be clear. But that's okay. We're not competing here.
B
I kind of get the feeling now Lynn is a little bit competitive.
A
Just a little bit. Yeah. Well, look, it's interesting. The market for AI, I think you're right there, is going through an evolution. People like to talk about the technology: they talk about predictive AI, generative AI, agentic AI. I'd rather talk about it in eras, and the era we're entering now is enterprise AI. That's really where we are.
B
Russell, you are a product guy.
A
Oh, it is a sound bite, right? Enterprise AI. So what the hell do I mean by enterprise AI? Enterprise AI is where the rubber meets the road. It's the difference between trying something out, the wild west if you will, and actually delivering value. And listen, every single technology introduction has gone through this maturity curve, so it shouldn't be a surprise to anyone who's been in the industry for a while. Frankly, it shouldn't be a surprise to anyone who just wants to deploy technology that's meaningful and delivers value at the end of the day. The last few years have been about, can I do AI? Now we're talking about not just, can I do AI well, but can I do AI in a way that's actually going to deliver some value at the end of the day? Not like I'm going up to a roulette table, putting some chips down, and hoping something pays off.
B
So you're talking about moving away from science experiment roulette, right? Because that's what's been going on. Even in government, where I work, they'll spend millions and millions of dollars saying, oh, I hope that works well.
A
And listen, there are some organizations that are going to do that, Darren. And I mean, I want to be clear that there are organizations, yeah, that.
B
Have to do some of that.
A
There are organizations that are going to go do that, right? Some big, brand-name companies that are building specific differentiation, where it makes sense for them to go do that and they have a risk profile that supports it. But that's the minority, I want to be clear. The majority is not that. There's a really good stat, Darren. We did some primary research: 85% of AI projects never make it to production.
B
I think it's more like.
A
It may be right, but it's way too high, right?
B
Oh yeah, yeah, yeah.
A
I hear people talk about democratization, that sort of stuff. And the reality is that democratization for us means accessibility and dependability: when you deploy something, it's going to deliver. Anyway, that's kind of the NetApp background and what we've been seeing in the market. Intel has been a close partner of NetApp for years. Actually, for more than years, for decades. If you look at what we bring to market in terms of our appliances, our AFF, our FAS, and other hardware platforms, they're based on Intel. We have a rich history of innovating together. We actually worked with some new Intel technology to accelerate our storage efficiency capabilities in our latest platform. So Intel and NetApp, that's not a new thing. What is a new thing is realizing that we could help each other, and at the end of the day help customers, by making AI more accessible, more dependable, and more repeatable. And Lynn, I'm sure you're going to add to this.
B
Yeah, yeah. So I mean, Lynn, how does Intel play into this? Because we don't actually sell to very many people directly. We are.
C
Right.
B
Selling through our partners like NetApp.
C
Absolutely.
B
So how did this conversation first start? Did Russell, you know, beat you over the head? Because I can tell. No, I mean, you know.
C
What, it was like a mind meld, honestly. Because one of the things, running the AI Center of Excellence for Intel, that I keep finding is that there are so many companies that will exercise their free cloud credits, come up with some really interesting use cases, and then go, oh my God, how much does every OpenAI API call cost me? And so when you start doing the math of, I have 100 support engineers that are all accessing the customer support database and information through a RAG interface going through an LLM, how much am I going to pay per hour for this? It gets very difficult to pencil out. What's the right way to scale this? How do you actually manage the flows? And that doesn't even get into, where the heck is my data? Because everybody's got their data in the cloud, they've got their data on prem, and you can't move it. It's Hotel California: you can check out any time you like, but you'll never move it. So you need the visibility that a NetApp is going to provide. And the third thing I remind people of is that it's really easy to place a PO. It's really easy to just write the check and pay for the hardware. Then you get the hardware and you set it up, and it's like, well, now what? So I think people realize, wow, to get real ROI you actually have to find the universal use cases, like chatbots and document processing. You don't need a GPU for that, especially with the NetApp solution. People are learning on the back end what they might not have actually calculated on the front end.
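The cost math Lynn is gesturing at can be penciled out explicitly. Every number below (queries per hour, tokens per query, price per million tokens) is an illustrative assumption, not actual API pricing:

```python
# Back-of-envelope cost model for the scenario described: 100 support
# engineers querying a hosted LLM through a RAG interface all day.

def monthly_api_cost(engineers, queries_per_hour, hours_per_day, workdays,
                     tokens_per_query, price_per_million_tokens):
    """Estimate monthly spend on hosted-LLM API calls."""
    queries = engineers * queries_per_hour * hours_per_day * workdays
    tokens = queries * tokens_per_query
    return tokens / 1_000_000 * price_per_million_tokens

# Assumed: 100 engineers, 10 queries/hour, 8-hour days, 22 workdays,
# ~4,000 tokens per RAG-augmented query, $5 per million tokens.
cost = monthly_api_cost(100, 10, 8, 22, 4_000, 5.00)
print(f"${cost:,.0f} per month")  # -> $3,520 per month
```

The point is less the total than the shape: cost scales linearly with headcount, query rate, and context size, which is exactly what makes it hard to predict before deployment and what a fixed on-prem footprint avoids.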
B
Well, you know, I'm seeing that a lot with my customers, especially in government. Their management knows that the workers are using ChatGPT. They know it. They're using it at home.
A
Yep.
B
Which means they're sending classified data home, running it through ChatGPT, generating stuff, and coming back. It's a huge problem across government, both federal and state and local, because it's such a powerful little tool. But how do I recreate it in a safe environment where my IP is not going out to China, heaven forbid? DeepSeek. Holy smokes, right? You want to talk about a TikTok moment? There's your TikTok moment with DeepSeek. And all that private stuff has been hard to set up on our own.
A
Right.
B
So with NetApp, you've brought the data and the Gen AI together. Russell, you want to explain a little bit about what the offering really entails? I mean, if I go to NetApp, can I just order one of these things, it solves all my problems, and I drop it in my environment?
A
I mean, so what have we got going on? There are a couple of things, I think, that underpin the strategy here and what we've built, and what we continue to evolve. So firstly, both Intel and NetApp believe fundamentally that open source is winning in AI, right? Anyone that's seriously doing any work in AI will know that very quickly. Even anything you see that's commercial is ultimately built on top of open source. Intel obviously has an incredible history and pedigree in the open source community. So does NetApp, and we've been responsible for a number of innovations over the years. To give you an example, not a lot of people know this, Darren, but the persistent storage interface for Kubernetes, the Container Storage Interface, CSI, was built by NetApp and donated to the open source community. So we both have a long history. And of course, if you think about the way that people access files, NFS, NetApp is one of the largest contributors to the NFS standard in the world. So the point is, both of us agree that open source is the way forward, and I credit Intel with recognizing this and being one of the founding members of a critical software stack called OPEA, the Open Platform for Enterprise AI. It was an attempt, and it has succeeded, to pull together various elements of the open source stack into something that's really just an application that just works, that takes all of the guessing out of deploying these things. So we engaged with Intel and realized, hey, listen, you guys have got this thing you're working on with the Linux Foundation called OPEA. It's a bit rough around the edges right now, but with a little polishing, this thing could really make things super easy for enterprises. We shared our vision for where things are going, and what we found in Intel as a partner was, wow, yeah, we 100% agree.
We were literally on the same page. This is great. We were having this conversation, and we saw customers having this problem, exactly as you said, Darren. So off we went, we developed this software stack, and then we said, hey, how are we going to get it to customers? That's a big problem. How are we going to connect?
B
Remember, I mean, that's one of the biggest problems of open source, right? I go to the GitHub page, I read the install instructions, and I'm like, now, I have a PhD, I should be able to understand any of these GitHub pages. Not the case when electrical engineers are writing the GitHub docs.
A
Wow.
B
I'm just saying I'm a software engineer.
A
It could be worse. I could be a product manager, right? That's the lowest form of life.
B
Yeah, that would be better, right? Because normal people can understand it then. So you guys took something open source that was being worked on and packaged it more simply. And Lynn, let's talk about that from your perspective, because as I mentioned earlier, Intel doesn't really sell to end users directly, but we create all this content. So why did you reach out to NetApp specifically around this?
C
Well, you know what's really interesting, Darren, and I'm just giving you another one of my favorite analogies: open source and components have been a lot like Home Depot, where you get a box of nails, you get a pile of wood, here's your blueprints, and good luck, DIY.
B
And that's why I have to make seven trips to Home Depot before I get my project done.
C
So I kept hearing from customers like, look, I don't want locked in, I don't. I want them to have some more granularity over what options I have to deploy, but I can't spend two months building everything from every layer up. So, you know, give me Ikea, I'm fine doing some assembly required, but please fix this. And so a company like NetApp and a partner like NetApp that we've had as long as we've had made such a huge difference in all of that for us.
B
All right, so, you know, a marriage made in heaven, right? You're working together. So what is the offering?
A
Yeah, so as I said, we've been working on this software stack together, so we think we've got that solved. NetApp obviously has a lot of the data. I mean, here's an interesting bit of information: about 100 exabytes of data globally sits on NetApp. That's a lot of data, right? So we certainly think, hey, we've got a kind of responsibility to get that data ready for AI. That's part of this as well. But as I said, my role in Solutions at NetApp is to bring technology to the company and connect it with customers. And the way we wanted to do that was to find a flexible way to work through our joint channel partners. This is the key. Think about how enterprises typically gain access to technology. They're typically not working directly with vendors like NetApp and Intel, as you said, Darren. They're buying through partners who aren't just reselling things; they're actually building a solution for them. So we recognized that we really wanted to work through these guys; they're best placed to understand the specifics of customer requirements. So we produced this thing, and it's actually a lovely joint branding thing: the NetApp AIPod Mini with Intel. I know it sounds a bit convoluted, but it's not a bunch of.
C
Numbers and letters and numbers.
A
Right. And what it is, is that this is made available through our channel partners. The channel partners actually take three things. They take the NetApp storage OS, called ONTAP, delivered through an appliance format, generally our AFF series. They take the class-leading Intel Xeon 6 processor, delivered through a server. Pick your server vendor, it doesn't matter. And then it comes with, essentially, a version of this stack called OPEA, the Open Platform for Enterprise AI. We have co-engineered and expanded on OPEA to smooth out those rough edges, but also to do some work to make the data integration piece that much stronger, so that it really focuses on a couple of really important use cases: enterprise search and enterprise knowledge management. And these partners, back to your point, Darren, they're the ones that deliver the support experience, right? Because you say open source; well, they're the ones doing it.
B
Okay, so I want to touch on the use case. It's a beautiful use case, and a huge problem. We have it at Intel. In fact, just this morning I was on with another solution architect, and we were trying to find a specific presentation that I gave four years ago in upstate Michigan. We found it eventually, but it took us an hour and a half to dig through all of the stuff. And I'm the one that put these in different directories all over the place, because I'm scatterbrained. So is this the kind of use case that this helps solve? Not just finding the presentation, but finding the content, which is more important, because what we were looking for was the content in this presentation. Is this the idea: all my data can be collected into your storage appliance, and now I can catalog all of that data through a RAG, and do a better job at retrieval, a better job at snippet or content retrieval, and even generate new ideas from all my enterprise data? Is that the idea behind all of it?
A
Those things you mentioned? You mentioned your government friends fretting quite understandably on the use of ChatGPT. It's a problem for many organizations, right? This is not. It's not new. So if you're doing it right, you're spending all that money that Lyn mentioned on open. Open air APIs, right? If you're doing it the wrong way, you're potentially leaking information into the public domain. So these are. These are real. These aren't theoretical concerns. They're real concerns. What we're doing is firstly making it so that you have a defined and predictable cost environment in terms of delivering this on prem. Next to. Ideally next to your data. Yes, it could be a NetApp environment. It might not be a NetApp environment. NetApp appliance actually helps bring and unify that data together with opa. What we're using is we're using the accelerated instruction set built into Xeon 6 to very rapidly generate the vector embeddings as part of the RAG environment. And that gets putting obviously back to database. That all sounds very typical rag. So what makes it enterprise? This is critical. We've hardened it. We've made it enterprise ready. What does that mean? If you have data that has, for example, access permissions associated with. It's a group. Legal can see it. Darren, you mentioned your files. That was your file directory. That wasn't someone else's. Right. Listen, it's good that you can search yours, but I don't necessarily want other people searching your home directory. Same as the legal team can probably have access to some legal stuff. They don't have access to finance. Well, guess what? Just the ability to flow through permissions from underlying file environments through to the RAG environment. That is an example of what enterprise means in this case. So, yeah, that's exactly what we've done. Right? So it literally does all the things you'd expect an enterprise class chatbot to do. And it does it out the box.
C
Yeah, Darren, I think one other thing is super important here. Some of the early implementations of AI on other products had a situation where, using RAG, some of the employees were able to get access to the CEO's emails.
B
Oh, yeah, yeah, I've seen that before too.
C
Yeah. And you know, I studied and managed to clear the bar for IAPP's AI Governance Professional certification. And the roles that you have in Europe come with very different requirements, whether you're a distributor, an integrator, a developer, or a deployer. So outside of the US there are very significant rules for where and how you can ship these solutions, especially if you're a multinational, and you're going to have to keep tabs on all of those regulations and what it takes to be compliant based on your role. So it gets very, very important.
B
I'm glad you guys are bringing this up, Lynn, because a lot of people say, well, I can run a RAG on my laptop, which I do. I've got a RAG running on my laptop. But that's not enterprise; that's just running on my laptop. Enterprise is a whole different beast. It's a lot more complex. I've got permission problems, I've got policy and compliance issues that I have to deal with, especially in Europe, as you mentioned, Lynn. Europe is kind of a nightmare. What other sorts of things do I need to pay attention to, Lynn, around enterprise software compared to what we're seeing out there in Gen AI today, running on my laptop or in the cloud?
C
Yeah, well, going back to the original open source point, a lot of the open source vendors are billion-dollar vendors or more, and yet they share all of their source code in repositories. Now, how are they worth a billion dollars when they share their source code? Well, underwriters have a very particular requirement: they want to know that you can pick up the phone and get help and support, that there's maintenance, and that you're not just doing it by yourself.
B
Not from Sven, living in Iceland. Right. Because there are some open source packages where they're maintained by one guy in Estonia or Finland.
A
You know what we call that? Do you Americans use this phrase, or is it just a British phrase? Bus factor. You ever heard that phrase before? I bet you have, Darren. No? You don't want a bus factor of one. That means if you get run over by a bus, God forbid.
B
Oh, bus factor. We call it trucking.
A
Trucking factor. The same. Yeah, yeah, yeah.
B
Because we don't have buses.
A
I know, I know.
B
So yeah, you're absolutely right. I love the whole enterprise bent to it. Reliability, of course, is another big issue. Tell me a little bit more about the storage side, because I know some things about NetApp. NetApp has the ability to span multiple data silos and kind of break them down. Can I expand this whole RAG concept across your whole ecosystem of products, or is it tailored so that, hey, you've got to move all the data?
A
No, no, no, no. It's a really good question. It's really a two-part answer, though. Out of the gate, what we've delivered is something that operates in heterogeneous environments but really does an exceptional job in homogeneous environments. I'll explain what I mean by that. NetApp has a class-leading storage OS called ONTAP. This storage OS is available on prem in various form factors, and it's also available in the cloud. In fact, we are super unique, Darren, and I know Lynn knows this: we've taken that storage OS and OEM'd it to each of the three main hyperscalers, and we're the only ones that have done this. They have their own storage services that are built on top of that storage OS. So it's interesting: not only do you have all this capability and all that data, you have a back door into the cloud. It is a really interesting opportunity to bring all that data together, and it makes our lives a lot easier. We've got a whole bunch of technologies that make it really easy to make that data available to, for example, the AIPod Mini stack without ever actually having to move it. And more importantly, and I think this is probably most critical, is when that underlying data changes. How do I know it's changed without having to go back and check that it's changed?
B
Ah, you know, you know, you hit something big there. Yeah, go ahead Lynn.
C
Here's my advertorial. If you want to do agentic AI and you might have any idea why a couple agents thought they were pulling the same root data and didn't. This is what you need to be able to figure that out because this.
B
Is really, really important that people are missing.
C
Yeah, go ahead Russell, you can describe this.
A
Actually, there's a really interesting use case that we've been working through with a hospital in Europe, as it happens. They're building a clinical chatbot. Just think about it this way: if you're a clinician and you walk into a patient's room, you're seeing a lot of patients, and you're not keeping up to date with everything that's changed for every single patient. So they wanted to build a clinical chatbot that could very quickly provide an update to that clinician as they walked in, rather than, remember the old days, notes hanging on the end of someone's bed, reading them through. Of course, there's a risk you're going to get something wrong, you're going to miss something. So they built this thing and it was great. And it was consuming a combination of structured, semi-structured, and unstructured data. I know you guys are data people, actually. The reality is, that's what medical records look like. It's a combination of all those things.
B
It's a mess. Yeah, it's a mess of different.
A
It's a mess of different data. Now, the problem is that if you combined all of the patient records inside this medical institution, it was a lot, and it took about six hours just to enumerate it, just to check that there's been a change. The initial indexing takes way longer than that. So imagine this: you're in a hospital, you've had blood drawn, you have a new blood test, and the blood test is in your record. The clinician walks in and he or she gets results that are out of date. Why? Because that process, that daemon, hasn't gone through and managed to actually pick it up yet. So here's an example. NetApp has some pretty unique technology. One of the cool bits of technology we have is this thing called SnapDiff. SnapDiff detects when something has changed, and it actually triggers the re-vectorization of that data entity. So these are all interesting things. There are so many of these little tidbits of what makes an enterprise-class RAG system versus a POC, something that runs on your laptop, where that's acceptable. For a patient record system, it probably isn't acceptable.
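The idea behind SnapDiff, diffing two snapshots so that only changed data gets re-vectorized instead of re-enumerating everything, can be sketched like this. The snapshot plumbing is simulated with plain dicts; the real SnapDiff operates on ONTAP snapshots, not Python objects:

```python
# Sketch of snapshot-diff-driven index maintenance: compare two snapshots,
# then re-embed only the paths that changed and drop the ones deleted.

def snapshot_diff(old, new):
    """Return (changed, deleted) paths between two snapshots."""
    changed = {p for p in new if p not in old or new[p] != old[p]}
    deleted = {p for p in old if p not in new}
    return changed, deleted

def sync_vector_store(old_snap, new_snap, vector_store, embed):
    changed, deleted = snapshot_diff(old_snap, new_snap)
    for path in deleted:
        vector_store.pop(path, None)                 # drop stale embeddings
    for path in changed:
        vector_store[path] = embed(new_snap[path])   # re-vectorize deltas only
    return changed, deleted

embed = lambda text: [float(len(text))]              # toy embedding function
store = {}
snap1 = {"/patients/42/bloodwork.txt": "pending"}
snap2 = {"/patients/42/bloodwork.txt": "hemoglobin 13.1",
         "/patients/42/notes.txt": "stable"}

sync_vector_store({}, snap1, store, embed)           # initial index
changed, _ = sync_vector_store(snap1, snap2, store, embed)
print(sorted(changed))  # only the new blood result and the new note
```

The contrast with the hospital's six-hour enumeration is that the work here is proportional to what changed, not to the total size of the record store.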
B
Probably not. No, no. So, okay, so if people want to find out more about the offering, they don't go directly to intel, they don't go directly to NetApp. Or do they? And then they, they point to their service provider and say, I need one of these. I mean, how do we, how do we get this out to the masses? Because this is an incredible offering. I can tell you guys put a Lot of thought behind it, especially the dynamic updating of the vector databases and I mean that's a huge problem.
A
So actually, listen, I love to give people URLs, but let's be honest, we all start by typing something into Google. So let's just do that. You type in "NetApp AIPod Mini Intel," that's AIPod, one word, Mini, Intel. You put that in, hit enter, and right at the top you're going to see "Introducing NetApp AIPod Mini with Intel" for AI deployment, and you'll see a whole bunch more. The great thing is we've got all the technical documentation out there; you'll see lots of articles and other things talking about the details of it. But honestly, what I recommend is go to your favorite channel partner that works with NetApp and the OEMs of Intel and ask them for it. If they're a NetApp partner, you're going to get a great answer from them.
C
Yeah, a lot of that's the channel, Darren. They have some incredible, super hands-on training programs. I have been so delighted to see how many of the channel vendors have proactively created these training programs with companies like NetApp, helping be the hands and feet, and their sellers are ramping so quickly. It's to the point where I would like them to come train some of my Intel sellers.
B
I think that's brilliant. In fact, Russell, send me one. I'll put it in my lab here at the house. All I need is some more equipment in my lab here.
A
Do you need more equipment? Yeah, yeah. But my wife asks me why we have.
B
Who cares if my electric bill goes through the roof?
A
Yeah, yeah. My wife's asking why we have a $650-a-month electricity bill.
B
That's right. No, this is really awesome. So guys, thanks for coming on the show today. Great to talk to you both. This has been a lot of fun. We should do this again.
A
Yeah, this was cool, Darren. Listen, we'd love to come back and give you an update. And we're not stopping here, right? With this relationship and what we've built, you're going to see more and more capabilities and features, and you're going to see us talk a lot more about it. This is where the market's going: democratization of AI, getting it into the hands of as many organizations as possible so everyone gets the benefit of it. That's what we're all about at the enterprise level.
B
Yeah. At the enterprise level, absolutely.
C
Yes.
A
Yeah, yeah.
B
Which. Which I love. I love this part because you guys actually have a real enterprise solution. So thanks again for coming on the show.
A
Thanks, Darren.
C
Thanks so much.
A
Thanks.
B
Thank you for listening to Embracing Digital Transformation today. If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org. Until next time, go out and embrace the digital revolution.
Title: From Hype to Impact: Building Scalable AI Solutions for the Enterprise
Date: July 15, 2025
Host: Dr. Darren Pulsipher
Guests: Lyn Kampf (Intel), Russell Fishman (NetApp)
In this episode, Dr. Darren Pulsipher gathers AI leaders Lyn Kampf (Intel) and Russell Fishman (NetApp) to move beyond the AI hype and discuss the real-world impact of scalable, enterprise-grade AI solutions. The conversation explores getting from “science experiment” to business value, the practical challenges of AI deployment, and how Intel and NetApp are making open-source AI and enterprise data integration accessible and secure for public sector and enterprise organizations.
Defining “Enterprise AI”:
Russell frames the current phase as the era of "Enterprise AI"—distinguishing between experimental applications and solutions that truly deliver business value at scale.
High Failure Rates:
Most AI initiatives don’t make it past proof of concept.
Security & Data Leakage:
Dr. Pulsipher notes rampant use of public GenAI even for sensitive work, raising alarm over intellectual property and compliance.
Vendor Lock-In and Cost:
The impression that generative AI requires massive GPU investments is challenged.
Open Source First:
Both companies align in supporting open-source AI stacks—citing NetApp’s role in storage interfaces for Kubernetes, and Intel’s founding membership in OPEA (the Open Platform for Enterprise AI).
Bringing It Together with “AI Pod Mini”:
The “NetApp AI Pod Mini with Intel” (delivered through channel partners) unifies data, compute, and open-source AI into a turnkey, enterprise-ready offering using:
NetApp ONTAP storage OS
Intel Xeon 6 processors
Pre-integrated Open Platform for Enterprise AI (OPEA) stack
“We produced this thing…NetApp AIPod Mini with Intel…[channel] partners deliver NetApp storage, Intel Xeon 6 processors, and a version of OPEA.” (Russell, 20:59)
What Sets It Apart:
The solution goes beyond DIY RAG, with security and scalability baked in—addressing permissions, compliance, data update, support, and data integration in complex environments.
“Just the ability to flow through permissions from underlying file environments through to the RAG environment…is what enterprise means in this case.” (Russell, 24:11)
“Some early AI implementations…using RAG, employees were able to get access to the CEO’s emails.” (Lyn, 25:30)
“Europe…is kind of a nightmare. Outside of the US, there are very significant requirements for compliance.” (Lyn, 25:48)
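The permission flow-through Russell describes can be sketched in a few lines. This is a hypothetical illustration, not the NetApp/OPEA API: the idea is simply that each vectorized chunk carries the ACL of its source document, and retrieval results are filtered against the querying user's groups.

```python
# Hypothetical sketch of ACL-aware RAG retrieval: a user only sees
# chunks derived from documents they could already read on the file
# system. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    source_path: str            # file the chunk was vectorized from
    allowed_groups: frozenset   # ACL captured at ingest time


def retrieve(query_hits: list, user_groups: set) -> list:
    """Drop any hit whose source document the user cannot read."""
    return [c for c in query_hits if c.allowed_groups & user_groups]


hits = [
    Chunk("Q3 revenue summary", "/finance/q3.docx", frozenset({"finance"})),
    Chunk("All-hands notes", "/hr/allhands.txt", frozenset({"everyone"})),
]

# An engineer in the "everyone" group sees only the all-hands notes,
# not the finance document - the CEO's-emails failure mode is avoided.
visible = retrieve(hits, {"everyone", "engineering"})
```

Filtering at query time against ingest-time ACLs is the simplest scheme; a production system would also re-sync permissions when the source ACLs change.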
Dynamic Data Updates:
With “SnapDiff” technology, only updated data is re-indexed, ensuring current results and supporting use cases like clinical chatbots in hospitals—where timely, accurate data is life-critical.
Buying & Support Model:
Customers don’t go to Intel or NetApp directly, but through a network of partner integrators—who offer hands-on training and tailored deployment.
Open Source + Enterprise-Grade Backing:
The combo (open-source AI + partner-based enterprise support) means organizations avoid the “bus factor” (a.k.a. “truck factor”) risk of relying on a single open-source maintainer (27:50).
On the State of AI Deployment:
“It’s not enough just to ask, can I do AI? Now it’s: can I do AI in a way that’s actually going to deliver some value?”
– Russell Fishman [09:13]
On Open Source Complexity:
“Open source and components have been a lot like Home Depot—here’s your box of nails, a pile of wood, and your blueprints. Good luck, DIY.”
– Lyn Kampf [18:36]
On Real-World Security Risks:
“Some early implementations of RAG…employees were able to get access to the CEO’s emails.”
– Lyn Kampf [25:30]
Compliance Is Not Optional:
“Outside of the US, there’s very significant regulations. If you’re a multinational, you’ve got to keep tabs on all those regulations and be compliant based on your role.”
– Lyn Kampf [25:50]
On Dynamic Indexing:
“SnapDiff detects when something has changed. It triggers the re-vectorization of that data entity… that’s what sets enterprise class RAG apart from something that runs on your laptop.”
– Russell Fishman [32:16]
Channel-Driven Adoption:
“You type in ‘NetApp AI Pod Mini Intel’—right at the top you’ll see all the technical documentation…But go to your favorite channel partner that works with NetApp and OEMs of Intel and you ask them for it.”
– Russell Fishman [33:44]
| Time | Topic / Quote |
|-------|----------------|
| 09:01 | Defining “Enterprise AI”; moving from science experiment to value |
| 10:47 | “85% of AI projects never make it to production.” |
| 14:20 | Security risks of public GenAI in government |
| 15:26 | Why open source is foundational in modern AI |
| 18:36 | The Home Depot vs. IKEA analogy for AI solutions |
| 20:59 | What makes up the NetApp AIPod Mini with Intel |
| 24:11 | Security, permissions, and enterprise requirements for RAG |
| 25:30 | “Early RAG could leak CEO emails”—need for compliance |
| 27:50 | The “bus factor” of open source and why enterprise support matters |
| 32:16 | SnapDiff, dynamic data updates, and real enterprise requirements |
| 33:44 | How and where to get the solution; training and adoption |
This episode pulls back the curtain on the transition from academic AI excitement to truly enterprise-grade, scalable solutions. Pulsipher, Kampf, and Fishman underscore that sustainable, impactful AI hinges on open-source foundations, robust data integration, and enterprise-ready support around compliance, reliability, and security. The NetApp/Intel partnership (embodied in their “AI Pod Mini”) is pitched as the answer to bridging technical innovation with the realities of public sector and enterprise deployment—making the promise of AI both practical and safe at scale.