
B
Welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process and technology. This is Darren Pulsipher, chief solution architect, author, and most importantly, your host. On this episode: from island kid to AI pioneer, with the founder of Pryon, Igor Jablokov. Oh, also, Igor, just so you know, I do all the editing myself, so if something goes sideways, no big deal, okay? I don't have to coordinate with anyone. I just edit it myself. So no big deal. All right, Igor, welcome to the show.
A
Thanks for having me.
B
Hey, I look forward to our conversation today. We had a good introductory conversation when we first talked, and I thought, hey, we've got to have Igor on the show. But before we get started, Igor, on my show I only have superheroes. And every superhero has a background story, an origin story. So, Igor, what's your origin story?
A
Yeah, so let me see. I'm originally from Greece, so that's where I was born, to a couple of artsy parents, shall we say. And you know, on the little island where I grew up, there was no running water, no TV, no radio, no electricity. So it was as idyllic as it can be. And at a certain point in time, you know, I noticed that there was a hurt dolphin, and I think that's when the first idea popped into my head: why can't I talk to you? And then, a number of years later, my mother decided to move us to the US so that we could be part of the computer age. After that, you know, I graduated with a computer engineering degree and became a research engineer at IBM Microelectronics. And then halfway through my career there, I started leading an early AI team. They were not being as aggressive as I wanted them to be to capture that opportunity, because they were ahead of their time. They literally were ahead of their time. In the DNA of everything, things that were Google-branded and Microsoft-branded were actually IBM engines behind the scenes. GM OnStar, behind that was us. And so I stood up my own startup; I departed probably within a year of founding that previous company. You know, I walked out on stage at the first-ever TechCrunch Disrupt conference, and if you remember those, it was like the comedy Silicon Valley. And I pull a Razr flip phone out of my pocket and I speak into it, and it talks back, with Andreessen sitting right in front of me. And what I couldn't tell anybody was that we were secretly working with Apple on the precursor to Siri before the iPhone even came out. That's when they were, you know, debating whether to use the Mac OS or the iPod OS. And then approximately five years after that appearance, we get acquired on the down-low by Amazon, and that's how Alexa is born. So, Alexa is my older sister's name.
The code name for it was Pryon. And this company, you know, was the offshoot of that previous experience, because we decided to catch our own football. We knew that that style of natural language computing would eventually come to our workplaces. And so we decided to have a go of it, you know, got venture capital investors and things of that sort. And then, probably within a year or two of founding the company, I get a call out of the blue, and the fellow says, hey, you know, I want to become your largest investor, join your board, help you run the company. And I'm like, who is this guy? And when I did a Google search, the only thing that came up about this fellow was a weird-sounding book. And I'm like, well, do you know anything about this AI stuff? And he's like, well, I worked with Peter Thiel and I'm familiar with Palantir. And I'm like, okay, let's go. And that was J.D. Vance.
B
Oh, wow. Yeah, that's always a good thing to hear, right?
A
Yeah, but like I said, we've had a good go of it as well. And I'll mention some of the other folks that ended up joining the team here shortly. But you know, Chris Mahl, who's my second in command, has a lot of experience working in the early days of Salesforce, from year two through the IPO, as one of the original Oracle execs that Benioff brought in, and Informatica and so on. And so later on I'll share what our vision is of this style of AI computing, because I think it's similar but different to the way that many, many folks are talking about AI nowadays.
B
So before we get into that part, what do you feel? Because you were there during the '90s, the dot-com boom and the bust. What's different about this time? Or is it different? I mean, the hype cycle seems to be higher on this one than back then. Maybe not. What's different this time than, you know, 20 years ago?
A
Yeah, the hype cycle is the same. It's just a recency bias, right? We just feel like, oh my gosh, the things that we're experiencing now feel worse. Think about it: if you had some sort of bodily pain now, it feels worse.
B
It feels worse than what you remember.
A
Well, than what you remember from something in the past, right, that was a similar ailment. What I would say is this: the OGs in AI, you know, folks that were working in it, you.
B
Know, back in the 80s and 90s.
A
'80s, '90s and things of that sort. There was no fame or fortune in it. There was literally no fame or fortune. There were three dominant reasons we were attracted to the field. The first was accessibility, right? My chief scientist at the time was a blind fellow. So think about, you know, supporting handicapped folks. I had a great friend, Ellison, who was a disability advocate and, you know, the first graduate of an Ivy League school as a handicapped person. And it's not just handicapped folks, but also a computing interface for children that's voice only, which is a lot safer than having a screen associated, and then also for senior citizens. So that first cluster is accessibility. The second reason we were attracted to the field was for safety reasons. You know, I already mentioned things like GM OnStar, so that you don't text while driving, right? This is way before things like CarPlay existed, or you had Android Auto, or you had Google Maps in your cars. That used to be called telematics. And then the third reason was to bridge cultural divides with machine translation. You know, I was reading something about General McChrystal last night, and how he was talking about how they didn't have enough interpreters to go around when they were in Iraq, so they couldn't interact with the population, which of course creates all sorts of risks.
B
Right.
A
And things of that sort. So that just tells you why, you know, many of us were working in that field. But now, yes, it does feel like a carnival, but it's a similar carnival to the dot-com phase.
B
Well, it is, in that I did a startup in '99 and I did not have a web strategy. And here I saw dogfood.com get $5 million, $10 million. I had an integrated development environment which was the foundation of Galaxy, right? We got bought up by IBM, and it turned into Galaxy. But I could not raise a dime, because I didn't have the right buzzwords in my pitch at the time, right? And I think we're seeing the same thing today. If you don't have an AI strategy with your startup, you're not going to get a dime. So it does feel a lot like that. We saw some really big investments that went bust, like Webvan, but now they deliver food from Amazon and Walmart and all of them. It's crazy how this goes in these cycles.
A
Yeah. Or the infamous example of Pets.com, right? The dog food being shipped, right? Oh, that would never work.
B
Yeah, dogfood.com was the one. And then they figured out, oh man, it costs a lot to ship 50 pounds of dog food.
A
But you're exactly right. I mean, now there's a special kind of attention, right? And every startup has AI, and they're wearing their T-shirts, and they're so proud that they think that they're, you know, the tip of the spear of this style of technologies. And it's sort of like, oh my gosh, our startup has to use chips. Yep, you have to use semiconductors, then you have to use the web, then you have to use mobile, then you have to use cloud. It's just going to become a default part of things. Of course you have to use it, in the same way that of course you have to drink water, and of course you have to eat something every day, and of course you have to walk and sleep and things of that sort. It's just going to become, you know, table stakes. But whenever something new gets introduced, there's obviously some euphoria. People start branding themselves as the thing, and then it just becomes table stakes and nobody really cares until the next big thing gets discovered.
B
What's interesting, and you already brought this up, is that this has been around for a while. So that moment when OpenAI launched ChatGPT on November 30th of 2022, that was a pivotal moment in AI. Even though there were a whole bunch of really interesting evolutionary and revolutionary things that happened before that, this made it highly accessible for everyone.
A
Right. Well, that moment was not supposed to happen.
B
Oh wait, now you've got to explore this, Igor.
A
Well, yeah, it wasn't supposed to happen. The only reason it happened is because they broke through a mass of taboos, right? So let me give you, one by one, a build-up of why, you know, you would ask the OGs of AI, why aren't you the ones running OpenAI?
B
Yeah, yeah, exactly.
A
Well, here it is. The first reason is that they were supposed to be a nonprofit, academic-style research institute. That's what they were, you know.
B
Yeah, that's what they were touted as. I remember that.
A
Right. And so a lot of the cloud hyperscalers at that point had this pre-investment in accelerators and things of that sort, and they were just sitting there not being utilized. So it was a great shtick for them to donate that capacity to an academic research institute, and then they would get a write-off. So that was the first thing: these things were nonprofits, they weren't supposed to be for-profit ventures. So they got more compute than anybody would normally get. Whereas, as you just said, you have to raise venture capital; nobody was ever going to get that amount of compute.
B
Oh no, you need billions of dollars to do it.
A
All right, so now that compute is sitting there. That's taboo number one. Taboo number two: because they were a research institute, they could trawl the whole Internet and copy all this copyrighted content, where a commercial entity would never do that, right?
B
Oh yeah, they would be hammered for that, right?
A
You'd never do that. So that's the second thing that they were able to do that nobody else ever did, because the rest of us have to report to boards: hey, you know, I can't take your things and resell them. So that's the second taboo that they broke. The third taboo that they broke: many of us, you know, that may be listening to this maybe have a computer science background, or a computer engineering background, electrical engineering background, things of that sort. And guess what? We were trained on ones and zeros, true or false. So they saw this alignment problem with it. This thing was hallucinating, literally making things up, like, hey, why don't you go, you know, kill your spouse, and let's go take over the world. We would never launch that nonsense, ever.
B
Never.
A
We wouldn't allow it to pollute our brand. But for them it was an interesting research curiosity. Look at this thing, it's dropping these tokens, how fun is this, you know, as a little toy. That's the third taboo that they broke that no commercial entity would have ever done. That's why it caught Microsoft and Google and Amazon and Apple by surprise, because these are things they would have never done. The fourth was RLHF, reinforcement learning from human feedback. This is fancy talk for the fact that when you type things into these style of things, you're assuming it's an encrypted connection between you and a Google-style entity. But in reality it was not. What you were contributing, which could have also been your client secrets, your employee secrets and things of that sort, was going to contact centers in Kenya, Pakistan and all these other places as well. So now your attorneys had to work overtime sending breach notices to everybody, and your intellectual property was being carted out. That's the fourth thing. And the fifth is the worst taboo, the absolute worst taboo, because the OGs never talked about these AIs as a path towards AGI. So then some people in their community start thinking of these things as divine entities. Now here's where things get spicy. Then they're starting to bait folks into using these for mental health support, as therapeutic things. Oh, upload your medical records and things of that sort. And it's starting to trigger suicides. Look at the reaction, the overreaction, and the sad overreaction that some people had when the 4o model was retired and the GPT-5 model was released, because the 5 model was more objective whereas the 4o model was more buddyish and supported people.
B
Right.
A
And certain folks that were more weak-willed were saying, hey, I miss my little buddy, what's going on? That is a very, very dangerous situation, I have to say, to the point that psychologists are saying that the folks that are starting to get emotionally entangled with these AI assistants are trending towards psychopathy.
B
So this trend sounds very similar to the social media problem that arose as well, right? The technology advanced so quickly that we didn't know the ramifications of it, socially and intellectually and culturally as well.
A
You're exactly right. And it really accelerated post-2012. You know, something was explained to me, or I overheard it, over the course of the last year. Think about it this way, in terms of our experience in the US: about one out of three social media transactions is state-sponsored. So literally disinformation, just trying to put everybody at each other's throats. Now, what happened from 2012 onwards? Well, think about the concept of an American, right? I know a lot of folks are going to be listening to this worldwide, but I'll give you a specific example, and this is probably true of your respective nation-states as well. Somebody in Texas and somebody in California woke up, you know, in 2011, and they said, my gosh, I'm an American, and isn't this place great? That's what their identity was. Now here's what started happening from 2012 onwards with social media. The same two individuals woke up, and one said, huh, that's weird. I'm an American in California, but those folks in Texas, they're kind of weird. They don't share the same American values that I do. And then the person in Texas woke up and pointed at the Californian and said, huh, that's weird. I'm a real American, and those folks don't have the same values that I have. And think about what used to happen when you were born, let's say, in the middle of the country. You actually got a choice. It was sort of like panning for gold, right, where you shake the pan and the rocks kind of move out of the way. You know, if you're born and you're like, hey, you know what, I'm the type of person that's conservative, faith-based and things of that sort, and I want to be in the energy industry, I'm going to self-select and move over to Oklahoma, over to Texas, right?
Or hey, you wake up and you're like, hey, you know what, I think I'm more liberally minded and things of that sort, I have a different lifestyle, I want to go over to San Francisco and things of that sort. So people self-selected. But with the rise of social media, it started creating these windows where you got to see other folks, and it became more stark, where people realized that inside of our nation-state there were actually, and I'm trying to remember the anthropologist that did this, about a dozen different nations inside of our country. Which is great; we could be a federation, you know, of these different styles of identity.
B
Well, they always existed, they've always been there, but now they're more visible than they were before.
A
Right.
B
There used to be more barriers. I saw the same sort of thing when I traveled Europe from 2008 all the way up through 2019, before COVID hit. You could see where the Internet had taken hold, amazingly so. And you could drive out into the countryside and see the villages and the towns that hadn't quite gotten there yet. Stark difference. So I actually saw some unification happening, right, across the world as well. But you're right, it did point out and make more visible the differences between different regions in the world. But I also saw a unifying aspect to it as well.
A
And look, one of the great concepts that we hope bears true is that there's a lot more kids that build sandcastles at a beach than kick them down. But I think what's peculiar about AI, and why we kind of took this curveball, is you're going to get that personalized experience where, instead of 12 different nations, you may end up having 300 million individual little universes and things of that sort. And it's going to be a lot harder to predict how those 300 million little mini-universes are going to be integrating with one another as well, because everybody.
B
Yeah, yeah, that, that's an interesting aspect.
A
Yeah.
B
So with the rise of generative AI and the social impacts that we're seeing today, there's also a lot of benefits as well, as you kind of already mentioned. It has driven, though, a lot of investment into AI. Where before, AI was kind of that redheaded stepchild that we know is going to come about, and we'll give it some money every once in a while if we have extra money. But now the floodgates have opened. Where do you see it going now? Because we all know that when you invest money into something, it does grow, and we do push the boundaries with it. Where do you see it going?
A
Oh, you're exactly right. I mean, we're over the moon when we see it working in the healthcare field, right? And all the opportunities in terms of drug discovery, and obviously helping us cure all these things as well. And we do have certain clients that are working on that. So the thing that Pryon decided to do: we realized that everybody had to know something before they did something, right? Everyone. So this is a horizontal thing that's across industries. And the thing that we realized is, in order to support human-and-machine or machine-and-machine transactions, what we had to do was unify all of the existing content that an organization had. And when you think about what all the content and knowledge is that has to be unified in your respective organizations, whether you're big enterprises or big government agencies, it's the four Ps that have to come together. Now, what are the four Ps? Public information you trust, from academic institutions and government agencies. The second P is published information that you are properly licensing into your org, maybe from FactSet, Bloomberg, PitchBook, Elsevier papers and things of that sort, right? Most of us as commercial entities, we don't steal people's IP, because we don't expect people to steal our IP as well. Economic interchange, you know, through these partnerships that are very fair. The third P is the proprietary crown jewels of who you are. Your experimental information, your research papers, your patents, your training materials, right? The secret sauce of how you do what you do.
B
So this is all proprietary internal stuff, right? Stuff I don't want out in the public.
A
Right. So you're exactly right. Public: everybody can have access to it. Published: everybody can buy it. Proprietary: this is it, this is who you are as an organization. You are Intel, you are AMD, you are Nvidia, you are Boeing, you are Airbus.
B
And there's been a problem with proprietary information going into public gen AI, correct?
A
Yeah, it got unintentionally spilled, right? And that's what we're trying to correct. And then the last one that a lot of people forget is personal, meaning for a single individual's eyes and ears only, right? Maybe it's very sensitive HR stuff, maybe it's very sensitive medical information and things of that sort. It's only for you to know. It's literally only for your eyes and ears as well. So those are the four Ps.
B
Those.
A
Four Ps which, imagine, you know, we're in front of a witch's cauldron, have to be turned into the fifth P, which is process knowledge. Now, how do you do that? Now, where do these four Ps reside? Well, they're in systems of record. What's that? That's fancy talk for your SharePoint folders, your OneDrives, you know, Box folders, Dropbox folders, S3 buckets, Azure blobs, SAP, ServiceNow, Salesforce, right? Things like Atlassian, you know, Confluence.
B
Right.
A
Websites of all types. Now, those systems of record, what are the data objects that hold your knowledge? Well, things like audio, video, images, text, PowerPoints, PDFs, Word files, web pages. So now, what do you have to do? How do you get the stuff on the left side, which is all these objects in these systems of record, including structured databases and data lakes and things of that sort, over to the right side, which is to drop tokens that are consumable by a person or a business process? And how do people like using AI? Well, things like chatbots, you know, mobile assistants, sure, on devices and things of that sort, digital humans, right? Those are the experiences. Well, you have to ingest that knowledge, you have to retrieve that knowledge, you have to do generative smoothing, you have to have certain agentic things that trigger workflows. You may have your own deterministic deep-research workflow builders. So this center part of the Tootsie Roll Pop, we coined that a knowledge cloud. All organizations are going to need the union of structured, semi-structured and unstructured knowledge in a knowledge cloud to act as the institutional memory. Think of it: this is your own proprietary Library of Alexandria, your Library of Congress. That is the origin of everything that you have to do.
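The four-Ps-into-a-knowledge-cloud idea can be sketched as ingestion metadata. This is a hedged Python sketch, not Pryon's implementation; the tier names, scope labels, and the `ingest` helper are invented for illustration:

```python
# Hypothetical sketch: tag every object entering a knowledge cloud with its
# "P" tier; the tier later drives who is allowed to retrieve the content.

TIERS = {
    "public":      "everyone",        # trusted academic/government sources
    "published":   "licensed-org",    # licensed content (FactSet, Bloomberg, ...)
    "proprietary": "org-internal",    # your crown jewels
    "personal":    "single-user",     # one person's eyes and ears only
}

def ingest(doc, tier, owner=None):
    """Attach tier + retrieval scope to a document record."""
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    if tier == "personal" and owner is None:
        raise ValueError("personal content must name its owner")
    return {**doc, "tier": tier, "scope": TIERS[tier], "owner": owner}

record = ingest({"id": "hr-review.pdf"}, "personal", owner="darren")
print(record["scope"])  # → single-user
```

The point of tagging at ingestion time is that retrieval, later on, can enforce scope without re-inspecting the content itself.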
B
Well, and, and like you said before, it's layered.
A
Correct?
B
Right. So for an organization there's three layers. For an individual in that organization, there's four.
A
Correct.
B
That personal thing, right. So I can't just throw it all onto one big cloud or one big blob, right, that everyone has access to, because I also need access control. Maybe I have all of my HR data in this knowledge cloud. Well, I don't want everyone in the company to have access to the data that's in there, or even a large language model having access to all of that data, depending on who I am, right? This is an important aspect of this as well, right?
A
Yeah. Well, Jensen of Nvidia fame saw a subset of the Pryon slides way back when, and I think that helped influence, you know, him talking about the stuff that Nvidia was driving towards: everybody having an enterprise AI factory. And an enterprise AI factory basically takes all of that raw-material content and turns it into tokens on the other side as fast as possible, right? And of course, I think what you said was very important. I don't want to oversimplify that there's a singular knowledge cloud, right? Because that sounds rather nebulous, no pun intended. But there would probably be a knowledge cloud for interactions with the outside world, so think about your clients, your partners and the like, government agencies; that's a public knowledge cloud. A private knowledge cloud for interactions with your employees. And then for the most sensitive intellectual property that you may have, or classified content, you may have an on-premise knowledge cloud. So think about it literally installed on physical servers, disconnected from the outside world. That's your crown jewels in terms of how you make, you know, drugs, right? How you make semiconductors, how you make stealth, right? This or that. So we also understood that it would be a multi-tier architecture as well.
B
Well, and then also with the advent of AI PCs, I can actually run my personal knowledge cloud that's augmented by the corporate knowledge cloud and the public knowledge cloud, which is really fascinating when you start looking at these new architectures, which I'm calling multi-hybrid generative AI. Because now I can hit public, private, community, and personal gen AIs with these knowledge clouds. I think this is the way that we're moving, and I think it's going to be very powerful for individuals and also for corporations.
A
Yeah, we practically invented something called retrieval-augmented generation. I mean, we were, I think, one of the first commercial adopters of transformer-based technologies, even before Google started adopting it for themselves. A lot of this stuff isn't new, right? But why RAG, you know, why that versus the way that you were used to using ChatGPT? Well, you have to think of ChatGPT as: they're crawling the whole Internet and they're turning it into this large language model, this LLM, which they call GPT-1, GPT-2, GPT-3. And when you type in a prompt, based on the distance measure between your query and that content, it's pulling out the most likely answer. So the tokens drop in front of your screen. So it's a cow turning into a hamburger, and they're literally pulling a hamburger out of the cow. So the answer is coming from within the LLM. That's inappropriate. That's where you get hallucinations. And earlier, Darren, you also mentioned no per-user access controls. There's only the possibility of general knowledge.
B
It's not specific.
A
Correct?
B
Right. If I want to be very specific, I need to give it more context.
A
Right.
B
And that's the whole point of RAG, right? Hey, let's pull out some of your own context, inject it into your prompts, into the knowledge of the LLM, and constrain the results coming back out. That's the key to it.
A
Right, well, right, but I'll be more specific. It actually lives adjacent. So the way that the.
B
Correct. Yes, adjacent. Yeah.
A
The way that the RAG stuff works is the cow turns into a veggie burger. So the cow turns into a veggie burger, and people scratch their heads. So, one of our investors is a good-old-boy Texan, and he's like, hey, what's the difference between ChatGPT and what you're doing? And I'm like, well, you know, their cow turns into a hamburger; ours turns into a veggie burger. And like a good Texan he would say, why would you want the veggie burger? And I'm like, so you don't get bird flu. Now let me explain the difference. You've got to think of RAG like a nuclear reactor core, right, where there's separation between the control rods and the fuel rods. The control rods are the LLM. It's used to model language, in terms of how to put the words together. But the fuel, in terms of the output, only comes from your systems of record, meaning your Word documents, your PowerPoints, your PDFs, your structured databases and things of that sort. That's where the answers come from. And the most important thing that you need, and I think, Darren, you started expressing it, is full source citations. Not a single sentence should be painted as an answer where you can't click on it and open up the exact page, or jump right to the frame where somebody starts speaking.
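That control-rod/fuel-rod split, retrieval plus citation-constrained generation, can be sketched in a few lines. Everything here is illustrative: the keyword-overlap retriever stands in for a real vector index, the documents and prompt format are made up, and the actual LLM call is left out:

```python
# Minimal RAG sketch: the LLM shapes the language, but every fact in the
# prompt comes from your own documents, each carried with its citation.

def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by keyword overlap with the query.
    A real system would use a vector index, but the contract is the same."""
    q = set(query.lower().split())
    scored = [(len(q & set(d["text"].lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

def build_prompt(query, passages):
    """Constrain the model to the retrieved passages and demand citations."""
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return (
        "Answer ONLY from the sources below; cite the [source] for every "
        f"sentence.\n\nSources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    {"source": "ops-manual.pdf#p12",
     "text": "Open valve V2 before starting the backup generator."},
    {"source": "hr-handbook.docx",
     "text": "Vacation requests need two weeks notice."},
]
passages = retrieve("how do I start the backup generator", docs)
prompt = build_prompt("How do I start the backup generator?", passages)
print(passages[0]["source"])  # the answer is grounded in, and cites, this page
```

Because each passage carries its source identifier into the prompt, the generated answer can link every sentence back to an exact page or timestamp, which is the "click on it" requirement above.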
B
You need that explainability, you need that reliability in the data that you're using. Let's talk about semiconductors, right? If I'm asking, hey, I need a new technique, I'm having a problem with this process, right, and maybe the heat is too high or the chemical mix is wrong, I need some help with this. And if I'm using a generative AI to help me plow through hundreds and thousands of pages of chemical reactions, I want it to show me where it got the information from, so I can validate it, because the risk is high if something goes wrong, right? So this is where I see RAG as being critical moving forward, and your solution specifically, because I can have that explainability. In a large language model all by itself, there's nothing there. I can't.
A
And let me share with you. I mean, obviously you guys work in a very robust and sensitive industry. Let me give you an example of another one. You know, we've launched at a nuclear reactor site in less than five business days. It was amazing. And why did they try to adopt AI in their ecosystem? Because they predicted they could reduce the downtime of nuclear power plants by half if they had such an AI. Literally by half. And at scale, you're talking about 30 million documents, and every single one of those puppies matters. And let me tell you why. Everybody, especially those of us that are in older generations, we remember Three Mile Island, right?
B
Yep.
A
When Congress did an investigation of Three Mile Island, they found six reasons why the Eastern Seaboard was almost irradiated. And there was less than 30 minutes left before that was going to happen, so we came pretty close. And the six reasons were as follows. One was a design flaw. The second was a faulty valve. Four out of six were knowledge management issues. Literally, engineers and technicians were not getting rapid enough access to technical materials for their decision support. Now you have a tool that can answer questions in two seconds that normally take these engineers two hours.
B
Well, not only answer the questions, but tell you where it got it from. That, to me, is the game changer for RAG and for solutions like yours.
A
Yep. And remember, at scale, 30 million documents, it's not enough just to ingest them. You know, you have to have the right authorities. Meaning, you and I can ask exactly the same question, because we have access to the same SharePoint folder, but we'll get two different answers, because you'll have read access to a document and I won't, right, because it's a sensitive document. And is the stuff that gets ingested just flat probabilities that come out? No. Maybe the engineers that are veterans, maybe you skew the answers more towards them versus somebody who just newly joined the organization.
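The "same question, two different answers" behavior falls out of enforcing access control before relevance ranking. A minimal sketch; the users, groups, documents, and the keyword matcher are all invented for the example:

```python
# Per-user access control in retrieval: the index stores an ACL with each
# document, and a user's query only ever sees documents they can read.

def search(query, index, user, groups):
    """Return readable documents matching the query, ACL enforced first."""
    readable = [d for d in index
                if user in d["acl"] or groups & set(d["acl"])]
    q = set(query.lower().split())
    return [d for d in readable if q & set(d["text"].lower().split())]

index = [
    {"id": "salary-bands.xlsx", "acl": ["hr-team"],
     "text": "2024 salary bands by level"},
    {"id": "eng-onboarding.md", "acl": ["all-staff"],
     "text": "engineer onboarding and salary review timeline"},
]

hr_hits  = search("salary bands", index, "dana",  {"hr-team", "all-staff"})
eng_hits = search("salary bands", index, "arjun", {"all-staff"})
print([d["id"] for d in hr_hits])   # both documents
print([d["id"] for d in eng_hits])  # only the onboarding doc
```

Filtering before ranking matters: if the ACL were applied after generation, a model could still leak restricted content into its intermediate reasoning.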
B
Wait, I want to touch on that, because the point you hit right there is extremely important: the concept that not all data is created equal. So with your solution, I can say this document has higher authority than document A, which has higher authority than document C or D. Oh, and then document Z comes in, and it has even higher authority. So you have that capability of saying, hey, I've got different reliability, different trust in different documents.
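The authority ranking Darren describes can be sketched as a scoring function that blends retrieval relevance with a per-document trust weight, so a high-authority document outranks an equally relevant low-authority one. The 0.7/0.3 weights and document names are illustrative assumptions, not a product default.

```python
def rank_by_authority(chunks, similarity, authority):
    """Order retrieved chunks by a blend of query relevance and
    per-document authority. Documents with no assigned authority
    fall back to a neutral 0.5."""
    def score(c):
        return 0.7 * similarity[c] + 0.3 * authority.get(c, 0.5)
    return sorted(chunks, key=score, reverse=True)

sim = {"doc_A": 0.80, "doc_Z": 0.78}   # doc_A is slightly more relevant
auth = {"doc_A": 0.4, "doc_Z": 0.9}    # but doc_Z is the designated authority
print(rank_by_authority(["doc_A", "doc_Z"], sim, auth))  # ['doc_Z', 'doc_A']
```

Here doc_Z wins (0.816 vs 0.68) despite being marginally less relevant, which is exactly the "document Z comes in with even higher authority" behavior from the conversation.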
A
Absolutely. And think about a multinational organization at scale. How do you find contradictions? A lot of people just jump to generative smoothing; they're forgetting the retrieval step is pretty important. We found this with another big manufacturing entity that was our client. It's what the Navy calls a logical paradox, where you have one document on one side of the Earth, and on the other side of the planet you may have the exact opposite thing being told to your engineers and technicians. So which one is right? So you're exactly right: it's based on the authority of the author. It could be based on recency. There could be peculiarities based on the geospatial coordinates of where a particular piece.
B
Of information came from. That's true.
A
How do you turn on a generator? Well, that's different from one site to another, right? Based on, you know, the brand they may have installed. How do you know if content disappears? Say a summer intern shows up and, in all the content that's feeding the AI, they delete or move content relating to a chemical spill or an active-shooter protocol for a campus. How do you find all the dupes and move them out? And you're like, what do you care about duplicate content for? Who cares? You just throw them all in.
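The dedupe problem Igor raises has a simple baseline: hash each document's normalized text and keep the first copy of each digest. This is a minimal sketch; production systems typically add near-duplicate detection (e.g. MinHash or embedding similarity), which is omitted here, and all sample strings are made up.

```python
import hashlib

def dedupe(documents):
    """Drop exact duplicates by hashing whitespace- and
    case-normalized text; survivors keep their original order."""
    seen, unique = set(), []
    for doc in documents:
        normalized = " ".join(doc.split()).lower()
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Shut the valve.", "shut  the valve.", "Open the breaker."]
print(len(dedupe(docs)))  # 2: the whitespace/case variant is removed
```

Removing dupes matters for retrieval quality: near-identical copies crowd the top-k results and can drown out the one contradicting document you actually needed to see.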
B
This is screaming data management. That's what this is screaming to me. Which is something that I think a lot of organizations have basically ignored over the last three years, because they're chasing the big shiny object of AI. They're buying a bunch of GPUs to put in their data center, and what they're finding is, oh, I forgot about the data. Right? And I love what you guys are doing: you're bringing data back to the importance it really deserves. It's great that I have a large language model that can do a haiku on my business strategy. Wow, that's awesome. Not very practical. You only get real practicality when your data is coming in. So I see a huge rise in data management and data management techniques and tools, like what you guys have, that enable me to take advantage of.
A
My data more effectively. You're exactly right. And look, I've been doing this almost a quarter century. Our accuracy relative to even the hyperscalers is 50 to 100 percent better, because this is all that we've been doing, right? I mean, it's all that we've been doing. There are not a lot of companies that can do what Intel can do, or AMD can do, or Nvidia can do, and things of that sort. You want to be an upstart? Fine. But you're going to spend 25-plus years trying to really figure out how to do it at scale. I'm talking about at scale. I'm not talking about taping out one or two things to get some venture capital bucks. I'm talking planetary scale here. So we've been at this for a while. Even at the last startup, the majority of our clients were telcos. That's not an easy thing to do. That's not what most folks realize about working with telcos: the uptime requirements are insane, and so are the response times that are necessary. And that's why we believe in essentially developing a full-stack AI. We've always done that, where we control every piece and we develop every piece. Because think about your iPhones: there's a reason why Apple tries to build their own chip, device, operating system, and applications. By doing so, they get the highest firepower with the lowest energy use. And that's how an AI company, essentially being in the Goldilocks zone where it can afford to build it all but is small enough to put it all together, leads in accuracy, scale, security, and speed. Those are the four legs of the enterprise AI stool.
B
Igor, this has been really fascinating. We've just touched the tip of the iceberg, but we are out of time, so I'm going to have to have you come back on, and we can dig more into the technology behind RAG and behind this new way of thinking about generative AI for personal, corporate, community, and public use. But Igor, thanks for coming on the show. If people want to find out more about you or your company, where do they go?
A
Yeah, at pryon.com. That's spelled P-R-Y-O-N dot com. Thanks for having me, Darren.
B
Hey Igor, thank you. This has been very insightful. Thanks a lot.
A
Thanks.
B
Thank you for listening to Embracing Digital Transformation today. If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org. Until next time, go out and embrace the digital revolution.
Podcast Summary: Embracing Digital Transformation
Episode: "From Island to AI Pioneer: Igor Jablokov on ChatGPT and Innovation"
Host: Dr. Darren Pulsipher
Guest: Igor Jablokov, Founder of Pryon
Date: August 19, 2025
In this episode, Dr. Darren Pulsipher sits down with Igor Jablokov, a pioneer in AI and the founder of Pryon, to explore the realities behind AI hype cycles, the foundational shifts being brought by generative AI technologies like ChatGPT, and how organizations can meaningfully use AI while controlling risk and managing knowledge. The conversation ranges from Igor’s unique journey from a Greek island to shaping voice assistants like Alexa, to a technical discussion on how modern AI architectures and retrieval-augmented generation (RAG) offer explainability and security—key for enterprise and public sector adoption.
On ChatGPT’s Break-out:
"OpenAI broke through a mess of taboos. The only reason it happened is because they breached through those. That's why it caught everyone by surprise." — Igor Jablokov (11:14–13:18)
On the Importance of Data:
"You only get real practicality when your data is coming in. So I see a huge rise in data management and data management techniques and tools." — Dr. Darren Pulsipher (36:31)
On Explainability in Critical Industries:
"Not only answer the questions—tell you where it got it from. That to me is the game-changer for RAG." — Dr. Darren Pulsipher (33:50)
On AI-powered Knowledge Clouds:
"Think of this as your own proprietary Library of Alexandria... the origin of everything you have to do." — Igor Jablokov (24:20)
This conversation provides a sweeping and insightful overview of how AI’s real value for organizations comes not from the latest buzzwords, but from robust architectures, explainability, and a relentless focus on knowledge management. Pryon’s approach—layering AI on carefully curated, authoritative, and access-controlled knowledge—is positioned as a blueprint for safe, scalable, and transformation-driving enterprise AI.
Learn more: pryon.com
Contact Igor: igor@pryon.com
(Ad segments, standard intros/outros, and promos have been omitted.)