Jeremy Harris
Right. You've got to get a good structure on how you're going to deal with it, how you're going to approach it, and whether your leadership is on board. So prior to even doing a policy, you might have to have a strategy-type discussion about how we're going to do this organizationally, because the policy becomes only one part of that strategy.
Darren Pulsford
Welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process, and technology. This is Darren Pulsford, chief solution architect, author, and most importantly, your host.
On this episode, Building Your Gen AI Policy, with returning guest, data privacy and Gen AI lawyer Jeremy Harris. Jeremy, welcome back to the show again and again and again.
Jeremy Harris
Yeah, hey, you know what? You keep asking me, so I will keep coming, Darren.
Darren Pulsford
Well, it's such a hot topic, and as I said in an earlier discussion, Gen AI has just gone crazy. We need some guardrails around it. So I've got to call my lawyer friend who has worked in the space. This is your expertise: data privacy and generative AI. So I want to get to some brass tacks today.
Jeremy Harris
Okay, we can do that.
Darren Pulsford
Is that okay?
Jeremy Harris
Yeah. Yeah. Let's go in.
Darren Pulsford
All right. Because I need a Gen AI policy for my company. I'm not in healthcare, but let's pretend I am. Okay, so if I am, because I use generative AI a lot, as you know, should I have a policy? Say I'm a startup with a production company, or I'm an established company. Do I need a generative AI policy, or can I just ride on the data policies I already have?
Jeremy Harris
That's actually a really good question. I've had debates about this. I think the prevailing idea right now is you actually do need a separate Gen AI policy. Now, it can borrow greatly from those other policies; it's complementing what you already have. But I do think you need a Gen AI policy, because the way that generative AI works in the systems that we have is a little bit different than what we've had before. Right.
Darren Pulsford
So explain exactly why. Because I would think that a data policy would be sufficient, right? I have data privacy. So what's different about generative AI?
Jeremy Harris
Well, data is data. I mean, we'll start there. Data is data. You're going to give data, you're going to lend data, you're going to sell data, you're going to use data, whatever that looks like with your vendors. But once you start with generative AI, you lose control of data faster than ever before. Right? So let's use a healthcare example. I have a medical device vendor, and I have an EHR or EMR, whichever you want to call it: electronic health record or electronic medical record. I'll use EHR because that's the one I normally use. So if you have an EHR vendor, okay, you're giving data, but you've already gone through that contracting piece. You know that you're giving that data and they're protecting the data. You likely have that SOC 2 Type 2 report, you have a lot of controls in place, it's going to follow your data classification policy, it's going to follow your data retention policy. But I don't necessarily know what a doctor or nurse or payment coordinator, whoever it is in my company, is doing when they log into ChatGPT. I have no idea if they're cutting and pasting, uploading a spreadsheet with whatever data. So that's why I say, in some sense, you lose control of that data a little bit more randomly, or quickly. And unless you've thought through this, you won't be able to use the full potential of Gen AI without having a large risk. So I do think you need that right now. There's a heightened risk; everything's a little bit more tentative or a little bit more insecure. The feeling, right, not necessarily the data, but the feeling is everyone's a little bit more on tenterhooks. They're like, oh, what are we doing with this? We really want to get ahead of this. We really want to use this. We don't want to be left behind. And that's that quick pace, right?
You just need to slow down and make sure that you have a policy, you've thought through what you're going to do with the data, who controls the data, can they train on the data, those types of things. So an AI policy is usually going to be the occasion when you have to look strategically at, hey, what are we going to do with AI?
Darren Pulsford
Okay, so what you described there was doctors using public Gen AI that's out there, cutting and pasting. It would be almost like pasting a patient's name into a Google search, which...
Jeremy Harris
Yeah, yeah, I mean, or in Facebook.
Darren Pulsford
Or when you know your patient is lying to you. When you say, how did this happen? and they're totally lying to you, and you go on to Instagram and find out, oh, that's what really happened, because they were riding a motorcycle, on fire, jumping through hoops over buses. Right. Okay, that's what really happened. Is that unethical to do that? I mean.
Jeremy Harris
Well, it might be, but the reality is it hasn't ever been helpful to do that.
Darren Pulsford
You're right, it hasn't been. But now?
Jeremy Harris
But exactly. Like I said, the pace has increased to such a point where it actually might be helpful. If you have an agentic AI, like in GPT-5, and you're starting to use these agents, you kind of set things up on your personal account because you're really curious about how to use it. Unless your system has blacklisted the AI sites, you're going to have access to those sites. Right? And most of them haven't. Most of the companies I've talked to have not blacklisted specifically those IP addresses. Now, a couple of them actually have, and I've had some interesting conversations with colleagues where it's actually been really helpful, because they have approved and gone through things, and they know where their data is going. And I'm like, well, kudos to them. But at most of the companies I've talked to, the data just isn't quite that secure. And so when you're doing this, yeah, the doctor's like, hey, you know what? This actually could be helpful. Say I just want to run a trend. I'm not very good at Excel, so I'm going to upload this file and ask ChatGPT to trend it. Well, they don't think to take out any of the identifiers. They don't really care about the name or the record number, but they don't clear that out. It's not like they've given you a clean set that's unidentified or de-identified. So you're going to have some issues with that. So what you really need to do for the policy, getting back to the idea behind it, is, one, figure out as an organization what your approach is going to be. Are you going to embrace it? Are you going to embrace a lot of it, or embrace one vendor? Or are you going to onboard that yourself and have some sort of on-site, private deployment?
Are you going to do that, or are you going to go out and buy that and just engage with somebody? I know Google, with their Gemini, were looking at their Google Health product. They were looking at, hey, how do we get this into these healthcare situations and healthcare organizations, using Google Health, using their Gemini product. And ultimately there are several out there that are more specific to healthcare. But either way, it doesn't matter what industry you're in, you need to sit back and say, okay, we need to introduce what we consider to be generative AI. So you've got to define that for your organization, and you've got to establish what your risk appetite or your approach is going to be: we are going to use AI for X. Right. You've got to get a good structure on how you're going to deal with it, how you're going to approach it, and whether your leadership is on board. So prior to even doing a policy, you might have to have a strategy-type discussion about how we're going to do this organizationally, because the policy becomes only one part of that strategy.
Darren Pulsford
Okay, so I love that approach. Right, talk about strategy: how am I going to do it? The first thing, as a CEO of a company, I'm going to ask, all right, we want to adopt generative AI. I'm going to come to Jeremy and say, what's the risk? What risk do I have in using ChatGPT or Gemini, one of the public Gen AIs? What is my risk exposure? Let's talk specifically about healthcare, because healthcare seems to be the most restrictive, that and financial. Healthcare and banking.
Jeremy Harris
Yeah, I think you get some heavily regulated industries.
Darren Pulsford
So what's my exposure? Right. If I just say, do whatever you need to do with Gen AI to help with whatever you're doing inside healthcare, which could be patient care, it could be processing records. There's a lot going on there.
Jeremy Harris
Right, right.
Darren Pulsford
So what's my biggest risk? Where's my biggest risk?
Jeremy Harris
Well, that's where I'm going to start. I'm going to say, well, what are you doing with it? What data set are you actually invoking? Because that's going to determine it. You probably have a lot of different risk levels depending on the sensitivity of the data, the volume of the data, and which vendors you're using. I've had conversations with OpenAI about their ChatGPT product, and they really don't necessarily want the identifiers. They actually try to avoid having some of those identifiers, because they don't need the liability themselves. They want to have a data set, but they don't want that data set tied to a specific patient number or patient name. So they're actually trying to write algorithms that can de-identify or isolate some of those personal identifiers, which I found really fascinating in their algorithmic approach, because I don't know how they do that, but that's what they told me, anyway. So you're right, the risk you have varies depending on what you're inputting. It's pretty much a direct correlation: the more sensitive the data, the higher the risk of that compromise. But we can walk through that, because we're going to walk through and say, okay, let's look at all of the use cases that you want to have right now, or currently have going on right now, and let's start there. That's where I would start and say, what are we using it for right now? How is it helpful? And let's walk through some of those risks and what we can do to reduce that risk.
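Jeremy's "direct correlation" between data sensitivity and risk can be sketched as a simple scoring function. This is purely illustrative: the tiers, weights, and thresholds below are hypothetical, not drawn from any regulation or framework, and a real program would map them to its own data classification policy.

```python
# Illustrative sketch: scoring a Gen AI use case's risk by the sensitivity
# and volume of the data it touches, plus one vendor control. Every tier,
# weight, and threshold here is a made-up example.

SENSITIVITY = {"public": 1, "internal": 2, "proprietary": 3, "phi": 4}

def use_case_risk(data_class: str, volume: int, vendor_has_baa: bool) -> str:
    """Return a coarse risk tier for a proposed Gen AI use case."""
    score = SENSITIVITY[data_class] * (2 if volume > 1000 else 1)
    if not vendor_has_baa:          # no business associate agreement in place
        score += 2
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(use_case_risk("phi", 5000, vendor_has_baa=False))   # high
print(use_case_risk("public", 10, vendor_has_baa=True))   # low
```

The point is less the numbers than the habit: every use case gets walked through the same questions before anyone pastes data into a public model.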
Darren Pulsford
So wait, wait, I've got to back up a second. How do I capture how it's being used right now, especially in a large organization where I've got thousands of employees?
Jeremy Harris
It's a tough one. It's a tough one. I've worked with the IT side or the information security side, the cyber guys, and they're going to be looking at, hey, what hits, what calls, what URLs are we actually sending to, how often are we going there right now? That's real time: who's on these websites that are tracked, or that we assume are AI. We've actually gone out and said, no, real time, this is our network, you can go ahead and search who's on which sites right now. And it was surprising. We did that one time. It just came up during a meeting, and I said, well, let's go do this. And there were something like 69 or 70 accounts, somewhere in that neighborhood, of individuals who were medical professionals, licensed practitioners, who were on, I believe it was an OpenAI server at the time. And then you had a couple on Gemini, and a couple on others. So to catch it in real time, you try to figure out who's using what in real time. It's virtually impossible to say, hey, we know everything that our system is doing, but that's where you've got to start. You've got to try your best to figure out what systems people are using, what servers, what services are being used. Is it Claude? Is it ChatGPT? Is it Gemini? And for example, I know one other healthcare system, not the one I was with, that decided to blacklist those and signed a contract with a particular service provider, went through, signed up, had a business associate agreement and everything. It was really interesting how they got this done. But then when they opened it up, they said, you can use AI to do your projects, and you can have a lot more comfort with even proprietary or personal information, because we have a restrictive agreement, and this is how it's going to be used, how the LLM is going to learn, etc.
But to get back to your question, how do you tell who's doing what? It's really almost impossible to tell every use case.
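The real-time discovery Jeremy describes, figuring out who on the network is hitting which AI services, might look roughly like this sketch. The log format (one "user domain" pair per line) and the domain watchlist are assumptions; a real deployment would parse its own proxy's log format and maintain a much longer, regularly updated list.

```python
# Sketch: scan outbound proxy log lines for known Gen AI endpoints and
# build a map of which users hit which services. Domain list and log
# format are illustrative assumptions.

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai"}

def find_ai_usage(log_lines):
    """Map user -> set of AI domains they hit, from 'user domain' lines."""
    usage = {}
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in AI_DOMAINS:
            usage.setdefault(user, set()).add(domain)
    return usage

logs = [
    "dr_smith chat.openai.com",
    "dr_smith gemini.google.com",
    "nurse_j intranet.example.org",
]
print(find_ai_usage(logs))
```

A list like this is what turns "we assume people are using it" into the interview list Jeremy mentions next.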
Darren Pulsford
I mean, the company can take a look at that data. It's well within your rights, right?
Jeremy Harris
Sure, yeah, you can. And you should be able to identify who's where on your network and figure out where the data is going. But like I said, it's hard to do because it's all real time. If you're doing it in real time, you can then start creating that list: okay, these are the providers, this is where they're going. And now we know, hey, we can have interviews. And we actually did interview several of these physicians and some of the other affiliated providers and say, hey, what are you doing with ChatGPT, or what are you doing with this? We're curious. We don't have a policy that says you can't do it right now; at the time, we did not. So it's like, what are you doing? What's the use case? And how can we get on board with making sure that we're doing it in an ethical and secure way?
Darren Pulsford
I really like that approach, because what it is, is: all right, we're going to open it up, monitor, understand how people are using it, and then go back and talk to them, and now come up with a policy. The policy may be training. Maybe we need better training on PPI, but we still let people use it. Or maybe we run filters on every query that goes to one of these sites and strip all the PPI out. There are techniques that can be used to do that. But I like the first approach, which is, hey, let's find out what's going on first.
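The query-filtering idea Darren mentions could look, in a minimal sketch, like a redaction pass over the prompt before it leaves the network. The two patterns below (a US SSN and a hypothetical "MRN" record-number format) are illustrative only; real PHI/PII detection needs far more than a couple of regexes, covering names, addresses, dates, and free text.

```python
import re

# Sketch: redact obvious identifiers from a prompt before it is sent to
# a public Gen AI service. Patterns are illustrative assumptions.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.I), "[RECORD-NO]"),  # record number
]

def redact(prompt: str) -> str:
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Trend labs for MRN: 884213, SSN 123-45-6789."))
# -> Trend labs for [RECORD-NO], SSN [SSN].
```

A filter like this sits well alongside, not instead of, the training Darren describes: it catches the obvious paste, while training covers everything the patterns miss.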
Jeremy Harris
Yeah, I mean, the reality is no leadership team knows exactly what's going on. Even in a small company, I think you might be a little bit surprised. You know what happens when people have a long leash, right? They just go. It's like, hey, I want to explore, I want to do these things. And there's not a bad intent. It's actually probably really good curiosity, to explore and to do these things, to innovate, and to use this technology in a way that's going to make it more efficient and more effective. I think that's the step that I've really embraced: trying to get a good sense of what is being used currently by talking to the people who are doing it. And again, there's not really a sanction or any sort of disciplinary action. It's just, I want to know what you're doing now. If I see something that's absolutely critical, I'll stop that and make sure that that's not happening.
Darren Pulsford
Yeah, like sending passwords to the dark web, Right?
Jeremy Harris
I mean, probably not a good idea. I know ChatGPT actually will say, hey, if you upload a password, it will actually tell you, hey, that's probably not something you want me to have, and I suggest you go change it. I've seen that happening lately, where they're trying to educate. But your second point is, my second point is: you find out what's happening, and then you educate on how you can do better. The number one thing in your policy is really going to be just an overarching, hey, this is the generalized approach we take to AI. We might have a few rules that are very specific, but the majority of that policy is going to be very general. I think that's the best way to do it. What you do need to do, instead of making the policy more detailed, is put all of the accompanying documents, the training, the acceptable use cases, all of those, in a separate governance kind of structure, not necessarily in your policy.
Darren Pulsford
So that will change more often than your policy.
Jeremy Harris
Yeah, because you've been in companies where policies sometimes take forever to get changed.
Darren Pulsford
Oh, at Intel, when Gen AI first came out, the first policy hit in February 2023, after the November release. It has been almost three years, and we've had 15 policies.
Jeremy Harris
Yeah.
Darren Pulsford
From total restriction, to wide open, to everything in between. But I think your approach here is: make it more general, and back it up with training and guidelines, as those are going to change. Because it's changing so quickly, we need to adapt as we see new things.
Jeremy Harris
There's something about a generative AI policy where people are treating it the same as other policies, like an HR regulation or something like that. The reality is your HR laws don't change much. Even your privacy laws, HIPAA or whatever, don't change nearly as often. With generative AI right now, and I'll use that term specifically, what you're finding is the US doesn't really have a settled position. The Biden administration came out with, hey, here's our goal on generative AI. And the Trump administration said, no, we don't like that, here's our new approach to it. Meanwhile, the EU is out there doing their thing, and China has even weighed in and said, yeah, we're going to win this war of AI and we're going to beat you all, here's our framework. And everyone's like, wait, what? So you can't really go with, hey, this is what the prevailing legal requirements are. You have to back it up a little bit on the AI side and say, what are the basic ethical arrangements we want, so that we're using AI in a way we're comfortable with if a regulator comes in after the fact? Because the regulations are up in the air. If they come in, we can at least point to a logical A-B-C process that says, here's how we evaluated the risk, here's how we knew that we could use AI in this way. So your policy is going to be that generic structure of here's our approach, and it's probably not yet going to be a very specific legal framework like the NIST AI Risk Management Framework on the federal side, or the EU AI Act, which has some very specific requirements. I think everyone is a little bit up in the air right now. So my approach to AI is you have to have an explain-it, show-your-work methodology. You can do a lot of things, there's a lot of gray area, but you have to be able to show your work in a way that a regulator is not going to say, hey, that's just willful and wanton misbehavior. Right?
That's just reckless.
Darren Pulsford
Gotcha. So, because there are no regulations, the best thing to do is to show your thought process, so be very explicit in how you got to where you're at. So the supporting documents are going to be very important here, right?
Jeremy Harris
In your policy, you're going to have to say, hey, here's our governance structure. Who's actually running it? What are the roles that are running our AI program? What are their functions? What do they do? Legal, compliance, obviously your IT side, but also your strategy, your HR. Someone has to be in there who's making the policy, who's reviewing the policy, and what does that cadence look like? Are there any working groups under that to say, hey, we want to make sure that we are doing use-case analysis, or privacy or security impact analysis, for the different products or ideas that come up?
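The governance cadence Jeremy describes, who owns what and how often the policy gets reviewed, can be tracked as plain structured data. The roles and the 90-day cadence below are illustrative choices, not a recommendation.

```python
from datetime import date, timedelta

# Sketch: a tiny governance register with a review-cadence check.
# Role names and the 90-day cadence are illustrative assumptions.

REVIEW_CADENCE = timedelta(days=90)

GOVERNANCE = [
    {"role": "Legal/Compliance",       "owns": "policy text"},
    {"role": "InfoSec",                "owns": "technical guardrails"},
    {"role": "Clinical working group", "owns": "use-case analysis"},
]

def review_due(last_review: date, today: date) -> bool:
    """True if the policy's last review is older than the cadence."""
    return today - last_review >= REVIEW_CADENCE

print(review_due(date(2025, 1, 1), date(2025, 6, 1)))  # True
print(review_due(date(2025, 5, 1), date(2025, 6, 1)))  # False
```

Keeping the cadence in data rather than buried in the policy text matches the earlier point: the general policy stays stable while the supporting governance artifacts change often.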
Darren Pulsford
So this sounds like Risk Management 101.
Jeremy Harris
You'd be surprised how many people are just throwing that out the window and running around saying, oh, this is AI, it's great. Well, no. Fundamentally, back to our first point: data is data. It's very similar in how we want to approach the fundamental part of it. We want to manage the risk and assess the risk, which comes at you really fast with artificial intelligence right now.
Darren Pulsford
Well, before we talk about the different types of risk, because generative AI brings out some new types of risk, one thing I want to drill down on a little bit here is that establishing this policy does not need to be restrictive of the usage of AI. It's more of a: we're monitoring it, we're watching it, we're gathering use cases, we're trying to understand what it's doing, and we're documenting how we're doing that. Is that the right approach right now? Because the EU has been, we're going to lock this puppy down, and then what happens? Innovation goes nowhere, and then people try to subvert the policy. Right?
Jeremy Harris
Yeah.
Darren Pulsford
So what you're talking about is a lot more monitor and watch, and then if something goes outside the bounds of the other policies that we have, I need to step in and correct and train, and things like that. Is that kind of the strategy here?
Jeremy Harris
Yeah, it depends on where you're going to go with it, though, right? Because if you're actually in the EU, or you're part of that framework, you might not be able to do what I think right now is the good approach, which is to take this artificial intelligence movement, because this is going to revolutionize a lot of things. The concern, I think, from a risk standpoint and from a governance standpoint is, well, do we even want it to revolutionize certain things? First of all, is it actually giving us a benefit? So I think there are some things where, from a strategy standpoint, it makes sense to monitor and not necessarily stifle. And I believe, and again, no political viewpoint is intended here, that's the current US federal approach: we don't want to stifle that. We actually want to put something out there that challenges innovation, that's much more iterative. Well, that comes with its risks. But that's how I approach policy making for AI: I want it to allow for a lot of innovation, a lot of exploration, with some guardrails that really haven't changed. You know, the California AG actually sent out two memos. One said, and I think we talked about this last time I was on, hey, on AI in general, there are some guardrails: all of these other regulatory frameworks that we have still apply, AI guys. And then they sent one out specifically for healthcare and said, you have to have this going on in healthcare too. And remember, a doctor is who has the license, and they have to be providing the healthcare. AI should not be providing the healthcare directly and making healthcare treatment decisions. It can augment that. And I think that approach is actually kind of a wise approach. We have frameworks, we have data privacy and security and all of these types of things going on. And when it comes down to it, AI is still ones and zeros. Right?
And we want to make sure that we're managing that in a very similar way. But I think it's data that's doing things that we haven't really seen before.
Darren Pulsford
Well, that's the other part of this. Right. We don't have any policies around liability for a Gen AI that's actually creating.
Jeremy Harris
Right.
Darren Pulsford
New things. Right. Whether it's a new treatment that we've never seen before. I can imagine surgeons coordinating and working with Gen AI to come up with new ways of doing things: oh, instead of open heart surgery for Darren, maybe we can do it another way without cracking his chest open. Right.
Jeremy Harris
Yeah.
Darren Pulsford
And if something goes wrong, who do I blame? A Gen AI? Do I take OpenAI to court? And I kind of wish there were no-blame type things here, but Gen AI is now creating. It's generative; it's generating new things. Right. So this is a new risk area that we've never seen before, because machines have never really generated new concepts or new ideas. And people may argue with me that it's not generating anything new. I beg to differ, really.
Jeremy Harris
I mean, that's its whole purpose, though, right? To predict that next thing. An LLM in itself is predicting something and creating something that isn't in existence until it generates it. It takes the pattern, it takes all of the recognition, and it says, well, based on all of this, what's next? And it comes up with something that's informed, but it's still something that didn't exist. So I would agree with you. I think it's generating something.
Darren Pulsford
It's something, but we don't know if it's good or bad yet.
Jeremy Harris
And that's part of the concern. Right? So if you get into the risks associated with AI, especially in the healthcare world, first of all you have to set up a process to identify what those risks look like, because healthcare will be one thing, but in other industries you won't have the same risk. You'll have similar risks, but not the same levels of risk in certain areas.
Darren Pulsford
Well, you do in the Department of Defense.
Jeremy Harris
Sure.
Darren Pulsford
That's a new target. I'm going to blow it out of the water. Oh, it happened to be, you know, a surfer.
Jeremy Harris
Right. And you'll have those risks. So in terms of AI risk management and types of risk, you have all sorts of different things on the privacy side, from identity identification: all of the PPI, personal protected information, if you want to use that phrase.
Darren Pulsford
Right. So the first one is data loss. Not just data loss, but data compliance.
Jeremy Harris
Right, right. And how does that interact? If you're really creating something and you're putting it into this LLM, you really can't take it back. You really can't walk it back out of the LLM and say, oh, you can never remember that. Well, how does that actually work with data privacy, or the right to be forgotten as outlined in, say, the GDPR? California has a very similar thing with the CPRA, or CCPA. How do you go through, and how do they interact with each other? That's where I think the really interesting intersection of the technology and the law will be focused for a little while, if not forever, really. Because how do you deal with that? It's not really clear who owns the data, who generated it. Is it OpenAI's, or is it mine as the user? I don't know. There's another idea, another risk you have. But as you identify these risks, you have operational risks: the data might be leaked, the data might be re-identifiable. You have legal risks, even to the point where I have a legal strategy and I'm asking one of the AI models, hey, these are the facts of my case, and I put all this information in. Well, guess what? Is that searchable? Okay, how about if I connected it to my Google Calendar? Does that open up my whole Google system? Maybe. So there are a lot of risks that I see where I'm like, huh, these are really fascinating risks. From my perspective as the risk mitigator, the risk identifier, I'm at least working with the IT folks and the strategy folks, the innovation folks, to say, I love what you're doing, but we have to remember a few things as we're doing them. We have to be a little bit more cautious in how we're going to approach it. Let's do some things with some models, let's figure out how it works.
Let's drive that change with some de-identified information. Let's drive that with other data sets that make more sense, so that even if they do get into the wild, I'm not going to be devastated, and we're not going to have a compliance problem.
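The de-identified pilot data Jeremy suggests can be sketched as: drop the direct identifiers and replace the record key with a salted hash, so rows stay linkable across files without exposing the original MRN. Field names, the identifier list, and the salt handling here are all illustrative; real de-identification (for instance, HIPAA Safe Harbor's list of 18 identifiers) is considerably stricter.

```python
import hashlib

# Sketch: strip direct identifiers from a record and add a salted,
# truncated hash as a stable pseudonym. Field names and salt handling
# are illustrative assumptions, not a compliant de-identification scheme.

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "address"}

def deidentify(record: dict, salt: str) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()[:12]
    out["pseudo_id"] = token   # same mrn + salt -> same pseudonym
    return out

row = {"name": "A. Patient", "mrn": "884213", "age": 57, "a1c": 7.2}
print(deidentify(row, salt="rotate-me"))
```

The design choice worth noting is the salt: without it, anyone with a list of MRNs could hash them and re-identify the rows, which is exactly the re-identification risk mentioned above.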
Darren Pulsford
So most of the risk that you've mentioned so far has been around data and all the different aspects of data. Are there additional risks that we have to deal with? For instance, how much do I trust the results that I'm getting back, and decisions based off of them? You mentioned explainability: is it explainable how I got to the results that I got? To me, this really calls out that maybe in an AI policy you have to have this trust and oversight, that we as humans still have the final word, and the buck still stops with us. Right? Do you see it going that direction, or do you see people completely relegating decisions to AI moving forward?
Jeremy Harris
Well, I think people want to let AI make the decisions. I think that's the easier way to do it.
Darren Pulsford
Well, because then they have no accountability.
Jeremy Harris
Yeah. But what I really see, though, is despite that want, there really has to be operational oversight in the governance. This person-in-the-loop concept that you read a lot about. There's got to be somebody, some knowledgeable person, not just a body, but some knowledgeable person who can look at what's going on, analyze it, and say, that's not right, or something's a little off here, even if they can't identify exactly what it is. They need to go through that validation of the data. In healthcare, one of the big issues that I find to be a stumbling block is how that model has been trained. What are we going to do with the information? We have this great use case, but we're not sure that the data the model was trained on is reliable in the first place.
Darren Pulsford
Wait just a second. Are the big public Gen AIs willing to give out how their models are trained?
Jeremy Harris
Not really, no. I mean there's really not a good bias test that you can say, hey.
Darren Pulsford
I mean, they're all biased. We all know this, right? I mean, sure, but.
Jeremy Harris
And it's hard to track, Right. It's hard to track what the bias is.
Darren Pulsford
Right.
Jeremy Harris
And that's part of the problem. Even if they say they're open, it's sometimes a little hard to get to: well, what's the data set? What is the data that you actually trained this on, and what data.
Darren Pulsford
Did you not train it on?
Jeremy Harris
Right, right. Did you exclude things? You know, if you're looking to treat a certain condition that's more prevalent in an African American population, and that model's been trained on an Asian or white population, you're not going to get the results that you think you need. You're not going to get the most effective results at all. So that training, that bias testing, is kind of tricky. And that risk right there, I don't want to say it's really hallucinating, because I think that might be going a little too far. It's just doing what you would like it to do. Right?
Darren Pulsford
And so, you know what, Jeremy, we just coined a new term: AI blind spot. It's a blind bias, right? It's totally true.
Jeremy Harris
And, you know, we go through bias training all the time as humans, right? We have unintended consequences from our natural, unknown biases. Our inherent biases.
Darren Pulsford
Our inherent bias.
Jeremy Harris
AI has an inherent bias, and no one can identify what that looks like on the back end, because we're not sure what the data was. So we have this transparency problem with the bias, and that leads to what you talked about and mentioned: explainability. There are reputational risks behind relying on AI. If I want to say to you, as my customer, as my patient, as whatever relationship we have, "you can trust me," and I'm using a model that's not trustworthy, you're going to have a potential disconnect there. And that delta leads to residual risk.
Darren Pulsford
This is going to be really interesting, Jeremy, because I could see, someday, me as a patient going to a doctor and asking: which gen AI do you use? Which model? "Oh, well, I use OpenAI." I'm not happy with their medical bias. I'd rather you use Stanford's gen AI, or the Mayo Clinic's gen AI. It's fascinating, because how many of us will ask a doctor where they graduated from?
Jeremy Harris
Yeah, it's.
Darren Pulsford
People do.
Jeremy Harris
It might make a difference, especially on the first meeting.
Darren Pulsford
But yeah, it's a really interesting concept, because our doctors are biased based on the medical school they went to.
Jeremy Harris
Sure, sure.
Darren Pulsford
Right. I mean, obviously, because that's where they learned. There's this whole Western medicine versus Eastern medicine thing, and the conflict between MDs and DOs that's out there. There are all these different dynamics, and I think gen AI just introduced another one, maybe in the future. But these blind spots, I think, are real and can raise their heads sometimes. Hopefully not in healthcare.
Jeremy Harris
Yeah. And that's one of the things I've heard even healthcare CEOs speak about. They really want to use AI in a way that effectuates the most change, but isn't too crazy. They understand; they're not trying to allow it to diagnose or decide anything. What it's really helpful for is: hey, can you analyze a million of these mammogram images and then help me, the physician, say, based on the pattern recognition we've done, this one looks like it's trending the same way. Then it gives physicians the help they need as an augmentation. I actually know a couple of medical directors who don't use "artificial intelligence" as a term; they use "augmented intelligence," because "artificial" just doesn't make any sense. That's not helpful.
Darren Pulsford
We don't want artificial.
Jeremy Harris
We want augmented. We want to be able to be better.
Darren Pulsford
And so you've got to go to my shop and buy the augmented shirt, right? The AI-Augmented shirt.
Jeremy Harris
That's right.
Darren Pulsford
We can all walk around augmented by.
Jeremy Harris
AI. And that's what Mark Zuckerberg was talking about with his glasses, right? If you don't have the glasses that are AI-enabled, you're going to be behind. I'm not sure that's true, but that's his idea: you're augmenting what's going on. I think that's really his vision. It might just be feeding his own machine, I don't know. But it makes some sense, right? The augmentation might be helpful; it actually might prove to be really productive.
Darren Pulsford
Well, and so I would want, I would want my doctor to be augmented by an AI.
Jeremy Harris
Well, yeah, and I think we talked about this last time, when you asked me whether I would go, and I agree. I would want the help; I would want the augmentation, because I think there's value in that mass amount of data being processed.
Darren Pulsford
But it's not replacing my physician. It's helping my physician better understand what's going on, because there's no way we can keep it all in our heads. There's absolutely no way.
Jeremy Harris
Right, right. And, getting back to the very beginning questions, that's what the policy side and the strategy side should do for you. Coming full circle, that's exactly what they should do for your organization: say, how are we using this to augment what we do in a responsible way, to mitigate some of these risks? We talked about a couple of data risks. There are some ethical risks, legal risks, maybe some IP or copyright risk, you might leak information. There are some reputational harms that might come about. So there are risks, and they're not really that different from anything we've seen in terms of how you name the risk. But I do think this will take a lot more continuous monitoring of the risk, and iteration among more teams than ever before, because legal, compliance, IT, they all need to be lockstep in the AI world. The use cases keep coming. For example, and I'll use healthcare because that's where I am, it's rare that you have a nurse come up and say, hey, this is what I need from this computer program, this software. They usually have the software they're given, and they're kind of constrained. You have the EMR, the EHR, and they'll say, okay, I don't love it, but I'm going to use it within this constraint; it'd be nice if it had this function. But now you're getting nurses coming to IT saying, hey, I want to use AI for this, this, and that. You've never had that type of interaction before.
So the use cases, or proposed use cases, coming up to the governance body, whatever that looks like in your organization, whether it's one person or a committee of 40 (which is too big, by the way), have to be evaluated in a very quick, timely manner to gain the most benefit, because you'll lose that innovative spirit, you'll lose that curiosity, if you don't approve some of them. But at the same time, you have to have a system nimble enough to do it in a very responsible way: hey, we're going to take a data set, we're going to de-identify it, we're going to use it in a sandbox, and we're going to see if this works.
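The "de-identify, then sandbox" step Jeremy describes can be sketched roughly as below. This is a minimal illustration, not a compliance-grade method: the field names, the salted-hash pseudonym scheme, and the identifier list are assumptions for the example (regulations such as the HIPAA Safe Harbor standard enumerate specific identifier types that must be removed).

```python
# Illustrative sketch of de-identifying records before sandbox use.
# Field names and hashing scheme are assumptions, not a compliance standard.
import hashlib

DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}  # assumed field names

def deidentify(record: dict, salt: str = "rotate-me") -> dict:
    """Drop direct identifiers and replace the record ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "patient_id" in cleaned:
        digest = hashlib.sha256(
            (salt + str(cleaned["patient_id"])).encode()
        ).hexdigest()
        cleaned["patient_id"] = digest[:12]  # pseudonym; not linkable without the salt
    return cleaned

record = {"patient_id": 1042, "name": "Jane Doe", "ssn": "000-00-0000",
          "diagnosis": "E11.9", "age": 57}
sandbox_record = deidentify(record)
print(sandbox_record)
```

A real program would also consider quasi-identifiers (age, ZIP code, rare diagnoses) that can re-identify patients in combination, which is why the knowledgeable-person review Jeremy mentions still matters.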
Darren Pulsford
Well, maybe I should be doing that, right? Maybe I should be part of your governance model. And maybe that's true.
Jeremy Harris
And that might not hurt, right? Hey, what aren't we thinking about? They're great at spotting issues.
Darren Pulsford
They're great at that stuff. Yeah. Hey, Jeremy, this has been wonderful. We're out of time, of course, but you and I could talk forever, and it would bore the audience. That's true. If people want to reach out to you, Jeremy, can they reach you on LinkedIn? Is that the best way?
Jeremy Harris
Yeah. Jeremy J. Harris Privlaw at Gmail, there's an email for you. I think that's the profile name on LinkedIn too: J Harris Privlaw. Reach out. I'm happy to have conversations about it, happy to discuss it.
Darren Pulsford
Awesome. Awesome, Jeremy. And hopefully this episode will carry on like your others have; they've been quite successful. For my listeners who don't know, Embracing Digital Transformation is now number one on Apple Podcasts in the technology category, which is amazing. I don't even believe it. We'll see how long we stay there, maybe a day or two, hopefully a week or more. So keep listening. And Jeremy, thanks for coming on the show.
Jeremy Harris
Thanks for having me again, Darren. Appreciate it.
Darren Pulsford
Thank you for listening to Embracing Digital Transformation today. If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org. Until next time, go out and embrace the digital revolution.
Episode Release Date: August 12, 2025
Host: Dr. Darren Pulsipher, Chief Solution Architect for Public Sector at Intel
Guest: Jeremy Harris, Data Privacy and Gen AI Lawyer
In this insightful episode of Embracing Digital Transformation, Dr. Darren Pulsipher delves into the intricacies of establishing a Generative AI (GenAI) policy within organizations. Joined by returning guest Jeremy Harris, an expert in data privacy and generative AI law, the discussion navigates the challenges and strategies essential for integrating GenAI responsibly, particularly within the highly regulated healthcare sector.
Darren Pulsipher kicks off the conversation by addressing a fundamental question: "Do organizations need a separate GenAI policy, or can they rely solely on existing data policies?"
Jeremy Harris firmly asserts the necessity of a distinct GenAI policy:
“I think the prevailing idea right now is you actually do need a separate Gen AI policy... It’s complementing what you already have.”
[01:51]
He elaborates that while data policies provide a foundation, GenAI introduces unique dynamics in data handling that existing policies may not fully address.
Harris highlights how GenAI processes data differently, leading to rapid data loss and diminished control:
“...once you start with the generative AI, you lose control of data faster than ever before.”
[02:28]
Using a healthcare example, he explains the risks of unauthorized data sharing through platforms like ChatGPT, where sensitive information can inadvertently be exposed.
A critical step in policy development is understanding how GenAI is currently being utilized within an organization. Harris shares strategies for large organizations:
“We have to try our best to figure out what systems are people using, what servers, or what services are being used.”
[13:18]
He discusses methods like real-time monitoring of network activity to identify GenAI usage, emphasizing the difficulty but importance of tracking usage across thousands of employees.
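The network-level discovery Harris alludes to can be sketched as a scan of egress or proxy logs for known GenAI endpoints. The domain watchlist and the "user host" log format below are assumptions for illustration; a real deployment would parse the organization's actual log schema and maintain a current service catalog.

```python
# Illustrative sketch of discovering GenAI usage from proxy logs.
# The domain list and log format are assumptions, not a real log schema.
from collections import Counter

GENAI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai",
                 "gemini.google.com"}  # assumed watchlist of GenAI services

def genai_usage(log_lines):
    """Count requests to known GenAI services, grouped by user."""
    hits = Counter()
    for line in log_lines:
        user, _, host = line.partition(" ")  # assumed "user host" line format
        if host.strip() in GENAI_DOMAINS:
            hits[user] += 1
    return hits

logs = ["alice api.openai.com", "bob intranet.example.com",
        "alice claude.ai", "carol chat.openai.com"]
usage = genai_usage(logs)
print(dict(usage))
```

The output of a scan like this gives the governance body the baseline Harris describes: who is already using which services, before any policy is written.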
Darren underscores the importance of a strategic approach:
“Establishing this policy does not need to be restrictive of the usage of AI; it’s more that we’re monitoring it, we’re watching it...”
[21:43]
Harris agrees, advocating for a balanced strategy that allows innovation while implementing necessary guardrails:
“We have to have some guardrails that really haven't changed.”
[22:00]
He emphasizes the need for continuous monitoring and iterative policy development to keep pace with the rapidly evolving GenAI landscape.
The conversation delves into various risks associated with GenAI, particularly in healthcare:
Harris introduces the concept of "AI blind spots," referring to inherent biases in GenAI models that are often opaque:
“AI has an inherent bias and no one can identify what that looks like on the back end...”
[32:59]
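One way to make the "blind spot" concern concrete is a simple fairness check such as demographic parity, comparing a model's positive-prediction rate across subgroups. The sketch below is illustrative, with hypothetical predictions and group labels; it is one of many fairness metrics, and it cannot address the deeper problem Harris raises, that the training data itself may be undisclosed.

```python
# Illustrative demographic-parity check: does the model flag one subgroup
# at a very different rate than another? Data here is hypothetical.
def positive_rate(predictions, groups, target_group):
    """Fraction of positive predictions within one subgroup."""
    in_group = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(in_group) / len(in_group)

# Hypothetical predictions (1 = flagged for follow-up) and patient subgroups
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = positive_rate(preds, groups, "A")
rate_b = positive_rate(preds, groups, "B")
parity_gap = abs(rate_a - rate_b)  # large gap suggests a disparity worth reviewing
print(rate_a, rate_b, parity_gap)
```

A large gap does not prove the model is wrong, but it is the kind of signal a knowledgeable person-in-the-loop can escalate for review.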
A significant portion of the discussion contrasts "augmented intelligence" with "artificial intelligence." Harris advocates for AI as a tool to enhance human capabilities rather than replace them:
“We want augmented. We want to be able to be better.”
[36:15]
Darren echoes this sentiment, envisioning a future where GenAI assists professionals without taking over decision-making processes.
Effective GenAI policy requires robust governance structures that facilitate swift decision-making and policy iteration:
“Legal, compliance, IT, they all need to be lockstep in the AI world...”
[37:32]
Harris stresses the importance of cross-functional collaboration among legal, IT, and strategy teams to evaluate and approve GenAI use cases promptly.
As the episode concludes, both Darren and Jeremy emphasize the dynamic nature of GenAI policies. They advocate for policies that are adaptable, transparent, and focused on responsible innovation. The conversation wraps up with practical advice for organizations seeking to implement GenAI responsibly, highlighting the importance of monitoring, education, and strategic governance.
Notable Final Thoughts:
“That’s exactly what the policy side and the strategy side should do for you... how are we using this to augment what we do in a responsible way to mitigate some of these risks?”
[37:32]
For further discussions or consultations on building GenAI policies, reach out to Jeremy Harris on LinkedIn (J Harris Privlaw).
Thank you for listening to Embracing Digital Transformation. Stay tuned for more episodes that navigate the complexities of the digital revolution.