
B
Welcome to the Emerging Litigation Podcast.
A
Thank you, Tom. I've been looking forward to it.
B
I'll just give a brief introduction here about what we're going to talk about. So federal AI policy is moving quickly from broad principles to real legal action. In 2026, the DOJ announced an Artificial Intelligence Litigation Task Force. This is a group created to challenge state AI laws that the administration views as inconsistent with a national, minimally burdensome framework. How am I doing so far?
A
You're doing great, Tom.
B
Okay, thank you. I can read, at least. At the same time, DOJ prosecutors have also signaled what they expect companies to be able to show about their AI controls, risk assessments, governance, and evidence that AI is being used responsibly and lawfully. We want that. So what does all this mean for in-house counsel trying to build AI governance that holds up under scrutiny from the federal government, whether the issue is compliance, internal investigations, or AI misuse like deepfakes? And just quickly, you are a partner in Reed Smith's Global Regulatory Enforcement Group, and you're a former member of the Volkswagen AG Independent Compliance Monitor and Auditor team, right?
A
That's exactly right. Thank you, Tom. I think that's a great summary of all that's been happening. Just to take a step back: AI is a tool that can be used for you or against you, and the government understands that. With the litigation task force announced in the recent executive order, the Trump administration is essentially saying, okay, we don't have a federal AI law right now, but we have very progressive and proactive states out there who have come up with their own laws. So let's try to pause all of that for a moment and figure out whether there can be a comprehensive effort, or a law, at the federal level that could be useful. We've actually seen that with the EU AI Act, where there is a comprehensive law that applies to all of the EU member states. So that's where I think this is headed. But of course, we've seen in the last year or so that executive orders come out, and then there's a period of figuring out the best next steps, and that's what we're waiting for right now. I will say the federal government has been focused on AI for a few years now, even under the Biden administration. Under that administration, there was an update to the DOJ's Evaluation of Corporate Compliance Programs. What that says is that when the DOJ reviews a company and its compliance program, whether deciding to prosecute or deciding about a resolution, it looks at how the company uses AI. How are you using it in your compliance program? Do you have the right controls to mitigate bias, to make sure the AI tool is giving you something that's truthful and verifiable? And there are questions in the evaluation that can guide companies using AI for compliance purposes: okay, if I ever get in front of the DOJ, what do I need to show them?
Whether it's documentation about protocols, or use examples showing there was an effort to mitigate the errors or bias that can come out of an AI tool. All of this is to say that the DOJ is well acquainted with AI. They use AI themselves. There was a Biden executive order a few years ago telling all the federal agencies and departments: you've got to come up with your own list of how you use AI and which tools you use, and you need to make it public. The DOJ still does that. They have an inventory on their website, a long, long spreadsheet of all the tools they use and how they use them. And that's interesting, because I do think there's a perception that law enforcement uses AI only for what I would call hardcore crimes, the types of things you would see at the state level. But that's not true. There are plenty of AI tools the DOJ has disclosed that are used for white-collar crimes, or for instances where companies commit crimes, whether that's looking at financial records and summarizing them, or looking at travel. I've had clients call me and say, look, the Department of Homeland Security just showed up at our office and wants to know why our employee has been to China 12 times in the last 18 months. That's through AI. They're able to summarize across sources in a better, more enhanced way than they ever have before, and they're going to continue to do that, in order to see if they can predict or detect crimes.
B
Yeah, you can see room for tremendous advantages in identification and research and digging in. You can also see room for crazy levels of abuse. I guess that's true of a lot of inventions and transformations. Certainly the Internet was that way: so awesome, and yet, oh my God, so crazy. Online banking: amazing, but just a new way to steal. So you mentioned the DOJ memo, the executive order, and the Evaluation of Corporate Compliance Programs, and I'll put links to those in the show notes so people can read them for themselves. So the task force is aimed at state laws, and the executive order emphasizes a single national framework. The legal risk map for companies may shift as federal and state approaches collide, and as policies shift, because that happens. Sorry, I laugh at myself. So what should in-house legal teams expect this year, both from the DOJ's federal posture and from the state activity we see evolving?
A
You're going to see something akin to what we saw with ESG. And I know ESG is not a popular acronym these days, given the current administration, and that's another policy change you and I can laugh about. But there was a time before this administration when it was really difficult to understand where the enforcement efforts were going to come from. Is it the states? Is it federal? And then when the Trump administration came in, it was the states, particularly the AGs in New York and California, saying, we're not giving up on ESG, and we still have laws you must abide by, particularly in California. So there was this unpredictability: okay, the federal government may not be so focused on certain things, but the state governments and law enforcement authorities are. And hopefully our clients are used to that kind of unpredictability, because you've got to figure out where your highest risks are and really put your resources into that. So when it comes to AI usage at a company, you've got to ask: where are our risks? Because California is such a large economy, most companies are doing business there, so you need to look at what your risks are in California and how you can mitigate them if the federal government is not going to be as active in enforcement. But I think what you're seeing from the Trump administration is: look, there's unpredictability here, and we want some control over it. And having control over it from a federal perspective could actually increase other risks that companies need to be aware of. But I will say this, and I do have cases right now where employees at companies are using AI to commit crimes; it's definitely happening.
But besides enforcement, you really have to think about whistleblowers. And you've alluded to this already, Tom, with deepfakes and misuse. What I try to share with my clients is that we've got to change our mindsets a little bit. We used to be able to see a photo and tell right away if it was Photoshopped. Remember when Photoshop was all the rage?
B
Yeah.
A
Now it is so hard to look at an image and know whether it's AI-generated or not, because the images are so good, so enhanced these days. Just yesterday I was scrolling through some videos with my daughter, and we were laughing about one video, and she said, oh, that's AI-generated. And I said, are you sure? It looks so real. I literally had to scroll to the bottom of the comments to see that it was AI-generated. And that's the problem we have. So when you have really active whistleblowers, and our government is providing a lot of incentives for whistleblowers, they get a bounty on more topics now than ever before through the federal government, there's now this great tool that can provide evidence, or create evidence, that looks real. A lot of what we're doing right now is educating our corporate clients on the ways whistleblowers can create evidence. And even mundane things can get you thinking, get you to issue-spot. For example, I saw a great post, referenced in a New York Post article, where DoorDash customers were taking a picture of the food they received and then using AI to make it look as if the food was undercooked or maybe rotten. Then they would ask for a refund through the DoorDash app, using that AI-generated photo to say, look, my food wasn't right, it wasn't cooked through, it was rotten, so that they could get a refund. And that's stealing, straight up. It's interesting how these mundane examples happening every day get you thinking about the bigger examples that can happen in a corporate context.
B
Yeah, I mean, it's amazing. What you're looking at right now, you're looking at me? I'm actually AI. Because, I mean, who would believe he could look that good?
A
Yes.
B
And clearly it looks like when he went to shave this morning, he forgot to put the little clip on the razor, and I gouged my chin. I have a nice full chin. Anyway, that's a whole separate thing. But you mentioned the younger generations. I think I'm pretty in tune with AI; I use it a lot. I actually did a podcast with JD Supra about using AI in drafting, which I think I might have sent you. Attorneys don't always love to write, but they are good at talking, and you can turn a lot of your talking into drafts. But I always say: be careful, check everything. It's never going to write a final product for you. Anyway, I sent my daughter, who loves to snowboard, a picture from Russia where the snow had fallen so high outside of Moscow, or wherever it was, Siberia maybe, that it reached the tops of 10- or 15-story apartment buildings. And I believed it. The climate's going bananas, why not? It's cold up there. And Russian kids, I think, were snowboarding off the tops of these buildings, which actually made me really like the Russian people. So I sent it to her, and she said, Dad, it's AI. Okay, caught me. It happens a lot, and it's happening a lot with people. That was my first concern about how convincing those videos can be. I love seeing a wolf and a deer hanging out together; those are all very sweet. But I was always concerned when you'd see a head of state giving a speech about something. How do you know? And then people in even less sophisticated countries see that. I feel like we're all becoming less sophisticated.
But when people in other countries would see it, they'd say, oh yeah, our president said to do that, so let's do that. So anyway, I feel like this stuff should be marked, almost like a cigarette label. Right? And if it doesn't have that...
A
That's right, that's right. And I think there are some requirements on certain platforms that you have to mark it. But like I told you, I had to scroll down pretty far through all those hashtags to actually find it on the video my daughter and I were viewing. The reason I'm laughing is, again, these images are so good and so enhanced. It makes you feel a little less sophisticated and less smart, especially when my 12-year-old is telling me, don't you see it's AI? But again, it really grabs you. Some of these videos and images really produce emotion, so you're sitting there shocked, or angry, or what have you. And that, I think, is what breeds some of the concern for our clients: they're thinking, I could get something and not even be sure it's real. What should I do next? I'll give you an example. A few weeks ago I had a client who received a tip that one of their blue-collar workers may have been in conversations online with minors. It was not an appropriate conversation; it was an illicit conversation. We received images of the conversation, the chat, and at first I wasn't sure whether it was actually real, because, again, AI quality is high. So we had to do some things to make sure it was legitimate before we sat down and talked to that person. We went through it with the help of their IT department, but we also hire forensic consultants to help us with this: looking at metadata and trying to understand whether there's some indicator in the metadata that would show the image was actually generated on an AI platform. Because we don't want to assume, every single time someone sends us an image or a video, that it's accurate.
We need to do our own due diligence as part of the investigative process. Most of our clients, of course, have an investigative protocol that they use. But just like with a paper document, you need to double-check that it's actually accurate before you start assuming things and bringing up allegations. It's the same thing, just in a different format, and honestly it takes more time and an expertise that many companies may not have, which is why we sometimes use consultants.
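As a rough illustration of the metadata review described here, the first-pass check can be sketched in a few lines of Python. The marker list and field names below are illustrative assumptions, not a forensic standard, and the absence of markers proves nothing, since metadata is easily stripped or forged; real matters go to forensic consultants, as described above.

```python
# Illustrative sketch: flag image metadata entries that suggest AI generation.
# The marker list is an example set, not exhaustive or authoritative.

AI_MARKERS = (
    "midjourney", "dall-e", "dalle", "stable diffusion", "firefly",
    "trainedalgorithmicmedia",  # IPTC digital-source-type value for AI media
    "c2pa",                     # Content Credentials provenance manifests
)

def flag_suspect_metadata(metadata: dict[str, str]) -> list[str]:
    """Return metadata entries whose key or value matches a known marker."""
    hits = []
    for key, value in metadata.items():
        blob = f"{key} {value}".lower()
        if any(marker in blob for marker in AI_MARKERS):
            hits.append(f"{key}={value}")
    return hits
```

In practice the metadata dict would come from a tool like exiftool; a clean result would still be escalated when the content itself looks off.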
B
Yeah, I feel like somebody must be working on some kind of app you can run something through, probably using AI, that tells you: is this real or not? Right now we're relying on each other, crowdsourcing, to say no. Anyway, moving on to really practical things: what specific controls should in-house teams have so they can demonstrate that they're overseeing the use of AI responsibly across their organization?
A
In the ideal situation with a client, if we were in front of the DOJ talking about the client's compliance program and the DOJ started asking questions about AI usage, I would love to have a protocol for how AI is used in the compliance program. What that means is, we even have protocols here at Reed Smith, for example, for how we use our AI platform in litigation. It outlines when GenAI is useful in a litigation, at what stages, and when we should not use generative AI. For example, when we're doing a privilege review, we prefer to have human eyes on those documents to make sure they're privileged; right now, we don't believe GenAI is the best tool to detect privilege. It is nuanced in some ways, and we want a human to do that, and that's in the protocol. The protocol also covers ethical considerations, especially for lawyers, because we have rules we have to follow. And it covers other tools. Here at the firm we use Harvey for AI purposes, but we also have other tools we can use in eDiscovery when we're reviewing documents or videos, and the protocol outlines the different tools and what they're good for, so you're not using the wrong tool for the wrong task. That's what I like to see. We also cover how we mitigate bias and errors, whether that's through a prompt, or because we've seen evidence that a certain task done with generative AI produces a lot of false positives, so don't use it there. That's what I would love to be able to show the DOJ: look, we've put a lot of thought and time into this, this is what we follow, and we may even audit how we use these tools to make sure we're not generating a lot of bias or false positives.
We've seen that with clients, because there are cautionary tales of clients using AI to review resumes or candidates. Years ago, a very big-brand company here in the United States decided: we know who our superstars are, so let's take those superstars' resumes and put them into an algorithm, so that when we see candidates who fit those criteria or descriptors, we'll get more superstars. At a high level, that makes total sense, right? It sounds ingenious. But the candidates that came out of that algorithm and got to the next step for a screening were not a very diverse sample. All these false positives started to happen; they were looking at candidates who were all basically the same, and it wasn't giving them what they wanted. And it led to disputes and litigation. So testing your algorithms and your AI usage is a good idea, to see: are we creating our own false positives here? Are we creating a situation where bias is going to be much higher than what we really want?
B
Okay. Yeah. I've got so many questions I'd like to ask, but I can't, so I'll just say this, and you don't have to respond. This isn't political, but take the whole Epstein investigation. When I heard there are, like, three gazillion files and photos and videos and it's going to take forever, I thought: are they not using AI? It seems like that's ideal for it. So moving away from that particular instance: if you've got a data dump, like you said, so much data, email, photos, video, you have a system. You mentioned Harvey. Is that something you all use? What does that do?
A
Yeah. Our AI platform here at the firm is called Harvey, and Harvey is GenAI. So you can generate all sorts of different things: summaries, interview questions, summaries of depositions, summaries of financial statements, which I did the other day, a chronology. You can do all sorts of things through Harvey. Then we have other tools for when we get client data, whether it's pictures or video or documents, to help us review those. You mentioned the Epstein files. One thing I need for one of my cases is facial recognition for videos. We have over 2,000 videos to review. It would take hours and hours for me to have a team of reviewers do that, and I'm concerned it wouldn't be as effective. So we have a test case where we're trying to see whether we can use an AI tool with facial recognition on these videos, so I know exactly when a certain person appears. That's definitely something we're using and looking into to be more efficient. I know there's this idea in the legal industry, where you mostly bill by the hour, that using these tools takes us away from profit. I don't feel that way at all; I think that's a complete misunderstanding. You can't really be effective or efficient looking at over 2,000 videos across hundreds and hundreds of hours, when it would be easier and better, for our clients and for us, to detect when a certain person appears and then have a human look at it and say, is this really helpful for our case or not?
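The triage step in that video-review workflow can be sketched in Python. The sketch assumes a pluggable matcher (a real face-recognition model is not implemented here); the point is merging frame-level matches into time spans that a human then reviews, rather than watching every video end to end.

```python
# Sketch: given sampled frames and a matcher, report when a person of
# interest appears, as (start, end) spans a human reviewer can jump to.
from typing import Callable

Frame = tuple[float, object]  # (timestamp in seconds, frame data)

def find_appearances(
    frames: list[Frame],
    matches_person: Callable[[object], bool],  # pluggable detector (assumed)
    gap: float = 5.0,
) -> list[tuple[float, float]]:
    """Merge matching timestamps into spans separated by more than `gap` seconds."""
    hits = [t for t, frame in frames if matches_person(frame)]
    spans: list[tuple[float, float]] = []
    for t in hits:
        if spans and t - spans[-1][1] <= gap:
            spans[-1] = (spans[-1][0], t)  # extend the current span
        else:
            spans.append((t, t))           # start a new span
    return spans
```

The human-in-the-loop step the speaker describes happens downstream: a reviewer watches only the returned spans and decides whether they matter for the case.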
B
Yeah. When is that hourly thing going to go away?
A
You know, I've been doing this for over 20 years, Tom, and that's been a question for as long as I can remember. And I think it's going to be a question for a while.
B
I'm just going to go out on a limb here and say I think it's just so stupid. In my limited experience with hourly rates, when I work for clients, I won't even do an hourly rate; I'll just quote something. A lawyer could say something in 10 minutes that changes the course of a company. So that's 10 minutes, and they made a gazillion dollars following your advice, or they stayed out of trouble. I just don't get it. But anyway, I'm probably not the only one.
A
That's for sure. You're not the only one.
B
Plus, I don't know how you do it. Well, I know how you do it, but generally I feel like I'm watching a chess match, where you're always hitting this client's clock and that client's clock. I know you have software for that, but that's just my pet peeve on behalf of attorneys.
A
It's funny that we're talking about AI, Tom, and you're bringing up the billable-hour conundrum, because both of them brainwash you. When you start billing hours, you think of time differently and you think of your daily tasks differently. Using AI is the same thing: once you start generating your own prompts and getting better at it, you start to think of everything as prompts. It's the same kind of brainwashing. It changes your mindset for sure.
B
Yeah, I have found myself doing that. It's funny how it rewires you, even in silly unrelated ways. I'll be reading a hard-copy magazine and I'll want to swipe to the next photo, and I'm like, oh God, my brain is rewired. But you do start to think that way. There was an interesting thread somebody brought up on LinkedIn about gender differences in the use of AI, and the question was whether women, and this is all generalizing, of course, traditionally have more soft skills that actually make them better at writing prompts, following up, and doing more detailed work. I did a little poking around, because they were saying women can actually jump ahead here. But all I could think about was men learning anything new: men generally don't use instructions. I don't use instructions. A man will dive in, damn the embarrassment, you know what I mean? Whereas, stereotypically, a woman might be more careful and so might be better suited for AI prompts. Who knows? It's always fun to talk about.
A
Interesting. And I'm sorry to interrupt you, but that is really interesting. I have not seen that. But I can tell you, just from my own experience, when I first started writing prompts, I kept starting off with the word please. I don't know if that changes anything; I haven't tested it. But I would start off with please, and then I felt like I had to give a lot of background to help the platform understand where I was headed. It's so interesting how prompts can change over time and how you can get better at them. But I've also wondered about another thing. I received a summary of a deposition generated through our platform, Harvey. I was asking for the inconsistencies: this plaintiff had filed a complaint against my client alleging a ton of things, and she was deposed a few years later. So I said, let's see if we can get a summary of the inconsistencies. That would normally take an associate several hours: reading the deposition transcript, comparing it to the complaint, figuring out the inconsistencies. Within 20 minutes, I got something from Harvey. And, I don't know if it was because the plaintiff is a woman, but Harvey put in a footnote describing how, even though it had named some inconsistencies, they could be based on her gender. I thought that was interesting, because I did not ask for that. These are the things you come to understand as you use AI more and more: what to look for, what can be helpful, and what may not be. But I literally looked at the deposition summary on a plane and said out loud, oh, wow, because I did not expect that footnote.
But it's interesting, and I'm sure someone is studying, at a very high collegiate level, what it means as far as gender differences with regard to pronouns, and maybe the platform itself signaling or pointing to gender differences. It's very interesting.
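The "background first, then task" prompting habit described above can be illustrated with a hypothetical prompt template for the inconsistency-summary task. The wording and function name are invented for illustration; this is not a Harvey-specific API, just one way to structure the request.

```python
# Hypothetical prompt builder: context first, then the documents,
# then a narrowly scoped task with an explicit output format.

def inconsistency_prompt(complaint: str, deposition: str) -> str:
    return (
        "Background: The plaintiff filed the complaint below and was later "
        "deposed. You are assisting defense counsel.\n\n"
        f"COMPLAINT:\n{complaint}\n\n"
        f"DEPOSITION TRANSCRIPT:\n{deposition}\n\n"
        "Task: List every statement in the deposition that is inconsistent "
        "with the complaint. For each, quote both passages and cite the "
        "transcript page and line. Do not speculate beyond the two documents."
    )
```

Constraining the task ("do not speculate beyond the two documents") is one way to reduce the kind of unasked-for editorializing the footnote anecdote describes, though as the speaker notes, the output still has to be reviewed.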
B
I'll send you a link to that post. I forget the woman's name off the top of my head, but she posts a lot on LinkedIn and asks good questions. I thought that was a fun one. Okay, so getting back to all of this: what investigation workflows and guardrails have proven effective, especially around validation, privilege, and documentation that stands up to scrutiny?
A
Yeah, we talked about the protocol, so I'll give another example of a workflow that I think works really well, and it's also a control. Once you start training business personnel on potential deepfakes and you give them examples, like the DoorDash example, or a video one of our partners created to show that anybody can make a video through AI, through public AI tools, you're going to start getting a lot of business personnel reaching out to compliance asking, is this real? Is this real? And that's great, because that's what issue-spotting is about: when you're looking at things differently, you keep referring back to compliance and saying, hey, can you see if this is real? So a lot of my clients have put in an escalation procedure. When a report comes in from a business person saying, look, I received this document from a vendor overseas and it looks a little off to me, can someone see if it's real or generated by AI, it goes through a procedure: from compliance to an IT person who has the expertise to help figure out whether we think it's real. If it is real, it goes through the compliance piece and up to legal. If they're not sure, I get contacted and we look into a forensic consultant to determine whether it's real or not. That escalation procedure is really important. It's an extra check, but through established processes, and companies have done this before in other areas: escalation procedures that bring in experts throughout the company to help figure out what the issue is and what the next step is. That's really important.
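The escalation procedure just described can be sketched as a simple routing function. The step names and the branch for a confirmed fake are illustrative assumptions, not a standard; every company's protocol will differ.

```python
# Sketch of the escalation path: business -> compliance -> IT review,
# then branch on the IT verdict. Step names are illustrative.
from enum import Enum

class Verdict(Enum):
    REAL = "real"
    FAKE = "fake"
    UNSURE = "unsure"

def route_report(it_verdict: Verdict) -> list[str]:
    """Return the escalation path for a suspicious item after IT review."""
    path = ["business", "compliance", "it_review"]
    if it_verdict is Verdict.REAL:
        path.append("legal")  # genuine evidence proceeds through legal
    elif it_verdict is Verdict.UNSURE:
        path += ["outside_counsel", "forensic_consultant"]  # bring in experts
    else:
        path.append("document_and_close")  # assumed step: record the fake
    return path
```

The value of writing the procedure down, even this crudely, is that every report follows the same established process, which is exactly what the speaker says stands up to scrutiny.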
And I would say too, Tom, just to throw this out there, because I don't think our clients do it enough: if you're a public company and you have a board of directors, they're all very interested in AI and AI usage at the company. Some of our clients have board committees focused on technology, or emerging technology and its risks. But I would say to compliance departments and legal departments: you should be bragging to your board about your use of AI and the protocols you have. They want to hear about it. It's interesting to me that when I'm reporting to a board or a board committee about an issue and I bring up, isn't it great that our legal and compliance departments use AI for X, Y, and Z, sometimes the board members are just surprised. And I'm surprised they're surprised, because that's something that really should be promoted. People often think of legal and compliance as a way to expend resources and spend money without bringing value to the company. That is not the case, and this is one of those areas where legal and compliance programs can say: we actually are bringing value, and this is how, through AI. So I just want to encourage people to do that.
B
Yeah, I think it's here. Embrace it, and learn how to use it ethically and responsibly. Because obviously you can use it badly, but you can use a chainsaw badly too. I don't know why I came up with that metaphor. But like I said, it's here. I've got a nephew who teaches writing at the University of Wisconsin, and this is a big deal for teachers: how are kids using it? As somebody who's written his whole life, and one day I'm going to get quite good at it, I know he's got an eye where he can kind of spot it. Also, I was working with an SEO company that was advising us on AI for a client, and I said, here's a list of words I will always avoid. For example, if I ask it for a summary of a court decision or a lawsuit, it always says, "in a landmark lawsuit against so-and-so." I've told it: never use landmark. You don't know if it's a landmark; you'd have to know more than I know, and more than you know, to call it that. And maybe somewhere in the AI they dislike the word landmark but love the word landscape, because there are certain words like that. So I've actually told my AI: stop using landscape, never say landmark, and in fact, don't use adjectives, because that's subjective stuff. Anyway, my nephew, I think what he said was they're going to do more in-class writing: get the old blue book and pen out, and even if it's just short, you want to see that somebody can put their thoughts down coherently on paper. And otherwise, teach them to use it and identify it. But you talked about using forensic folks.
And while you were saying it, I was thinking: why don't you just save some money and send it to your daughter to see if it's real or not? Because kids of a certain age can just tell.
A
Great point.
B
Yeah, really great point. There's a book called The Mindset List, from a professor at Beloit College, also in Wisconsin. I don't know why I'm giving them a plug. But it's like: if you were born in 1968, you'll never know a world without such-and-such. My kids, for example, were born in '91 and '93, so they'll never know a world without computers, without a mouse. When they were like two years old, they were navigating a mouse: okay, this thing over here is going to make that thing over there move. It's a cool list. And kids now won't know a world without AI, so they'll always be on the lookout for fakes.
A
Exactly right. And can I just double down on what you said about adjectives, about "landmark" and "landscape"? In the last four to five months we've seen more whistleblower complaints that run 20 pages. They have a lot of adjectives, like you mentioned. They cite every potential statute known to man, most of which do not apply, and it's really easy to see right away that they've been generated by AI. But the issue doesn't stop there, because the compliance department has seen it and sent it to us as outside counsel, so obviously they want us to look at it. What ends up happening with these long whistleblower letters full of adjectives is that they almost make it harder for the compliance department to understand: what do I do next? What's really the issue buried in these 20 pages? And you can't just say to your board, or to the DOJ, "It was so long we decided not to do anything about it." So we have to figure out the core issue and what we should look into, past all the adjectives and all the words and phrases that don't really make sense in the context we're in. I've even seen pleadings where I thought, this is definitely generated by AI, because half of what they're saying doesn't make any sense in the legal context. So I commend your nephew: let's get back to brass tacks and start doing writing in class. The same thing with law school. You've got to have the experience, and the education, to look at something and know it doesn't make sense in the legal context, so it must be generated by AI. You have to have those experiences.
B
Yes, you're absolutely right. I call it the smell test, because I'll look at something and think, this all looks good and sounds good, but wait a second, is that really right? And then I'll ask the AI, and it'll say, "Sorry, good catch." And I'm like, okay, wait a minute, thank you for correcting me, because that's not actually right. I treat it like a young or inexperienced researcher who's just really, really smart. But finally, let's get down to some very practical ideas. What should companies be doing right now to get in line with these requirements?
A
So from a compliance program enhancement point of view, I think government authorities would expect companies to use AI to enhance their policies. Even two years ago, a company would send me eight policies and say, can you please help us streamline these and get rid of inconsistencies? Which always happens. It's like when you get instructions for putting together furniture, like you were saying, and it says "see page five," and you go to page five and there's no page five. AI is a hugely helpful tool for that: finding inconsistencies, streamlining policies, and making your training more user-friendly. I've seen our clients using AI to help them summarize whistleblower complaints or surveys. A lot of our clients do employee surveys every year, and they want to see how the responses have changed over time, whether their process improvements show up in what employees say, so they'll use AI to summarize that. I have one client who told me she asks a very specific question in her employee survey about feeling like you're in a silo, or feeling like you're under too much pressure or stress. She believes that if you feel like you're in a silo and you're under tremendous stress to hit a target the company has set for the year, you're more likely to break the rules, or circumvent them, to do it. So she's using AI to find those hotspots and say, okay, I need to do more training with that sales team, or in that geography, to prevent something that could turn out to cost us millions and millions of dollars to get over. So there are all sorts of ways to use AI to really help a compliance program.
On the internal investigation side, I've mentioned chronologies, interview questions, and summarizing different whistleblower complaints so we can see where the trends are. Drafting and editing, too. Just the other day, I was given something in Portuguese, and even though I can speak and read Spanish, Portuguese is a little different. I was reading this audit report in Portuguese and had to get an answer to the client within five minutes because of an issue that was about to happen. I read it and thought, okay, these are the paragraphs I really need to understand more fully, put them into Harvey, and got a quick high-level translation, which was great. But also document review, Tom. It has helped me so much in so many cases. I'll give you an example. I had a client with about a hundred custodians, a hundred employees with Teams chats and emails about an issue that really just permeated the company. I had to figure out who first learned about this fraud, because it was perpetuated year after year after year. So we had all the Teams chats and emails put into a review database, and there was an AI tool associated with that database where I could literally ask, who was the first one to learn about the fraud? I was more descriptive and detailed than that, but automatically a summary comes up: in 2017, Tom realized there was an issue about such-and-such. It goes on to summarize, and it includes all these documents you can click on to see where it got the information to create the summary. That was huge, because I avoided all that time.
And then I said to the reviewers, okay, I need you to focus on these documents first and then try to find other documents like them, so we can be more targeted instead of reviewing doc after doc, Teams chat after Teams chat. And Tom, I know this will surprise you, but a lot of Teams chats are embedded in a bunch of Starbucks talk: oh, did you get your Starbucks stars, did you get your coffee, and so on. There are like two lines of something actually important. There's just no way we would have gotten to that important piece as quickly as we did without AI as the tool. There are just so many ways to enhance things and make things go faster.
B
I just think of the stupid things I've said to people, and that they've said back to me, in Teams chats, because we're human beings, we're not in the same room, and we just have funny observations or whatever. It's so dumb, but it's fun. It takes a couple of minutes. But if somebody read it back to you: what were you doing? Is this how you spend your time? Like, no, but that day I did a little bit. It's just like, you know, that happens
A
more often than not. You just nailed it. It happens more often than not, Tom, like when we're showing an employee, in an interview, Teams chat after Teams chat, or WhatsApp message after WhatsApp message. And they're just so embarrassed. I remember one person saying, please don't look at all that Starbucks talk, or don't look at all this talk about political views. And we're not there for that. But it's amazing to see how people react. It used to be emails, right? I used to show emails more often than not, but we're past that now as technology evolves. It's all in WhatsApp and Teams chat these days.
B
Yeah. And sometimes with Teams, I'll have meetings or phone calls with people that are about business, but I'm also friends with them, so we'll go off on something. And then the call summary will read it back: "Tom shared his bizarre experience on spring break when he was 20 years old and ended up in jail, and Sarah responded that she hopes he's learned," whatever. It treats our goofy talk so formally. Okay, that didn't need to be in there. Now, one last thing. You've mentioned Harvey. I meant to ask, is that off the shelf, or is it proprietary to Reed Smith?
A
No, I think Harvey is used by other companies and firms, but this one in particular is customized to us. Some of my clients use Harvey, but others use different tools that have come about that are more tailored toward in-house counsel and in-house legal and compliance departments. I will mention this, though, Tom, because I thought it was interesting. One of my clients told me they had decided to have two different AI platforms at the company: one for the business at large, and one for the legal and compliance departments. The reason is that the legal department realized they were getting whistleblower complaints and other drafted documents where they could tell the platform was using their legal memos and their privileged and confidential materials, and they could see it was learning from them. The legal and compliance departments decided, okay, we can't have that, and that's why they separated it. Which is more of a cost, right? More resources have to be dedicated. But that was the first time I had ever heard that from a client, and I would not be surprised if it keeps happening. Here at Reed Smith, we use Harvey. We're trained on Harvey. Within the litigation department, we've just started routine webinars where we talk with each other about how we're using AI, what's working, and what's not working. The partners were just trained the other day on certain features in Harvey, and we get new features from Harvey pretty regularly that we're trained on. It is very much a focus here. I was at a different firm for 20 years, and I've never seen anything like the kind of resources that are put into technology at Reed Smith. And it's great, because sometimes I feel like I'm actually teaching our clients.
For example, we have a CLE presentation where we talk about AI with compliance and internal investigations, and in the appendix I have six or seven prompts that I share with clients, prompts I actually use, that they can literally cut and paste. Every single one says "please" in it, by the way. I think it's really important that we're sharing with our clients how we use it and how they can enhance what they're doing, because it is such a great tool. And like you said, if you don't use it, the technology is going to run away from you. You'll be in a poorer state than those who do.
B
My use of it is very simple. I'm writing about areas of law, and anything I do for a client carries a disclaimer: initial drafts and research were done with the aid of Microsoft Copilot and Adobe Assistant, but then I say it's been reviewed by a legal editor with 40 years' experience. Nobody's actually made a big deal out of it, but if you saw the first draft versus the final, you would see it's checked. And like you mentioned, in one case the system shows you where it got the information. That's one thing I ask for with Copilot; Adobe does it automatically. Adobe's different, because I'll give it a 150-page document and ask it for the key points, and it'll show me exactly where in the document it got those key points, so I can go through, just like you were saying, and check them. So that's it. I think I covered everything. Is there anything else you wanted to go over? I think we nailed it.
A
This has been so much fun. I knew it would be, and I knew the time would go by really quickly. The only final thought I would add: although there are ways AI can be used against companies, there are so many other ways it can be really beneficial. I do have a few clients who are a little timid about dipping their toe into AI because they're afraid of all the risks, like the sky could fall down. But just like you and I were talking about, once you get started and your mind starts to shift into, sort of, the brainwashing of prompts, you start to realize it can be an amazing tool that can be really efficient and effective. But like you said, Tom, having the experience to see what's actually useful with AI is imperative. So I really appreciate you letting me talk about it. I'd love to talk to you anytime. This was a lot of fun, and I appreciate everybody listening.
B
Yeah. Well, thank you, Adria. Thank you very much. We will do it again. Have a good rest of your day.
A
Excellent, sir. Have a great day, Tom.
Emerging Litigation Podcast with Tom Hagy
Guest: Adria Perez, Partner at Reed Smith’s Global Regulatory Enforcement Group
Date: February 25, 2026
This episode delves into the U.S. Department of Justice’s (DOJ) recent establishment of an Artificial Intelligence Litigation Task Force, focusing on the evolving landscape of AI governance, enforcement, and compliance for corporations. Host Tom Hagy is joined by Adria Perez, an expert in regulatory enforcement and corporate compliance, to discuss federal versus state approaches, emerging legal risks, whistleblower dynamics, practical controls, and effective protocols for responsible AI use.
On the Patchwork of AI Regulation:
“There was this unpredictability of, okay, well, the federal government may not be so focused on certain things, but the state government and law enforcement and authorities are.”
— Adria Perez, [06:38]
On Deepfake Challenges:
“It is so hard to see an image and know whether it’s AI generated or not... And that’s the problem that we have.”
— Adria Perez, [08:50]
On Investigative Burdens:
“We need to do our own due diligence as part of the investigative process. And most of our clients of course have an investigative protocol... But it honestly takes more time and takes an expertise that many companies may not have.”
— Adria Perez, [13:07]
On Practical Guardrails:
“I would love to see and be able to show the DOJ: Look, we’ve taken a lot of thought. We’ve taken a lot of time. We put this together, this is what we follow...we may even audit sometimes how we use these tools.”
— Adria Perez, [16:17]
On the Need for Board-Level Communication:
“You should be bragging about your use of AI and what protocols you have to your board. They want to hear about it.”
— Adria Perez, [29:56]
On AI as an Enhancement Tool:
“Although there are ways that AI can be used against companies, there are so many other ways where it can be really beneficial... once you get started and your mind starts to shift... you start to realize that it can be an amazing tool.”
— Adria Perez, [47:37]
This summary provides a comprehensive guide to the episode’s main themes, discussion points, memorable quotes, and actionable insights for corporate legal professionals and compliance practitioners navigating AI’s legal and regulatory frontier.