Transcript
A (0:00)
Welcome to the IA on AI Podcast, part of the Audit Podcast Network, where we bring you weekly updates on AI from the internal auditor's perspective. Here we go. Hey, everybody. Just wanted to make a quick announcement. September 24th and 25th of 2025 is the Audit Analytics and AI Conference. It's the only conference dedicated strictly to data analytics and AI from the audit perspective, for internal auditors. The goal, when we have speakers come on, is to have them show you how to do a lot of this stuff and talk about how they've tactically accomplished the things they're going to show. So it's not just "this is why you should do it," it's a lot of "how you should do it." It's all virtual, so if you're in a time zone halfway across the world, no problem at all. You can absolutely check out the recordings. This year you can get 14 CPEs, plus two additional free CPEs, though you have to come to the conference to check that part out. We have another fantastic lineup. Highly recommend. Go to the show notes, go to the website, check out the lineup and the topics we have for you this year. Again, September 24th and 25th, 2025. Depending on when you're listening to this, that means it's either 14 days away, eight days away, seven days away, one day away, or it's the day of. So go to the website, check it out, register, and I hope to see you guys there. All right, here we go. From timesunion.com: "Warner Bros. sues Midjourney for AI-generated images of Superman, Bugs Bunny and other characters." If you're not familiar with Midjourney, they're kind of the primary AI image generator. We don't really use it a ton in internal audit, I feel like. Maybe an audit report image here or there, and a few other use cases, but it's not a huge audit tool. I don't even really use it that much, to be honest. 
But anyway, there are some risks to consider, mostly around IP, and even then there was something I thought was pretty interesting, so I'll read straight from the article. Warner Bros. is suing AI company Midjourney for copyright infringement, alleging that the startup enables its millions of subscribers to create AI-generated images and videos of copyrighted characters like Superman and Bugs Bunny. The lawsuit alleges Midjourney's practices create, quote, consumer confusion regarding what is lawful and what is not lawful by misleading its subscribers to believe that Midjourney's massive copying and the countless infringing images and videos generated by its service are somehow authorized by Warner Bros. Discovery. The thing that stuck out to me the most: if there are copyright issues around using AI in our organizations, I think we go pretty naturally to, one, who cares? It's easy for us to do, right? It makes the job easier. I'm probably going to do it anyway. And two, well, the tool's giving it to me, so it's on them, not on me. Anyway, I just feel like overall there's more coming out around IP infringement, and that's something we definitely need to be aware of. How big a risk this is in your organization obviously depends on the industry you're in. But if you read or just heard that and went, "We've got an issue there," or, "That might be big for us," review your corporate use of generative AI for IP compliance, update any policies, and roll out training to prevent unauthorized use of copyrighted material. Training is going to be huge across all of this. The number of people who still don't understand how these tools work is not super surprising, although I have this expectation that we in audit should understand how they're being used. We've talked about that a few times in the past, and we've done some webinars on it as well. 
But then also training in general around AI governance, which we're going to keep pounding on continuously in these segments. I was told recently about an organization that, quote, "just kind of rolled out our AI tool." The organization rolled it out, and people had about a month to prepare. This is a pretty big organization, and they went, "Yeah, we just had a month's notice and then it was there." So imagine an organization of 20,000-plus employees going, "Hey everybody, you've got a month to figure this out, because we're going to push the button and you're all going to get it." So AI governance training is still going to be absolutely critical for everyone. Next, from Wealth Professional (wealthprofessional.ca): "Banks shift focus from GenAI buzz to responsible standards, study finds." When I first saw this headline, I went, yeah, there's got to be some kind of financial impact they're starting to feel, or predicting will happen, relative to responsible AI. It was very doubtful it had anything to do with being a good citizen of the world. So, from the article: financial institutions worldwide are dialing back the hype around GenAI and putting more weight on responsible AI frameworks, according to new research. A study from analytics firm FICO and Corinium Global Intelligence surveyed more than 250 senior executives across banking and financial services. The findings point to a clear recalibration in AI priorities as firms seek measurable value and greater accountability from their deployments. What I really appreciated was the next quote, because Dr. Scott Zoldi, FICO's Chief Analytics Officer, came out and said responsible AI extends beyond risk mitigation; it's a business imperative. And I went, ha, okay, good. I'm glad we're actually talking about it that way instead of acting like we're just doing what's right for the world. There is an ROI impact to this. 
The survey also found strong belief in the benefits of unified platforms, with more than 75% of executives saying that greater alignment between business and IT leaders, along with consolidated AI infrastructure, could increase ROI by 50% or more. Over half of chief analytics officers and chief AI officers, roughly 56%, believe that implementing responsible AI standards will significantly impact ROI. So again, responsible AI rolls up under AI governance. If you have not yet done AI governance as an advisory project or an audit project, you absolutely have to. You still have a few months to get it in by the end of the year, but if not, whatever your annual audit planning looks like, it should be top of the list going into 2026. And to further support this, from Brussels Signal: "AI ethical breaches skyrocket, new report shows." A new report into the rise of AI showed that incidents linked to ethical breaches have more than doubled in just two years. A recent report from McKinsey warned of a sharp increase in ethical controversies, from cheating scandals in exams to biased recruitment systems and cybersecurity threats. It also showed that ethical and operational issues involving AI have increased more than twofold since 2022. July reports by the OECD and RAND shared similar concerns, including accountability (who is responsible when AI errs), transparency (whether users understand AI decisions), and fairness (whether AI discriminates against certain groups). If you're starting to pick up on the theme, it's AI governance. And if you're in the position of going, "We'd love to do that, but we don't have the resources," most people don't. Most internal audit departments do not have the resources or the AI competency to do an AI governance audit or advisory project, so they have to go to a third party. And if you go, "Well, we don't have the budget for that," that's not an excuse. It's your job. 
If you're leading the audit department, getting more budget is on you. Those things are not set in stone; you can absolutely get more. I know there are some 9/30 year-ends coming up, and obviously FY2025 year-ends as well. A group I was talking to the other day said, "Hey, we're super busy. We're putting together our committee materials for the end of the quarter. We'll get back to you when that's over." I say all that to say: if it was me, and I had already gone to leadership and said, "Hey, we need more budget, we have to do an AI governance project" (we'll just broadly say project), and they said nope, then during my Q3 and end-of-year audit committee reporting, which I know some people spend a lot of time on, I would say drop it all. I don't care. We're not taking any of that stuff. I'm going to go to them with a piece of paper that says "AI Governance Advisory Project. We need more budget." And that would be it. If you use a bunch of slides, I would have one slide with that or something very similar on it, and I would present that and nothing else. If they go, "Well, what about..." I would go, I don't care. This is the biggest thing right now. We've got to get this under control. And I know there are some people who aren't naturally okay with asking for that, and they've probably gotten shot down a ton in the past on other asks. We did an episode of the Audit Podcast with Dave Hill, the former CEO of SWAP Internal Audit Services. It's episode 233, if you're curious and want to take a listen, literally called "How to Get More Internal Audit Budget with David Hill." We had him on because I know trying to get more budget is a problem for CAEs and audit leaders. He'd written some articles about it, so we went, "Dave, come on the show. 
Please explain to folks how they can do that." So if you're in that position and you're struggling to get more budget, check out episode 233 of the Audit Podcast with Dave Hill and get it knocked out. And if you need proof to build your business case, then okay, maybe you do need another slide in there. Almost every episode of this show, the IA on AI (nailed it that time), is full of stats speaking to the importance of AI governance. You don't necessarily have to listen to every show if you haven't. Just go to the show notes, click through the links to the sources we've pointed to when we talk about these stories, and go read them. It's usually pretty obvious; just look for the percent sign. If you're going, "I don't have time for this," hit Ctrl+F, search for the percent sign, and just look at the percentages. Gather those up, put them together in a format that works for you and your stakeholders, and go, "Look, here's the proof. We've got to do something about this." Thank you for listening, and be sure to follow the link to greenskiesanalytics.com in the show notes and schedule time to see how Green Skies can make the hype of AI a reality in your internal audit department. All right, that's it for this week. We don't really have a catchy slogan to end the show yet, so if you have one and you want to send it to us, we'll be happy to include it. And if we get a bunch, we'll just do a different one every single time. But until then... well, I don't know "until then," because we don't have anything to leave the folks with yet. So have a good week.
