B
Welcome to another episode of the Digiday Podcast, a show for anyone who's still wondering if their ChatGPT search result was indeed sponsored. I'm Kimeko McCoy, senior marketing reporter here at Digiday.
A
I'm Tim Peterson, executive editor, video and audio at Digiday Media. Yeah, a show for those people or a show for Mickey Mouse's lawyers?
B
Them too. Them too. Them too. Oh, my goodness.
A
Today we've got a lot to cover, so much of it oriented around generative AI and the legalities of generative AI. So later in our featured segment, we had Rob Driscoll, who's a partner at the law firm Davis Wright Tremaine. He joined to talk about the ripple effects generative AI has had on brand intellectual property, copyright, trademark. We recorded this episode before Disney announced a deal with OpenAI to make more than 200 of Disney's characters available for people to create Sora videos starring them. So we did not get into the specifics of that deal. We also didn't really talk about any particular companies, because Rob is a lawyer, and lawyers generally want to stay away from talking about specific companies. But I feel like this conversation was really perfectly timed, because when the Disney-OpenAI deal was announced a day after we recorded with Rob, I had all of these new questions in my head, like, how did Disney's lawyers sign off on this deal without risking copyright or trademark protections?
B
Yeah. I really appreciated the conversation with Rob. It felt like one of those moments where, I hate to say it, you're sitting in a lecture, someone's talking to you, and you're just like, yeah, well, what about this? What about this? That doesn't make sense. What about this? And he's just kind of on point, answering every question to the best of his abilities. A couple of I-don't-knows in there, because we're still wading through the water on a lot of this. But the Disney deal with OpenAI kind of blew my mind, because it was not one that I saw coming, mostly because Disney has lawsuits against others, like Midjourney in its lawsuit back in July with Universal. There's been kind of a lot of back and forth about AI. I'm almost thinking of this as, like, maybe Disney striking an allegiance here, which I feel like I've said a million times: if you can't beat them, join them.
A
Yeah, that definitely feels like it's the case. I was also super surprised when Seb Joseph, our executive editor of news, sent me a link to, I think it was a Variety story about this announcement. And it's like, wait, they did what? Because, as you mentioned, back in the summer Disney filed a lawsuit against Midjourney for copyright violation. I mean, even a few hours after it announced this deal with OpenAI, Disney said, oh yeah, and by the way, Google, cease and desist. Like, no, you cannot be using any of our IP in any of your generative AI. I assume the writing between the lines is: unless you want to pay us for it.
B
Exactly.
A
But that's another interesting thing with this OpenAI deal is like, it's another one of these circular deals for OpenAI where Disney's investing a billion dollars in OpenAI at the same time as I imagine OpenAI would be paying Disney for Sora users to be able to, like, use Mickey Mouse or Cinderella or Darth Vader in AI generated videos, right?
B
Yeah, I imagine so. I also think it's interesting that this applies to its animated characters from Disney and Marvel and Pixar and Star Wars, I think. But they also stated in the reporting that it does not include talent, likeness or voices. So I feel like they're drawing the line at humans here. Now, here's the thing.
A
Scarlett Johansson will be like, excuse me, what?
B
And then we'll repeat that all over again. But I do think it's really interesting that that's where they draw the line. And there's a lot of conversation about, like, oh, there's guardrails. I still feel like there are going to be questions about IP and copyright no matter what guardrails you put in there, because I think there's always going to be some nefarious somebody somewhere doing something that they have no business doing with these poor characters.
A
Oh, yeah. Last night I was talking with my significant other about this, and she's just like, okay, so what? You know, like, what's the big deal? I was like, well, the big deal is I bet there are already people who are planning out their prompts to have Mickey Mouse, Elsa from Frozen, you know, poor Lilo from Lilo & Stitch doing horrible, horrible things.
B
I see what's in this for OpenAI right there. All of the AI companies and LLMs are, like, starved for training data, right? And this gives them a really good in. But I cannot for the life of me, and maybe you can help me here with your hypothesis, if we can both put on our tinfoil hats for a moment. I don't understand what's in this.
A
Have you taken yours off?
B
That was my first mistake. I don't understand what is in this for Disney, especially because, like I said, they've got lawsuits against other companies for the same thing. I just don't understand what's in this for Disney.
A
I imagine, you know, I would love to see the contract here to understand what the language is. I imagine there's a first-mover advantage to Disney doing this, because OpenAI seemed to kind of put out the test balloon on this with that Wall Street Journal article from a month or two ago, talking about, hey, we've got cameos in Sora now, and it'd be great if, you know, McDonald's did a deal for us to put Ronald McDonald in here so people could make videos featuring Ronald McDonald. And it's Disney. It's hard to think of a company that has more valuable IP than Disney, or one that's been more protective of that IP. I mean, thinking about the Disney vault, for example. With Disney doing this deal, I imagine there are terms in there that Disney can now take to a Google, a Meta, a Midjourney on down the list and say, okay, here's what this deal needs to look like, here's what we need to get out of this. And especially if people are coming across more AI-generated videos or AI-generated content, if that were to exclude Disney content, then you risk a horrible world in which all we're doing is watching AI stuff. But if all we're doing is watching AI stuff and not Disney, then Disney loses that connection with audiences.
B
Which takes me back to, I mean, yes, the first-mover advantage here, but I just feel like there are more risks than there are opportunities.
A
I think there's risks on both sides of it.
B
What's the risk in your POV for OpenAI?
A
Oh, I meant more like risks for Disney.
B
Oh, the risk of not doing this. Got it.
A
Mm. Especially if it doesn't do it now but needs to do so later, once the precedents have already been set by, I mean, Warner Bros. has enough going on, but a Warner Bros. or a Universal or a Mattel, let's say, anyone else who has IP, and what they can be doing with that IP or what AI companies can do with that IP. Because one thing I feel like I know about Disney: Disney loves control. And so being a first mover on this gives it an ability to control. So we'll see how much control it actually has once this comes to pass and we see what people are having Belle and the Beast do on Sora.
B
Oh God. Oh God. Yeah, maybe that's it, you know, where there's more of an intrinsic value here for Disney. Like I said at the beginning of this conversation, I'm almost thinking of this as, like, empires being built digitally, where your country, which is Disney, pledges its allegiance, and now they've got some kind of NATO-agreement-type deal, you know, to make sure everybody's on the same accord. That's just who they've decided to partner and team up with out of all of the companies.
A
Yeah. And Disney's been more forward-looking in some respects when it comes to IP. Like, I think it was a year ago, it did a billion-dollar-plus deal with Epic Games to make Disney IP available within Fortnite.
B
Yeah.
A
So there is a precedent of Disney doing things like this. I mean even things like what's the game? Kingdom Hearts.
B
Yeah.
A
Where it brings all these different characters together into, like, a single universe, metaverse, what have you. So where I kind of land on this is: this is Disney looking to set a precedent, looking to be the one to dictate terms. I just would love to know what those terms were.
B
And we will continue waiting on said terms. I know they've got a three-year deal, per reporting from both companies, so we'll see how this shakes out in the end and what precedent has indeed been set. But you know what precedent is not going to be set anytime soon? Advertisers getting the opportunity to have ads show up in some of these LLMs. On Monday, Adweek reported that Google was expected to roll out ads in Gemini next year. The details on the pricing and the format were unclear, but very quickly, Google's VP of ads, Dan Taylor, responded and said the story is based on uninformed anonymous sources who are making inaccurate claims, that there are no ads in the Gemini app and there are no current plans to change that. Now, this comes on the heels of OpenAI's code red and Perplexity stepping away from its ad strategy back in October, and it begs the question: will chatbots get ads anytime soon?
A
Of course they will. It's interesting. Anytime you have an executive at a company come out and very publicly dispute a story or report from any publication, you pay attention. The fact that he first says there are no ads in the Gemini app, well, that's not what Adweek was reporting. Then: there are no current plans to change that. It's the kind of thing where you always try to read into the language of "current plans." So were there plans a week ago, and things changed? Because OpenAI, with that code red, according to the memo that's been reported on by The Information among other outlets, the idea was: we're just going to focus on ChatGPT, and we're going to have to wait on figuring out this ad product, folks. But then, I think it was this past week, there was a Wall Street Journal story reporting that, okay, this code red is set to expire in January, and then it's going to be back to code orange and they can get back to the ad product plans. So I absolutely think ads in chatbots are coming. I kind of wonder, with this ads-in-Gemini thing, if Google's planning to roll Gemini into search, and that's how Dan Taylor is able to kind of split hairs here. Because one thing we've been seeing is Google search getting more and more AI-chatbot-like. There are AI Overviews, and then there's AI Mode. And within this past week, as I've been clicking, I think it's "dive deeper," the language on the button in AI Overviews, it's been taking me into AI Mode. And then this week Google also announced that it's bringing Gemini to Chrome on iOS and, I think, iPadOS. If Gemini is going to be within Chrome, at what point does it stop being Gemini and just become Chrome, a Gemini-powered Chrome, a Gemini-powered search? And at some point we just stop saying Gemini-powered, and it's just search. It's just Chrome, only it's a chatbot experience.
B
Well, that'll give us a reason to believe in the AEO and GEO and all the other godforsaken acronyms that have been floating around here.
A
That's a whole other thing. But I mean, they are. You know, even when we had Dan Taylor on the show back in May, he said that they were testing ads in AI Mode. And there was even a Search Engine Land story within the past month or so in which someone, I think on X, posted a screenshot that reportedly shows ads in AI Mode, and they kind of look like ads in search anyways. So maybe it's also that Google hasn't figured out what ads in an AI chatbot experience should look like. Which, I mean, neither has OpenAI, from all we can tell. Neither has, well, I guess Perplexity technically has, but no one's buying those ads, so.
B
So they still have not figured it out.
A
Exactly.
B
I think that's the whole thing here, right? Because you've got all these signals that point to the road map being there, and the executives at these platforms clearly see it and they've got plans in place. It's not if the shoe is going to drop, it's when, and I think that's kind of the biggest question at this point. I almost wonder if Perplexity kind of stood as a, what's the word, basically a canary in a coal mine, and spooked everyone to be like, hey, maybe let's go back to the drawing board and see what this looks like, so that we can actually make money off of this and not tank our already cash-burning businesses even further by rolling out something that advertisers don't want.
A
Yeah, yeah. Or users, as it were. Because that's something Sam Altman has kind of called into question: if we're putting ads in ChatGPT responses and expecting people to click on those to get the answer that they're looking for, then that means we're not doing that with just the natural response. So he was basically drawing a line that ads inherently undercut the value of the organic response. I really enjoyed the shade thrown there. I don't know that it has to be entirely true, but the shade-throwing is fun.
B
Yeah. I live for tech company executive drama. I think it's fantastic. But yeah, I think we'll just have to continue to wait and see here. They don't have a choice, because my thing always comes back to: you're burning through a ton of cash, man. At some point you're going to have to put that product out there. The sacrificial cow already made the first step. You know, take your notes and move on. We know it's coming. Just put it out there. Let's get this over with.
A
I mean, they're burning through a ton of cash. That said, they just raked in $1 billion from Disney. So if they can get other companies to do some, you know, similar circular deal, and especially if companies are getting more comfortable using these generative AI technologies and incorporating their intellectual property into them, then maybe there is an opportunity for OpenAI to be doing more of these kinds of deals. Which is what makes for such an interesting conversation that you and I had with Rob Driscoll from Davis Wright Tremaine, talking about the copyright considerations and trademark issues with respect to marketers and generative AI technologies.
B
Absolutely. Well, without further ado, we'll stop our yapping and let Rob explain this to us.
A
Rob, welcome to the podcast. Thanks for joining us.
C
Thank you. Happy to be here.
A
Yeah, we're happy to have you. I think Kamika and I are always happy to have a legal expert to speak to, because that's just helpful to get that level of insight. On that note, do we need to preface this entire conversation? I feel like anytime I talk to a lawyer or a financial advisor or accountant type, there's the, like, disclaimer of, this is not legal advice. Do we need to do any of that before we get into things?
C
Yes. Or, I mean, I can say that or you can say that. Really, what I usually say is that I'm just giving you my personal opinions, and I'm not speaking on behalf of any client or my law firm.
A
Okay. It's the kind of thing that's only ever on my mind when I'm talking with, like, a lawyer: oh, things can be construed certain ways.
C
Yes.
A
So, Rob, we wanted to have you on to kind of understand how the conversation and, yes, the lines are developing, or to what extent they're developing, when it comes to this intersection of brands and AI. You know, more and more marketers are using generative AI tools for ads. There are also AI platforms that are looking to get brands to put their mascots on their platforms so that people can create videos starring, I don't know, Mickey Mouse or Ronald McDonald or whoever. And it seems like a massive gray area. Kimeko, you've been all over this topic, talking to people. Before we throw it to Rob, do you even have much of a sense of whether it's gotten any clearer, or is it getting blurrier?
B
No. I think I've said on the show before, I've even talked to influencers and creators about their concerns over AI and content usage and things like that. And the look on their faces when I'm like, do you have any legal clauses or protections in your contracts? And they're like, no, but that's a good idea. So it's really speaking to the gray area that exists here, which is why we're excited to have you, Rob, to kind of make some sense of what's happening, or what's not happening, when it comes to AI and brand and IP and copyright and other acronyms.
A
Yeah. So, Rob, has it gotten any clearer or has it gotten even fuzzier?
C
I think it has gotten clearer in the sense that, as time goes by, companies get a little more comfortable with generative AI. In my view, there's been a pretty striking shift in the last roughly year or so in what companies' attitudes are about generative AI and its use. And what I mean by that is that early on, when ChatGPT came out, when generative AI really hit public consciousness, brands had a lot of fear and a lot of concern about it being used in any kind of marketing context. Just because, and we can get into this in more detail if you'd like, but in general there was a feeling that, unlike in the old days, where you knew that your marketing material was created by a human sitting at a desk using traditional equipment, with generative AI you're getting material that you know is based on prior work, and you don't know exactly what the secret sauce is that produces it. And so for brands, initially there was a lot of concern about anything touching generative AI. In fact, early on there was a lot of activity around brands instructing their ad agencies, their design firms, their production companies: do not use any generative AI tools in connection with our work. As I was saying at the outset, though, that has kind of flipped, such that now a lot of companies are embracing it, and instead of prohibiting the use of generative AI, they're mandating it. They're saying this is a more efficient and cost-effective way to produce a lot of content. So not only are we not telling ad agencies, for example, not to use generative AI; in many cases we're requiring them to use it, because we think that's the most cost-effective way to get the job done. So things have really changed relatively quickly, in my view.
A
What flipped that switch? I understand what would have flipped it from a cost and speed perspective, but what flipped the switch for the brands' legal teams, from them being like, absolutely not, take this down right now, what are you doing?
C
Yeah, well, I should make clear there's a range of different views and attitudes towards generative AI, and some companies are on the more conservative side and some are on the more risk-tolerant side. But I think every time a new technology rolls out, we go through this process where there's curiosity, fear, dabbling with it, figuring it out, trying to understand how the technology interacts with traditional legal structures as well as marketing priorities. So, for example, when social media was new, I am old enough to remember that and have it be a part of my professional career, a lot of companies were sort of tiptoeing into it: how do we use Instagram? Should we be worried that it's an uncontrolled environment relative to traditional media forms? And obviously now the vast majority of companies are not particularly concerned about social media, meaning they don't view it as an environment that entails an unusual amount of risk. I think some of that process is already happening with generative AI. People are seeing it used, getting their heads around what the upsides and downsides are, and just generally becoming more comfortable with it.
B
I'm curious, who did the flip happen for, mostly? Like, who's the appetite growing for? Because I imagine that a financial brand or a mortgage company or something like that may approach it differently than a CPG, or even when you think about a legacy brand versus a digitally native brand.
C
Yeah, that's a good question. I'm not sure how much I can generalize about that, because I do think every company has its own risk tolerance as well as its own aesthetic preferences, which are also part of the picture with generative AI, and other factors. But I do think you're right, Kimeko, that there are certain kinds of companies that historically tend to be more conservative on legal matters, not just with generative AI but across the board. And certainly a financial institution, as a highly regulated company, is more likely to be on the conservative side of that. Sometimes brands that are oriented towards children tend to be a little more conservative, things of that nature. So there may be some broad distinctions you can make, but overall I think it's more of an individual decision, company by company, as to what they feel good about.
A
And it seems like there are two main risk evaluations that any brand, or really any company using generative AI to create content, would need to take into account. One is this question of: how do I know if the generative AI tool is or is not trained on copyrighted content without consent? And therefore, what's my exposure then in creating stuff, even if I'm not putting in, hey, make me a Studio Ghibli-looking video, if what it spits out happens to look like that? I don't know to what extent I'm immune from that. I remember talking to ad agencies two years ago, and they were all saving every prompt. That was the mandate from the top: if you're using any generative AI tools for anything, you have to save all the text of your prompts. So that's one of the risk considerations. Has that been clarified to any degree? What is the potential exposure for a company using a generative AI tool to create content, whether or not they know if that generative AI technology is or is not trained on copyrighted content without consent?
C
Well, there is some risk. In the United States, the way copyright law works, if a person or company generates something new that is basically the same as an existing piece of work, that could be copyright infringement, even if there's no intentional copying happening. And that's really the fear with generative AI: that a company is going to use an AI tool to produce some creative material and just not know that the output is in fact closely similar to something that already exists out there. So that is still a concern. There are some practical things that brands and other creators can do to try to deal with some of those risks. Keeping the prompts, as you were describing, is one thing that companies do. Companies might instruct their employees: don't specifically prompt the AI tool to give you something that looks like something that exists. Don't give me a prompt that's going to be clearly problematic. Most companies that are using AI think it's important that the generative AI output is more of a starting point as opposed to the end product, so there'll be some human intervention, some human creativity applied. Even just having a human look at the thing is valuable. It's not foolproof, obviously, but one might look at something and say, oh, does that sort of remind me of something that's out there? Does that look like it might be an issue? But the idea that the generative AI output might be raw material, not the final product, and that the final product is going to reflect some other significant creative input from a person, is part of how most companies that I'm aware of use generative AI. And I think that helps as well. And as I said, it may just be part of this process of getting comfortable with new technology and the legal risks that are presented. We're still in an early stage, but it may prove to be the case that these risks are out there.
And maybe once in a while there will be a company that is unlucky and uses AI to produce something that turns out to be infringing. But that's probably not going to be the most common scenario. And it may be that companies will say to themselves, well, we're willing to absorb that risk, given that most of the time this works very efficiently.
B
I'm curious about what this looks like on the flip side, right? Because on the one hand, there are brands' concerns about copying other people's copyrighted material and that ending up in their ad. On the other side of that, how protected is a brand's own copyright? So for brands like Coca-Cola, Popeyes, McDonald's, others that have, you know, created the bulk of an ad using generative AI, if someone spoofs that ad, how protected is that material, given that AI had a pretty heavy hand in generating that content? Is there a place where the buck stops?
C
Well, you're identifying what is still a murky area in the law as it applies to generative AI output. There are questions about whether gen AI outputs are copyrightable. Probably, ultimately, it will depend on the level of human involvement in the product. So there is a big question there. But that, too, to me, is an area where different brands will have a different perspective on what the issue means to them. Meaning there may be some brands who just say, look, we don't want to produce advertising material that we can't own. It's important to us that we own it, that we control it, and also that we can ensure that no one else is doing something that is creatively very similar, such that it would be infringing. Whereas others might say, okay, we get it. We may not own this material. In fact, the AI tool may spit out similar material for other users, and it may be kind of a crowded field, but we're okay with that because we like the output, or we like the message, and we just understand that's part of the picture you get when you use generative AI. So I think that, too, just depends on the priorities and sensitivities each company may have.
A
So let's say Digiday had a mascot, and we copyrighted this mascot for whatever godforsaken reason, and we throw it into a generative AI tool to create ads for Digiday. But now that generative AI tool has this image of the mascot and, directly or indirectly, starts reproducing it for others. Would Digiday still be in a position to call out copyright violations for that use? Or, by virtue of us putting this mascot that I've just completely fabricated into the tools, do we kind of lose that right?
C
Yeah. Well, there are a bunch of issues that come up with that scenario. One thing is that, in my experience, many companies, at least the ones that are more sophisticated about using generative AI, are not using off-the-shelf, publicly accessible tools; they're using enterprise software that they have licensed. And usually one of the terms of the license is that their inputs are not going to be used to train a model or to generate output for other users of the platform. So typically, companies, again, the sophisticated ones at least, do view that as a potential problem, and they want to make sure the tools they're using don't allow for it. They want a closed-off system, so that their inputs aren't being accessed or used to create outputs for other people. The other thing to mention is that brands, trademarks, logos, mascots, as you mentioned, companies tend to hold those close, more so than, say, photographs, text or other kinds of material that they may own from a copyright perspective. Trademarks, logos, things that identify the brand are usually treated more carefully. And I actually believe most companies would be reluctant to allow their brand identifiers to be manipulated in the ways that you're describing. I know some companies have done that, so clearly there's a difference of opinion amongst the various companies. But most companies really view their trademarks and logos as being kind of special, deserving of extra care, and really things that they don't want modified or adapted. There's also a legal underpinning to that, in one of the features of trademark law, and we're usually talking about trademark law when we talk about brands, logos, things of that nature. 
One of the principles of trademark law is that trademarks have to be used consistently, in a manner that's under the control of the trademark owner. So if a trademark owner takes their logo, for example, and allows it to be adapted, modified, animated, to have things done to it by other people who aren't under the control of the brand, there's at least a technical legal argument that that's actually diminishing the legal protection available for that trademark. So that's the legal thing that underlies some of the more brand-oriented concerns that companies have about protecting their trademarks.
A
So it's really interesting, because one thing Kimeko and I have talked about on the show before, in my diseased mind thinking about all these AI-only feeds and AI-generated apps, is: what are going to be the ad products here? What if there's a way where users make videos that incorporate some brand mascot or other kind of signifier, and there's some sort of money changing hands? It's product placement with AI, but it's a way to not just have more ads slapped in the middle of content feeds, because, Jesus, we have enough of those. But it sounds like, if that were the case, that would be a big risk for the brands, to allow that kind of AI-generated product placement in user-generated videos. That's too many hyphens in that sentence.
C
Yeah, it could be. And I apologize, I feel like my answer to every question is "it depends" and "everyone has a different point of view." But that's a little bit true here too. I guess what I would say is, the way trademark law works is very rigid in its insistence on control by the trademark owner. That is just kind of in tension with the freedom that comes in a scenario where a brand is allowing users to use AI tools to adapt or repurpose or recontextualize trademarks.
A
Okay. Do you expect to have conversations with clients next year where a lot of the work you're doing is really determining the language around usage in this respect? Or maybe you're already having these kinds of conversations and figuring out what the terms would be: if a brand wants to let people on a platform create AI-generated videos using its logo but still wants to maintain control, and also wants to stay somewhat firmly within trademark law or trademark protections, what that language could look like.
C
Yeah, yeah. So I would say that's a conversation that's already happening. One thing I'll mention is that the same issues we're describing with respect to trademarks also come into play when you just think about people and their images. There are some platforms that are making deals with celebrities, for example, to allow replicas of their images to be created. And so the celebrities have the same general types of concerns as the brands. They may be saying, okay, I think that's cool, I'm happy for my image to be usable by people who are doing creative things, and that's all fine, but there are some parameters around that I'd like to see implemented. Celebrities don't want to be shown in ways that are derogatory or offensive, things of that nature. So, yeah, it's the same kind of issue. Whether it's a brand or a celebrity that is looking at enabling adaptation of a valuable asset, they have to think about what parameters and restrictions they want to impose around that, and how they get those imposed, too. What's the technical capability of enforcing those rules?
A
Kimeko, you've reported.
B
I actually wanted to ask about the ramifications. That's not quite the right word, but let's go with it for the sake of the conversation. What legal recourse is available to you? Is it just a matter of a cease and desist? Or do we have to wait until the mounting legal cases already in place play out, to get some type of precedent set for any type of legal recourse?
C
Yeah, well, I'm mostly talking about contexts where, again, whether it's a brand or a celebrity, they're actually authorizing the use of their IP or their image on a platform. There it's largely a contractual issue: they will establish terms on which they will allow their property to be used. Beyond that, the question of whether IP owners have recourse if their IP is used in generative AI outputs is a very big question; I'll have to think about whether I can answer that succinctly. But what I would say is that a lot of the legal rules we already have, and have had for a long time, will work in the generative AI context. Meaning, if someone is using generative AI to produce an adaptation of an existing film, for example, you would look at that the same way you would look at an adaptation of a film made without generative AI. It'd be a question of...
A
Fan fiction.
C
Yeah, yeah. You'd be asking, is this a fair use from a copyright perspective? Is there any other legal issue with it? So the generative AI itself does not necessarily change the analysis as to whether the fan fiction, or whatever the adaptation is, is allowed. But as with many other tech developments, generative AI allows these things to happen so much faster and more easily than in the past. Things that would have taken an enormous amount of effort and time can now be done in seconds. From that perspective, it's very different. But I do think a lot of the legal rules will still suffice in analyzing these things.
A
And to what you mentioned about name, image, and likeness: to what extent would the existing language apply to AI, or specifically generative AI, or not? I was talking to an executive at an agency that does a lot of deals between brands and influencers, and I asked this person yesterday, does existing NIL language cover AI use, or would there need to be new terms around this? And this person was just like, oh yeah, it's covered. They're not a lawyer, they're not a legal expert; their job is different from that. To what extent is that true or not? I imagine this could be "it depends" too, but I wonder what it would depend on.
C
Yes, well, I think what we've seen a lot of in the last few years is companies figuring out the extent to which their existing, traditional contract language allows them to do what they want to do with generative AI, and also whether that language protects their assets in the way they want. Early on, when GenAI was first bursting onto the scene, there was a lot of looking at existing contracts and figuring out, well, what do these terms mean in the context of generative AI? They weren't written with that in mind. But whether it's a license allowing a company to use existing material or something that's restrictive, in either case you'd be looking at it and asking, how does this apply here? What does this mean in this context? So you have a range of possibilities. Some companies in their contracting are being kind of subtle, trying to get to a place where they feel they can do what they want even if they're not specifically describing the use of generative AI in their terms. Others are trying to address it in a much more head-on way. So you are right, the answer is it depends. But it is important, especially for things like talent contracts, content distribution agreements, and any kind of production agreements. Contracts that are more than a few years old probably don't directly address these issues, so you're really in a situation of looking at them and trying to understand what you think is possible under some old language.
B
Yeah. I'm curious if any of the language that's come across your desk has changed. For instance, if it once said no person should do X, Y, and Z, does it now say no person or entity should do X, Y, and Z, to make sure it becomes an all-encompassing document?
C
Things have changed a lot. Again, it really depends on whose point of view is being reflected and what they're trying to accomplish. Meaning, in a license agreement, say a distribution agreement for entertainment content, the owner of the property, the licensor, is now likely to be much more specific about whether or not the content can be used for any kind of AI model training, for example. That wasn't happening until fairly recently. And then on the flip side, the licensee may be trying, either subtly or more directly, to get some rights to in fact use the content for AI training. There's a pretty high level of sophistication among a broad group of companies; this issue is not brand new anymore. So I think most companies are thinking about it and trying to address it.
B
I don't know if this is too far-fetched or too far out of left field, but like I mentioned at the top of this call, I don't see a lot of influencers themselves coming up with this type of language. Influencer agencies are kind of in the same boat. Is there any reason that would strike you as to why they're not coming up with this language? Is the onus instead on the brands that are striking these deals with influencers?
C
I think a lot of influencers, and talent more generally, including actors, voiceover artists, all kinds of creative folks, really are interested in ensuring that AI tools are not going to be used to create replicas of them. That is a pretty hot issue. It's been reflected in the SAG-AFTRA agreement, it's a feature of a lot of audiobook production contracts, it comes up all over the place. And you can understand that an actor would like to know that their livelihood is not going to be seriously undermined by the capability of replicating them without their consent. So I actually do think a lot of the talent side are concerned and trying to expressly restrict those kinds of activities. This may be a little specific, but the fact of the matter is that when you talk about influencers especially, a lot of times those contracts, even setting aside AI usage, are very detailed about how the influencer can be used in the marketing campaign, what their services are, and that kind of thing. So to some degree there's inherently a limitation on what can happen with that influencer's likeness, even separately from the question of whether AI can be used to create replicas. The same is true in advertising and marketing generally, including with traditional actors. If you hire an actor to appear in a campaign, sometimes the contract is very specific: you can produce three 30-second TV commercials and 10 print ads. And when you have those kinds of restrictions in an agreement, even if you haven't put particular limitations on generative AI, inherently there is some protection for the talent that they're not going to just appear in an endless quantity of materials produced by AI.
B
Which kind of goes back to the point you made earlier, that some of the language that's already in place extends to generative AI.
A
Yes. Rob, as you've been mentioning, a lot of times it depends; there are all kinds of variables and factors on which decisions or consequences depend. I feel like a lot of times in legal situations, what this can depend on is legal precedents that have been set. We've had some cases get decided this year when it came to copyright and fair use, with respect to Meta and Anthropic, and I'm sure there are other cases that just weren't as high profile. Have any precedents been set to help clarify things?
C
Well, I would say the two areas where there is still uncertainty are the use of copyrighted material for AI model training, which is where most of the litigation has occurred in the United States, so that's still an issue, and the extent of copyrightability of generative AI output. You may know there was a Copyright Office decision on that, but it's still a bit murky as well.
A
Wasn't the Copyright Office supposed to issue new rules or new guidance? But I don't know if that ever happened.
C
I'm not sure where they are on that.
A
I don't know if anyone does.
C
To be honest with you, I'm not sure. But from that perspective, there have been cases, there have been decisions, there's been agency guidance, and that's important. What we haven't seen a lot of activity around is what I would call garden-variety cases dealing with a specific AI output being infringing, or say an AI output where you've got a person who looks a lot like another person, so there might be a right of publicity claim. We haven't seen a lot of that yet. But I do think, and this goes to what I was saying earlier, the fact that generative AI was used to produce the output is not really the most important thing about those cases. Meaning, if I'm a celebrity and I see an image, say online somewhere, that I think is me or looks an awful lot like me, and I want to try to prevent that, it doesn't particularly matter whether that image was created using generative AI or a paintbrush. If it looks like me, it looks like me. And the same is mostly true with respect to copyrights. So we haven't seen a lot of that litigation yet on the infringement questions; the focus really has been more on the AI training.
A
Rob, should Kimeko and I try to go copyright our voices? Would that be smart of us? It depends, I'm sure.
C
Yeah. Well, you may want to join SAG-AFTRA. They will assist you.
A
Awesome. All right. Well, Rob, this was really helpful to get some clarity on how things are shaping up. I'm sure we'll get more clarity in the next year, maybe as cases get decided or we see what the best practices are for companies. But thanks so much for coming on the show and talking with Kimeko and me about this.
C
Happy to talk with you.
B
Well, that brings us to the end of this episode of the Digiday Podcast. Thank you to everyone for listening. And please don't forget to share this episode with someone who you think would enjoy it. You can even rate us and leave us a comment on Apple Podcasts. We'll be back next week with another episode of the Digiday Podcast. Thank you so much for joining us.
This episode explores the intersection of generative AI and intellectual property (IP) rights, touching on the recently announced Disney-OpenAI deal. The hosts, Kimeko McCoy and Tim Peterson, delve into the implications for brands, agencies, and legal frameworks. Their featured guest is Rob Driscoll, a partner at Davis Wright Tremaine, who unpacks how companies are navigating legal ambiguities around copyright, trademarks, and AI-generated content.
The hosts maintain a conversational, curious, and lightly irreverent tone, balancing technical insight with pop culture references and playful asides (“A show for those people or a show for Mickey Mouse's lawyers?”). Rob Driscoll’s contributions are measured, practical, and precise, frequently noting the case-by-case nature of legal risk in this fast-evolving field.
This episode of The Digiday Podcast serves as a wide-ranging primer on the state of generative AI and IP law in marketing and media. The sudden shift in Disney’s approach—from litigation to licensing—reflects the industry’s acceleration toward embracing AI despite unresolved legal questions. As companies rush to adapt, the conversation highlights the need for evolving contracts, vigilant brand protection, and the inevitability of ads (and legal test cases) in the generative AI space. As summed up by Rob Driscoll, “It depends”—but that’s changing quickly.