Transcript
Podcast Host (0:00)
This podcast is brought to you by Audiohook, the leading independent audio DSP. Audiohook has direct publisher integrations into all major podcast and streaming radio platforms, providing 40% more inventory than what could be accessed in omnichannel DSPs. What's more, Audiohook has full transcripts on more than 90% of all podcast inventory, enabling advanced contextual targeting and brand suitability. Audiohook is so confident that, in addition to CPM buys, they offer the industry's only pay-for-performance option, where brands can scale audio and podcasting with peace of mind, knowing they are only paying for outcomes. Visit audiohook.com to learn more. That's audiohook.com.
Alan Chappelle (0:51)
Welcome to the Monopoly Report. The Monopoly Report is dedicated to chronicling and analyzing the impact of antitrust and other regulations on the global advertising economy. If you are new to the Monopoly Report, you can subscribe to our weekly newsletter at monopoly-report.com, and you can check out all of the Monopoly Report podcasts at monopolyreportpod.com. I'm Alan Chappelle. This week my guest is Professor Helen Nissenbaum. Professor Nissenbaum is the Andrew H. and Ann R. Tisch Professor of Information Science and the founding Director of the Digital Life Initiative at Cornell Tech. Her research spans issues of bias, trust, security, public autonomy, and accountability in digital systems, most notably privacy as contextual integrity. While Professor Nissenbaum is really accomplished, she is not necessarily an expert in the ad space. Nonetheless, she has written on a bunch of topics like profiling and privacy-enhancing technologies, and Helen is an influential voice within the larger privacy community. She is also someone who has been critical of the ad space, and I think it's really important for all of us to listen to our critics. So let's get to it. Hi Helen, thanks for coming on the pod. How are you?
Helen Nissenbaum (2:07)
I'm fine. Hi Alan, thanks for having me.
Alan Chappelle (2:10)
I'm really looking forward to this discussion, and hopefully we can get this done before the big thunderstorm that's apparently heading to the East Coast arrives. So my first question is this: the conventional view is that first-party data is more privacy-safe than third-party data. That view has been encapsulated into the ad industry's self-regulatory codes, and it was a concept that was arguably even supported by the FTC years ago. So can you summarize your understanding of what first-party data is, and share why you think that notion, that first party might be better, might be misguided?
Helen Nissenbaum (2:50)
I do think it's misguided, and I hope you'll forgive me, because to explain why, I need to talk a little bit about the theory of contextual integrity. There are two common views on privacy. One is that privacy is akin to secrecy, so you have more privacy if the data is withheld and kept secret. You'll often see that concept of privacy at play in computer science material, even though it's not completely explicated as such. And then there's the view of privacy as control over information about yourself. Now, contextual integrity partly absorbs those two intuitions, but mainly the starting position is that data flows, and often the flow of data is very productive. We need it, society needs it, different values need it, like health or commerce and so on. So the important aspect of privacy that contextual integrity tries to grab onto is the idea of appropriate flow. What we mean by appropriate flow, according to this theory, is flow that is constrained by certain contextual norms. They're like social norms or social rules, and those rules depend on various, what I call, parameters. But you can say factors. One of the factors is the recipient of the data: who's getting the data, who's sharing it, who's it about, what kind of data it is, and under what constraints is the data being shared. Now, sometimes a recipient of the data, which we can think about as the first-party recipient of the data, could be a problematic recipient. So, for example, and we commonly hear these kinds of examples, if you're law enforcement and you're surveilling a population and placing people under, you know, facial recognition and so on, and that is not considered acceptable in a democratic society, then even the first party can be considered problematic. So it really depends on these other factors, not simply on whether the recipient is a first party or a third party.
Now, as a matter of fact, it's often the case that when we are sharing information with first parties, we're doing it in a very conscious way. And so, if you look at the cases of first parties and third parties, we often find that first parties are more acceptable recipients of information about us. But there's nothing necessary about that.
