Curt Nickisch
You're listening to Is Business Broken?, a podcast from the Mehrotra Institute for Business, Markets and Society at Boston University Questrom School of Business. I'm Curt Nickisch. Today we continue our conversation about misinformation online with a live panel we recorded recently at CitySpace. We start by discussing Section 230, a law at the heart of debates over free speech and content moderation on the Internet. Passed with bipartisan support in the 1990s, Section 230 shields social media platforms from liability for user-generated content. But in today's digital landscape, critics argue that it has empowered tech giants to sidestep accountability. We explore how the Internet has evolved since the law's inception, what responsibilities platforms should bear, and the challenges in balancing free expression with content regulation. Our panel includes Marshall Van Alstyne, Allen and Kelli Questrom Professor in Information Systems at Boston University Questrom School of Business; Nadine Strossen, professor of law at New York Law School and former president of the American Civil Liberties Union; and Michael Masnick, CEO and founder of the Copia Institute and its publication, Techdirt. First, let's just give everybody a big round of applause and welcome. Marshall, I'm going to turn to you to get us rolling here. Let's set the scene and go back to this thing called Section 230, a law that was passed years ago and is kind of responsible for the way the Internet works today. What is the foundation for all of this, from a legal and practical market perspective?
Marshall Van Alstyne
So just as background, Section 230 is the Internet law that protects platform companies from liability both for user-generated content, the third-party-generated content, and for their editorial decisions around that content. It dates from 1996, and it distinguishes platforms from common carriers and other companies, giving them editorial rights in order to reduce harassment, pornography, child sexual abuse material, all those kinds of things.
Curt Nickisch
Let me ask a little bit more about that. In 1996, what was Congress worried about at the time?
Nadine Strossen
Section 230 passed, to the best of my knowledge, unanimously. This was not a partisan matter. It was designed consistent with First Amendment principles that already protected the speech at issue, but to make it absolutely clear, and easier to enforce, that these companies could not be held liable for the massive amounts of third-party content being put on their platforms. If they were held liable the way a newspaper is, they would really have two choices. Either they would have to be extremely rigid gatekeepers, thereby obliterating the unique promise of the Internet to provide an unprecedented mass media platform where everybody in the world could speak to everybody else in the world, which couldn't happen if these gatekeepers had to screen everything for fear of liability. The other alternative would have been to do no screening at all, and the Internet would then become not only family-unfriendly, but unfriendly to all of us who hate spam, for example. Spam is constitutionally protected speech. So Congress said, look, we want to give you the latitude to choose to engage in some content moderation, and you can do that without being held liable for the content that you don't screen. I think it was completely consistent with First Amendment values, and also consistent with making the Internet a place where many of us could find communities and platforms that were consistent with our values.
Curt Nickisch
And our interests. And also consistent, probably, at that time, with this idea that we have this huge new economic engine: let's not fetter it, let's let it roll, because it's taking us into the future, even though we don't know exactly how it's going to end up. Mike, all that makes sense for why Section 230 was passed the way it was. But it's a different Internet today. I'm just curious, what are the practical ramifications of it? What has it brought us up to now that's a problem?
Michael Masnick
I'm actually going to argue that maybe it's not such a problem, and that the Internet might not be as different as people think. Obviously there are some differences between the Internet today and back then, but the general principles behind Section 230 made sense then, and I think still make sense today. If anything has changed, it's that the Internet has become more and more central to our lives. It's become important in ways that people certainly didn't predict, and it is something that most people can't really live without, for very good reasons. Some of it is that it's providing all sorts of information, but it also provides the different services and tools that many of us rely on for all sorts of useful things. With that has come some speech that people certainly have problems with or disagree with or worry about, but that doesn't change the fundamental principles behind Section 230, which, if you just view it as a tool for saying who is liable for a particular piece of speech, actually makes a lot of sense. And I think that many of the concerns we see about it today are other concerns that people are putting on Section 230 for a variety of reasons: whether it's that it's really the First Amendment they're upset with, and they know they can't change the First Amendment, so they blame Section 230; or that there are other societal-level problems, in some cases mental health problems or other issues, that society has struggled with for decades.
Curt Nickisch
I mean, it's striking that while platforms are getting a big pass from Section 230, they also have incredible power. You were talking about how they were either publishers or carriers, or both. Can you just explain that a little bit more? What does Section 230 help platforms be, and how does disinformation as a problem play into that?
Marshall Van Alstyne
One of the interesting things that many of you may know: if you treat someone as a common carrier, like AT&T, they have to carry all content. So if you use a telephone to make a terrorist recruiting call, AT&T is not going to be liable, but they had to carry your phone call. That's under the common carrier rules. Not true for publishers. Publishers are liable for what they publish because they have editorial rights, and therefore their editorial decisions subject them to liability for terrorist recruiting on the platform that you wouldn't have in the common carrier case. Platforms are kind of interesting in that they have unbundled these rights and responsibilities, and they've gotten the best of both. They don't carry the liability for amplifying awful-but-lawful content, but they also have complete editorial rights. So on one hand, common carriers have no liability, but they have to carry everything; publishers can carry anything they want, but they do carry liability; and platforms can choose what to carry or not carry, and they don't have liability. So that's the complication. That's where this gets nuanced and challenging. Let me invite Mike and Nadine to jump in, because it's a very nuanced subject.
Nadine Strossen
I want to jump in on the premise, because I think that you pose the question, Curt, the way most people do, as if Section 230 somehow benefits the platforms. But I would contend, and this goes back to the origins of 230, and I'm old enough that I remember when it was being legislated and enacted, that the purpose was not to benefit platforms or whoever was running Internet services at the time, but users: those of us who were not being given platforms on the centrally controlled, gatekept forums such as the New York Times or Fox News and so forth. If we were to realize the benefit of the Internet, whoever was providing services would have to have an incentive not to go through extraordinary screening. So we are all benefited. And in terms of the intermediaries who are directly provided immunity, it's not only the giant tech platforms; it's any online service that provides a platform for any third party. It could even be any of us, if we forward an email, if we forward a tweet, or if you're a little neighborhood niche blog and you allow comments or reviews. In fact, I think if Section 230 were reduced in its protective scope, the major beneficiaries would be the existing tech titans, because they have the resources to cope with additional regulation, and it's the smaller speakers and platforms who don't. So I understand the hostility toward the dominance of a few tech giants, but I think that reducing the scope of Section 230's protection would be counterproductive.
Curt Nickisch
Mike, let me kick it to you with this question. You can say that all these platforms are carriers and they don't have to be responsible for what's on there, except for child pornography, foreign interference in elections, maybe. But when you're on there, the algorithm sure feels like it's being moderated, right? To me, it just feels like I'm getting certain things fed to me through this company, and it's not this open space that I can just wander around in. It's something different, or at least it feels that way. So with that criticism, maybe, of just kind of saying companies don't need to be responsible and it's just this open, free space: do you recognize that there is pressure to change things, to ask platform companies to moderate things differently and protect users? And what are you worried about there?
Michael Masnick
Yeah. So, I mean, there are a few different ways to answer that question, which I think is a good question. The simple fact is that any of the other platforms, whether it's a newspaper or whatever, also use an algorithm to figure out what belongs on the front page, which stories they think are going to be the most interesting. When it comes to the platforms, there's this idea, and it showed up a little bit in your question, which I would push back on, that Section 230 gives the platforms no incentive to deal with problematic or bad content. The opposite is true. Part of the immunity for moderation is that they can make these decisions and not be sued for them. In fact, that's where the Prodigy case that created Section 230 came from: Prodigy was trying to create a family-friendly environment and trying to moderate, and they got sued. So part of Section 230 is saying that we will not create liability for a platform making those moderation decisions. And there are other incentives beyond the law for different platforms to be better about these things. There's this belief that it has to come from the law, but that's not true. If you have a platform that is full of garbage and abuse and harassment and spam and all these other things, your users will start to go elsewhere. They will want to go elsewhere. It will be an opportunity for someone else. And if your business model is advertising, the advertisers will leave; we're seeing this right now with the platform now known as X. I think the last stat I heard was that 84% of their advertising revenue went away because Elon Musk decided to tone down the moderation and ramp up the algorithms in a different way. So the incentive structure is there. And Section 230 actually protects the companies in their experiments to figure out what is going to create a better, safer overall platform.
Whereas if you took away Section 230, it would become very difficult, and much more problematic, for the companies to try to remove problematic content, because, as we saw with the Prodigy case, if they made a mistake, they would face liability. And to get back to the point that Nadine was making, this creates a situation where the biggest companies will win. Google and Meta have buildings full of lawyers that smaller platforms don't have. If there are lawsuits, they can handle them. It is everybody else, the smaller folks, the other smaller sites, who can't. And as Nadine said, Section 230 protects users. When you forward an email or you retweet something, all of those things are protected under Section 230. None of us have buildings full of lawyers, so Section 230 protects you in those cases as well.
Marshall Van Alstyne
So let me push on one or two ideas here. I actually think for the most part Mike is correct, but I think we need to be careful about platform business models. To a first approximation, it's correct: platforms will try to take responsibility for damage that occurs on platform. They lose advertisers; people flee because of harassment. What they don't do is deal with the externalities, the damage that occurs off platform. So they don't internalize insurrections that happen in Washington or in Sao Paulo. They don't internalize judicial interference, the death threats that happen off platform, the voter intimidation that happens off platform. That's an externality in economic terms, and they're not dealing with the externalities that happen off platform. So I'll agree with the first half: they'll deal with the damage on platform, but they don't deal with the damage off platform.
Curt Nickisch
So Mike just talked about how there can be competition: if you have a bad experience on a platform, you can go somewhere else, or somebody else can come up with something better. But if we change things, the big platforms will win. It sure feels to me like the big platforms are winning now. Google tried to create social media platforms; it didn't work. It's hard. Section 230 was written before we really understood network effects. The switching costs of changing platforms are high. Marshall, as an economist who studies platforms: is a market solution of just saying, oh, I'm going to go to somebody else because they moderate content better, realistic?
Marshall Van Alstyne
So you've raised a really interesting issue. Traditionally, if you wanted to compete with an existing piece of media, you might go form your own printing press, which would be fine in traditional media, but you can't do that in a social network. Google has tried to launch social networks three times and failed. If you try to leave a platform like Twitter or LinkedIn, you can't take your followers with you, or the people whom you follow. So there are network effects that make switching vastly harder and that also entrench the incumbents. We may need to confront some new economic realities compared to the traditional business models. Now, I think there are some interesting solutions that we can try to introduce along those lines. Mike has some that I actually like, along the protocols-not-platforms lines. Candidly, I don't think those go far enough, but I would like to hear him expound on some of those for the audience. So let's draw some breadcrumbs to a couple of solutions, and then I'll invite others to comment; I think there are several others I'd like to offer on top, as possible ways of dealing with the incumbency and network effects that give them so much market power.
Michael Masnick
So I have a whole paper on this. If you do a search on "Protocols, Not Platforms," great.
Nadine Strossen
I highly recommend it.
Michael Masnick
I'm not going to go into all of the details of it, but a very quick approximation, to understand it, is to think about the way that email works today. I'm sure that everyone in the audience listening to this has different types of email accounts. You might have an email account from your ISP, you might have a Gmail account, a Yahoo or Outlook account, whatever it might be; you might have one from your university or from your workplace. And all of those email accounts can communicate with each other. And if you switch your email account, you can take your address book with you, update it, and let people know. So the idea behind protocols is: what if social media and other Internet apps today could work more like email does, where you could have a number of different providers, and different users could communicate across those providers? And if you don't like what one particular provider is doing, you can easily switch to another without losing contact. If you leave Facebook right now because you don't trust Facebook, you don't trust Mark Zuckerberg, and you decide to shut your account down, and all your family continues to use Facebook, you lose out on all of that contact, and you can't share stuff with them as easily as before. Whereas if the world were built more like email, on protocols, then you could leave the platform run by the person you distrust and still communicate, share information, and be in contact with other folks. And I think we're starting to see some efforts today; there are multiple competing efforts to build protocols for social media and for other online services. They're still very early, but we're beginning to see what that world might look like. That's actually one of the things that I'm really excited about as an alternative to the sort of centralized platforms that are so much of the concern we're talking about.
Curt Nickisch
Mike, that's really interesting, this idea. Do you see this as something the industry would do itself? Is congressional action on tweaking Section 230 necessary? Is this a regulatory path, given that big companies may not want to give up that market power? I'm just curious how you see this rolling out if experiments are successful.
Michael Masnick
I mean, one, we're already seeing it today, and there are a few different reasons why; some of it is sort of chaos in the market. Bluesky was kind of a spin-out from Twitter. Twitter was working on this with the idea that it was going to embrace it, because Jack Dorsey saw this as a way to deal with the continued regulatory scrutiny that he was facing, and he thought it would be a better approach. At the same time, we're seeing Meta not go quite as far, but when they launched Threads, which is their competitor to Twitter, they decided to embrace ActivityPub, which is another protocol and another sort of decentralized setup. It doesn't go as far as Bluesky and the AT Protocol do, but it is a really interesting thing for a company that has historically been very proprietary about its data and its setup to embrace ActivityPub and allow people who are not on its platform to communicate with the people on its platform. Some of that might be because of the scrutiny that's there, but it happened without any particular legal change being necessary to get us moving in that direction.
Marshall Van Alstyne
Let me give you an example where I think that's moving in the right direction, but where I still think we need some legal changes to make it work. Many of you may have heard the story of the NYU team that was trying, with user permission, to track what was being shown on Facebook, to see how much misinformation was being shown on Facebook. Facebook said, no, no, no, no, that's the private data of the advertisers. That's bull. This is what folks are being shown; of course you should have a right to analyze what you are being shown. And so Facebook turned off all the APIs to do that. If we extend the laws to empower users, to give users the right to control their own information, then you would have the right to benchmark what you're being shown, to analyze what's being shown, to choose the algorithms that show you those things. So this is a case where I think we need to talk not just about speakers' rights, but also about listeners' rights. What do you as a user have the right to see and do with your own data? And this is where I do think we need some modifications to the laws, to keep Facebook from doing that kind of thing and to allow users to make better use of the information they receive.
Nadine Strossen
I would echo that, acknowledging that it's a difficult issue, as it often is when there are conflicts, or let's say tensions, between freedom of speech, including the editorial discretion of the platforms, and user privacy, which also has free speech implications. Because if you reasonably fear that the price of accessing a platform is forfeiting your privacy, you may be deterred from exercising free speech. In all of these cases, the Supreme Court, the ACLU, and others who care about both free speech and privacy have seen this as a very difficult issue. But I would at least look favorably upon, and many digital free speech experts say they would also look favorably upon, potential legislation that required radical transparency, or greatly increased transparency, on the part of these companies. It's almost a kind of consumer protection approach: rather than directly telling them how they should or should not moderate content, which is what the laws the Supreme Court looked askance at did, this would be more in the nature of procedural protections and information for us consumers, so we can make rational, informed choices about which platforms we use and which we don't.
Marshall Van Alstyne
I would totally support a user-empowerment set of laws, as distinct from government regulation that has the government deciding what should happen. I really think we need to give users the rights, and create more marketplaces for them to trade on what is of value to them, and not have government make those choices.
Curt Nickisch
We're using the word government here, but there's federal and there's state. Just as an example, today the Arkansas attorney general sued YouTube and its parent company Alphabet, saying that the algorithm, the way videos are streamed to youth, is contributing to a mental health crisis. So Nadine, you're familiar with a lot of these legal cases, and Section 230 came around because of cases. We don't need to discuss the merits, but I'm just curious: where do you think the pressure for regulatory changes is going to come from? And do we want a system where states are regulating this differently, or does it need to be a federal solution?
Nadine Strossen
I'm very skeptical toward any regulation, including litigation, that is basically seeking to blame the media, blame the messenger, for deep-rooted social problems. I don't see that social media or the Internet are any different from more traditional media throughout history: especially when a medium is relatively new, whatever the major social concerns are at the time, that new medium is blamed for them. In my lifetime, which is quite long now, I remember, though I was only a child at the time, comic books being blamed for what was then quaintly called juvenile delinquency. Pinball machines, and more recently video games, were blamed for school shootings. And when the Internet first hit the public, political, and media radar screens in the '90s (of course it had been known to scientists long before that), we were having what many now call a moral panic about child sexual abuse. Of course any abuse of one child is a tragedy, but the scope of the problem was greatly exaggerated, which led to exaggerated responses, one of which was to blame the Internet. And that's what led to the so-called Communications Decency Act, which would have censored all kinds of sexually oriented expression online. So I think it's really dangerous, and not only to free speech, to blame the media for what Marshall calls externalities, that is, what people do. People are influenced by countless factors, including everything we see, read, or hear on every medium. So it deprives the rest of us of freedom of speech to crack down on a medium that may have been one factor that led one user to commit an antisocial act. But even more fundamentally, it is a distraction from the real solution to the real problem. If we have a mental health problem, let's look at the underlying causes, and those causes are myriad; let's not just blame a medium that may be depicting some of the problems.
Curt Nickisch
It sounds like... I mean, would you say that a platform isn't responsible for the radicalization of a terrorist, because they're just showing some of that stuff and it's an individual choice?
Nadine Strossen
It is an individual choice. I have been very active, and still am, in international human rights groups which look at those videos, at that material, for purposes of documenting war crimes, for purposes of documenting humanitarian abuses. And the Electronic Frontier Foundation, EFF, which does wonderful work in this area, has documented how measures that were designed to take so-called terrorist content off the Internet have forever stymied the ability to bring prosecutions against those who were committing crimes of genocide, crimes of war, mass rapes. So every expression can have a positive impact or a negative impact. It depends on how individual users process it.
Curt Nickisch
Okay, we have so many great questions from the audience. I want to go to those, and then maybe we'll end with a little bit of where we see this going forward. And Mike, we may have a bunch of these for you, because a lot of them are about specific companies out on the West Coast, so you may be familiar with those. Let's ask one question about net neutrality that's come up on the common carrier versus platform front: a few years ago there was a major debate over net neutrality and its impact on ISPs and other network providers. If I remember correctly, the FCC declined to classify ISPs as common carriers at the time. What's the responsibility of an ISP in this discussion?
Michael Masnick
So I can take that really quickly. I think that is actually an important part of this discussion. It's important to recognize that each individual platform, whether it's Facebook or X or YouTube or whatever, is not the public square, but that the larger Internet itself could be considered the modern public square. And in order to have access to that, that's where the net neutrality question comes in. Net neutrality is very, very nuanced as well, and somewhat tricky, and I don't think we want to go too deep on it. The reality is that the FCC has actually gone back and forth, based on who the President is, on whether or not they're requiring net neutrality of ISPs. There is an argument that it's really a competition issue; again, it all comes back to competition in some way or another. If there were more competition, if it were easier to switch ISPs, then maybe net neutrality would not be as big of an issue. But right now the fact is most people have only one or two options for high-speed Internet access, and therefore they need net neutrality in order to be able to access everything else. But that's a separate issue from the individual services, which are smaller private entities, in terms of what they allow and don't allow on their platforms.
Curt Nickisch
We may be sticking with you here for this question. The company Cloudflare, a major CDN, has decided to stop hosting certain sites over the years. They're arguably a core piece of the technical infrastructure. Does that make them different than, say, Facebook, or should they be treated more like a common carrier? You're nodding, Mike.
Michael Masnick
Yeah, so, I mean, Cloudflare is a really interesting case, and I've spoken to people there for years, because I think they've very much struggled with this. There are these questions, as we talk about the different layers of the infrastructure, of how much responsibility each layer should have versus how much it should be considered a common carrier. And Cloudflare has really struggled with that. It's a really difficult problem. They have tried to take as hands-off an approach as possible, but there have been cases where it became too much. In one sort of famous case, the CEO and founder of Cloudflare determined that the Daily Stormer, a famous white supremacist neo-Nazi site, was not something he felt comfortable protecting anymore, and he just decided to turn it off. But at the same time he wrote a piece saying: I'm uncomfortable that I have the power to make that decision, and it should be a wider discussion. I don't think too many people took him up on the idea that there should be a discussion of whether or not it should be Cloudflare's decision. It is a challenge. As you get further and further into the infrastructure layer, the general feeling, and I feel pretty strongly this way too, is that it should be treated closer and closer to a common carrier. As you get closer to the edge, closer to the point that the user is interacting with directly, services should have more freedom to determine their own editorial policy, what they allow on their property. If you're just at the pipe level, shipping things back and forth from A to B, that's different. One of the important things about common carriers that gets confused in this discussion: common carrier has that word "carrier" in it because the history of it is taking you from point A to point B and then leaving you alone. That's it.
Getting you from here to there, and allowing anyone to do that, is fine. But when you start talking about it for things like social media or YouTube, what does that mean? Do you have to host the content forever? Is there no end to how long you have to host it? Common carrier structures don't make sense for things that are hosting content, where there could be reasons you'd want to pull it down later. So I think you have to think about what layer of the infrastructure you're at, and then recognize that there are really difficult and nuanced trade-offs with each of these options as you go. But Cloudflare is definitely an interesting one to follow.
Curt Nickisch
Mike and Nadine, I do have questions for you, but I'm going to stick with Mike here for just one more second. This question: where do you think decentralized social platforms such as Farcaster and Lens play into this, and what's your opinion about them?
Michael Masnick
So, like Farcaster and Nostr and Lens, and the ones I mentioned earlier, ActivityPub and Bluesky's AT Protocol, these are all attempts to build these more decentralized systems. And that's where a lot of my focus and interest is these days. I think there's a lot of interesting opportunity, but they're all very early, it's all very experimental, and we're seeing certain things work and certain things not work. We're in that learning stage. But I think we should encourage those types of systems to exist, where it's not just controlled by, you know, one billionaire sitting somewhere within a 25-mile circle around where I'm sitting right now.
Curt Nickisch
Not to mention any names.
Michael Masnick
Yeah, not to name any names. But, you know, think of all of them: they're all probably within 25 miles of where I'm sitting. And so having a world where it is more decentralized, where the power moves out to the ends of the network, where the users are empowered, that, I think all of us agree, is the world we should be looking at. So I'm encouraged by all of these different experiments, and we'll sort of see which ones actually catch on, because at the end of the day, some of these are more technically interesting than interesting to users. The only thing that is going to matter in the end is whether they are actually useful to the actual users.
Marshall Van Alstine
To that, because I actually think there's an extension of this, which I think the decentralization, in the end will be the solution. Separately, we've been working on research here at the business school Equestrim. I now think it's possible to reduce the flow of misinformation with no censorship at all and no central authority judging truth. But it hinges on marketplace design. So it's not just a technology question, it's going to be a trading on rights question. And we may be able to do this in such a way that users and speakers and listeners in a decentralized ecosystem can clean this system up. So my sincere belief is that we're at a point where we may be able to move in the direction of completely decentralized systems. But it's a conjunction of computer technology and law and economics were actually creating the rights for these things to happen. So I think decentralization ultimately will be the way to go.
Michael Masnick
The one thing I'll very quickly add, and I know there are other questions and I'm talking too much, is that Section 230, I actually think, is fairly important to allowing these decentralized systems to exist. Because if you took it away, it would be much harder for any of these experiments to get very far without being sued out of existence. And so if you want these systems to exist, Section 230 is incredibly important.
Marshall Van Alstine
That's the tiny point of disagreement that we can come back to after that.
Kurt Nickish
Yeah, you talk about taking Section 230 away, and there is conversation on the right about deplatforming, right, just taking Section 230 away altogether. Let's stick with this idea of the large and small competitors in the market. This question is maybe for you, Marshall: since large platforms have tremendous market power, is antitrust a way to manage the harm to smaller competitors? We've talked about making changes, but we don't want to make the big companies too powerful or hurt the small ones. But we've got antitrust, right?
Marshall Van Alstine
Wonderful question, and it's a fascinating issue. I've actually been involved in this; I helped review some of the Digital Markets Act and some of the legislation coming out of Europe. We've got to distinguish between two different things: antitrust, often used to address market imperfections of monopoly power and the misuse of monopoly power, versus some of the free speech questions with the First Amendment and expression. Again, I'd like to back off from having government make those determinations. So I agree that we can actually try to reduce some of the market power. But if users gained the right to import the algorithm, what we're doing is creating competition on top of the infrastructure. So you could then choose between an Amazon, a BBC, a Breitbart algorithm on top of Facebook. That's creating the competition in a way that lets users decide what's going to be in their best interest, without the government stepping in and saying, okay, we're going to split Facebook into three different companies. A real problem that we face is that antitrust in the 21st century is different from what it was in the 20th century. In the 20th century, antitrust was based on supply-side economies of scale: large fixed costs, low marginal costs. In the 21st century, our giant monopolies are based on network effects, demand-side economies of scale. And I will assert that most of the interventions that worked for supply-side economies of scale don't work very well for demand-side economies of scale. The reason is that when you cleave networks into smaller and smaller and smaller portions, you're not creating network effects, so your interventions inadvertently destroy value. So we need better systems for creating new kinds of competition than our traditional mechanisms of breakup. Let me give you one or two quick points on why that's the case if you use traditional antitrust mechanisms.
So, for example, below-marginal-cost pricing is a test of predation. In the Internet economy, zero pricing is profit maximizing. It's devastating to competition, but it's profit maximizing. I did some of the original mathematics on two-sided markets, so you have to use a different model if that's going to be the case. Another one: what's the test of monopoly power? You're restricting output in order to charge higher prices. Is Amazon restricting your purchases? Is Google restricting your searches? Is Facebook restricting your posts? It's the opposite. Your traditional tests of antitrust aren't working in the demand-side economies of scale. So we need a different set of economic tools that will be more free speech friendly. So yes, we need some interventions, but we need different ones than the ones we've been trying.
Michael Masnick
Can I make one suggestion for a potentially interesting intervention that I'm not sure Marshall would agree with, but that I think actually gets at some of these things? It's an approach that is better than destroying Section 230, and it might be better than antitrust as well, because I agree with you that antitrust doesn't fit neatly here. There is one law that I think is getting in the way of a lot of this, which is the CFAA, the Computer Fraud and Abuse Act. It's normally thought of as an anti-hacking law, and Facebook in particular has used it to stop people from building algorithms. The case you mentioned earlier, of the NYU researchers trying to get advertising data out of Facebook: Facebook threatened them with a CFAA claim, a violation there. Facebook has used the CFAA to block third-party competitors who try to build algorithms for Facebook. If we fix that and allow third parties to build the algorithms, by changing the CFAA, saying that Facebook can't threaten them, that you can build these algorithms, that you can empower the users, that you can build third-party services, you get at much of what you're talking about without resorting to antitrust and without changing Section 230. So I don't understand why CFAA reform gets no attention in this debate, but I think it's actually super important.
Kurt Nickish
Yeah. Well, there's our next panel, I guess; in the last four minutes you brought up a new law. That's great. So let's round it out now. Let's stick with you, Mike: do you want to change Section 230 at all? You've talked about the way you want things to go, but if you would touch it, would you change it? And where do you actually see things going forward?
Michael Masnick
Yeah, no, I would pretty much leave Section 230 as is. If anything, I would roll back the change that was made to it in 2018, which I think has been very problematic; we don't have to get into what that was. And I would fix a typo in there, which is actually kind of being litigated over right now. I don't need to get into what that is either, but it's creating an issue right now in a lawsuit based out of Boston, by the way. So I would leave Section 230 alone. I would look at these other things like CFAA reform. I think there are other areas of patent and copyright reform that could have much more of an effect and would allow for this user empowerment. I think having better privacy regulations that, again, empower the users to control their own data is a much more effective thing. Section 230 as it stands has been incredibly powerful and incredibly useful for users, and for people to be able to speak and to make the Internet in the form that they want. There are problems on the Internet, but they're not caused by Section 230.
Nadine Strossen
I agree with Mike, and I'd like to add a point that is consistent with free speech principles and more effective in dealing with the disinformation problem, which we mentioned but didn't really get into. And that is critical media literacy skills. From the earliest age on, we have to equip all of us to do our very best to sort the true from the false, the misleading from the accurate, the fake news or disinformation (use whatever term you want) from verifiable sources. Even if we had the most totalitarian restrictions in the world, it would be impossible to throttle all disinformation. So the only possible approach, to use an economic analogy, is to shift our focus away from the supply side, because we're never going to eliminate the supply of disinformation or other potentially harmful speech, including speech that might have an adverse impact on mental health, or hate speech, you name it, terrorist speech. This reminds me of other kinds of prohibition strategies, which I think always fail; I'd be interested to hear what an economist thinks about that. We have to focus more on the demand side: reduce people's demand for and interest in these kinds of potentially dangerous, harmful kinds of expression, and if they are exposed to it, increase their resilience and their resistance to it.
Marshall Van Alstine
So I'm going to make a prediction, and my prediction is that some changes to Section 230 will happen. A couple of reasons for thinking that's the case: if you look outside the United States, they're way more interventionist. Twitter/X was just blocked in Brazil. The CEO of Telegram was just arrested in France for illegal activity taking place on the platform. Japan is writing laws to keep people from being scammed out of their life savings, to hold Facebook accountable for people using Facebook ads to scam people out of their savings. I like to draw an analogy: I think we see an information pollution problem today analogous to the industrial pollution problem we saw a century and a half ago. And it took a little while to get the laws correct, the labor laws, the things to deal with the pollution problems. So I do think that we're going to need some changes, and I think small changes can actually get us most of the way there. In particular, one of the changes I think we need is to give users more rights to choose their own algorithms, even in those cases when the platforms would prevent them from doing so, as in the case of the NYU study, or in the case where I want to follow Nadine and Elon Musk says, no, no, no, you can't do that. That doesn't strike me as legitimate, and I do think we need changes to enable that. And the last thought is, I think if we do market design in a way that actually helps to address these pollution externality problems, we can then arrive at completely decentralized solutions without government intervention. And I'm hoping that we can get the best of both: clean systems and free speech.
Kurt Nickish
Marshall, Nadine, Mike, we really appreciate your expertise and your time and your perspectives. Thanks. And let's give this amazing panel a big hand.
Marshall Van Alstine
Thank you.
Kurt Nickish
That's Marshall van Alstine, Nadine Strossen and Michael Masnick. For more episodes, please follow the show on Apple Podcasts, Spotify or wherever you listen. Thanks for listening to Is Business Broken? I'm Kurt Nickish.
Podcast Summary: "Is Business Broken?"
Episode: "Regulating Platforms & Speech in an Age of Fake News"
Release Date: November 7, 2024
Introduction
In this episode of Is Business Broken?, hosted by Kurt Nickish from the Ravi K. Mehrotra Institute for Business, Markets & Society at Boston University Questrom School of Business, the panel delves into the intricate dynamics of regulating online platforms amidst the rampant spread of misinformation. The discussion centers around Section 230 of the Communications Decency Act, its impact on social media platforms, and the evolving challenges of balancing free speech with content moderation in the digital age.
Panelists:
Marshall Van Alstine, Alan and Kelly Questrom Professor in Information Systems, Boston University Questrom School of Business
Nadine Strossen, Professor of Law, New York Law School, and former President of the American Civil Liberties Union
Michael Masnick, CEO and Founder of the Copia Institute and its publication, Tech Dirt
Understanding Section 230
The conversation begins with Marshall Van Alstine explaining the essence of Section 230:
"Section 230 is the Internet law that prevents or protects platform companies both from the user generated content, the third party generated content, but also their editorial decisions around that." (02:06)
He outlines how the law, established in 1996, grants platforms immunity from liabilities related to user content while allowing them editorial discretion to moderate content.
Historical Context and Congressional Intent
Nadine Strossen provides historical context, emphasizing the bipartisan support Section 230 enjoyed during its inception:
"Section 230 passed, to the best of my knowledge, unanimously. This was not a partisan matter... It was designed consistent with First Amendment principles..." (02:40)
She highlights Congress's intent to foster an open Internet by preventing platforms from becoming overly restrictive gatekeepers or, conversely, allowing the Internet to become a haven for unfiltered, potentially harmful content.
Modern-Day Implications
Michael Masnick challenges contemporary criticisms of Section 230, arguing that the fundamental principles remain pertinent despite the Internet's evolution:
"If you just view it as a tool for saying, you know, who is liable for this particular speech, it actually makes a lot of sense." (05:01)
He contends that the increased centrality of the Internet in daily life does not necessarily invalidate Section 230 but rather underscores its continued relevance.
Platform Liability and Editorial Rights
Marshall Van Alstine elaborates on the nuanced position platforms occupy between being common carriers and publishers:
"Platforms are kind of interesting in that they have unbundled these rights and responsibilities and they've gotten the best of both." (06:52)
He explains that while platforms have editorial rights to moderate content, they are shielded from the liabilities that typical publishers face, creating a unique regulatory landscape.
Impact on Smaller Platforms and Users
Nadine Strossen argues that Section 230 benefits users by promoting diverse platforms and prevents large tech companies from monopolizing content moderation:
"If Section 230 were reduced in its protective scope, the major beneficiaries would be the existing tech titans... it's the smaller speakers and platforms who don't [have the resources]." (10:06)
She warns that diminishing Section 230 protections would disproportionately harm smaller platforms and limit user choice.
Algorithmic Moderation and User Experience
Kurt Nickish raises concerns about how algorithms influence content visibility, questioning whether platforms genuinely remain neutral:
"...with that criticism, maybe of just kind of saying, you know, companies don't need to be responsible..." (10:57)
Michael Masnick responds by comparing algorithmic moderation on platforms to editorial decisions in traditional media, asserting that Section 230 actually empowers platforms to make such decisions without legal repercussions.
Challenges of Network Effects
Marshall Van Alstine discusses how network effects reinforce the dominance of established platforms, making it difficult for new entrants to compete:
"If you try to leave a platform like Twitter or LinkedIn, you can't take your followers with you..." (15:06)
He suggests that traditional market solutions may be insufficient to address the entrenched power of major tech companies.
Decentralized Platforms as a Solution
Michael Masnick introduces the concept of decentralized social platforms (e.g., Farcaster, Lens) as potential remedies to centralized power:
"...protocols is if social media and other Internet apps today could work more like email does..." (16:19)
He emphasizes the importance of protocols that allow interoperability between different service providers, reducing reliance on single entities.
Marshall Van Alstine adds that decentralization, combined with innovative marketplace designs, could mitigate misinformation without necessitating heavy-handed regulation:
"We're actually creating the rights for these things to happen. So I think decentralization ultimately will be the way to go." (33:44)
Antitrust Limitations in the Digital Era
Marshall Van Alstine critiques traditional antitrust approaches, arguing they are ill-suited for addressing network-driven monopolies:
"When you cleave networks into smaller and smaller and smaller portions, you're not creating network effects, so your interventions inadvertently destroy value." (34:09)
He advocates for new economic tools that better address the unique challenges posed by modern digital platforms.
Alternative Legal Interventions
Michael Masnick proposes reforming the Computer Fraud and Abuse Act (CFAA) to empower third parties to build competing algorithms without facing legal threats:
"...changing the CFAA, saying that Facebook can't threaten them, that you can build these algorithms..." (37:29)
He suggests that such reforms could enhance user empowerment and foster a more competitive ecosystem without dismantling existing legal frameworks like Section 230.
Net Neutrality and Common Carriers
A listener inquires about the responsibilities of Internet Service Providers (ISPs) in the context of net neutrality. Michael Masnick responds by distinguishing between platform-specific regulations and the broader Internet infrastructure:
"The larger Internet itself could be considered the modern public square. And in order to have access to that, that's where the net neutrality question comes in." (27:39)
Cloudflare's Role and Common Carrier Status
Discussing whether infrastructure providers like Cloudflare should be treated as common carriers, Masnick explains the complexities involved:
"Cloudflare is definitely an interesting one to follow." (29:07)
He highlights the challenges in categorizing infrastructure providers under existing legal definitions and the implications for content moderation.
Decentralized Platforms' Viability
When asked about the future of decentralized platforms, Michael Masnick expresses cautious optimism, noting their experimental nature and potential to distribute power more evenly across the network:
"...efforts that encourage those types of systems to exist..." (31:39)
Antitrust as a Tool Against Market Dominance
Responding to a question on whether antitrust measures can curb the excessive market power of large platforms, Marshall Van Alstine acknowledges the shortcomings of traditional antitrust approaches and underscores the need for innovative solutions tailored to digital markets:
"We need better systems for creating new kinds of competition than our traditional mechanisms of breakup." (34:09)
Preservation and Refinement of Section 230
Michael Masnick advocates for maintaining Section 230 in its current form, with minor corrections to address specific legal ambiguities:
"I would pretty much leave section 230 as is... it's creating an issue right now in a lawsuit based out of Boston." (39:10)
Enhancing Media Literacy and User Resilience
Nadine Strossen emphasizes the importance of developing critical media literacy skills to empower users to discern misinformation:
"We have to equip all of us to do our very best to sort the true from the false... and increase their resilience." (40:08)
Anticipated Legal Adjustments
Marshall Van Alstine predicts incremental legal changes to strengthen user rights and promote algorithmic transparency without overhauling existing frameworks:
"...give users more rights to choose their own algorithms... We need changes to enable that." (43:31)
Final Thoughts on Decentralization and Market Design
The panel concludes with a consensus on the potential of decentralized systems and thoughtful market design to address the multifaceted challenges of misinformation and platform dominance, advocating for a balanced approach that safeguards free speech while mitigating harmful content.
Notable Quotes:
Marshall Van Alstine:
"Platforms are kind of interesting in that they have unbundled these rights and responsibilities and they've gotten the best of both." (06:52)
Nadine Strossen:
"Whatever the major social concerns are at the time that new medium is blamed for that problem... it's depriving the rest of us of freedom of speech to crack down on a media that maybe was a factor that led one user to commit an antisocial act." (25:53)
"Critical media literacy skills... we have to increase their resilience and their resistance to it." (40:08)
Michael Masnick:
"Section 230 is incredibly important... the only thing that is going to matter in the end is are they actually useful to the actual users." (16:19)
Conclusion
This episode of Is Business Broken? offers an insightful exploration of the complexities surrounding online platform regulation, Section 230, and the pervasive issue of misinformation. Through a balanced dialogue, the panelists highlight the need for nuanced legal reforms, user empowerment, and innovative market solutions to navigate the evolving digital landscape while upholding the principles of free speech and accountability.
For more engaging discussions, follow Is Business Broken? on Apple Podcasts, Spotify, or your preferred podcast platform.