Podcast Announcer
The following podcast contains advertising. To access an ad-free version of the Lawfare Podcast, become a material supporter of Lawfare at patreon.com/lawfare. That's patreon.com/lawfare. Also check out Lawfare's other podcast offerings: Rational Security, Chatter, Lawfare No Bull, and The Aftermath.
Ravi Iyer
The results of those kinds of experiments, that reducing the incentive to comment back and forth or to reshare things actually improves the ecosystem, I think is another thing that we can learn from. Specifically, how do you create more pro-social media?
Renée DiResta
It's the Lawfare Podcast. I'm Renée DiResta, contributing editor at Lawfare and associate research professor at Georgetown's McCourt School of Public Policy. I'm with Glen Weyl, economist and author at Microsoft Research; Jacob Mchangama, executive director of the Future of Free Speech project at Vanderbilt University; and Ravi Iyer, managing director of the Neely Center at the USC Marshall School.
Glen Weyl
I just think no matter what our goals are, the design of sort of the overall information ecosystem and what gets surfaced is critical.
Renée DiResta
Today we're talking about design versus moderation. The way that social media platforms are built influences everything from what we see, to what is amplified, to what is even created in the first place as users respond to incentives, nudges, and affordances. These processes are often invisible or opaque, though new decentralized platforms are changing that. So we're going to talk about designing a pro-social media for the future, and the potential for an online world without Caesars. I want to bring you in right now on the difference between moderation, which is policing a failed end state, and design: design as a proactive way to cultivate behaviors, to subtly shift norms, to guide users in particular directions, not necessarily through top-down rule enforcement, but rather by determining the affordances of a system, what the system lets us do. One of the reasons I'm excited for this conversation with you all specifically is that when I read your work, you all have such deep thinking about the specifics of ways that system design can produce better social media. Glen and Jacob, you just had a paper released that you titled "Prosocial Media," and I'd love to start with that. I think the term pro-social media is wonderful. I'd like to ask you to define what that means, and tell us a little bit about your work.
Glen Weyl
Yeah, so I think the key idea that motivated the term pro-social media is that obviously social media are doing something social: they're using social information to serve content. But that doesn't necessarily mean they're achieving the goals that many people had in creating social media, which was to strengthen connections across people, help communities be stronger, and reinforce the social fabric that they build on. So social media could in theory either be like sustainable agriculture, which strengthens the soil at the same time as it harvests from it, or it could be like clear-cutting agriculture. And I think many people believe that social media has actually been undermining the social fabric as it's been harnessing it, and we want to try to make that more sustainable, more regenerative, so to speak.
Jacob Mchangama
Yeah. I have a much narrower focus than these two brilliant gentlemen; I come at this topic from a free speech perspective. What excited me about the pro-social media approach is that people in the free speech space have very often been on the defensive, making abstract, principled arguments that were a bit difficult to apply consistently when it comes to social media, and that also just weren't convincing a lot of people, because social media makes the harms of speech, real or perceived, so much more visible. So it makes people much more willing to engage in trade-offs and restrict speech than they were in the analog world. I think the pro-social media approach is in many ways a good way for free speech activists to frame a much more positive vision for social media. One that empowers users at the expense of centralized platforms and/or government (today we're seeing a huge movement toward government-mandated content moderation), and one that says: yes, free speech has some harms, especially in an online, connected world where anyone can share anything with anyone across borders, where the harms involved in speech can be very visible, can travel with lightning speed, and can lead to real-life harms. But here are some models that might actually use the power of speech and access to information to mitigate and diffuse some of those harms in ways that are constructive, but that rely basically on speech rather than giving outsized power to platforms and/or government. I think that's an incredibly powerful and empowering vision for social media that resonates really well with a basic commitment to what I would call egalitarian free speech.
Renée DiResta
I want to hold the decentralization piece for a little bit, because I know we're going to talk about what makes new experimentation possible; I've written about that, and we've talked about it in the context of middleware here a little. I want to focus in on specific features and designs. You articulate ways of thinking about this, and you talk a lot about bridging and balancing as a goal. What do we mean when we talk about bridging and balancing as a way to create a more pro-social web?
Glen Weyl
Well, I don't want to go on too much of a historical digression here, but I think it's useful to understand that both of the competing visions, or, as you pointed out, Renée, maybe falsely competing visions, of what media should be today really came out of World War II. There were two different movements. Right after Pearl Harbor, Henry Luce, the publisher of Time, convened a commission called the Hutchins Commission, which came up with principles the media could abide by in order to avoid being nationalized. They were very worried that division and misinformation in the media had led to us not being prepared for Pearl Harbor, and that the government was going to nationalize the media as a result. So they wanted to avoid that. On the other hand, there was a group of people, led by Margaret Mead and other social theorists, who thought that the concentrated nature of the media, the broadcast nature of radio and journalism, had led to fascism, and that the way to address that was to have a much more multi-sided media. I think we've inherited both this desire for bringing people together and this desire to have lots of voices heard from those two respective movements, and I think the real question is how we can bring those together. That's really our goal in this paper: to use this notion, which really came out of the Hutchins Commission, of content that brings people together across divides and content that reflects the diversity of positions people hold. That's bridging and balancing, as critical elements that need to show up, while we also ensure that we have all the diversity of angles that social media allows, without too much gatekeeping.
Renée DiResta
And you're talking specifically about the question of how. Audrey Tang, who was the digital minister of Taiwan and a co-author on your paper, has also spoken a bit about surfacing where content comes from, labeling the communities it originates from. I was intrigued by this idea, because when I was at the Stanford Internet Observatory, we would do these things we called narrative traces: where did something come from, where did that meme originate? For us that was a question of authenticity. Did it come from an authentic community, or was it something that was dropped in through an influence operation from a state actor or whatever? What is the provenance? How do you think about that? Why do you think surfacing where something originates is helpful for this bridging or pro-social behavior? How does it help us have a better web?
Glen Weyl
I think there are actually two different aspects to this. One is: where does it originate from? Audrey's done some amazing work on that in Taiwan, basically creating liability for things that don't have signed provenance. I think that's a fascinating approach and one that I'm a big fan of. There's another element that we emphasize more in this paper, which I also think is important, which has more to do with: where does the popularity of this originate?
Renée DiResta
Okay.
Glen Weyl
There are the people who created it and their signatures, but then there are the people who liked it and reposted it and so forth. The first one you can handle just by having some kind of disclosure about where it came from, or a cryptographic signature, which is what Audrey worked on. But the second one is complicated, because there are going to be thousands or millions of people who liked or retweeted something, so you can't just list all their names; you have to give some characterization of them. That's why we focus on this notion of using the internal learning that the machine learning tools are already doing about the communities something appeals to, because that's how they're doing personalization in the first place, and then trying to be transparent about that as a way of giving a sense of who this is popular with, so that you know the audience you're sharing something with. I think that's a really important element, not just for misinformation or news-related reasons, but also just for isolation reasons. When you used to go to a concert or attend a lecture, you would get a sense of the other people in the room. That's much harder online, though it obviously does happen in environments like Reddit. We'd like to bring aspects of that here by using transparency about the internal understanding of the community that the models already have.
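The mechanism Weyl is describing, summarizing a post's engagers by the community clusters a recommender already maintains rather than listing individuals, can be sketched in a few lines. Everything here (the function, community names, and data) is invented for illustration; a real system would derive clusters from model embeddings, not a hand-written lookup table.

```python
from collections import Counter

def community_summary(engagers, user_community, min_share=0.15):
    """Return (community, share) pairs for communities accounting for at
    least min_share of a post's engagement, sorted by share, descending."""
    counts = Counter(user_community[u] for u in engagers if u in user_community)
    total = sum(counts.values())
    if total == 0:
        return []
    return sorted(
        ((c, n / total) for c, n in counts.items() if n / total >= min_share),
        key=lambda pair: -pair[1],
    )

# Invented example: a post whose likes come mostly from two clusters
# that the recommender already tracks for personalization.
engagers = ["u1", "u2", "u3", "u4", "u5", "u6"]
user_community = {"u1": "gardening", "u2": "gardening", "u3": "gardening",
                  "u4": "policy", "u5": "policy", "u6": "crypto"}
summary = community_summary(engagers, user_community)
print(summary)  # gardening leads, followed by policy and crypto
```

A label like "popular with: gardening, policy" could then be shown alongside the post, giving readers the sense of the room that Weyl describes.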
Renée DiResta
Ravi, I'm curious: in your role at Facebook and other places, how did you all think about that question of what communities were engaging with content? Was that something you were also attuned to, this question of bridging, as far as what was curated out to more people?
Ravi Iyer
Yeah, I actually started my time at Meta working on polarization. I think there are three findings we learned that are relevant here and that can give some specificity to what pro-social media could be. One: there's a famous Wall Street Journal article about this, and there are also books about it that aren't about Facebook, but basically it's the finding that many publishers and politicians say they produce worse content, divisive content, content they're not proud of, because of the algorithms on social media. Jonah Peretti of BuzzFeed went to Facebook and said, look, we're producing divisive content not because we want to, but because that is what does well in your algorithm. Many politicians in Europe say the same. And that's not a moderation thing. That's not about figuring out what you can or can't say. That's a company incentivizing, effectively paying people with attention, to be more divisive. The second thing is that a lot of people see content they don't like, and a lot of people don't like divisive content; people don't want to argue with their relatives online all the time. There's one study finding that 70% of Facebook users see content they want to see less of. They often see it multiple times per session, often within the first five minutes of scrolling. So there's a business incentive to reduce these kinds of divisive experiences; they actually turn people off these products. That's why I think you see a lot of people moving from some of the more divisive platforms to a Bluesky, to someplace where it just feels like you can have a conversation again.

And then the third thing: one thing I worked on was these break-the-glass measures, temporary design measures that change the ecosystem and change those incentives. We did that in part because when you rely on moderation, you make a lot of mistakes. If you're working on something like Myanmar or Ethiopia, someplace far off, it's really hard for anyone, let alone a company thousands of miles away, to make decisions about what people should or should not say. But you can say something like: look, maybe we shouldn't be optimizing for the thing that gets the most comments. Obviously the thing that gets the most comments is not always the best thing, right? The picture of my night out last night that got the most comments might be a great picture, but your health information is not meant to be debated back and forth; it's meant to be boring. The fact that something is being argued about a lot may actually mean it's not great information. And so the results of those kinds of experiments, that reducing the incentive to comment back and forth or to reshare things actually improves the ecosystem, I think are another thing we can learn from about how you create more pro-social media.
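Iyer's point about optimizing for comments and reshares, and the break-the-glass changes that removed those incentives, can be illustrated with a toy ranking function. The feature names and weights below are invented for illustration, not Meta's actual values; the point is only that zeroing out the comment and reshare weights changes which post wins.

```python
# Toy feed-ranking sketch: a linear score over predicted-interaction
# features, with a "break the glass" configuration that simply zeroes
# out the weights on comment-and-reshare incentives.

NORMAL_WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_reshare": 6.0}
BREAK_GLASS_WEIGHTS = {"p_like": 1.0, "p_comment": 0.0, "p_reshare": 0.0}

def score(post, weights):
    """Weighted sum of a post's predicted-engagement features."""
    return sum(weights[k] * post.get(k, 0.0) for k in weights)

def rank(posts, weights):
    """Order posts by score, highest first."""
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)

# Invented posts: one that provokes arguments, one that people simply like.
divisive = {"id": "argument", "p_like": 0.1, "p_comment": 0.8, "p_reshare": 0.5}
pleasant = {"id": "baby_photo", "p_like": 0.9, "p_comment": 0.1, "p_reshare": 0.1}

# Under comment/reshare-heavy weights the argumentative post wins;
# with those incentives removed, the well-liked post wins.
print([p["id"] for p in rank([divisive, pleasant], NORMAL_WEIGHTS)])
print([p["id"] for p in rank([divisive, pleasant], BREAK_GLASS_WEIGHTS)])
```

Nothing here removes or demotes a class of speech; only the optimization target changes, which is why Iyer frames it as a design fix rather than a moderation call.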
Renée DiResta
With the break-the-glass measures, as I recall, particularly around post-January 6th, that was also deprecating political content, right? And that was sort of trying to resurface more content that bridged people, in the sense of things that were more human, right? Here's more from your friends, more baby pictures, more wedding pictures. Is that the sort of thing that was upranked instead?
Ravi Iyer
There were lots of things done, and there's an article in Tech Policy Press about all the specific things that were done around January 6th. The things I think are most worth learning from are removing some of these engagement incentives: not just removing a whole class of content, but actually improving the incentives within that class of content. People should be allowed to talk about politics, but they shouldn't be incentivized to talk about it as entertainment, and when you optimize for the thing that gets the most comments, it gets to be more like entertainment. The other thing done around January 6th that I think is worth learning from is rate limits. There was a reduction in the number of times you could invite people to a group. If you ask yourself, how many times should a person be able to invite people to a group, or how many times should I be able to message strangers, or to do anything, you will come up with a far lower answer than the limits platforms actually set.
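The rate-limit idea Iyer describes, a hard cap on how often an account can take a sensitive action within a rolling window, is simple to sketch. The specific limit, window, and action below are invented for illustration, not any platform's real values.

```python
import time
from collections import defaultdict, deque

class RollingRateLimiter:
    """Allow at most max_actions per user within a rolling time window."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = defaultdict(deque)  # user -> timestamps of recent actions

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[user]
        while q and now - q[0] > self.window:  # drop events outside the window
            q.popleft()
        if len(q) >= self.max_actions:
            return False
        q.append(now)
        return True

# Invented policy: at most 5 group invites per hour, a far cry from being
# able to blast a post into 60 groups in the same second.
limiter = RollingRateLimiter(max_actions=5, window_seconds=3600)
results = [limiter.allow("spammer", now=100.0) for _ in range(7)]
print(results)  # first five allowed, then denied
```

As with the ranking change, the limiter is content-neutral: it never inspects what is being posted, only how fast.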
Renée DiResta
Yeah, I remember that. We were always mystified by that, actually. A lot of the time, when we looked at markers of inauthenticity, you would see one person mass-blasting the same post, in the same second, into 60 different groups. It was always kind of a remarkable affordance that you had the power to do that. This is where, funny enough, the old freedom-of-speech-versus-freedom-of-reach argument that Asa and I were making back in 2018 got reframed as content moderation; Elon put it on top of his content moderation page, or whatever. But we were talking about it in the context of what you're describing, in the context of curation: what is it that should be curated and amplified? In the moment, what incentive do you create for particular types of content to be boosted? I think that's a really interesting question. And maybe this takes us into the question of who decides, because break-the-glass measures actually became politically controversial, right? Jacob, I don't know if you want to pop in with your opinion, but there's this question of why the platform gets to decide. It's a very opaque shift, and it obviously has impact, particularly if political content is the thing that gets deprecated in these moments, or if people begin to feel that the inability to invite people into groups is somehow limiting the growth of a political movement, or something along those lines. This is where you start to see that tension come in. The question around transparency, and the extent to which design intersects with the regulatory conversation, is a very interesting one, because in areas where the moderation conversation quite clearly can't go, it's not as clear-cut that the design conversation doesn't intersect with the regulatory conversation. I'm curious what you all are seeing and thinking about on that front.
Jacob Mchangama
I think transparency is obviously important, because it reduces the speculation and conspiracy theories around it. It probably doesn't eliminate them, but ideally it reduces them, especially if there are also ways to track what platforms actually implemented. Of course there's a spectrum: if you distinguish between freedom of speech and freedom of reach, and you have clearly ideological ways to amplify reach and de-amplify it, then I think we're getting into free speech territory. But the more you allow users to have input on this, the better, because that limits the platform's ability to skew the conversation. And then full transparency on where the platform decides, what its design is actually based on, and the amplifications that follow from it, I think is the optimal solution. How you implement that in practice is something I would leave to smarter people than myself, like Glen and Ravi. And you would probably never be able to have a system that satisfies everyone, just because we deeply disagree about these things. When you look at a platform and what goes on, everyone will have this tendency to say: well, I have a voice here, but why do I personally have such an extremely pathetic reach on Bluesky, for instance? That must be because Jay Graber has designed it in such a way that people like me don't get reach.
Renée DiResta
It can't possibly be my posts. It must be that somebody's putting their thumb on the scale. No, I get it.
Glen Weyl
I do think Jacob's getting at something important, which is the reason why I spend a lot of time focusing on terms like balancing and bridging, trying to come up with these big principles in how we communicate, rather than relatively technical tweaks, even though of course they have to be implemented technically. I think the legitimacy, the way that we talk about these things, and the ability to relate them to democratic principles, is actually central to what it is for them to be good design features. You know what I mean?
Renée DiResta
Yeah. Say more about that.
Glen Weyl
I mean, if you think about our democracy, we have a principle of free speech, but we don't have the principle that anyone can come and speak in front of Congress at any time, right? The people who get to speak in front of Congress are chosen by some kind of democratic procedure that ensures they're representative of the population in some sense, and there's a process for doing that representation that is written down somewhere in a document, and people are concerned about adherence to the rules of that document. So there's a huge amount that's put into the allocation of reach, of the effective voice that we have, as well as into having free speech. This is something that's very well established, and I think the more we can tie however it is we're organizing things to principles that are meaningful, that can be written down and legitimated in this sort of way (which was also very critical to what Audrey did in Taiwan), the more we're going to be able to get the legitimacy that's necessary for any of this to work. Because the reality is, if we moderate out X and Y but no one thinks that was legitimate, they're going to go find it somewhere else anyway, and they're not going to buy into what they're getting on the platform. So that legitimacy, I think, is just as important as the efficacy.
Jacob Mchangama
But there also has to be a very strong element of bottom-up legitimacy, because otherwise you're just getting back to the digital version of the analog public sphere, where you have traditional institutional gatekeepers, and then there's not going to be buy-in from those who didn't have a voice before. I think that's incredibly critical, and it goes back to this ideal of egalitarian free speech underlying all of this.
Renée DiResta
No, I agree. Have you seen the paper Susan Benesch wrote? I'm blanking on her co-author's name, unfortunately, but it was on time, place, and manner restrictions. Some of us have talked about this in the past; I've written about it in the past too. I was writing for a while about circuit breakers. The dynamic there was about information flows: how do we think about design and information flows? When I was on Wall Street, circuit breakers were a thing put in place so that people could be put into a more reflective mindset, so that stocks are not constantly whipsawing around when new news comes out; a temporary halt so that people can digest information. And these models we have of thinking about design tools, and friction in particular, as ways of temporarily shifting people's thinking, putting them into a more reflective mindset so that we're not careening from one information crisis or rage machine to the next, the ways that design can actually do that quite effectively, I think are important. I don't know if you all have seen that paper or that research. Ravi, perhaps you have. I wonder what you think about that.
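By analogy with the market circuit breakers DiResta mentions, a content "circuit breaker" could pause algorithmic amplification when a post's share velocity spikes, buying a reflective beat before it continues to spread. This is a toy sketch with invented thresholds and cooldowns, not a description of any platform's actual system.

```python
def circuit_breaker(share_counts, threshold, cooldown_intervals):
    """Given shares-per-interval for one post, return a parallel list of
    booleans saying whether amplification is halted in that interval."""
    halted = []
    remaining = 0  # cooldown intervals still to serve after a trip
    for shares in share_counts:
        if remaining > 0:
            halted.append(True)
            remaining -= 1
        elif shares > threshold:  # velocity spike trips the breaker
            halted.append(True)
            remaining = cooldown_intervals - 1
        else:
            halted.append(False)
    return halted

# Invented example: a post that suddenly spikes trips the breaker
# for two intervals, then amplification resumes.
print(circuit_breaker([10, 12, 500, 8, 9, 7], threshold=100, cooldown_intervals=2))
```

Like a trading halt, the pause is triggered by velocity, not by what the post says, which keeps the intervention content-neutral in the sense discussed above.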
Ravi Iyer
Yeah. The other author is Brett Frischmann, and it's a great paper, about time, place, and manner, friction, and design. I'd say the most important thing, since we're talking about who should decide and we don't want these big gatekeepers, is that the best way to do this is that no one decides. There's a way you can reduce the reach of content where you identify kinds of content you want to demote, and there you're kind of making a moderation decision: you're deciding these are things I don't like, and I'm going to reduce them. But if instead you decide, I'm not going to optimize for what people pay attention to, I'm going to do surveys and give them things they aspire to consume, which tends to be healthier, more aspirational content, then I'm not deciding: users are deciding. That's just a much more legitimate way to do it, and it supports users' agency. It's not taking away from what users want. A lot of users get more sexual content than they want, more sensational content than they want. If you ask them, aspirationally, what is it you want, you actually get a different answer. So I think there's a way you can design systems where no one's deciding, where there is no gatekeeper. It's really designing so that users decide, and all the decisions are content-neutral, not about what we do or do not want people to say.
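Iyer's "users decide" approach, blending stated preferences from surveys with revealed engagement, can be sketched as a simple weighted score. The blend weight, feature names, and data below are all invented for illustration; the point is that a survey signal can flip the ranking without anyone labeling content good or bad.

```python
def blended_score(post, survey_weight=0.7):
    """Blend predicted engagement (revealed behavior) with the post type's
    average 'want to see more of this' survey rating (stated preference),
    both assumed normalized to [0, 1]."""
    behavior = post["p_engage"]
    stated = post["survey_want_more"]
    return (1 - survey_weight) * behavior + survey_weight * stated

# Invented posts: ragebait that grabs attention but that users say they
# want less of, versus an explainer users say they aspire to read.
sensational = {"id": "ragebait", "p_engage": 0.9, "survey_want_more": 0.2}
informative = {"id": "explainer", "p_engage": 0.5, "survey_want_more": 0.8}

ranked = sorted([sensational, informative], key=blended_score, reverse=True)
print([p["id"] for p in ranked])  # the aspirational content outranks ragebait
```

With survey_weight set to 0 this reduces to pure engagement ranking, so the design choice is just how much weight users' stated preferences get relative to their clicks.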
Renée DiResta
I want to talk about devolving control to the users, maybe in the context of decentralization. But while we're talking about who decides, and since we've alluded to the regulatory conversation: the thing I always thought was interesting, that ties into the legitimacy piece here, is that most people don't like the idea that social media companies decide. It is a form of unaccountable private power; it's quite opaque, and nobody really knows what they're doing. There have been efforts to create some transparency: the Platform Accountability and Transparency Act was one such law, though it never managed to pass. Ravi, I think you have looked at a number of other types of regulatory interventions that touch more on design. What are you seeing? If we say that user control is one answer, it's still a far way off, and it may be that centralized social media incentives don't align (we can talk about whether that's true in a couple of minutes); decentralized is its own animal, and we can talk about the trade-offs there; and we don't want the government making moderation decisions. So how should we think about the role of the state, whether that's America or what's happening in Europe with the DSA? How should we think about the regulatory conversation around design?
Ravi Iyer
Yeah, I think it's analogous; I use the analogy of cars and food. Once upon a time we didn't have regulations for how cars were designed, so you could have a car without seat belts, or you could make food however you wanted in your meat factory. And then people got together and said, look, we need some minimum standards so that people don't get sick and people don't crash and go through their windshields. I think the physics of social media are increasingly becoming understood, and we need minimum standards for the design of social media products. In some ways our First Amendment does some work for us, because in the United States you actually can't regulate what people can and can't say online, but you can regulate whether a product is safe. And the Supreme Court has weighed in that there's a difference between the expressive components of an algorithm and the functional components: there is no message trying to be conveyed by an algorithm that says, I want to keep you on here as long as possible. We also know there are lots of externalities to that; there's certainly harm to kids. So you see things like the Kids Online Safety Act, the SAFE for Kids Act, or the Age-Appropriate Design Code. You're seeing a lot of laws go in this design direction, both because it's more effective and more legitimate, less prone to abuse, and because it's required by our Constitution.
Renee Diresta
Maybe we should chat about the user option, then. So right now the decentralized option where users have the most control, and where there are the most users, is Bluesky, right? We've seen a pretty big adoption curve for them recently, and I think everybody here is on Bluesky now, right? All three of you. Yep. Jacob's having some problems with it, but the rest of us are doing fine. For those listening who are not on Bluesky: there are a lot of different ways that users can shape their experience. There are interesting ways to take control of what used to be curated through the "people you may know" algorithms, which were Facebook's and Twitter's ways of algorithmically suggesting users for you to follow. Now there are what Bluesky calls starter packs: you can find one person that you trust, click on their starter pack, and subscribe to and follow all of those people. So it solves the cold start problem, and you have some agency over immediately going and finding people that you like, that you trust, that you find interesting, and then seeing who they like and trust and find interesting. You can build your initial social graph that way. So there's the social graph building piece. Then there is the ability to create and subscribe to feeds. You can go and just pick different types of feeds that you want. There's a gardening feed that I subscribe to as a really crappy gardener; you can find people who will help you figure out why your plant is dying. There's Blacksky, for people in the Black community who want to find that sort of Black Twitter community on Bluesky. There are so many different types of identity and affinity group feeds that you can find and follow, and different topical news feeds. There's one that's really great that's just gift links: if you want to subscribe to all the different gift links that people drop on the platform, you can basically read news for free. So it's really a cool way to immediately curate your feed.

And the thing that's really nice is they make it very easy to toggle between feeds. So if one feed is very boring, if your Discover feed or the friends that you follow are not posting very interesting things, or they're kind of quiet that day, you can pop into one of your other ten feeds and immediately see what else is happening elsewhere on the platform. And then finally, the other thing that they have, which sits at the intersection of moderation and design, is the labelers. You can choose to have certain content either obscured or hidden in your feed; they'll put up a little interstitial over it and label it. And you can also have shared block lists. So that's, roughly speaking, the set of ways in which users have incredibly granular control over very different parts of the Bluesky experience. I think one of the reasons we've seen this adoption is really the mainstream platforms swinging the pendulum pretty hard on the moderation front, right? A lot of the migration to Bluesky was in response to Elon buying X: the liberal audience didn't really like what happened to curation on X, didn't really like what happened to moderation on X, and moved to a different platform. I think you saw Zuck do the same thing; he recently had a pretty big shift in what he said Meta was going to moderate, and again, a little bit of a bump there. Curious how you all see this shift to decentralized platforms. I have seen it as an opportunity to show users what is possible, but I'm not sure how many users are thinking about it in those terms.

I kind of get the sense that more people are there because they think of it as a vibe shift, right? They're fleeing what they see as bad moderation and curation vibes in other places, and they're coming over to this new place, but they're not necessarily thinking about it in terms of, wow, it's really fantastic that I have more agency.
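[Editor's note] The user-side controls described here, labelers that hide or interstitial content plus shared block lists, amount to client-side filtering applied at render time. A minimal sketch of that idea, using hypothetical data structures rather than the real AT Protocol API:

```python
# Illustrative sketch (not the actual Bluesky/AT Protocol API): applying
# a user's subscribed labeler verdicts and shared block lists when a feed
# is rendered. All class and field names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set = field(default_factory=set)  # labels from subscribed labelers

@dataclass
class Preferences:
    blocked: set = field(default_factory=set)      # from shared block lists
    hide_labels: set = field(default_factory=set)  # labels to drop entirely
    warn_labels: set = field(default_factory=set)  # labels shown behind an interstitial

def render_feed(posts, prefs):
    """Apply blocks and labeler verdicts before anything is shown."""
    out = []
    for p in posts:
        if p.author in prefs.blocked or p.labels & prefs.hide_labels:
            continue  # hidden entirely
        warned = bool(p.labels & prefs.warn_labels)
        out.append((p, warned))  # warned=True -> render behind an interstitial
    return out

posts = [
    Post("alice", "gardening tips"),
    Post("bob", "graphic footage", labels={"graphic-media"}),
    Post("carol", "spam spam spam", labels={"spam"}),
]
prefs = Preferences(blocked={"carol"}, warn_labels={"graphic-media"})
visible = render_feed(posts, prefs)  # alice shown plainly, bob interstitialed
```

The point of the sketch is that none of this requires a central moderator: the same post set renders differently for each user depending on which labelers and block lists they have opted into.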
Jacob Mchangama
My hunch is that you're right. So if you fled X and now Facebook, there's a good chance you did so because you thought that maybe content moderation was getting too lax. Right.
Renee Diresta
Or because you saw Elon in your feed constantly. Right.
Jacob Mchangama
Or just like that's true.
Renee Diresta
Curation on X got really weird.
Jacob Mchangama
Yeah. And also, as I've written about many times, Elon is not exactly your principled civil libertarian free speech defender. He very much defines free speech as stuff he likes, and has all kinds of arguments to limit and moderate things that he doesn't like. But that's the way he marketed it, and I think that turned off some people. And then there were the announced changes by Zuckerberg, which you can look at cynically and say were clear pandering to the incoming administration, in order to avoid the worst vengeful consequences from an administration in which Trump obviously was not a big Zuckerberg fan. But I think some of the announcements were, from a free speech perspective, actually pretty good in the way they were announced; implementation, obviously, is a different matter. So I like some of the features that you mentioned on Bluesky. That's fine. I guess from a free speech perspective, the real difference is how light touch Bluesky is when it comes to centralized content moderation. For instance, if you look at Bluesky's hate speech policies, they're not that different from other platforms'. It's not a case of: we have all these features, so we're not going to touch a lot of hateful stuff centrally. I don't have any stats on how they implement it, but when you look at the policy itself, it's not very different from the other platforms. And we have to remember what has happened on the other platforms. We put out a report a year or two ago where we looked at what we call scope creep in platforms' hate speech policies. We looked at, I think, eight platforms and their hate speech policies, from when they were first publicly articulated up until 2022 or 2023, and you see a huge increase in the number of protected characteristics.

You also see lower thresholds. And even though most of the platforms say they are committed to human rights principles, their hate speech policies actually go way beyond the definition of hate speech in the International Covenant on Civil and Political Rights, the UN convention which on the one hand protects free speech, but then says states have an obligation to prohibit narrow categories of hate speech. These conventions are obviously not legally binding on private platforms, but the platforms say they are committed to these principles. What we found was that, very clearly, the direction was towards more restrictive hate speech policies. And just by looking at Bluesky's hate speech policy, it doesn't seem to be much of a game changer. I'd be interested to see data on how they enforce it, because we've also done a number of studies, first in Denmark, and then the latest in Sweden, Germany and France, where we looked at some of the most popular politicians and media outlets and at the comments deleted there. We found that the vast majority, I think between 90 and 98% on YouTube and Facebook respectively, were perfectly legal comments. And most of those that were deleted were not only legal, they were not even particularly controversial. So that suggests that this scope creep has had an impact not only on lawful speech (you'd expect a fair amount of "lawful but awful" speech to be moderated away) but even on speech that is not particularly controversial.
Ravi Iyer
I mean, that dovetails well with my experience. On hate speech, I'll agree with Jacob, and in some ways with Elon: the goal of moderating hate speech is reasonable, but the way it actually gets implemented in practice has a lot of negative effects. A lot of things you end up taking down are things like "men are scum", which are not things we actually think are harmful. And a lot of things you end up leaving up are what you'd call fear speech: people talking about a crime committed by an immigrant, just reporting on it, and then you see all the vitriol it generates. You're never going to get at that kind of thing with a policy. So I think you're right, Renee, that people aren't responding to differences in moderation, because I don't think those actually make a huge difference in divisiveness. I actually think the thing they're responding to is a vibe change between X and Bluesky. People don't want to post something and get attacked by 300 people; they want to have a reasonable conversation with regular people. And if you have a platform where it's normative to just attack each other, then regular people are going to leave.
Renee Diresta
Well, I think design really does so much toward shaping norms, and this is where it ties back into what Glen is describing and the work around what you curate and what you surface. You talked a little bit about bridging as a means for surfacing disagreement without being disagreeable, which I think is how I've seen it expressed in its simplest form. Glen, I don't know if you want to talk about that. I also want to get to Masnick's work on overcoming digital helplessness and the agency piece. But give me a little bit on that concept: how do we create spaces where users feel comfortable, where the norms are such that you feel you can speak without being barraged by a mob, because what is curated and surfaced doesn't create main characters constantly?
Glen Weyl
I think it's important to understand that this emphasis on design over moderation is both a defender and an attacker thing; it cuts both ways. For example, there's wonderful work by some colleagues of yours from when you were at Stanford, Renee: Molly Roberts, Jennifer Pan, Gary King. What they show is that the most effective thing the Chinese government does is not actually the Great Firewall; it's the forum sliding, flooding the space with distraction and garbage. You can talk all you want about free speech, but if there's deafening noise playing everywhere, it's not very feasible to speak over it. So no matter what our goals are, the design of the overall information ecosystem and what gets surfaced is critical. Achieving the goal of making people feel they can be part of a conversation means, to me, doing exactly what you were describing with the Bluesky feeds, while maintaining some of the ease you get from more algorithmic curation: people need to know the context of the conversation. If people don't understand where they're speaking, it's going to be very hard for them to speak appropriately. There are completely appropriate times to start ululating or speaking in tongues; it's called church, or mosque. But that's probably not something to do in an academic conversation about chemistry. And if we let everything get mixed together and people don't have any sense of that context, you're going to get a lot of inappropriate behavior, not for any particularly malicious reason, but just because people don't know what conversation they're in. "Men are scum", for example, is a very contextual thing.

If you are in a conversation that is meant to be bridging across controversial issues related to feminism or abortion, saying "men are scum" could be a pretty problematic remark. If you're having a conversation about sexual abuse, it might be a very appropriate thing to say. So not giving people a sense of the context or the audience they're speaking to can really undermine our ability to have civil conversations. And I think that restoring that, in ways that are consistent with the ease of an algorithmic feed, is really important. That's a lot of what we're trying to do.
Jacob Mchangama
But I think here, again, it's important that we still have spaces for those who want the robust, uninhibited discussions. Also because human beings are driven by our emotions a lot of the time, right? We're sitting here having a rational discussion, asking what an optimal information space would look like, and we can have great ideas about that, but the human beings who navigate it are not always motivated by those ideas. Take the latest example, the Mahmoud Khalil case. That's something that has upset a lot of people, and they're going to vent their frustrations about it and their fears about government overreach on free speech. They're not necessarily going to express that in a very polite way, because they think the government is curtailing First Amendment rights in a way that's really scary. You have to have spaces for that, even though it sometimes delves into hyperbole. Then you can have a feed where First Amendment lawyers have a much more substantive discussion about the niceties of the case. And I want those things to coexist.
Renee Diresta
Do you want them to coexist on the same platform? I go back and forth on this. This is the challenge with decentralization: it gives people the opportunity to move in response to the vibes, meaning you don't have to be on Twitter; you can go be on Bluesky, which is currently perceived as lib Twitter. My hope is that it won't be for very long. My hope is that people recognize the technological capacity, the ability to build, much like the Fediverse, right? Run your own server, do your own thing, set your own rules. Reddit, again, the same thing: you've got infrastructure, make your subreddit. You can have r/Conservative and r/Liberal coexisting in the same place on the same infrastructure. Are you looking for people to be in dialogue with each other? Because that, I think, is the piece that is struggling. There are a lot of different social experience sites coming about, and if you want to find the saltiest possible world, it's always been there. It's called 8chan. You can go, right? Nobody's ever been deprived of that experience. The question is, how do you create the spaces where the disagreement manages to come in contact and achieve consensus? Because my big concern is that we've created places people can go to for the vibes, but we haven't found ways to use design to do that bridging and create that consensus. Even as we've created more small public squares, which I think are good, we have not yet found the design solution that bridges that consensus space.
Glen Weyl
Renee, the way I think about it is that giving space for the small conversations might seem like a contrast to doing the bridging, but I would actually argue it's a necessary other side of the coin, because until we understand what those smaller spaces are, we don't even know what to bridge across, at some level, right? So my ideal design for this type of situation is one where there is a common platform that has affordances for both of those things, and actually uses the data from each to inform the other. By having awareness of the small conversations, we know what the larger conversation, if you choose to tune into it, is going to need to navigate and bridge. Without the smaller ones, there's just no way to be attuned to that.
Renee Diresta
Mike Masnick wrote a really interesting piece in January of this year about empowering users and overcoming digital helplessness. It asks users to make a pretty big mental shift in how they think about their own role, their own agency, on social platforms. I remember some controversial people landed on Bluesky, and even though they didn't post very much and did nothing directly, immediately obnoxious on Bluesky, some members of the community were extremely angry that they were there, because of past behavior on other platforms that they found upsetting, offensive, et cetera. And there's this question of: you can have a very strong block feature, you can empower users with specific tools, you can even create, with federation, the ability to defederate from other servers. What do you think shifts the way users respond, rather than calling for the mods to take action? Is it a reasonable expectation that people should rethink their relationship to their own agency here, or is that unrealistic?
Glen Weyl
I think there are some different categories of users, so I don't think everything needs to be devolved onto everyone. There's no one type of user, right? There are some people who have massive followings, who have official positions within certain communities, and the notion of having those people take on additional responsibilities, which they already do in the world, is very consistent with the role they play. There are people who are users of Bluesky who are also literally the editor of the New York Times, or the pastor of a megachurch, or whatever. The notion that one would expect those people to take on roles in the digital space commensurate with their roles in the physical space, or that there would be digital-native equivalents of those roles, makes a lot of sense to me. The notion that everyone needs to be acting in such a sophisticated way seems unrealistic. The best designs would allow people to sort into those roles and take on those responsibilities as appropriate to the social role they're playing.
Ravi Iyer
I'd say we do want people to interact in the same space. I think there should be room for everyone, but I would prioritize regular people, and a lot of these platforms don't. They prioritize the hyper-engaged online warrior, and that's not most people in society. We in the offline world know how to make spaces that prioritize regular people: if someone just wants to argue about everything, we know how to exclude them from those spaces, or make them take their turns, or limit how much they dominate the room. We just need spaces like that online. You should be able to argue, and to say things in strong ways or in academic ways, but you should be well intentioned; you shouldn't be there to create an argument, you should be there to have a thoughtful discussion. Unfortunately, our spaces aren't designed for that, and so it may take a shift. One reason we don't see it happen as much is that we tend to prioritize spaces that are used a lot. We're used to refreshing our feeds constantly to see what's new, when there may not be anything new to learn. Maybe we need to check our feeds every two days instead of every 30 minutes, and then maybe the conversation would feel more natural.
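[Editor's note] One concrete reading of Iyer's point about making hyper-engaged users "take their turns" is a per-thread posting quota. The sketch below is purely illustrative: the class, the method names, and the two-consecutive-replies limit are assumptions for exposition, not any platform's actual mechanism.

```python
# Hypothetical turn-taking limiter: after max_consecutive replies in a row,
# a user must wait for someone else to post in that thread before posting again.
from collections import defaultdict

class TurnTaker:
    def __init__(self, max_consecutive=2):
        self.max_consecutive = max_consecutive
        self.streak = defaultdict(int)   # (thread, user) -> consecutive posts
        self.last_speaker = {}           # thread -> who posted last

    def allow(self, thread, user):
        """Return True if this user may post in this thread right now."""
        if self.last_speaker.get(thread) == user:
            if self.streak[(thread, user)] >= self.max_consecutive:
                return False  # dominated the thread; must yield the floor
            self.streak[(thread, user)] += 1
        else:
            self.streak[(thread, user)] = 1
            self.last_speaker[thread] = user
        return True

tt = TurnTaker(max_consecutive=2)
tt.allow("debate", "warrior")   # True: first post
tt.allow("debate", "warrior")   # True: second in a row
tt.allow("debate", "warrior")   # False: must wait for someone else
```

The design choice here is that the limit resets as soon as another participant speaks, so it throttles domination of a conversation rather than overall participation.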
Renee Diresta
I liked Jay's talk at South By. I don't know how many people noticed this, but Zuckerberg had been going around, in his "how often do you think about the Roman Empire" phase, wearing shirts comparing himself to Caesar: "Aut Zuck aut nihil", a play on "aut Caesar aut nihil", either Caesar or nothing. And then she had a shirt on that said "a world without Caesars". I would butcher the Latin pronunciation, so I'm not even going to try, but it was a nice way of wearing a shirt that articulated the ideological distinction between a platform run in accordance with the vision of one person, under very top-down, controlled leadership, and a world without Caesars, which I think is a very appealing way to phrase the potential of decentralization. Since I know we only have a couple minutes, I'm curious: platforms, lawmakers, users, everybody has very different visions for this future of speech online. What do you all see as the most realistic outcome? Where do we see things going over the next five years?
Jacob Mchangama
I think we're at a moment where lawmakers in a lot of countries, including democracies, would be skeptical, especially when it comes to decentralization, if you were to say: let's give users more control and then minimize centralized content policies and moderation. Look at what's going on in Brazil, for instance. Look at India. Look at the European Union: with the way the Trump administration is acting, there's even more skepticism about American platforms in Europe, and even more of a wish to say, we need to have control over what's going on on these platforms because they undermine our democracy. Unfortunately, I think some of those reactions are going to frustrate some of the ideas we're discussing today. One of the things I also liked about the article is that some of these ideas have been implemented in Taiwan. I spend a lot of time saying that in free speech debates we don't always have to think in terms of the dichotomy between Europe and the US. There are actually really interesting places around the world: here is a country facing an existential threat, including state-sponsored disinformation on a scale no other democracy faces, that actually tries to navigate this challenge without resorting to some of the solutions that well-established democracies, unfortunately, are flirting with. So I try to point to that as a way forward. But it seems to me that a lot of lawmakers don't know that, and they still think in very binary terms. So in the short term I'm probably pessimistic. And this goes back to my initial remarks: especially when you work in the free speech space, you can't just talk about John Stuart Mill and principles.

You have to show something concrete, something that works, where people say, okay, I actually see this is something that works, this takes care of some of my concerns, and now I'm no longer so inclined to say, well, I need a platform to implement my policies and do away with the people I don't like, or I want the government to adopt these rules to protect me from whatever evil forces I see out there.
Glen Weyl
I don't know if you've been following the financial markets or the newspapers, but it seems like it's a general time.
Ravi Iyer
Of uncertainty and no one knows and.
Glen Weyl
And that can be a problem. But I actually think it's kind of great. Predictions are disempowering; uncertainty is empowerment. It's a moment for us to steer things, to make that change together, and to focus on it. So I don't know. There are a lot of bad outcomes I'd be happy to talk about, and a lot of great ones, and I think it's our chance to seize the reins.
Ravi Iyer
I am actually more optimistic. We all walk around with phones that we have complicated relationships with. If you ask people, most people, including kids, want to use their phones less, and we have all these apps trying to get us to use them more. There's just too much energy in the system, too many people unsatisfied with the status quo, for nothing to change. I do think the moderation paradigm has somewhat held us back here; you get into never-ending wars about what people should and should not be allowed to say online. But the design paradigm is taking hold. There are more and more people thinking about how these platforms are designed and how we give people choice. The Digital Choice Act just passed in Utah, to actually force platforms to allow that choice across users. So there's just too much energy in the system, and I talk to policymakers every day who are trying to make that change. Maybe it's not going to happen immediately, but people are not happy.
Glen Weyl
And Ravi, you deserve congratulations for the wonderful work you did on that. So thank you.
Renee Diresta
Well, I know we are at time, and I just want to thank all of you for joining me today to chat about this. I feel like we could do an entire other hour on what's happening in Europe and Brazil, so we will have to do that at some point. But thanks so much for talking about the papers and your work on the regulatory front, the academic and design front, and the free speech front. Really enjoyed the chat. Looking forward to having you all back in the future.
Glen Weyl
Thank you all.
Ravi Iyer
Thanks for having me.
Renee Diresta
The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org. This podcast is edited by Jen Patja, and our audio engineer for this episode was Kara Shillin of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.
The Lawfare Podcast: "Lawfare Daily: A World Without Caesars"
Release Date: March 14, 2025
Introduction
In the episode titled "A World Without Caesars," The Lawfare Podcast delves into the intricate dynamics of social media design, moderation, and the quest for a more pro-social online environment. Hosted by Renee Diresta, Contributing Editor at Lawfare and Associate Research Professor at Georgetown's McCourt School of Public Policy, the discussion features three guests: Jacob Mchangama, Glen Weyl, and Ravi Iyer.
1. Design vs. Moderation in Social Media
Renee Diresta opens the conversation by distinguishing between moderation as a policing mechanism and design as a proactive strategy to cultivate positive user behaviors. The panel emphasizes that the architecture of social media platforms significantly influences user interactions, content amplification, and overall community health.
Notable Quote:
Ravi Iyer (02:00): "The results of those kinds of experiments that reduce the incentive to comment back and forth or to reshare things actually improves the ecosystem."
2. Pro Social Media: Concept and Goals
Glen Weyl introduces the concept of "pro-social media," highlighting the discrepancy between social media's intended purpose of strengthening social connections and its often counterproductive outcomes. The aim is to design platforms that are sustainable and regenerative, reinforcing the social fabric rather than undermining it.
Notable Quote:
Glen Weyl (04:08): "Social media could in theory either be like sustainable, you know, agriculture that reinforces, strengthens the soil at the same time as it harvests from it, or it could be like, you know, clear-cutting agriculture."
Jacob Mchangama adds a free speech perspective, advocating for empowering users through design rather than relying solely on centralized moderation or government intervention. This approach seeks to mitigate harms through constructive means, aligning with principles of egalitarian free speech.
Notable Quote:
Jacob Mchangama (05:05): "The pro social media approach in many ways is a good way for free speech activists to frame a much more positive vision for social media."
3. Content Provenance and Transparency
The discussion shifts to the importance of knowing where content originates. Glen Weyl underscores the dual aspects of provenance: the source of creation and the community's role in amplifying it. Transparency in these areas can foster a better understanding of content dynamics and user engagement.
Notable Quote:
Glen Weyl (10:37): "There's the people who created it and their signatures, but then there's the people who liked it and reposted it and so forth."
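The two layers of provenance distinguished in this quote, who created a piece of content and who amplified it, can be made concrete in code. The following is only an illustrative sketch under stated assumptions: the keys are hypothetical, and HMAC stands in for real public-key signatures (a deployed system would use asymmetric signatures so that verifiers never need the secret keys).

```python
# Toy provenance model: a creator signature plus an append-only repost log,
# where each amplifier signs the content together with the chain so far.
# HMAC over shared secrets is a stand-in for real digital signatures.
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

content = b"original post"
creation_sig = sign(b"alice-secret", content)  # creation provenance

# Amplification provenance: each booster chains onto the previous signature,
# so the order of amplification is verifiable too.
amplification_log = []
for booster, key in [("bob", b"bob-secret"), ("carol", b"carol-secret")]:
    prev = amplification_log[-1][1] if amplification_log else creation_sig
    amplification_log.append((booster, sign(key, content + prev.encode())))

def verify_chain(content, creation_sig, log, keys):
    """Replay the chain with the claimed keys; any tampering breaks it."""
    prev = creation_sig
    for booster, sig in log:
        if sign(keys[booster], content + prev.encode()) != sig:
            return False
        prev = sig
    return True
```

Because each entry commits to the previous one, editing the content or reordering the amplifiers invalidates every later link, which is what makes the amplification history auditable rather than merely asserted.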
4. User Behavior and Platform Design
Ravi Iyer shares insights from his tenure at Meta, revealing how algorithms inadvertently promote divisive content by incentivizing engagement over quality. He highlights experiments that reduced incentives for polarized interactions, thereby improving the overall ecosystem.
Notable Quote:
Ravi Iyer (12:25): "The results of those kinds of experiments... actually improves the ecosystem."
The panel discusses "break glass measures," such as de-emphasizing political content post-January 6th, which aimed to prioritize more human-centric and less divisive content. However, these measures sparked debates about platform authority and transparency.
Notable Quote:
Ravi Iyer (15:39): "People should be allowed to talk about politics, but they shouldn't be incentivized to talk about it as entertainment."
5. Decentralization and User Control
Renee Diresta explores decentralized platforms like Bluesky, emphasizing user agency through features like starter packs, curated feeds, and granular content controls. The panel acknowledges the potential of decentralization to empower users but also notes challenges in fostering meaningful dialogue across diverse user bases.
Notable Quote:
Renee Diresta (31:04): "It's really a cool way to immediately curate your feed... they make it very easy to toggle between feeds."
Jacob Mchangama expresses skepticism about users' motivations, suggesting that many migrate to decentralized platforms not out of a desire for enhanced agency but rather to escape perceived poor moderation on mainstream platforms.
Notable Quote:
Jacob Mchangama (34:52): "If you fled X and now Facebook, there's a good chance you did so because you thought that maybe content moderation was getting too lax."
6. Regulatory Considerations and Future Outlook
The conversation transitions to regulatory frameworks, with Ravi Iyer drawing parallels between social media regulation and historical regulations in industries like automotive and food. He advocates for minimum design standards that prioritize user safety and mitigate harmful externalities without infringing on free speech.
Notable Quote:
Ravi Iyer (26:07): "You can't regulate what people can or can't say online, but you can regulate whether a product is safe."
Jacob Mchangama highlights global perspectives, noting that Europe and countries facing existential threats from state-sponsored disinformation are adopting stringent measures against American platforms. He points to Taiwan's innovative approaches as a potential way forward.
Notable Quote:
Jacob Mchangama (51:57): "We're at a moment where lawmakers in a lot of countries... are skeptical about decentralization."
Glen Weyl and Ravi Iyer offer contrasting yet complementary visions for the future. Weyl sees uncertainty as an opportunity for collective steering toward better designs, while Iyer is optimistic about the growing momentum behind user-centric design paradigms.
Notable Quote:
Glen Weyl (54:52): "Predictions are disempowering. Uncertainty is empowerment."
7. Balancing Open Dialogue and Structured Design
The panel discusses the need for spaces that accommodate both uninhibited discussion and structured, civil discourse. Jacob Mchangama emphasizes allowing diverse expression while preserving areas for substantive dialogue, acknowledging the emotional drivers behind user interactions.
Notable Quote:
Jacob Mchangama (43:55): "Human beings... are not always motivated by rationality."
Glen Weyl argues for platforms that offer both small, focused conversation spaces and larger, bridging interactions informed by user data, ensuring context-aware communication.
Notable Quote:
Glen Weyl (46:34): "Having the awareness of the small conversations, we know what the larger conversation... needs to navigate and bridge."
Conclusion: Towards a Pro-Social Online Ecosystem
As the episode wraps up, the panel reflects on the transformative potential of thoughtful design in shaping the future of social media. They advocate balancing decentralization, user empowerment, and measured regulation to foster an online environment conducive to meaningful and civil discourse.
Closing Remarks
Renée DiResta thanks the guests for their insights, hinting at future discussions on global regulatory landscapes. The episode underscores the urgency of reimagining social media platforms to better align with democratic principles and user well-being, envisioning a world where digital interactions strengthen rather than fragment societal bonds.
For more discussions on national security, law, and policy, visit www.lawfareblog.com. Become a Lawfare material supporter at patreon.com/lawfare.