Renee DiResta (37:33)
It's a pretty big reversion to a maybe pre-2015 timeframe on some of this stuff. So I think it's important to just go over what happened, because it's being described as "content moderation has changed." There are a few things, right? First, there's an end to the fact checking program. The fact checking program was started in 2016, largely in response to people saying, hey, tons of what we very quaintly, for about five minutes, called fake news, meaning things that were actually false, were going viral. That kind of stuff was happening a lot in 2015 and 2016, particularly because things like Macedonian content farms were gaming Meta's algorithms. And so in response to concern, largely from the left at that point, you did start to see Meta make statements about how one of its values was making sure that its users were informed.

You have to keep in mind also that all of these policies are now being read through an American culture war lens, but they apply globally. And so the fact checking thing is a very interesting carve-out, because while it was implemented and rolled out globally, it's being ended in the United States and continued elsewhere. Europe and other countries have actually regulated that Meta perform certain types of interventions, and while fact checking specifically isn't mandated, there is a set of policy rubrics through which Europe wants to make sure that what it considers its interest in having informed citizens continues to be met. So you have this kind of complicated thing around fact checking.

In lieu of fact checking in the United States, we're going to get what are called Community Notes. I've been a big supporter of Community Notes for a long time. It's where any user can go and throw up a comment saying, hey, this is wrong, this is out of context. It's supposed to be supplemental to fact checking in a way: fact checkers have to go and do all this work, and they're bandwidth constrained, whereas users and the wisdom of the crowd can correct more things, or people who are very deeply informed on an issue can add perspective. But in order for a Community Note to display, somebody has to write it, a bunch of people have to go vote on it, and then a bridging-based algorithm has to determine that a sufficient number of the raters come from what it considers to be opposite sides of the political aisle. And that's because you don't want Community Notes that appear because of brigading, where everybody on the left says, okay guys, go upvote my note. You have to have that broad appeal.

One of the reasons why Community Notes is important is that fact checking works a lot of the time, as far as helping people change their minds and become informed, but only if they trust the fact checker. Community Notes is a means of adding more legitimacy, because on the right there's been a very deliberate effort for a long time now to undermine fact checkers and delegitimize fact checking. And so one of the reasons to kill fact checking and roll out Community Notes, as opposed to having both of these things together, is capitulation to the audiences on the right who fundamentally distrust fact checking and see it as some sort of vast, tyrannical mass media cabal trying to silence conservative voices. So that's the tension between fact checking and Community Notes.
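To make the bridging piece concrete: X has open-sourced its Community Notes ranking approach, whose core is a matrix factorization in which a "viewpoint" factor soaks up partisan agreement, so a note only earns a high intercept (the score that decides display) if raters across viewpoints rate it helpful. The sketch below is a toy loosely modeled on that published idea, not the production system; the ratings matrix, hyperparameters, and display threshold are all invented for illustration.

```python
import numpy as np

# Toy bridging-based note scoring. Ratings are modeled as
#   rating[u, n] ~ mu + user_bias[u] + note_bias[n] + user_fac[u] * note_fac[n]
# The 1-D factor absorbs "viewpoint" agreement, so a note only gets a high
# intercept (note_bias) if raters on both sides of the factor like it.
rng = np.random.default_rng(0)

# 1 = "helpful", 0 = "not helpful" (real data would be mostly missing)
ratings = np.array([
    # notes: partisan-left, partisan-right, bridging
    [1.0, 0.0, 1.0],   # left-leaning rater
    [1.0, 0.0, 1.0],   # left-leaning rater
    [0.0, 1.0, 1.0],   # right-leaning rater
    [0.0, 1.0, 1.0],   # right-leaning rater
])
n_users, n_notes = ratings.shape
mask = ~np.isnan(ratings)  # handles unrated entries if present

mu = 0.0
user_bias = np.zeros(n_users)
note_bias = np.zeros(n_notes)
user_fac = rng.normal(scale=0.1, size=n_users)
note_fac = rng.normal(scale=0.1, size=n_notes)

lr, reg = 0.05, 0.03
for _ in range(2000):  # plain gradient descent on regularized squared error
    pred = mu + user_bias[:, None] + note_bias[None, :] + np.outer(user_fac, note_fac)
    err = np.where(mask, ratings - pred, 0.0)
    mu += lr * err.sum() / mask.sum()
    user_bias += lr * (err.sum(axis=1) - reg * user_bias)
    note_bias += lr * (err.sum(axis=0) - reg * note_bias)
    user_fac += lr * (err @ note_fac - reg * user_fac)
    note_fac += lr * (err.T @ user_fac - reg * note_fac)

# A note displays only if its intercept clears a positive bar (value illustrative).
for i, score in enumerate(note_bias):
    print(f"note {i}: intercept {score:+.2f} -> {'display' if score > 0.2 else 'hold'}")
```

On this toy data, the two partisan notes end up with low intercepts because the factor term explains their support, while the note both sides rated helpful is the only one that clears the bar, which is exactly the anti-brigading property described above.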
That's just one small piece of what was in that announcement. There was also stuff about reducing auto-moderation. The "auto mods," to use the colloquial term, are the non-human systems that read Facebook comments and posts, decide whether they violate a policy, and then take action in a no-humans-in-the-loop process: the algorithm decides that something is good or bad and acts accordingly. This leads to some really bad over-enforcement; they're not very precise. That's Meta's problem, by the way. They could make them more precise. But what they're doing instead, and this is part of Zuckerberg's litany of changes, is saying that the auto mods are going to be deployed only in very specific high-tension or high-stakes areas. He mentions drugs and terrorism and child safety (there's a rough sketch of that kind of category-gated rule after this segment). Again, I don't think that's necessarily a bad call. These things just aren't very good. People don't like them. They do create a sense that the platform is against you and somebody is censoring you, and everybody across the political spectrum has had some surreal experience with an auto mod misconstruing something.

So it's not a terrible change. But on the flip side, much like Community Notes, it puts the onus on the users now to report, and that's a huge shift. I imagine we've all been on social media for about 15 years now. I don't know how many people feel like, oh yes, I have a great experience when I report something; my report is actioned, it's taken seriously, I feel confident that the platform cares about me. No, that doesn't happen. Most people actually won't report, because they don't believe the platform will do anything. So in aggregate, I think it's going to change the tone of being in some of the communities on the platform.

And that takes me to the final major change on the moderation front, which was Meta changing some of its hate speech policy. There, Zuckerberg specifically calls out immigration and LGBT and trans issues; he mentions gender specifically. And what you've seen is some changes where I was actually very, very surprised. You can go and read the change logs; you can see what the policy was two weeks ago versus now. You can now do things like call LGBT people, including teenagers and kids who are on Instagram and platforms like it, mentally ill. It really just opens the door. You can call immigrants dirty, and all these other terms. These are areas where Meta used to say, if platform policies reflect platform values, that dehumanization is not okay, that we don't want you bullying people. That's not even a particularly courageous stance, right? This is just how we behave as humans amongst ourselves; we don't say these things to people when they're in front of us. But now, all of a sudden, we're opening the door to change that dynamic. There used to be a line that platforms tried to strike where you could argue political commentary on culture war issues, like talking in the abstract about men playing women's sports, but you weren't supposed to go after individual people with dehumanizing language. And we're seeing that rolled back.
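Going back to the auto-moderation change: as a rough sketch, it reads as a gating rule where classifiers still score everything, but only a short list of high-severity categories gets auto-actioned with no human in the loop, and everything else now waits on user reports. The category names, threshold, and scoring interface below are illustrative assumptions, not Meta's actual pipeline.

```python
from dataclasses import dataclass

# Per the announcement, only high-severity areas stay auto-enforced.
AUTO_ENFORCED = {"drugs", "terrorism", "child_safety"}
AUTO_ACTION_THRESHOLD = 0.90  # high bar to limit false positives (illustrative)

@dataclass
class Verdict:
    action: str   # "remove", "queue_for_review", or "leave_up"
    reason: str

def moderate(classifier_scores: dict[str, float], user_reported: bool) -> Verdict:
    """Decide what happens to a post given per-category model scores."""
    for category, score in classifier_scores.items():
        if category in AUTO_ENFORCED and score >= AUTO_ACTION_THRESHOLD:
            # No humans in the loop: the model's verdict is acted on directly.
            return Verdict("remove", f"auto-enforced: {category} ({score:.2f})")
    if user_reported:
        # Lower-severity policies become enforcement-on-report only.
        return Verdict("queue_for_review", "user report pending human review")
    return Verdict("leave_up", "no auto-enforced category triggered")

# A post that trips a lower-severity classifier but is never reported stays up.
print(moderate({"hate_speech": 0.97, "terrorism": 0.12}, user_reported=False))
print(moderate({"terrorism": 0.95}, user_reported=False))
```

The tradeoff described above is visible in the first example call: under this kind of rule, a high-confidence hit in a non-gated category does nothing unless a user reports it.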
And that rollback of the hate speech policy, I think, is an explicitly political choice to try to be more in line with the MAGA aesthetic. And candidly, I think the real reason it's being done is that Meta knows that that kind of language is going to come from political elites, and it doesn't want to be put in the position of trying to moderate the political elites who are going to say it.