
Marissa Wong
I'm Marissa Wong, an intern at Lawfare, with an episode from the Lawfare archive, for January 18, 2026. On December 23, Secretary of State Marco Rubio barred five Europeans associated with content moderation efforts in the European Union from entering the United States and ordered that they be deported if found in the U.S. Rubio justified the ban by accusing the Europeans of leading organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose. This followed a series of moves by the Trump administration against content moderation laws overseas.
For today's archive, I chose an episode from September 9, 2021, in which Evelyn Douek and Quinta Jurecic spoke with David Thiel, the big data architect and chief technical officer of the Stanford Internet Observatory, about content moderation, or the lack thereof, on two alt social media sites, Parler and Getter.
Quinta Jurecic
I'm Quinta Jurecic, and this is the Lawfare Podcast, September 9, 2021. Let's say you're a freedom-loving American fed up with Big Tech's efforts to censor your posts. Where can you take your business? One option is Parler, the social media platform that became notorious for its use by the Capitol rioters. Another is Getter, a new site started by former Trump aide Jason Miller. Unfortunately, both platforms have problems. They don't work very well, they might leak your personal data, they're full of spam, and they seem less than concerned about hosting some of the Internet's worst illegal content. Can it be that some content moderation is necessary after all? Today we're bringing you another episode of our Arbiters of Truth series on the online information ecosystem. Evelyn Douek and I spoke with David Thiel, the big data architect and chief technical officer of the Stanford Internet Observatory. With his colleagues at Stanford, David has put together reports on the inner workings of both Parler and Getter. He walked us through how these websites work (and don't), the strange contours of what both platforms are and aren't willing to moderate, and what we should expect from the odd world of alt tech. It's the Lawfare Podcast, September 9th: content moderation comes for Parler and Getter. David, thank you so much for coming on the show. We asked you on to talk about two recent reports you put out with the Stanford Internet Observatory on Getter and Parler, which are both sort of small new platforms catering to, I think it's fair to say, a largely right-wing clientele. In your Parler report from this January, you describe the site as an "alt platform." In your Getter report from this August, you write that Getter is a new "alt network." What do you mean by that adjective, "alt"?
David Thiel
So it does denote that it is largely right wing. And I wouldn't say that it necessarily means alt-right, but it's mostly an alternative for groups that have been deplatformed to some degree from major platforms like Facebook or Twitter. So for a lot of these people, "alt" just means it's the only alternative network that will accept, you know, people with those types of online behaviors.
Evelyn Douek
That's a nice way of putting it. You make it sound like a lovely refuge for the otherwise cast-out members of society. So tell us a little bit more about them, then. Tell us about Parler. It got a lot of attention around the January 6th insurrection. Many rioters were posting selfies of themselves in the Capitol building on Parler and so on, and scraped posts from Parler have since popped up in evidence introduced against the rioters in criminal cases. So what's been happening on Parler since its big moment in the sun around January 6th?
David Thiel
I think since the 6th, it's been in a little bit of a decline. It's not really a place where radical planning activity happens so much. It was heavily used on January 6th to kind of display what was happening during the insurrection, but a lot of the actual planning goes on on separate platforms like Telegram, things like that. Right now it's mostly pretty bog-standard right-wing rhetoric from both the larger outlets, like the Epoch Times and OAN, as well as a few other fringe networks. There are some personalities that are rather large there. Dinesh D'Souza and so forth get a lot of engagement, but right now it's mostly news syndication and commenting.
Quinta Jurecic
And Getter is newer. Can you tell us a little bit about its origin story? And also, do you have any sense of what its name means? I've been very, very puzzled by this.
David Thiel
It's G-E-T-T-R. In terms of the origin, it is a little bit confusing, because it originally started out as a kind of test project by Miles Guo, as Getome or something like that, and it went through an iteration of Getter with an E and Getter with an R, and they still actually are kind of confused about which it is in their documentation and product builds. As near as we can tell, it was developed by a Miles Guo-operated development team, and it still appears to be. And unlike Parler, which launched a couple of years ago and grew slowly over an extended period of time, with these kind of discrete migrations of different groups of people, different nationalities, Getter kind of launched with a splash all at once. It got a decent amount of signups in the first week or so and has kind of tailed off since then; the growth is just getting slower and slower. In terms of size, it's maybe a tenth of the size of Parler, but of course it's not been around nearly as long. In terms of the content on there, there's a lot more of the Guo-Bannon ecosystem: you know, Bannon's War Room, Miles Guo's GTV, those kinds of things. So there's a lot more of the Guo sphere, as it were, than there is on Parler.
Quinta Jurecic
And can you tell us just a little bit more about the Guo sphere, as you put it? I think Miles Guo is a name that our listeners have maybe heard pop up a couple times, at least on this podcast, but it might be helpful to spell out just who he is and sort of what his connection to this American right wing ecosystem is.
David Thiel
Sure. So Miles Guo is this Chinese billionaire, currently a fugitive from the Chinese government, mostly based in the US, who has partnered with Bannon on a couple of efforts, mostly around promoting the idea that China is responsible for Covid, that the PRC is a government that needs to be collapsed in some way, along with a lot of pro-Himalaya and Tibet content. So it's a rather odd network that's kind of adjacent to some of the stuff you might see on the Epoch Times, but a bit more fringe and conspiratorial. It does have a bit more of the flavor of Bannon's War Room, things like that.
Evelyn Douek
So the big marketing proposition of these platforms is that they are free speech forums. You know, they won't moderate your content or interfere with your sacred First Amendment rights like the Democrat stooges at Facebook, Twitter, and YouTube, right? And so before we get stuck in, I was wondering if you could just give a high-level description of how different the community standards of these platforms look compared to, I guess, what we call the mainstream platforms. Now, this is obviously a different question from how they enforce those standards, which I think we'll come back to, because on every platform there's a large delta between the standards on paper and how they actually are enforced in practice. But, you know, the community standards documents are in a sense marketing documents that sell a platform's self-conception to the world. And in some ways I was actually surprised at how much more discretion these platforms left for themselves to moderate, even if their community standards were shorter, or perhaps because their community standards were shorter. So for example, Getter's content policy is very short, but includes the words: "GETTR holds freedom of speech as its core value and does not wish to censor your opinions. Nonetheless, you may not post any harmful, vulgar, profane, hateful or otherwise objectionable material." And "otherwise objectionable" is particularly ironic, because these are the very words in Section 230 that conservatives generally take issue with and want to remove, because they allow, well, we don't need to get into the legal weeds on this one; we do that enough on other episodes. But, you know, the way they see it is that's what allows Facebook and Twitter to moderate harmful content that should otherwise be protected by the First Amendment. And Parler's amorphous guidelines include language like "these guidelines are subject to modification unilaterally by Parler at any time," showing how prepared they are to really commit to this whole free speech thing, unless maybe at some point it changes and they don't like it anymore. So I guess, what's your takeaway from how these platforms' community guidelines and marketing are different from the mainstream platforms'?
David Thiel
I mean, I think that they've made a bit of a carve-out, mostly targeting what other platforms would consider to be disinformation or misinformation, as well as things that would qualify as hate speech under various definitions. So it's not so much that they're really targeting a fully free speech model. It's a combination of: they don't want anybody to determine the truth or falsity of anything, and they don't really care about racist or bigoted content. It also is kind of convenient, because, you know, in practice they're just not very good at moderation, which kind of works out in their favor to some degree. But yeah, I think it's largely a disinfo carve-out.
Quinta Jurecic
And so what are the demographics of the people who use these platforms? You have a really interesting analysis of not only the languages that people use on them and where people seem to be coming from, but also the emojis that they use in display names on both Getter and Parler.
David Thiel
Yeah. So with Parler, it's interesting, because over that two-year timeframe we could see these waves of migration as people were either deplatformed from Facebook and Twitter or encouraged by some leader of their community to migrate. So you can see the three main demographics on Parler come over at various times: the American right wing; then Brazilians, at the encouragement of the various Bolsonaros, partly due to some deplatforming, though they also just seem to latch on to any new platform that might be less restrictive on them; and then a separate migration of largely Saudi, Arabic-speaking users as well, who had been caught up in various rounds of deplatforming, mostly from Twitter. In terms of Getter, we see some of those same demographics. It does tend to be US, Brazilian, and Saudi, but they kind of came over all at once as the platform launched; you can't really tease apart the separate waves of migration because of the compressed timeframe. As for the way we do analysis of where people are coming from, I like to use emoji, just as a, you know, kind of fun and interesting way of quantifying the personality of some of these users, as well as the way that they like to self-identify. So you'll see a number of things where at first it's not really clear why they're popping up. There are some obvious things, like the Brazilian flag, Saudi or Israeli flags, blue hearts for people expressing, you know, Blue Lives Matter support. But also some odd things. Like on Getter, the most popular thing people put in their nickname is a check mark emoji, because they're attempting to appear to be verified when they've not actually been verified by the platform. So there are some interesting strategies that go on there.
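The display-name emoji tally David describes can be sketched roughly like this. This is a toy illustration, not the Observatory's actual pipeline; the profile data, field names, and emoji list are all made up for the example:

```python
from collections import Counter

# Toy sketch of tallying emoji in user display names, in the spirit of the
# analysis described above. The profiles and field names are hypothetical.
profiles = [
    {"nickname": "Patriot 🇧🇷 ✔️"},
    {"nickname": "TruthSeeker ✔️ 💙"},
    {"nickname": "NewsFan 🇺🇸"},
]

# Emoji mentioned in the discussion: flags, blue hearts, check marks.
EMOJI_OF_INTEREST = ["🇧🇷", "🇸🇦", "🇮🇱", "🇺🇸", "💙", "✔️"]

counts = Counter()
for p in profiles:
    for e in EMOJI_OF_INTEREST:
        counts[e] += p["nickname"].count(e)

# A check mark showing up disproportionately often would suggest users
# imitating a "verified" badge, as described above.
print(counts["✔️"])
```

A real analysis would use a proper emoji segmentation library rather than a hand-picked list, since flags and check marks are multi-codepoint sequences.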
Quinta Jurecic
I wanted to dig in a little more on the Brazil aspect, because I do think this is really interesting. As you mentioned, there's a significant Brazilian user base on Parler. There are about half as many Brazilians as Americans on Getter, by your calculation, which is itself far above the third-place country, which is Canada. And listeners are going to be listening to this on September 9th. Earlier this week, there was a really strange incident where Jason Miller, the former Trump advisor who is running Getter, said he was detained and questioned for three hours by Brazilian authorities in an airport in Brasilia after traveling to the country, which is sort of in the midst of a bit of a constitutional crisis right now. So can you talk a little bit more about what the Brazil connection is here?
David Thiel
I mean, politically, there are currently a lot of interesting parallels between Brazil and the US, and Jair Bolsonaro has been trying to exploit that somewhat, both in modeling his campaigns and behaviors off of Donald Trump and in kind of appropriating the model of calling into dispute the integrity of the elections in the country, as well as what happened on the 7th of this week, which was essentially a little bit of a January 6 test run to see what would happen if a bunch of his supporters came out to the capital and, you know, kind of faced off against some of the police forces and government buildings there. Now, Bolsonaro's been less heavily censored by some of the mainstream platforms; he has had a number of YouTube videos and so forth taken down, mostly because of COVID disinformation. That's kind of what spurred the migrations. Bolsonaro and his sons advertise their Parler and Getter accounts in their profiles and have pushed them pretty heavily. It's a large enough demographic that Getter turned itself green and yellow yesterday to celebrate Brazil's Independence Day. It's a big part of the user base.
Evelyn Douek
Wow, that's quite something. I didn't realize it was quite that significant. So, really fascinating. Do you have a theory about why these platforms haven't really taken off? I mean, you mentioned some of the numbers earlier, and it might be good for you to reiterate them, putting them in comparison to, again, what we're going to call the mainstream platforms: Twitter, which has around 321 million users, and Facebook, which has around 2.7 billion. So maybe you can give us the comparison for Parler and Getter, and whether you have any theory about why it might be that they're struggling to gain those big numbers. And can you maybe talk about the trajectory of those? Obviously, Parler had pretty rapid growth last year. How's Getter going? Does it look like they're on an upward slant? Is it particularly rapid still? What's going on there?
David Thiel
Well, it's kind of interesting, because as a platform, Getter has a few advantages. It's, you know, less technologically hacky, and it has some pretty strong Trump advisors and associates behind it. But in terms of growth, after the first three-quarters of a million or so, it's trailed off quite a bit. At the time that we did our report, which was essentially at the beginning of August, it had just reached 1.5 million users, and now it's at 1.7 and some change. So the growth curve is tapering off quite a bit, and it honestly tapered off a lot after that first week of growth. In terms of Parler, I think that last we checked, they had something like 10 or 12 million users, and they've probably grown a bit since then, but they were also seeing growth taper off even at the time that we were initially looking at it. So I don't think that they're going to become particularly large, unless Trump decides to actually use one as a primary platform. And I think that that's probably for a couple of reasons. The main one is there's not that much to do on these platforms. They don't have the kind of group dynamics that you would get in Facebook groups or on Reddit. Really, all that happens on them is somebody posts, usually a news story, a bunch of people chime in and say "yeah," and everybody mostly agrees with each other. On Getter, somebody posts some obscene cartoons to troll the rest of the users, and that pretty much just happens over and over. There is not really anyone to argue with, per se. So whereas Facebook and Twitter get this more dynamic engagement as people actually disagree with each other, you know, forming these communities where people are largely of a similar mind about things isn't really the most interesting or engaging way to spend your time.
Evelyn Douek
That's such a fascinating observation, actually, because, you know, you'll go on Twitter on any given day and there will be people going, "oh, this hell site." You tweet anything, and someone will dunk on it and other people will tell you why you're wrong, and everyone's complaining about it. But what you're saying is that, secretly, that's actually part of its value proposition, because it makes it more fun and interesting to be on, rather than everyone just going, "yeah, totally, you stick it to him." So that makes a lot of sense, but I'd never really thought about it quite that way before, because the other thing that you hear a lot is that users divide themselves into two ecosystems on the mainstream platforms as well. But obviously that's not as true as when the users go to an entirely separate platform altogether, which is, I think, something we'll come back to, about the way the future of the Internet ecosystem as a whole looks. But a question I had was about the extent to which Getter and Parler are competing with each other, or are complementary products, or how they overlap. So if Parler already existed, what's the need for Getter? What's the value proposition or the corner of the market that it's trying to get in on? And is there any way of telling the extent of user overlap? And if not for the general masses, then for the high-profile flagship accounts, if you like: do they overlap a lot?
David Thiel
So in terms of the differences between the platforms, they are fairly similar. I would say the dynamics on Parler are a little bit different, because they try to do this Reddit-style up-and-down-vote thing; I don't know how much impact that really ends up having. Part of the way Getter was launched and originally conceived was to be this kind of overlay for Twitter that would allow more content. So when people would sign up, they would have the option to pull in their previous Twitter timeline and all of the engagement data for it, and it would pull in their Twitter followers and add them to any followers that were on Getter. So you were actually getting people with follower counts higher than the actual number of users that existed on Getter. It was meant to be tied to Twitter in a more intimate way. Now, Twitter eventually cut off their API access, and so they can't really do that anymore, so now it's more of just a Twitter clone with a, you know, kind of similar news ecosystem to Parler. In terms of motivation, I think it was just seen as having some value to Miles Guo and Steve Bannon to have a platform that they could use for their own personal ends and have much tighter control over. Having Jason Miller on board was probably seen as giving a higher probability of being able to attract Trump eventually. In terms of them competing, I would say somewhat. But the other thing is that a lot of these platforms look like they're getting a lot of engagement from these high-profile conservative figures, but in reality a lot of these are really just integrations. So if this person posts a news story or something on their Twitter or Facebook, it just gets auto-pushed to these other platforms, where some people will engage with it. There's really not much of a cost for them doing it. And they do overlap, you know, on the majority of these platforms, with fairly few exceptions.
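The follower-count arithmetic David describes, where a displayed count combined imported Twitter followers with native followers and could exceed the platform's entire user base, can be sketched with toy numbers. Everything here (account names, figures) is hypothetical:

```python
# Toy illustration of the follower-count merging described above. Because
# imported Twitter followers were added to native Getter followers, a
# displayed count could exceed the total number of users on the platform.
# All names and numbers are made up.

TOTAL_PLATFORM_USERS = 1_500_000

accounts = {
    "pundit_a": {"native_followers": 120_000, "imported_twitter_followers": 2_300_000},
    "pundit_b": {"native_followers": 45_000, "imported_twitter_followers": 800_000},
}

# Displayed count = native + imported followers.
displayed = {
    name: a["native_followers"] + a["imported_twitter_followers"]
    for name, a in accounts.items()
}

# Accounts whose displayed followers exceed the entire user base.
inflated = [name for name, n in displayed.items() if n > TOTAL_PLATFORM_USERS]
print(displayed, inflated)
```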
Quinta Jurecic
I find that really fascinating, because of course the whole appeal of how these platforms market themselves is that they are an alternative to mainstream platforms like Twitter and Facebook that are, you know, censoring you and not letting you speak your truth. And yet, as you say, they're relying a fair amount on those platforms for growth and content, insofar as there's importing of tweets and posts from other websites. There's the element that you just mentioned, where Getter is maybe not particularly interesting because there's not a lot of conflict without being able to engage on Twitter. You note at one point in your Parler report that it seems like more Brazilians at one point were posting about Parler on Twitter than actually went and joined Parler after the fact. So I'm curious if we can dig into this a little more. What do you make of the extent to which these platforms, which market themselves as nominally independent and alternative and free, are actually held up by the existing mainstream platform ecosystem?
David Thiel
I mean, it is a fair amount. Getter was an example of trying to do that very directly and integrate with a specific platform. The way Parler set it up was that they just had various ways of allowing their high-profile users to syndicate content, using Twitter as more of a syndication platform itself. Now, Getter may have implemented something like that on the backend as well, but on Parler, for example, you could just have some pundit who has an RSS feed, and everything they post there will end up looking like it was just posted by them on Parler. So whether it's integrating with some type of content syndication system or relying directly on another platform, a lot of these things don't really have any direct interaction from the high-profile people posting there. They're really just kind of feeding off of these external sources. They get some engagement from it, and for those pundits, this is basically just free marketing.
Quinta Jurecic
So one of the other things that really struck me reading your reports is how comparatively easy it seemed to be for researchers to get access to data from alt platforms. We've talked a lot on this show about how incredibly difficult and frustrating it's been for researchers to get data from Facebook. But on the other hand, your team and other researchers seem to be able to get a pretty substantial amount of data, maybe more data than should have been available, and we'll talk about that in a bit, from the Parler and Getter APIs. I'm curious what you think of that sort of level of transparency on the part of alt platforms, even if it's unintentional transparency.
David Thiel
I think they just haven't had to deal with the same types of issues that larger platforms have, and they're also just not familiar with them. They haven't had to worry about people taking large amounts of data. They also don't have quite the same use case. So, for example, pulling data from Twitter is actually quite easy. They've got excellent APIs and they integrate well with the research community, but it's also largely all public content. Whereas a platform like Facebook will worry about people harvesting information about groups of users, largely using their real names with real metadata, and some of them being in private groups. So Facebook has a different anti-scraping use case than some of the other platforms would. Now, there were some cases where these platforms didn't want quite the level of transparency that they had, for example, around the various roles and privilege levels of users, things like that. I think it's just because their abuse threat model is different, and they haven't really got their hands around a lot of abuse cases: spam, explicit content, any other kind of nefarious use for scraping. So I think it's just not a big deal for them. And they're also just fairly early in their life cycle; they don't have a lot of people on board who have experience with running these things at really large scale.
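As a rough illustration of why over-exposed APIs matter here: an ordinary pagination loop sweeps up whatever fields the API chooses to return, including ones the operator may not have meant to expose, such as the user privilege levels David mentions. This toy sketch simulates such an API in memory; nothing here is Parler's or Getter's actual interface:

```python
# Toy simulation of paginated collection from a hypothetical API that
# over-exposes internal fields, as discussed above. Entirely self-contained;
# no real endpoints or data.

FAKE_DB = [
    {"user": f"user{i}", "post": "hello",
     "privilege_level": "admin" if i == 0 else "member"}
    for i in range(25)
]

def fake_api_page(cursor, page_size=10):
    """Return one page of results plus the next cursor (None when done)."""
    page = FAKE_DB[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(FAKE_DB) else None
    return page, next_cursor

# An ordinary pagination loop: nothing clever, yet it collects every field
# the API returns, including the privilege level.
records, cursor = [], 0
while cursor is not None:
    page, cursor = fake_api_page(cursor)
    records.extend(page)

admins = [r["user"] for r in records if r["privilege_level"] == "admin"]
print(len(records), admins)
```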
Evelyn Douek
I think that says something really interesting, though, about how it might be a bit more difficult to set up a social media platform than people commonly assume. Like, Trump, I think, has been announcing since the great deplatforming that some really awesome platform is, you know, coming, dropping any minute now, and we're still waiting. So it might not be quite so simple to set up a platform as people guess, because of exactly these things that you're talking about: these threat models that maybe platforms don't consider early in their life cycle, or maybe don't have to encounter early in their life cycle, but then once they start to grow, they inevitably come, and the platforms get caught unawares. So maybe that's a good segue to talking about content moderation specifically and how these platforms have or have not handled it. As we talked about at the top, both Parler and Getter initially advertised themselves as platforms that don't moderate content, you know, free speech havens. And then, unsurprisingly, they both rapidly ran into problems. I like to call this the life cycle of every user-generated-content platform. And the first place this happens, of course, is spam, which is a great case study for a platform's capacity and willingness to moderate content, because it pops up everywhere, and if you don't moderate spam, at some point your platform becomes basically unusable. And also, you know, even in free speech havens, for some reason there's this idea that spam is an objective category that everyone just agrees can be taken down; it's not a censorship problem. So in your report you write that on Getter, the ten most frequent URLs included in a user profile are all either spam or troll content. The most common URL is a broken link to YouTube associated with nearly 1,750 profiles that appear to have been created as part of an automated spam campaign. And on Parler, spam involving buying Trump coins, whatever those are, is very common. So tell us about how the platforms have handled spam so far, and what level of problem it is for a platform once it's getting started.
David Thiel
Yeah, the Trump coin thing is this common thread between all of these US right-wing platforms; it shows up in both spam and regular advertisements, and on Telegram channels. It is an eternal mystery what people are doing with all of these Trump coins. But yeah, in terms of spam, another thing of note on Getter is that when you look at some of the most common phrases or emoji, one of the things that jumps out is "trans lives matter," and that was basically just one person doing a spam campaign, a very obvious one that they did not have any systems to catch at all. I think this mostly happened at the beginning of the platform. You had a bunch of left-leaning people, or people who just wanted to troll, coming in and throwing tons of explicit comments into comment threads, and it really doesn't seem like they had any system in place either to detect that stuff in real time or to even go back and clean it up. Most of the stuff we have in the report where people's URLs are just things linking to Rick Astley or Pornhub or the like, those are all pretty much still there. So they can of course use that as an argument, saying, "aha, well, we're in favor of free speech, so we don't take things down." But really it's just a case of their not being super good at it. And spam is the thing that obviously everybody knows they're going to encounter, so the lack of preparation just means they didn't really quite understand what they were getting into.
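The pattern described in the report, one broken URL shared across nearly 1,750 profiles, is exactly the kind of thing a simple frequency analysis surfaces. A minimal sketch, with made-up URLs and an arbitrary threshold:

```python
from collections import Counter

# Toy sketch of flagging likely automated spam campaigns by counting how
# many profiles share the same profile URL, in the spirit of the analysis
# described above. Data and threshold are hypothetical.
profile_urls = (
    ["https://youtube.example/broken-link"] * 1750
    + ["https://epochtimes.example/"] * 40
    + ["https://personal.example/blog"]
)

SPAM_THRESHOLD = 100  # flag any URL shared by suspiciously many profiles

url_counts = Counter(profile_urls)
suspected_spam = [u for u, n in url_counts.items() if n >= SPAM_THRESHOLD]
print(suspected_spam)
```

A single URL appearing in hundreds of profiles is a strong signal of account creation by script rather than organic behavior, which is why even minimal tooling catches campaigns like this.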
Ben Wittes
Deleteme makes it easy, quick and safe to remove your personal data online. At a time when surveillance and data breaches are common enough to make everybody vulnerable, more and more online partisans and nefarious actors will find this data and use it to target their political rivals, civil servants, and even outspoken citizens, perhaps posting their opinions online. Take it from me, I have been the subject of such online activity. It's nothing pleasant. But with Delete Me, you can protect your personal privacy and the privacy of your business and the people around you from doxing attacks before sensitive information can be exploited. So as you know, I have an active online presence. I do some provocative stuff. I shine lights on Russian embassies. I do express my opinion multiple times a week in various formats, including the Situation column on this site. And you might not know it, you might think I'm just, you know, flamboyant and don't care, but actually my privacy is really important to me. I don't release whole classes of information about myself. There are things that I don't want to be part of the public conversation about me and my ideas. I've been a victim of identity theft, harassment. Never doxing yet, but you know, there's always tomorrow. And if you haven't, you probably know someone who has. Delete Me can help. So take control of your data and keep your private life private by signing up for Deleteme now at a special discount for Lawfare listeners get 20% off your Delete Me plan when you go to JoinDeleteMe.com LawFair20 and use the promo code Lawfare20 at checkout. The only way to get 20% off is to go to JoinDeleteMe.com lawfare20 and enter the code lawfare20 at checkout. That's JoinDeleteMe.com lawfare 20 code lawfare20 hey folks, Ben Wittes here. I did not start Lawfare in order to deal with payroll stuff. I just didn't. 
I started it because I had things to say about the law, about national security, because I wanted to raise up other people who had important things to say on these subjects. I don't want to think about payroll. You know, I know that's true of other people who were running small businesses. They got into it because they're into the thing they do, and it means hustling and figuring out a lot of stuff a lot of times on your own. But you don't want to spend your evenings guessing at tax forms or tracking down onboarding documents. Gusto handles all of that for you so you can spend your time on the parts of your business you actually love. Gusto is online payroll and benefits software built for small businesses. It's all in one remote, friendly and incredibly easy to use so you can pay, hire onboard and support your team from anywhere. Automatic payroll tax filing, simple direct deposits, health benefits, commuter benefits, workers, comp 401, you name it. Gusto makes it simple and has options for nearly every budget. And you can get direct access to certified HR professionals to help support you through any tough HR situation. It's quick and simple to switch to Gusto. Just transfer your existing data to get up and running fast. Plus, you don't pay a cent until you run your first payroll. So try Gusto today@gusto.com LawFair and get three months free when you run your first payroll. That's three months of free payroll@gusto.com Lawfare One more time Gusto.com.
Quinta Jurecic
So some of the examples that we've been using are pretty funny, but you note a pretty egregious example of a failure to moderate on Getter in your report that is serious and pretty upsetting. You write that the platform doesn't implement the industry standard for detecting child sexual abuse material, and you actually uncovered a number of apparent instances of such material on the platform, which you say you ended up reporting to the National Center for Missing and Exploited Children. Can you just walk us through what happened there, and what the significance is of Getter's failure to prevent this kind of material from proliferating on its platform?
David Thiel
Sure. So I'll start with the way that most platforms do this. There are a few systems you can use, but the industry standard for detecting child sexual abuse material is a system that Microsoft made called PhotoDNA. The way this works is that it has a database of what's called a perceptual hash of an image, which is basically just a numeric representation of an image that can withstand some manipulation. So if an image is cropped a little bit, or altered slightly, it should still come out to the same number. Whenever you upload an image to a platform like Facebook or Twitter, it's going to be checked against this database: the platform processes the image, generates that number, and compares it against a database of known CSAM, or child sexual abuse material. When it comes to child safety, this is the basic thing that you're going to have to implement if you don't want that material to proliferate on your platform. That can be either your hosting provider that stores media or images in some way, or your website that allows user uploads. And I want to be clear about the way that we did this. We essentially had an automated system go and examine a set of images on the platform, and what it would do is take each image, process it in memory, and do two things: one, check whether it is potentially explicit or violent content, and two, send that number, as it were, so that we could see if it matched a known example of CSAM. And if it did, we just dropped it completely on the floor. It never gets stored. But we do store the number and metadata so that we can hand it to the National Center for Missing and Exploited Children. So we're not going in and personally examining any of this, but programmatically we're confident that this is a match. We sent it to the center, and it does match instances that they have in their database.
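PhotoDNA itself is proprietary, but the matching pattern described here, hash the upload and compare it against a list of known hashes with some tolerance for small alterations, can be sketched with a toy perceptual hash. The following is a minimal Python illustration using a simple difference hash (dHash), not PhotoDNA's actual algorithm; the pixel-grid input and the distance threshold are assumptions made for the example.

```python
# Toy perceptual-hash matcher illustrating the PhotoDNA-style workflow.
# NOT PhotoDNA: this is a simple "difference hash" (dHash) over a
# grayscale grid, shown only to make the hash-and-compare pattern concrete.

def dhash(pixels, size=8):
    """Hash an image given as a size x (size+1) grid of grayscale values.

    Each bit records whether a pixel is brighter than its right neighbor,
    so small brightness shifts leave most bits unchanged.
    """
    bits = 0
    for r in range(size):
        for c in range(size):
            bits = (bits << 1) | (1 if pixels[r][c] > pixels[r][c + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(h, known_hashes, max_distance=4):
    """True if the hash is within max_distance bits of any known hash.

    In a real deployment the known list would be a database of hashes of
    verified abuse material, and a match would trigger a report rather
    than storage of the image.
    """
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

In this sketch, a slightly edited image still lands within a few bits of the original hash, while an unrelated image does not; that tolerance to cropping and re-encoding is what makes perceptual hashing, unlike an exact checksum, usable for this purpose.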
Quinta Jurecic
And where do you think this sort of failure to implement PhotoDNA is coming from on Getter's part? I mean, I could imagine a couple different ways that they ended up with no system in place, or, that's not quite true, they do have a community reporting system, which is sort of not adequate. But is it deciding, as an ideological matter, that it's so important to us that we don't moderate that we're intentionally not going to implement a system to catch this? Is it just not realizing that it's something that needs to be taken into consideration when you're building a social media platform in the year 2021? Do you have any sense of where it might have come from?
David Thiel
I definitely don't think it's intentional. I think that it's probably one of two things. One is just not realizing that this is a system that needs to be implemented in the first place. Another is that they may have been thinking that relying on user reporting was going to be adequate, which is not accurate. A lot of the content that we did end up reporting had been there a fair amount of time. It was just in places where a fairly limited number of people were going to look. So these can be in posts of users that don't have a ton of followers, or they all have like-minded followers; it can be in comment threads where nothing else is really happening. And user reporting isn't really going to help you a lot when you're having stuff posted on troll accounts and things like that. It's also not a system that even we could leverage, because if we wanted to use that reporting system, we would actually have to have someone go and view the content, since you can't report it without doing that. So really, PhotoDNA, or a similar system, is going to be the only acceptable way to implement this kind of stuff. Now, what Getter claims is that every time an image of any child is uploaded, they have people manually examine it, which is just a completely bizarre claim. I think they just don't understand how their website works, because they don't have machine learning that accurately detects whenever an image of a child is uploaded, and it's not like you can just have somebody go and look at every picture and then click OK to have it post to the site. And even if it were working the way that they claim, where they have an army of people manually examining every picture that's uploaded, that would still mean a window of exposure on the platform. So it's just not a workable system for preventing this kind of abuse material.
Evelyn Douek
I was going to say that's a pretty sick burn, to say they don't really understand how their website works. But then I was thinking about it, and I'm not sure that Facebook or Twitter or any of these platforms really understand how their website works, once you really think about it. But obviously this is a catastrophic and really depressing failure on their part. I sort of want to look at a counterexample, perhaps, of a platform just really turning a blind eye to egregious content on their platform, and that's actually Parler. Because a lot of Parler users were shocked and felt betrayed when it came out after January 6th that Parler had referred violent content and incitement that they'd found on their platform to the FBI over 50 times before the Capitol riot. Users were posting things like, "So Parler is no better than Facebook and Twitter," and, "So you are snitches over nothing but Democrat-conspired BS." And then, as Mashable put it, Parler found itself unironically explaining the First Amendment to a user base filled with members who declare themselves to be constitutionalists and free speech advocates. Because, as Parler patiently explained, the First Amendment of course does not protect violence-inciting speech, nor the planning of violent acts; such content violates Parler's terms of service, and any violent content will be shared with law enforcement. At the same time, the broader public conception is that Parler was a pretty integral part of the lead-up to January 6th. In that quote, they also said that they only relied on user reports to find illegal content rather than proactively searching for it. And so this distinction between illegal content and lawful-but-awful content is pretty integral to the understanding of alt-tech platforms in my mind, because of course every platform has some responsibility to moderate the former, but the whole brand of these platforms is that they won't moderate this lawful-but-awful stuff.
Do you have any sense of how comprehensive they were? Fifty reports actually seems pretty low, but it was surprising to me that they had done it in the lead-up to the riot, so it wasn't just in response to the pressure they got afterwards, in the spotlight of their role in that. So I guess: do you have any sense of how seriously they're taking that obligation towards illegal content, and of the accuracy of the public conception that Parler was a cesspool that wasn't really doing anything in the lead-up to January 6th?
David Thiel
Well, I think that some of the stuff that they did end up referring may have also just been because Amazon at the time was actually putting pressure on them and proactively identifying examples of this content. So it may have been some effort to try and stay on the platform before AWS actually gave them the boot originally. I don't think it's easy for us to get a ton of insight into that, particularly because when Parler came back, they had scrubbed everything that had happened on the site prior. During their period of downtime, they essentially did a full system reset, and that cut off some of the historical research that might have been possible. So it's not clear how much inciting content they might be removing. It is clear that it's a bit harder to deal with when people are uploading, say, videos of people inciting, because that is something that most of the time has to be manually examined or user reported, whereas certain language you could automatically detect and flag for potential moderation. So I do think that it's a case that they didn't really quite anticipate, but in terms of what they're doing now, I don't think we have a lot of insight. Also, once they did come back online, they changed the way that their site operated in such a way that a lot of existing research tooling had to be redone. So there's a bit of a gap in the Parler historical record, as it were.
Quinta Jurecic
I do think it would be useful to talk a little bit more about how Parler sort of vanished and then popped back up, because I think it raises a lot of interesting questions. Just to recap for listeners who may not have followed as closely: Parler was originally running on Amazon Web Services, which got upset over the violent content on Parler, as you mentioned, and booted the platform off the web after the Capitol riot on the grounds that it wasn't moderating content sufficiently. Parler later popped back up on a different hosting platform. It also got in a fight with Apple, which initially refused to allow the app in the App Store, although it later relented after Parler introduced a version that supposedly had increased content moderation and would moderate hate speech for users that downloaded the app through Apple's App Store. And it struck me that part of what we seem to be seeing here is that you have these platforms saying we're not going to moderate, or we're going to moderate minimally, and the effect is that that ends up pushing the responsibility to other platforms at different levels in the stack, like Amazon Web Services or the App Store. Do you think we'll see more of that kind of moderation imposed on alt platforms from outside actors that have an incentive to be seen as, you know, the adult in the room compared to Parler and Getter?
David Thiel
I think that it sounds like Amazon will probably start taking a little bit more interest in the stuff that's hosted on its platform. We've seen this dynamic just in this last week with the Texas pro-life website that got bounced around a bunch of platforms, ended up on Epik along with a number of quite disreputable sites, and then got largely deplatformed from there as well. So I think that kind of life cycle of deplatforming is starting to become a little more common, and it's starting to accelerate. There are a few other interesting angles that happened with Parler. They did have the App Store and Play Store removals, and they eventually worked around those by only displaying some posts if you viewed them through a particular app. But they also had a number of other troubles that were just as problematic. For example, they had a service provider called Twilio that they used for verifying people's phone numbers, to make sure that they were actual phones and not just VoIP numbers, and also for sending verification codes when people logged in. So when Twilio pulled its service from Parler, they actually found themselves unable to verify users. They had the option of just allowing people to sign up with any email address or a fake phone number, or just shutting off registrations, and they actually had to shut off registrations for a while. Then, because no service provider would work with them, they couldn't really do this second-factor thing anymore, and they had to essentially just let people sign up with plain email addresses. So it's not just a matter of whether they can be up or down. It's that larger ecosystem: parts of it can just fall out at any time, and they have to pivot to find a new way to function.
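The Twilio dependency described here can be made concrete with a small sketch: a signup flow whose phone verification is delegated entirely to one outside SMS vendor, so that when the vendor cuts service, verified registration simply stops. The provider interface and class names below are hypothetical, not Parler's or Twilio's actual API.

```python
# Sketch of a signup flow that depends on a single external SMS provider.
# The send_sms callable is a hypothetical stand-in for a vendor SDK call.
import secrets

class SmsProviderUnavailable(Exception):
    """Raised when the external SMS vendor rejects or drops the request."""

class PhoneVerification:
    def __init__(self, send_sms):
        # send_sms(phone, code) delivers the code, or raises
        # SmsProviderUnavailable if the vendor has cut service.
        self.send_sms = send_sms
        self.pending = {}

    def start_signup(self, phone):
        # Generate a random six-digit code and try to deliver it.
        code = f"{secrets.randbelow(10**6):06d}"
        try:
            self.send_sms(phone, code)
        except SmsProviderUnavailable:
            # No in-house fallback: with the sole vendor gone, the site
            # must shut off registration or fall back to unverified email.
            return False
        self.pending[phone] = code
        return True

    def confirm(self, phone, code):
        # Verification succeeds only if the submitted code matches.
        return self.pending.get(phone) == code
```

The design point is the single point of failure: every path through `start_signup` runs through one vendor, so losing that vendor disables the whole verification feature rather than degrading it gracefully.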
Evelyn Douek
So, to dig into something that we've been gesturing at in a number of these answers, but maybe to put it all in one place: there's this idea that they come out with these community standards saying they're the free speech havens, but they actually do moderate content, sometimes in reaction to specific controversies, sometimes in reaction to the need to make their platform usable, and sometimes for reasons that are hard to tell. So, you know, when Getter was first launched, users were prevented from using even mild expletives. And in July 2020, Parler's then-CEO announced that the platform would prevent users from having obscene usernames or posting pornography, and so on. Do you get the sense that there's any underlying philosophy driving the way these evolutions are happening? Because it doesn't seem like they're sticking to the original core philosophy that they came out and stated so boldly. Maybe just give us a picture of how far, and in what ways, they're departing from that original broad statement they were making.
David Thiel
Yeah, I mean, I think this is an example of how it's actually kind of hard to know what kind of platform you want to have. I don't think that they either knew what type of platform they wanted to have or knew what would be necessary to actually achieve it. And so they're being flexible and using some of these blanket terms, like in Getter's content standards, where you can't upload anything that's "filthy" or something. I think that there is a bit of a path that all of these companies have to go down as they start narrowing down what they want. But they're not sticking to this kind of constitutional idea of free speech, because they've gone out of their way to say that on Getter, for example, they have some left-wing trolls, or, as I think Jason Miller called them, "center left," and they were taking those people off the platform because they came there to cause trouble. So yeah, I think they've got a desired state, and in terms of what it takes to get there, they're just kind of fuzzy and trying things until they find what works.
Quinta Jurecic
So, speaking of how these platforms are now at the stage of trying to figure out what it is they're actually doing: Mike Masnick at Techdirt has made the argument that these platforms are sort of speedrunning the history of content moderation, that they're going through the phases of "we don't need to moderate anything," to "oh, actually there's some bad stuff we don't want," to "oh, it's actually really hard to figure out what we do want and what we don't want." One recent example is that Getter, as you note in your report, was flooded early on with posts from ISIS supporters, which struck me as kind of an echo of the moment tech platforms had in 2014 and 2015, when they were trying to decide what to do with violent Islamist content. So I'm curious what you think of the speedrun argument. If Parler and Getter are running through the history of content moderation at ten times speed, what is next? Is there any universe in which they mature as platforms, or are they going to be lurching from side to side forever?
David Thiel
I mean, it's entirely possible that they can implement some of the industry standard stuff, and we hope that they do. Obviously, it's vitally important that we prevent the proliferation of a lot of this content. At some point, all of these companies are going to reach some of the real hard problems that are still being addressed. For example, right now, if you uploaded, say, the Christchurch shooter video to Parler or Getter, it's highly likely that it would go up and stay there for a while; if anybody reported it, it might come down. It could essentially become a platform that people could reference from somewhere else and use to store that type of content. Facebook had a really difficult time trying to mitigate that, because it's actually an extremely difficult problem: being able to parse a video that's been altered, been cut up and rearranged, and trying to detect the semantics in that video. So there are still a lot of hard problems in content moderation that have yet to be addressed, and they may not realize yet that that's something they're going to have to get into. That's one of the advantages that the larger platforms have: both the amount of time they've had to develop these systems, and a larger group of experts and large-scale machine learning systems and things like that that they can leverage. So I think they're just going to continue to find additional problems and continue to realize how this is a space that's full of problems that are hard to work with at scale.
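One way to see why re-cut video is so much harder than still images: matchers typically sample frames, perceptually hash each one, and score the overlap with hashes of known footage, so a heavily rearranged or spliced upload can dilute that overlap below any threshold. The scoring function below is an illustrative sketch, not any platform's actual system; frame hashes are represented as plain integers.

```python
# Sketch: score how much of an uploaded video overlaps with known footage
# by comparing perceptual hashes of sampled frames. Illustrative only.

def hamming(a, b):
    """Number of differing bits between two frame hashes."""
    return bin(a ^ b).count("1")

def overlap_fraction(frame_hashes, known_frame_hashes, max_distance=4):
    """Fraction of sampled frames that near-match any known frame hash."""
    if not frame_hashes:
        return 0.0
    hits = sum(
        1
        for h in frame_hashes
        if any(hamming(h, k) <= max_distance for k in known_frame_hashes)
    )
    return hits / len(frame_hashes)
```

An upload that splices a short known clip into mostly new footage scores low under a metric like this, which is one reason naive frame matching misses manipulated reuploads and why larger platforms layer segment-level and semantic detection on top.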
Evelyn Douek
So that's one of the things that I find really interesting about alt-tech platforms, because, as I mentioned earlier, I actually think one of the biggest questions about what happens now to our online ecosystem is whether we see a really obvious splintering of our Internet ecosystem, far more than we already have, into a red Internet and a blue Internet. And you were talking about some of the challenges there. But the thing is that there's some serious capital flowing into the alt-tech space. Rebekah Mercer, of the billionaire Mercers, the Republican megadonors who famously funded a lot of Steve Bannon's initiatives and Cambridge Analytica, holds a majority stake in Parler and controls the majority of the board. Peter Thiel and J.D. Vance have invested in Rumble, a rising alt-tech version of YouTube. And you mentioned the Miles Guo connection to Getter. So it can't simply be that they don't have the resources, or maybe the resources required to set up a proper platform are so vast that even that kind of firepower isn't enough. I'm just wondering if you have any theory about why, even with all of these resources, these platforms still seem so bad at being competent. Is it just that they are so early in their life cycle and didn't really think about this in advance? What's the reason for that mismatch between what I would have thought were obvious resources and the capacity to deploy them effectively?
David Thiel
I mean, I think it may just not have been intuitive that at this point they would really need a content moderation team and a kind of trust and safety technological infrastructure that was probably bigger than their entire engineering team, because it's not something that would be particularly intuitive. You start off thinking, oh, we've got some people that know how to develop mobile apps, we have some backend people that can set this up, some cloud infrastructure experts. And these are also relatively simple sites, so the problem at the outset does seem relatively straightforward: get some money, get some engineers, and throw it up there. And the trust and safety teams at the larger companies are kind of invisible to a lot of people. So I just don't think that they anticipated what it would mean to get those people, where you would be able to get them, what technologies would be involved. And I don't think that they have a significant staff of people who have worked at larger-scale platforms and have that kind of historical knowledge to bring along. It's also just not something that you can necessarily solve by throwing money at the problem. There are some things that are very straightforward: use PhotoDNA, have systems that can detect whether something is porn or not. Those are services that you can basically run out and integrate within a week. But at some point, when you're looking at actually having a team go through things and find out if something is, say, explicitly violent content, some of the really horrible stuff that manual content moderators have to pore through, it's not necessarily easy to get those teams assembled, figure out how to actually handle that material, who to report it to, whether it depends on what country it's originating from or being displayed in. It's a complex space.
David Thiel
Yeah, that's super fascinating as well, because in some of the history of content moderation at the big platforms, it was kind of an afterthought too. They set up these platforms to just say, come on, create a community. Then they started running into problems, and they said, we'll take down the bad stuff, and that turned out to be really hard to do. And here we are, however many years later, and we have these elaborate rule books, but it was an afterthought. Whereas now, I think, given all of that history, community expectations, public expectations, and maybe just the way the Internet has evolved since then, for any platform that starts today, content moderation really needs to be considered part of the infrastructure of a platform rather than an afterthought. And it's almost the case for any user-generated content platform. I mean, we had Peloton, for God's sake, having to moderate QAnon content in its usernames. So I think that anywhere that allows users to create content these days needs to be thinking about content moderation as part of its infrastructure and part of the risk assessment of its product. But it's a super interesting example that even these platforms aren't doing that yet, even given the techlash of the last four years.
David Thiel
Well, and also, it's not that the larger companies are immune to this either. When Facebook launched its Marketplace feature, there was not necessarily enough thought put into how they were going to prevent abuse in a marketplace scenario. So it gets launched, and they're like, oh, we have a great trust and safety team, we can detect all kinds of bad stuff. And then people are selling guns and drugs and people and all that kind of stuff. So something that I think needs to be integrated into the threat model of any platform or feature launch is: what is the worst thing that people are actually going to do with this, and what are we going to use to mitigate it?
Quinta Jurecic
So to close, I wanted to ask if there are any other alt platforms that we should be paying attention to, but that maybe we haven't heard about yet. Like, what's the next platform that you have your eye on?
David Thiel
It's a good question. There's kind of a dilemma in the research community, because on the one hand we want to build all of these robust tools to analyze these platforms, and on the other hand, some of them just disappear within a few months and then all the effort was in vain. I don't know that there's quite another up-and-coming platform in the same vein as Getter and Parler. I think we might be getting to a little bit of new-platform fatigue. In addition to Parler and Getter, we've got all these other things like Gab, which has been around for a while and is a bit more overtly white supremacist. But if I were to make a prediction, I think the next moves in this kind of catering to right-wing or deplatformed populations are not going to be so much in the Twitter-clone space. It's going to be more of a communication system, more chat-oriented, or more group-oriented. And I think that will pose its own problems. But there's no obvious next thing coming at the moment that I'm aware of. When we come across something, we'll try to analyze it as quickly as we can.
Quinta Jurecic
All right, that's all the time we have. David, thank you so much for coming on.
David Thiel
Thank you.
Quinta Jurecic
You've been listening to Arbiters of Truth, the Lawfare Podcast series on our online information ecosystem. You can find past episodes in the Lawfare Podcast feed, and we'll be back with another episode next Thursday. The Lawfare Podcast is produced in cooperation with the Brookings Institution. Our music is performed by Sophia Yan, our audio engineer was Hamza Shetu, and our producer is Jen Patja Howell. Please rate and review the Lawfare Podcast in whatever app you use, and consider supporting us on Patreon at www.patreon.com/lawfare. You'll gain access to an ad-free version of this podcast and weekly Lawfare Live events, along with other benefits. As always, thanks for listening.
Original Date: September 9, 2021 (rebroadcast January 18, 2026)
Guest: David Thiel (Stanford Internet Observatory)
This episode of the Lawfare Podcast, part of their "Arbiters of Truth" series, examines the challenges, contradictions, and realities of content moderation on two "alt-tech" social media platforms: Parler and Getter. Quinta Jurecic and Evelyn Douek speak with David Thiel of the Stanford Internet Observatory, whose research team produced in-depth reports on the operation, growth, moderation standards, and core user bases of these right-leaning platforms.
Key themes include what drives the existence of Parler and Getter, how and why their promise of “free speech absolutism” rapidly founders, their disjointed moderation policies, the technical and ethical landmines they encounter, and what their bumpy trajectories reveal about the broader information ecosystem.
“Alt just means it's the only alternative network that will accept the, you know, people with those types of online behaviors.”
"Getter...launched with a splash all at once...the growth is getting slower and slower. In terms of size, it's maybe a tenth of the size of Parler."
“Getter’s content policy...includes the words, ‘Getter holds freedom of speech as its core value and does not wish to censor your opinions. Nonetheless, you may not post any harmful, vulgar, profane, hateful or otherwise objectionable material.’”
“Bolsonaro and his sons advertise...and have pushed it pretty heavily. So it's a large enough demographic that Getter itself turned itself green and yellow yesterday to celebrate their Independence Day.”
“There's not that much to do...Really all that happens is somebody posts, usually a news story, a bunch of people chime in and say, yeah, and everyone mostly agrees with each other...It isn't really the most interesting or engaging way to spend your time.”
“A lot of these things don't really have any direct interaction from the high-profile people posting there. It's really they're just kind of feeding off of these external sources.”
“The lack of preparation just means they didn’t really quite understand what they were getting into.”
“When it comes to child safety, [PhotoDNA] is the basic thing that you’re going to have to implement if you don’t want that material to proliferate on your platform... Getter claims that every time an image of any child is uploaded, they have people manually examine it, which is just a completely bizarre claim.”
“It's not just a matter of whether they can be up or down. It's kind of that larger ecosystem—parts of it can just kind of fall out at any time, and they have to pivot to find a new way to function.”
“They're not sticking to this kind of constitutional idea of free speech...They're just kind of fuzzy and trying things until they find what works.”
“They're going through the phases of, ‘We don't need to moderate anything’ to, ‘Oh, actually there's some bad stuff we don't want,’ to ‘It's actually really hard to figure out what we do want and what we don't want.’”
“I just don't think that they anticipated what it would mean to get those people, where you would be able to get them, what technologies would be involved...It's also just not something that necessarily you can solve by throwing money at the problem.”
“I think that the next moves in this kind of catering to the right wing or deplatform populations is not going to be so much in the Twitter clone space. It's going to be more...communication or group oriented.”
On Platform Dependence and Irony:
Quinta Jurecic (24:21):
“The whole appeal...is that they are an alternative to mainstream platforms like Twitter and Facebook that are censoring you...and yet, as you say, they're relying a fair amount on those platforms for growth and content...”
On Fundamental Challenges:
Evelyn Douek (44:42):
“That's a pretty sick burn to say they don't really understand how their website works. But then I was thinking about it and I'm not sure that Facebook or Twitter or any of these platforms really understand how their website works once you really think about it.”
On Content Moderation as Infrastructure:
Evelyn Douek (60:58):
“Content moderation really needs to be considered part of the infrastructure of a platform rather than an afterthought.”
This episode pulls back the curtain on the myth and reality of “alt-tech” platforms. The promise of a “censorship-free” digital haven tumbles quickly into the same hard choices, resource shortfalls, and structural contradictions faced by their mainstream rivals—often with far less preparation and technical skill. For all their political and media resonance, platforms like Parler and Getter must grapple with—and often fail—at the hard, unglamorous work of content moderation and user safety. Their struggles are instructive for the entire social media landscape.