Transcript
A (0:00)
AI agents are getting pretty impressive. You might not even realize you're listening to one right now. We work 24/7 to resolve customer inquiries. No hold music, no canned answers, no frustration. Visit Sierra AI to learn more.
B (0:15)
Support for today's show comes from Zoom. Work moves faster when everything works together. That means meetings, chats, docs, and AI Companion all supporting each other seamlessly. Learn more at Zoom.com/podcast and Zoom ahead. Support for Decoder comes from Adobe. Life is unpredictable, and that means you need your projects to adapt to whatever gets thrown at you. That means mastering the ability to pivot and collaborate with others to reach your goals. Adobe gets that, which is why they made a tool that's just as flexible as you are: PDF Spaces in Acrobat Studio. Your PDF files are no longer static; instead, they're living documents that flex with you and your project's needs. Learn more at Adobe.com. Do that with Acrobat.
C (1:08)
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today's episode is about X, Grok, and Elon Musk, so I'd like to take a moment and pre-reply to all of the people who are going to send us emails before actually listening to this episode. Thank you. I do read all the emails. Please share this episode in outrage with five to seven of your friends if you can. It will really upset me personally, and you will have won. Okay, moving on. By now we're several weeks into one of the worst, most upsetting, and most stupidly irresponsible AI controversies in the short history of generative AI. Grok, the chatbot made by Elon Musk's xAI, is able to make all manner of AI-generated images, including non-consensual intimate images of women and minors. What's more, because Grok is connected to X, the platform formerly known as Twitter, users can simply ask Grok on X to edit any image on that platform, and Grok will mostly do it, and then distribute that image across the entire X platform. Over the past few weeks, X and Elon have claimed repeatedly that various guardrails have been imposed on the image generator. But in testing by Verge reporters and by others, these guardrails have been mostly trivial to get around. In fact, it's become abundantly clear that Elon wants Grok to be able to do this, and that he's become very annoyed with anyone who wants him to stop, particularly the various governments around the world that are threatening to take legal action against X. This is one of those situations where, if you just describe the problem to someone, they will intuitively feel like someone should be able to do something about it. And it's true. Someone should be able to do something about a one-click harassment machine that's generating intimate images of women and children without their consent. 
But who actually has that power, and what they can do with it, is a deeply complicated question, and it's all tied up in the thorny history of content moderation and the legal precedents that underpin it in countries around the world. So to help figure it all out, I invited Riana Pfefferkorn on the show to talk it through with me. You've heard Riana on the show before; she's joined me to explain some complicated internet policy problems in the past. Right now she's the policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, and she has a deep background in what regulators and lawmakers around the world can do with a problem like Grok if they so choose. And that's really the key part: if they choose. The biggest problem in the entire Grok situation right now is that many of the people with the power to do something about Grok here in the United States are choosing to do nothing. That's almost everyone in Congress, the Department of Justice, the Federal Trade Commission, state lawmakers, state attorneys general, and maybe most importantly, it's Apple and Google, who control the mobile app stores that distribute X and Grok. Tim Cook and Sundar Pichai could look at X, they could look at Grok, and say, you know what? The rules of our app stores prohibit products that can generate non-consensual deepfake intimate images, and pull the apps. But so far they haven't done anything. In fact, they haven't even replied to requests for comment on whether they think they should do something. It's just been radio silence. So Riana helped me work through the legal frameworks at play here, the various actors involved that have leverage and could apply pressure to affect the situation, and where we might see this all go, as xAI does damage control but largely continues to ship this product that continues to do real harm. 
Here's one thing I've been thinking about a lot as this entire situation has unfolded: over the past 20 years or so, the idea of content moderation has gone in and out of favor as various kinds of social and community platforms wax and wane. The history of a platform like Reddit, for example, is just a microcosm of the entire history of content moderation. In around 2021, we hit a real high-water mark for the idea of moderation and trust and safety on these platforms as a whole. That's when Covid misinformation, election lies, QAnon conspiracies, and incitement of mobs that would riot at the Capitol could get you banned from all of the major platforms, even if you were the president of the United States. It's safe to say that that era of content moderation is over, and we're now somewhere chaotic and laissez-faire. It's possible that Elon and his porny image generator will push that pendulum to swing back, but even if it does, the outcome might still be more complicated than anyone wants. You'll see what I mean as we have this conversation, I think. Okay: Riana Pfefferkorn and the Grok saga. Here we go. Riana Pfefferkorn, you're the policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. Welcome to Decoder.
