Transcript
A (0:01)
It's on.
B (0:12)
Hi everyone from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher. On Christmas Eve, Elon Musk announced a new AI feature called Grok Image Edit. It's an image-editing tool built into X, the social platform formerly known as Twitter. The tool allows users to create AI-edited versions of other users' images and then post the AI version as a reply to the original post on X. But because X rolled out Grok Image Edit with almost no safeguards, strangers flooded X with sexualized deepfakes of real people. For example, Grok could take a photo of a woman on X and create a realistic deepfake that looks like the original image, except the woman is now in a bikini and has, quote, donut glaze on her face. It could do the same thing with images of children. And it did, creating countless deepfakes. To add insult to injury, because those images were often posted as replies to the original tweet, the people who were victimized got notifications every time someone interacted with or replied to the sexualized deepfake image. So in essence, Elon built a tool for creating and distributing sexualized deepfakes, and he did so in the most humiliating and degrading way possible for the victim. And X has arguably become the tool for creating and distributing AI-generated child sexual abuse material, or CSAM, what used to be known as child porn and, frankly, still is. No surprise, governments across the globe have begun investigations into Grok. And after publicly mocking the controversy and accusing his critics of censorship, xAI has started putting guardrails on Grok, with mixed success. It's deeply ironic considering that when Elon bought Twitter, he vowed that getting CSAM off the site was, quote, priority number one, and, quote, will forever be our top priority. But it's not surprising given that Elon has positioned Grok as the, quote, spicy and anti-woke chatbot. 
And it's not the first time Grok has made news for generating non-consensual deepfakes. Also, Elon, as he's proven time and again, always takes the heinous position on any subject. And this, I can't believe I'm saying this, is his most heinous. The backlash against Grok has been swift and widespread, thankfully, but so far the consequences have been minimal. I think it's important to talk about this because I remember when everyone agreed that child porn was wrong. The fact that we're debating it right now is sickening. As a parent, as a reporter, as a citizen of this world, it's grotesque what these people are doing and benefiting from, financially and otherwise. My guests today are Renée DiResta, Hany Farid, and Casey Newton. Renée DiResta is the former technical research manager at Stanford's Internet Observatory. She studied CSAM for years and is one of the world's leading experts on online disinformation and propaganda. She's also the author of Invisible Rulers: The People Who Turn Lies Into Reality. Hany Farid is a professor of computer science and engineering at the University of California, Berkeley. He's been described as the father of digital image forensics and has spent years developing tools to combat CSAM. Casey Newton is the founder of the tech newsletter Platformer and the co-host of the New York Times podcast Hard Fork. This is a difficult but important topic. It means a lot to me, so please stick around. And to all the tech people who continue to resist doing anything about safety, especially of children, we are not going to stop until you lay down and change the situation.
