Transcript
Sponsor/Announcer (0:00)
Support for Decoder comes from Adobe. Life is unpredictable, and that means you need your projects to adapt to whatever gets thrown at you. That means mastering the ability to pivot and collaborate with others to reach your goals. Adobe gets that, which is why they made a tool that's just as flexible as you are: PDF Spaces in Acrobat Studio. Your PDF files are no longer static. Instead, they're living documents that flex with you and your project's needs. Learn more at adobe.com. Do that with Acrobat.
Sponsor/Announcer (0:35)
The world moves fast, your workday even faster. Pitching products, drafting reports, analyzing data: Microsoft 365 Copilot is your AI assistant for work. Built into Word, Excel, PowerPoint, and other Microsoft 365 apps you use, it helps you quickly write, analyze, create, and summarize, so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com, M365 Copilot.
Sponsor/Announcer (1:04)
Ready to relax in your dream bath retreat without the stress of figuring out every detail yourself? At the Home Depot, your bath upgrade is covered. Shop fully designed rooms and curated bath collections to go from inspiration to transformation fast. Savings of up to 40% will make it easier on your budget, and you'll find everything you need, from tubs to toilets and all the tile in between, to bring your vision to life. The Home Depot: Dream Baths, built here.
Nilay Patel (1:35)
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today we're going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. No, really, we're going to go there. It's a deep one. To do this, I'm going to bring on Verge reporter Jess Weatherbed, who covers creative tools like Photoshop and Canva for us. It's a space that's been totally upended by generative AI in a huge variety of ways, with an equally huge number of responses from artists, creatives, and the people who consume all of that art and creative work out in the world. Now, if you've been listening to Decoder or my other show, The Vergecast, or even just reading The Verge over these past few years, you'll know that we've been talking for years about how the photos and videos taken by our phones are getting more and more processed and AI-generated. And now, in 2026, we're in the middle of a full-on reality crisis, as fake, manipulated, ultra-believable images and videos flood onto social platforms at scale, without regard for responsibility or norms or even basic decency. The White House is sharing AI-manipulated images of people getting arrested and defiantly saying it simply won't stop when asked about it. We are just totally off the deep end now. Whenever we cover this stuff, I get the same question from a lot of different parts of our audience: why isn't there a system to help people tell the real photos and videos apart from the fake ones? Some people even propose systems to us. And as it happens, Jess has actually spent a lot of time covering a few of these systems that exist in the real world. The most promising is something called C2PA. Her view is that, so far, these systems have been almost entirely failures. In this episode, we're going to focus on C2PA, since it's the one that has the most momentum.
It's a labeling initiative spearheaded by Adobe, with buy-in from some of the biggest players in the industry, including Meta, Microsoft, and OpenAI. But C2PA, which is also sometimes referred to as Content Credentials, has some pretty serious flaws. First, it was designed as more of a photography metadata standard than an AI detection system. And second, it's really only been half-heartedly adopted by a handful of the players, not nearly all of the ones you would need to make it work across the internet ecosystem. We're at the point now where Adam Mosseri, who runs Instagram, is publicly posting that the default should shift, and you should not trust images or videos the way that you maybe could before. Think about that for one second. That's a huge, pivotal shift in how society evaluates photos and videos. And it's an idea I'm sure we're going to come back to a lot this year. But we have to start with the idea that we can solve this problem with metadata and labels, that we can label our way into a shared reality, and why that idea might simply never work. Okay, Verge reporter Jess Weatherbed on C2PA and the effort to label our way into reality. Here we go. Jess Weatherbed, welcome to Decoder. Hi. I want to just set the stage: several years ago, I said to Jess, boy, these creator tools are criminally undercovered. Adobe as a company is criminally undercovered. Go figure out what's going on with Photoshop and Premiere and the creator economy, because there's something there that's interesting. And fast forward: here you are on Decoder today, and we're going to talk about whether you can label your way into consensus reality. I just think it's important to say that's a weird turn of events.
