Transcript
A (0:00)
I think the media guys think the tech guys started it, and the tech guys think the media guys started it. Specifically, the media guys think the tech guys started it by economically disrupting them.
B (0:07)
I think this is why we're seeing such a resurgence in live streaming and interest in these sorts of communal experiences. Because live is something that is so hard to fake. It's such a human thing.
A (0:17)
We actually need to have decentralized cryptographic truth that's not behind a paywall, that anybody can verify, no matter how poor they are, no matter what. I think that just as you should not be subject, non-consensually, to government surveillance, you shouldn't be subject, non-consensually, to corporate surveillance.
B (0:33)
Okay, but what about an independent media reporter? Is that okay?
C (0:38)
What happens when anyone or anything can generate information at scale? AI is making it easier than ever to create content, but much harder to verify it. As agents generate text, images, and even identities, the systems we've relied on for trust, from media institutions to social networks, start to break down. In response, new ideas are emerging: cryptographic verification, decentralized identity, and new forms of social coordination that aim to prove what's real rather than simply assert it. But these shifts also raise deeper questions about privacy, accountability, and the role of journalism in an AI-driven world. To understand and debate what comes next, Theo Jaffe speaks with Balaji Srinivasan and Taylor Lorenzo.
A (1:27)
And so I think, as much as I like AI within the digital tribe (it accelerates coding, it's great for search, all that kind of stuff), between digital tribes it's often bad, because it's just AI agents spamming 50 different people with a resume or a sales email or something like that, and that breaks the commons. And so we're going to need, I think, a whole new generation of human-only social networks.
D (1:51)
I wonder how verifiable that would be. You can assume some kind of biometric method of proving that you are a human, so you have your account that says "I'm Theo, I'm a human." But then on my account I can just post stuff that I generated with ChatGPT, you know, maybe with some savvy prompts to get around Pangram.
A (2:08)
Yeah, yeah, well, here's where web of trust comes in, right? There's a whole statistical cat and mouse here. But just to give you a sense, web of trust is: A asserts that B is trustworthy, who asserts that C is trustworthy, who asserts that D is trustworthy. And the trust drops off, right? You trust your friend, and maybe you trust your friend's friend, but probably not your friend's friend's friend's friend's friend. And there's a way of modeling that mathematically. And you can have not just one proof point, not just A trusts B, but also X trusts B and Y trusts B and Z trusts B. And they trust them for a bunch of reasons, all of which is expressed in the metadata on the edges between nodes. And then you can do a calculation, an inference of the probability that somebody continues to be human. And then you also have not just automatic Pangram-style reporting, but also manual flagging; there's a set of signals people can look at. And I think if you establish the culture as being human only, and you also take away some of the payoff for pasting reams of AI text, it's possible to do. It's a little bit like Snapchat: Snapchat is disappearing messages, and yes, of course, in theory you can take a photograph of the thing, but in practice it did deter people from doing it. So you can set the culture in such a way that I think you can deter it.
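The web-of-trust calculation described above can be sketched in code. This is a hypothetical illustration, not anything the speaker specifies: each edge "A asserts B is human, with confidence p" lives in a graph, trust decays multiplicatively along a chain of assertions (so it drops off with each hop, as described), and multiple independent proof points for the same person are combined with a noisy-OR. All names and numbers below are made up.

```python
def path_probs(graph, source, target, max_hops=4, _seen=None):
    """Yield the trust product along every simple attestation chain
    from source to target, up to max_hops edges long."""
    if _seen is None:
        _seen = {source}
    if max_hops == 0:
        return
    for neighbor, p in graph.get(source, []):
        if neighbor == target:
            yield p
        elif neighbor not in _seen:
            for rest in path_probs(graph, neighbor, target,
                                   max_hops - 1, _seen | {neighbor}):
                # Trust drops off: each extra hop multiplies in
                # another confidence < 1.
                yield p * rest

def prob_human(graph, source, target, max_hops=4):
    """Combine all chains with a noisy-OR: the target is judged
    non-human only if every chain of vouches fails."""
    p_fail = 1.0
    for p in path_probs(graph, source, target, max_hops):
        p_fail *= (1.0 - p)
    return 1.0 - p_fail

# Illustrative graph: "you" vouch for a friend and for X,
# and both of them vouch for B with different confidences.
graph = {
    "you":    [("friend", 0.9), ("X", 0.8)],
    "friend": [("B", 0.9)],
    "X":      [("B", 0.7)],
}

# Two chains reach B: you -> friend -> B (0.81) and you -> X -> B (0.56).
# Noisy-OR of the two gives 1 - (0.19 * 0.44) = 0.9164.
print(round(prob_human(graph, "you", "B"), 3))  # -> 0.916
```

In a real system the edge weights would themselves be inferred from the metadata the speaker mentions (why each party vouched, plus automatic and manual flags), rather than hand-set constants as here.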
