A (10:44)
The issue is that when you're using a piece of software, it almost certainly references a shitload of other software, developed by tons of people on the Internet. When you start using your thing, you're implicitly including all of that code. You don't know who these people are; they're just random keys on the Internet. Someone trusted it, and that person trusted someone else, and that person trusted six other people. So if you exploit any link in that chain, you start including stuff in your software that you have no idea about. It's six levels of trust removed, and if someone changes two lines somewhere, they can steal everything. This is how it's always worked, and intellectually it's always been, huh, that's kind of scary. But it implicitly worked, because the incentives were kind of fine. And by the way, it did break a bunch, but it was mostly okay: the velocity was low enough, and if you were including a new package, there was a process around it. Do I trust this or not? What are the implications? Someone was on the hook. Now you have AI agents running around looking for capabilities: I need something that does this, I need something that does that. And they're just pulling stuff off GitHub, off the Internet, and using it. Because you're going so much faster, it's like driving a car faster: you're moving faster, you're checking less, you're able to do way more. But the cost is all this implicit risk and subversion that's going to happen as a result. It's like going around grabbing random pieces of code off the Internet and praying. And this is not a new problem; everyone understands it at some level. What's new is that you're actually seeing it play out.
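The "six levels of trust removed" point can be made concrete with a toy sketch. The dependency graph and package names below are made up for illustration; in a real ecosystem the graph would come from package metadata (e.g. `pip show` or `npm ls`). The point is just that declaring one direct dependency transitively commits you to trusting everything reachable from it.

```python
from collections import deque

# Hypothetical dependency graph: each package lists the packages it
# directly depends on. All names here are invented for illustration.
DEPS = {
    "my-app":        ["web-framework", "http-client"],
    "web-framework": ["template-lib", "log-lib"],
    "http-client":   ["tls-lib"],
    "template-lib":  ["string-utils"],
    "log-lib":       [],
    "tls-lib":       ["crypto-core"],
    "string-utils":  [],
    "crypto-core":   [],
}

def transitive_deps(root):
    """BFS over the graph: every package reachable from `root` is code
    you implicitly trust, even if you never chose it directly."""
    seen, queue = set(), deque([root])
    while queue:
        pkg = queue.popleft()
        for dep in DEPS.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

trusted = transitive_deps("my-app")
print(f"{len(trusted)} packages implicitly trusted: {sorted(trusted)}")
# "my-app" declares 2 dependencies but ends up trusting 7 packages;
# a two-line change in "crypto-core" reaches it all the same.
```

Real trees are far deeper and wider than this, which is the whole problem: nobody is actually reviewing the leaves.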
And what it comes down to is: everyone's going to say, no, no, we need more secure XYZ. But more people are going to say, I don't care, I just want to go as fast as possible. And I think the as-fast-as-possible crowd will generally win, because they'll get more cool stuff done faster. But it will create a huge security problem. The flip side is there will be a secure core, where someone really checked all the packages, where someone actually knows what's in them. And you can say, well, AI will help us check more, and that's sort of true, but it's so much more expensive and slower that there's going to be this tension.

And I agree with Dave. I think all the money is not going to be made on who can run the most tokens fastest or build the coolest stuff, because everyone will go infinitely fast on that. You'll be able to charge an enormous premium for security and privacy. But the cost is that your stuff is going to suck compared to the stuff that moves fast: it's going to be slower, it's going to be ten times more expensive, and most people won't care. Per usual. So I just think that's what's going to happen.

I do think we're kind of witnessing the second Internet disaster of AI. The first Internet disaster of AI was: bye-bye web, bye-bye ad models, bye-bye the entire business model for the whole thing. The second cataclysm of AI is going to be all these agents running around doing crazy shit and including things they have no idea about. They'll check the APIs, like, yeah, that thing works, and it sort of will, but there's going to be something hidden in it. Everyone's going to do it, no one will trust anything, and the whole thing collapses.