Transcript
A (0:00)
Hey, this is Casey Ellis for Risky Business, and today we're talking to Feross Aboukhadijeh from Socket. For those who don't know, Socket is a software supply chain security company that protects over a million repositories and counts Anthropic, OpenAI, and Figma amongst its customers. Feross built WebTorrent, maintains over 100 npm packages that get downloaded a billion times a month, and teaches web security at Stanford. So we're talking to the right guy when it comes to open source dependencies and the security problem overall. Today we're going to talk about the impact of AI and the agentic world we're transitioning into, and how Socket is seeing some of the effects of that in the open source security problem space, plus a couple of other things that have been happening in and around the Internet when it comes to open source over the past little while. And we'll land on a new product launch that Feross is very keen to talk about. So good to see you, mate. It's been a while.
B (0:53)
Yeah, good to see you, Casey.
A (0:55)
This is going to be fun because I think this is a latent trash fire that the Internet's kind of collectively waking up to at the moment. Right?
B (1:04)
Yeah, I mean, this has been a smoldering trash fire for a little while now, just in terms of how developers deal with open source software and how they know what's safe and what's not safe. And I think AI is just pouring fuel onto that fire. So excited to talk about it with you today.
A (1:18)
Cool, let's get into it. So you're seeing inside of some of the leading AI labs and a whole bunch of large enterprises, I guess. What does agents writing 90% of the code actually look like in practice when it comes to particularly open source, but just in general as well?
B (1:31)
Yeah, I think we're seeing a really big structural shift. Agents now write the majority of production code at the leading AI labs, and I think this is only spreading downstream to the later-adopter companies as well. In some of the leading AI labs we're seeing 90% plus of new code written by agents. And of course, like you said, we have a lot of the leading labs as customers: Anthropic, Cursor, OpenAI, Scale AI, xAI, et cetera. So this is probably one of the biggest shifts we've seen in the way that software gets written, ever. And if you think about it, just how many shifts like this do you see in a lifetime? You see the Internet, you see mobile, you see cloud, and I think AI. So we're really dealing with something here. And what's interesting is it wasn't like human developers did a great job of avoiding vulnerabilities, and especially dealing with dependencies, beforehand. We're seeing dependency counts at these customers that are really, really high, like in the mid hundred thousand range, and we're seeing agents pulling in packages automatically and dependency graphs only getting larger and larger over time. So I think the core question is: how do you enforce safety when neither humans nor agents can realistically reason about dependency risk at the time they're installing that code?
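To make those dependency counts concrete: the full installed graph is recorded in npm's lockfile, and even a small project's `package.json` with a handful of direct dependencies can expand into thousands of transitive ones. A minimal sketch of measuring that, assuming a lockfile in the v2/v3 `package-lock.json` format where the `"packages"` map lists every installed package by its `node_modules` path:

```python
import json

def count_installed_packages(lockfile_path):
    """Count packages pinned in an npm package-lock.json (v2/v3 format).

    The "packages" map has one entry per installed package, keyed by its
    node_modules path; the empty-string key is the root project itself,
    so we exclude it from the count.
    """
    with open(lockfile_path) as f:
        lock = json.load(f)
    return sum(1 for path in lock.get("packages", {}) if path)
```

Running this against a real lockfile is a quick way to see the gap between the handful of dependencies a developer (or an agent) consciously chose and the full graph they actually shipped, which is the gap the question above is pointing at.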
