Transcript
David Mytton (0:00)
If 50% of traffic is already bots, it's already automated, and agents are only just getting going. Most people aren't using these computer-use agents yet because they're too slow right now; they're still in preview. But it's clear that's where everything is going. We're going to see an explosion in the traffic coming from these tools, and blocking them just because they're AI is the wrong answer. You've really got to understand why you want them, what they're doing, and who they're coming from, and then you can create these granular rules.
Podcast Host (0:31)
Thanks for listening to the a16z AI podcast. If you've been listening for a while, or if you're at all plugged into the world of AI, you've no doubt heard about AI agents and all the amazing things they theoretically can do. But there's a catch: when it comes to engaging with websites, agents are limited by what any given site allows them to do. If, for example, a site tries to limit all non-human interactions in an attempt to prevent unwanted bot activity, it might also prevent an AI agent from working on a customer's behalf, say making a reservation, signing up for a service, or buying a product. This broad-strokes approach to site security is incompatible with the idea of what some call agent experience, an approach to web and product design that treats agents as first-class users. In this episode, a16z Infra partner Joel de la Garza dives into this topic with David Mytton, the CEO of Arcjet, a startup building developer-native security for modern web frameworks, including attack detection, sign-up spam prevention, and bot detection. Their discussion is short, sweet, and very insightful, and you'll hear it after these disclosures. As a reminder, please note that the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.
Joel de la Garza (1:58)
It seems like what was once old is new again, and I'd love to get your thoughts on this new emergence of bots. While we know all the bad things that happen with them, there's actually a lot of good and really cool stuff happening too. How can we maybe work towards enabling that?
David Mytton (2:14)
Well, things have changed, right? The DDoS problem is still there, but it's almost handled as a commodity these days. The network provider, your cloud provider, they'll just deal with it, and so when you're deploying an application, most of the time you just don't have to think about it. The challenge comes when you've got traffic that just doesn't fit those filters. It looks like it could be legitimate, or maybe it is legitimate, and you just have a different view about what kind of traffic you want to see. And so the challenge is really about how you distinguish between the good bots and the bad bots. And then with AI changing things, it's bots that might even be acting on behalf of humans, right? It's no longer a binary decision. And the amount of traffic from bots is increasing; in some cases the majority of the traffic that sites receive comes from an automated source. And so the question for site owners is, well, what kind of traffic do you want to allow? And when it's automated, what kind of automated traffic should come to your site, and what are you getting in return for that?
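The granular, non-binary policy David describes can be sketched in a few lines. This is a minimal illustration, not Arcjet's actual API: the user-agent tokens and route rules are assumptions chosen for the example, and real systems would also verify identity (e.g. via IP ranges or reverse DNS) rather than trusting the user-agent string alone.

```python
# A hypothetical granular bot policy: rather than blocking all automation,
# classify each request by who the client claims to be and what it is doing.

KNOWN_GOOD_BOTS = {"googlebot", "bingbot"}    # crawlers the site wants (assumed list)
AI_AGENT_TOKENS = {"gptbot", "claude-user"}   # illustrative AI agent UA tokens

def classify(user_agent: str, path: str) -> str:
    """Return 'allow', 'challenge', or 'block' for a request."""
    ua = user_agent.lower()
    if any(tok in ua for tok in KNOWN_GOOD_BOTS):
        return "allow"          # search crawlers: wanted in exchange for traffic
    if any(tok in ua for tok in AI_AGENT_TOKENS):
        # Agents acting on a human's behalf: allow browsing,
        # but challenge sensitive flows like checkout.
        return "challenge" if path.startswith("/checkout") else "allow"
    if "bot" in ua or "spider" in ua:
        return "block"          # unknown, self-declared automation
    return "allow"              # presumed human traffic
```

For example, `classify("GPTBot/1.0", "/checkout/pay")` challenges the agent rather than blocking it outright, while an unrecognized `"randomspider/0.1"` is blocked. The point is the rule structure: decisions depend on who the bot is and what it's doing, not merely whether it's automated.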
