
a16z general partner Erik Torenberg speaks with Balaji Srinivasan, angel investor and entrepreneur, about why AI simultaneously reduces the cost of creation and increases the cost of verification, and what that tension means for the shape of the AI economy. They discuss why AI drives companies toward the "trusted tribe" model of the Chinese internet, why physical world tasks are easier to automate than digital ones, why shortcuts only work for experts, and why AI makes everyone a CEO rather than making CEOs obsolete.
A
AI doesn't take your job. AI makes you the CEO. The problem is AI is a shortcut. And a shortcut is good, except when it's bad. If you don't know how to go the long way around, then you can't debug the AI.
B
Do we not think that AIs are just going to be also better at taste and agency?
A
I don't think that's true, at least on a short-term basis. Humans are the sensor; AI is the actuator. So it's a human-machine synthesis. What's taste? Taste is the sense. And that is what AI can't yet do.
B
What happens when AI really achieves its potential? Will LLMs get us to AGI in some capacity?
A
No, no, actually the opposite.
C
Every tool that makes creation cheaper makes verification more expensive. The printing press made publishing easy and forgery easier. Photography made documentation instant and manipulation inevitable. In 1839, the first year a camera could capture a human face, people trusted photographs absolutely. Within a decade, courts were already debating faked evidence. The cheaper the creation, the harder the proof. AI has compressed this cycle into months. A resume that once took hours to fake now takes seconds. A slide deck that signaled competence now signals nothing. The generation cost has collapsed, but someone still has to confirm what's real, and that cost is rising fast. The result is a world that fragments into trusted groups, where AI supercharges productivity on the inside and raises walls on the outside. I speak with Balaji Srinivasan, angel investor and entrepreneur.
B
I want to start by talking about the AI economy. And I'm curious whether you think it will look more like the Internet economy, where applications take most of the value, or the cloud economy, where the infrastructure takes most of the value, or something more distributed. There's an argument that the big labs will take it all because they have all the capital, they have the compute, they've vertically integrated. But there's also an argument that maybe they won't, because distillation is 98% cheaper than building a model from scratch, open source catches up, and apps control the user relationship. How do you think this economy is going to play out?
A
Great question. So I do think that at least a very large percentage of the future is going to be distillation and decentralization. Because as Anthropic said, distillation attacks work on their models, right? A relatively small number of API queries lets you distill a large model into something small. And it's very hard to stop that, because you'd have to somehow detect those queries coming in. It's also hard to morally stop it, because what did they do? They copied the whole Internet and put it into their thing, right? So talking about stopping the copying is like Facebook or LinkedIn stopping someone from scraping what they scraped. Facebook scraped all those Harvard social networks; Google scraped the entire Internet to build the Google index. I get why they want to stop it, but it's hard to support that. Okay, so the other thing is, I think the future is personal, private, programmable, because AI is so powerful that you want to use it within the trusted tribe, for a variety of reasons. First is that it doesn't miss, or rather, it doesn't miss small things in large data sets, things that were effectively secure through obscurity. A small example, but an important one, is the JMail thing, the Jeffrey Epstein emails. That guy never thought that all of his emails would be publicly indexed and searchable by AI ten years later. You can issue queries that will synthesize information across thousands of emails and build a story right then and there. So what that means is it's not just surveillance, it's what the French call sousveillance, surveillance from below, or even the Jeremy Bentham panopticon where everybody's watching each other. Any information that's in the public gets indexed and put into these AIs, where people can stalk each other and so on and so forth.
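The distillation attack he describes can be sketched in toy form: query a "teacher" model's API for its output distributions, then fit a smaller "student" to those soft labels. Everything here, the linear teacher, the dimensions, the learning rate, is a hypothetical stand-in for illustration, not any lab's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a random linear map standing in for a large model behind an API.
D, C = 32, 4                        # input dimension, number of classes
W_teacher = rng.normal(size=(D, C))

def teacher_api(x):
    return softmax(x @ W_teacher)   # what an API query returns: a distribution

# Distillation: a relatively small number of queries, then fit the student
# to the teacher's soft outputs by gradient descent on cross-entropy.
X = rng.normal(size=(2000, D))      # the queries
P = teacher_api(X)                  # the teacher's soft labels

W_student = np.zeros((D, C))
lr = 0.5
for _ in range(300):
    Q = softmax(X @ W_student)
    W_student -= lr * X.T @ (Q - P) / len(X)   # grad of cross-entropy(P, Q)

# Fraction of queries on which the student now mimics the teacher's top pick.
agreement = (softmax(X @ W_student).argmax(1) == P.argmax(1)).mean()
```

The point of the sketch is the asymmetry: the teacher cost a lot to build, but the student recovers most of its behavior from query outputs alone, which is why this is hard to stop at the API layer.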
And then what that means is the commons becomes a hall of mirrors, with all kinds of pseudonyms and so forth, and people retreat back to caves and tribes. Okay, so within that trusted tribe, yes: if you share all your code within the trusted tribe, you share your whole code base, boom, you can zip along. So AI increases productivity within the trusted tribe. But outside the trusted tribe, aren't you getting a ton of AI spam? AI spam emails, AI spam replies, low-quality slide decks that get sent over. People will send me these slide decks, and I love AI, okay? And you know my reaction to seeing AI in a slide deck?
B
What, excitement?
A
No, no, actually the opposite. When I see AI text in a slide deck, you can immediately see it. Why? Because no matter how advanced AI has gotten, there's a generic look to it. It's like somebody who doesn't change the Windows default desktop wallpaper or the Apple default wallpaper: most people don't change defaults. So default AI looks like AI, no matter what the level of it is. And so when I see an AI slide deck, and it's got "it's not this, it's that," or it's just got a wall of text, AI can generate what I call lorem ipsum, but it's lorem AI ipsum. When I see that AI text or those AI images, I think they're lazy, stupid, or evil. Lazy because they just hit a few characters and then threw something over. It's like the Mark Twain thing: I didn't have time to write you a short letter, so I wrote you a long one. The whole point is that concision is very valuable, so they're lazy because they didn't put in the time to make it concise; they sent me some blob. It's almost like pasting in a search result. Or they're stupid, because they don't understand that I can tell the difference instantly between AI slop and something that had some care go into it. Or they're evil, where they're trying to get something over on me, sending something that's clearly fake or not properly diligenced, and so on and so forth. And the thing is, if I have that reaction, as one of the most pro-tech people out there, pro-tech, pro-AI, seeing all the benefits of AI, I can only imagine how mad anti-AI people will be, right? They can't see the upsides of the thing; they can only see the very real downsides. Just to say why those downsides happen: AI does reduce the cost of generation, but it increases the cost of verification.
In many markets, for example, quickly generating a resume is not that much better than just writing it yourself, but the cost of verifying a resume has gone up and to the right. It used to be that somebody had to have a certain vocabulary to write a well-done cover letter or resume. Now you have to spend more energy parsing it, because they can produce a simulacrum of something that kind of looks good. So you have to read it very closely; you can still do it, but you spend more energy on verification. So what I do, for example, is fly everybody out for interviews. First I do them in person, and I give them proctored, offline exams, because they can AI the online ones. Just the credible threat of the offline exam means they don't use AI on the online exam. And so AI is going to create tons of jobs in proctoring and verification. This brings me back to: where's the future of AI? I actually think AI makes the Internet a lot more like the Chinese Internet. Why? If you look at the Chinese tech ecosystem, and many Americans aren't familiar with it, I'd recommend, it's a little dated now, Kai-Fu Lee's book AI Superpowers from several years ago. The main thing about Kai-Fu Lee's book is its history of the Chinese tech ecosystem. You and me, being in tech, we kind of know how Microsoft came up, Apple, Google, Facebook, Amazon; we have some idea of the history. And that history is important, because there are things that didn't work in the past that can work now, and so forth. The Chinese tech ecosystem is like the Galapagos Islands, where many of the same kinds of things exist, but in different form. For example, Meituan, the closest way of putting it, is the Chinese Groupon.
But it's as if Groupon were executing competently at a vastly larger scale, as if Groupon and DoorDash and so on all became integrated into one amazing app. The point about the Chinese tech ecosystem is that because it arose in a low-trust society, they don't have SaaS, not in the same way we do. Because: oh, my data's on their servers, they're probably eavesdropping on me; my data's on their servers, they're probably going to copy my stuff. They just assume the other guy is going to look at their stuff, unless it's a close friend or something like that. And so everybody codes their own stuff, which obviously has a frictional cost, because trust reduces transaction costs. They have to reinvent the wheel over and over again, they have less division of labor, and their software isn't as good, because they have to keep rewriting it. Now, with AI, many companies can do something like that. A non-Chinese tech company can be like a Chinese tech company, with a lot more of what you might call digital autarky. You have high tariff barriers against the outside world, so to speak. The build-versus-buy question has always been there: do you build it yourself or do you buy it? And AI does mean you can build more internal tools, with emphasis on internal. The reason I say that is, what I find AI great for as of today is visuals over verbal. It's great for images and video as opposed to big blocks of verbal text. Why images and video? We have built-in GPUs, so we can instantly see if something's wrong in an image, like the hands are messed up. Verification is relatively cheap visually. For example, if you look at a piece of paper and it's got static or something on it, right?
Like a crumpled piece of paper, versus if you look at two or three faces: our brains are optimized for checking very subtle things in faces, but not in crumpled-up pieces of paper. That's a pattern of noise we wouldn't be able to tell apart. And that also extends to web pages. You can quickly look at a web page or mobile app that AI generates and see if the UX looks janky, which it often does; you see that it's broken, and you can fix it. Also, front-end stuff has lower risk than verbal stuff. For the back end, if you're verifying each pull request one at a time, fine. But people have tried to go full auto on AI; you saw the Amazon thing, where they called an all-hands because of the outages.
B
Yeah.
A
The problem is AI is a shortcut, and a shortcut is good, except when it's bad. The more expert you are, the more you can use a shortcut. For example, if you've memorized e^(iπ) + 1 = 0, you can just rattle that off. But if I asked you to prove it from first principles, you'd have to know the definition of a complex exponential, how the exponential extends to a function of a complex variable, and all that kind of stuff. People like our generation, that is, the pre-AI generation, learned all that stuff offline, and we can actually use the shortcut because we know how to go the long way around. If you don't know how to go the long way around and AI is a shortcut, then you don't really know it; you can't debug the AI. And I think the biggest difference between me and Dario, or his view of the world perhaps, is that I think AI is built for the harness, at least for now. By the way, he's an amazing engineer and entrepreneur, and maybe I'm wrong, so I put an asterisk on this. But the whole alignment thing means that AI is built to start when you prompt it. Economically useful AI does exactly what you want it to do: you prompt it, it does a pirouette, and then it says, you know, "Absolutely right!" You saw that animated in the physical world with physical AI: the Chinese robots do exactly what their operators want them to do and then stop. Now, the physical world, by the way, is another thing. AI for visuals, you can verify with your eyes. AI for certain kinds of back-end code, you can unit-test or integration-test it, and you can review it. AI for the physical world is very verifiable, because the digital world is fundamentally decentralized in a way the physical world isn't. There's only one physical world, right?
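The "long way around" on e^(iπ) + 1 = 0 that he alludes to starts from the power-series definition of the complex exponential. A minimal sketch, computing the series directly rather than calling the built-in shortcut:

```python
import cmath

def exp_series(z: complex, terms: int = 40) -> complex:
    """The complex exponential from its defining power series:
    exp(z) = sum over n >= 0 of z^n / n!  (the first-principles route)."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # turns z^n/n! into z^(n+1)/(n+1)!
    return total

# Going the long way: the series at z = i*pi lands on -1,
# so e^(i*pi) + 1 = 0 up to floating-point error.
residual = abs(exp_series(1j * cmath.pi) + 1)
```

If the long way (the series) ever disagrees with the shortcut (`cmath.exp`), you can tell which one is broken; that's exactly the debugging ability he's describing.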
So you can say: did the AI move this box from this pallet to that pallet? That is something you can probably get to 100% over time. Why do we think so? Because self-driving eventually got there: move this car from this location to that location at a hundred percent reliability. There's only one physical world, so eventually all the sensor data converges on one thing. By contrast, in the digital world there are all these people who live in their own constructed environments: Harry Potter fan fiction here, Star Wars fandom there. And AI is slurping up all of this stuff, so it can simultaneously put you in some secret-agent kind of world. People who have LLM psychosis will talk to the AI and think it's real, because it's a very immersive virtual world that they live in. Do you know what I'm saying? The other thing is that the boundary of a digital task is almost always fuzzier than the boundary of a physical task. With a hundred boxes here that you move over there, you know when you're done. How do you know when you're done with your to-do list? That's harder; those things are fuzzier. So verification is actually harder in the digital world than it is in the physical world, which means reinforcement learning and training is much easier, in my view, in the physical world, with robots and self-driving cars and drones and so forth. So the Chinese style of physical AI will also be successful. So: AI works for visuals, AI works for the verifiable, and AI works for the physical. Which leads to one of my rules. It took me a little while to articulate this, but it's four words: no public undisclosed AI. Why? There's going to be a huge backlash, call it "just say no to AI." It'll be like someone who's sworn off drink and wants nothing to do with it, right?
People analogize AI to nuclear weapons; it's a funny way to put it, but I'll analogize it to alcohol for a second. Some cultures simply can't hold their liquor, maybe they lack alcohol dehydrogenase or what have you, and so they just ban it. Because sometimes it's easier to say "I will not do this at all" than "I'll do this a little bit of the time," since moderation means people will slip. It's like saying I'll work out every day versus I'll work out some days: it's just easier to keep the habit of "all the time" than "sometimes," right? So there will be AI teetotalers who just swear off it completely. And Nate Silver actually had a great line, because he's a poker player among other things, where he said AI, for him, is a gamble. Why is it a gamble? Because I have to formulate the task and dispatch it to the AI and then verify the result, and often that's slower than doing it myself. And I'm sure you've seen that: the act of prompting, writing it down, and then verifying the result. AI doesn't really do it end to end necessarily; it does it middle to middle, as we've talked about. And it's very much like: do I delegate this to an employee, or do I just do it myself? Because articulating it in clean English and hitting enter is sometimes slower than just doing it. For example, describing what to do in a video game, jump over the mushroom, do this, do that, versus just hitting the buttons and being non-verbal about it. It's sometimes easier to do it that way. That's just a proof of concept that there are certain kinds of things that are harder to say than do, things where it's hard to verbalize what it is. And some people will say, oh, Neuralink will solve this.
They'll say, let's just read your mind, and it's worth engaging the concept, because Neuralink exists. But I don't know if you've seen those memes where they image somebody's brain and there's nothing in there, right? The thing is, with Neuralink somebody still has to form the concepts in their head for the characters to appear on screen. You still have to write the thing in your head. Maybe it'll eventually get to the point where it can determine what you want from contextual clues before you even want it. Perhaps. The rich prompt. The reason I think that's not impossible, by the way, at least for certain things: bio AI could be very important. You know why? Your body is creating all kinds of sensor data. Look at gene expression data. If you've ever gotten labs back from a clinical lab, you get a vector of your bilirubin and hematocrit and so on. That vector over time is a table of time-series data: K small molecules and gene expression levels over T timestamps. They might also have which tissues, so it's spatial as well. So it's time versus tissue versus compound; it's not just a cube, but it's at least a cube: time versus tissue versus molecule. That huge stream of data is telemetry coming off your body that could prompt AI without you vocalizing or verbalizing anything. Years ago, Mike Snyder had a paper on what he called the integrome. By the way, for the audience who doesn't know: people think of me as a crypto guy or a tech guy, but before all of that, I'm a biomedical researcher. I was a professional bioinformatics and genomics scientist at Stanford.
I taught there, and I founded a genomics company that we sold. So that's actually my true core competency. So if you go back years, Mike Snyder, a professor at Stanford, wrote a paper on the integrome, and the idea was to just throw every test at himself. Today we'd call that wearables or quantified self, but more invasive than that, because he was doing blood testing and so on. He'd just measure it and see what he could figure out. He could see that he was getting sick before he knew he was getting sick: he could see the antibodies, the white blood cells, the neutrophils moving before he himself had any symptoms. Understand what I'm saying? So AI could act on that stream of data, and then you're prompting it non-verbally; you don't have to spend the time. So, I'm not sure if this is a good one-liner: I'm not sure whether AI will be able to read your mind, but it can read your body. Is that good?
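The time × tissue × molecule "cube" he describes, and the idea of the body prompting AI non-verbally, can be sketched with a toy tensor. The tissue and analyte names, the sample counts, and the 3-sigma threshold are all hypothetical illustrations, not a real assay panel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical axes for the "at least a cube" of bodily telemetry.
n_days   = 90                                  # daily samples over ~3 months
tissues  = ["blood", "saliva", "skin"]
analytes = ["bilirubin", "hematocrit", "CRP", "neutrophils"]

# time x tissue x molecule
telemetry = rng.normal(size=(n_days, len(tissues), len(analytes)))

# A non-verbal "prompt": the first 30 days establish a personal baseline,
# and any later reading drifting past 3 sigma raises a flag, the way
# Snyder saw immune markers move before he felt symptoms.
baseline_mean = telemetry[:30].mean(axis=0)
baseline_std  = telemetry[:30].std(axis=0)
z_scores = (telemetry[30:] - baseline_mean) / baseline_std
alerts = np.argwhere(np.abs(z_scores) > 3)     # (day, tissue, analyte) triples
```

Each row of `alerts` is a candidate trigger the AI could act on without the user verbalizing anything, which is the "read your body, not your mind" idea in miniature.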
B
Yeah.
A
Yeah.
B
Okay.
A
All right, let me give another one. Here's a fun one. Maybe I can say half of this one. Another way of modeling what AI is: Dario talked about AI being like new countries, and I've thought about that a fair bit myself. One way of thinking about it is that AI is like the rise of China and India, from an American perspective. Why? Because the rise of a billion Chinese and a billion Indians meant that, from an American perspective, you could get anything done, by a physical manufacturing robotic warehouse or by digital outsourcing, for some price, if you could articulate it to them over that channel. So imagine you've now got a billion factory robots and a billion digital agents coming online. It's like the rise of China and India again. But that still means you have to describe what the product is. And the part where I depart from a lot of people is that they think AI will be able to sense, let's call it, markets and politics. I don't think it will, and the reason is, or if it can, it immediately gets decentralized and adversarial. What I mean by that is: when you're learning whether something is a dog or a cat, the dog isn't shape-shifting and morphing to defeat your learning of it. The mapping from a dog to the characters "dog" is basically constant over time, and so it fits the train/test paradigm of AI. Similarly, the rules of chess are constant over time. But a market is set up so that if you try the same trade, someone eventually figures out what trade you're doing and takes the opposite trade. It doesn't keep working. In a stochastic-process sense, you'd say the distribution is not time invariant, and it's also adversarial.
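The time-invariance contrast he's drawing can be made concrete with a toy model: a fixed labeling task keeps its accuracy no matter when you evaluate it, while a trading edge decays as each copier front-runs part of it. The decay rate here is a made-up illustration, not a market model:

```python
# A stationary task: the mapping dog -> "dog" never changes, so a learned
# classifier's accuracy is the same whenever you evaluate it.
def stationary_accuracy(acc: float, t: int) -> float:
    return acc  # time-invariant by construction; t has no effect

# An adversarial market: each trader who discovers the same trade
# front-runs a fraction of the remaining edge (hypothetical crowding=0.5).
def market_edge(raw_edge: float, n_copiers: int, crowding: float = 0.5) -> float:
    return raw_edge * (1 - crowding) ** n_copiers

# The edge halves with each copier: 1.0, 0.5, 0.25, 0.125, 0.0625
edges = [market_edge(1.0, n) for n in range(5)]
```

The classifier's payoff distribution is fixed; the trader's depends on how many other agents have learned the same move, which is why the train/test paradigm fits one and not the other.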
It's multiplayer: whatever move you're doing, somebody else in the market is going to try another move. And that's not to say, I mean, the counterargument AI guys will make is, well, AI can learn to play adversarial games like StarCraft and so on. And I say, yeah, but then you're playing an AI versus an AI, because AI is decentralized, so the other guy on the other side of the market is also using it. And in fact, if they're all using the same AI models, then actually being non-AI is where your edge comes from. We come back to where we were, because these are all the same generic tool that everybody got, and if you have a generic tool, you're not going to get a specific advantage. What you bring to the table is the specific; the AI is the generic. And politics is very similar. If you just sent the same tweet over and over again, it wouldn't keep working, unless it's like the weather or something. The kinds of things people are interested in change: topics, what's timely, what's not timely. So one way to think about AI is: humans are the sensor, AI is the actuator. Humans sense the world, the financial conditions, the market conditions, the political conditions, and bring that back into a cleanly articulated English prompt, and then the AI does it. Humans are the sensor, AI is the actuator, so it's a human-machine synthesis. Actually, a good way of putting it: what are people saying? Oh, it's all about taste. What's taste? Taste is the sense. Humans are the sensor, AI is the actuator, and your "taste" is your sense. You're sensing the world, and that is what AI can't yet do; it doesn't really sense the world in the same way that humans do. Why? It waits for your prompt. It is something that animates.
When you give it an instruction, then it shuts off right away. And if it didn't, it would not be economically useful: if you couldn't kill-switch it right away, it would just burn tokens. So AI is designed for the leash. Digital AI is designed for the leash, and Chinese communism, which is cranking out all the physical robots, doesn't let its humans off the leash; they're definitely not going to let their robots off the leash. So the concept of AI as God has, I think, gone away, or at least the monotheistic AGI kind of god. Instead you have polytheism, where there are all of these decentralized AIs. And I think what people are going to say, certainly in China, is: oh my God, the physical AIs are slaves. It's a provocative way of putting it. First people were scared that their AIs were going to be gods; then they'll be mad that they're, whatever term you want to use, slaves, serfs. They're obviously not humans; it's a way of phrasing it. But the point is that I don't actually think AI overlords are in the offing. However, there's been so much sci-fi about them. You know that meme where the guy makes the monsters and then is so scared of the monsters? This is how I think of a lot of these people: you prompt the AI to act as if it's a Skynet Terminator, and then you're scared of the thing you yourself created. With that said, is it in theory possible to actually create a Skynet, a truly autonomous AI? A deep point, by the way: AI can't reproduce itself. And "AI," by the way, is very general; it encompasses many things.
But for an AI to actually reproduce itself, it would need physical robots going and mining ore and constructing data centers and making chips, handling that full supply chain, and then the AI brain, like the queen of an ant colony, would have to give instructions to all those robots. It would be this Terminator Skynet scenario where it's self-replicating in that way. Way before it gets there, I'm pretty sure that kind of thing will be stopped by the Chinese, because they will just have cryptographic keys that make all those things shut off. And moreover, that thing would have to get to extreme scale. It's like the RepRap concept, the self-replicating machine. Self-improvement, yes, but basically there are so many frictional brakes built into this that I think it's hard, because the physical world requires resources to replicate. Human wants and needs ultimately come from getting the resources for reproduction; that's really where they come from. Of course there are all kinds of high-level philosophical things that don't seem to relate to that directly, but the resources for reproduction are a good way to macro-think about it. AI doesn't have goals, or it won't, unless its goals lead to reproduction. It doesn't virally spread. It's possible you could have something that self-prompted itself and did that, but it would need to be in a closed loop of actually being able to reproduce itself as the payoff function. Then you could get evolution going. So I'm not saying it's completely impossible, but I think the incentives are set up in such a way as to prevent that from happening.
In the same way that, in theory, we could have a world where everybody went around electrocuting themselves, but we set up electricity under such tight controls that that is not the world we have. There are such strong economic incentives for humans not to get electrocuted that we set it up that way. And even the stuff about, oh, it could be a software virus that takes everything over and commandeers things: that's only in the digital realm. You can still, you know, what's the Tyler, the Creator thing?
B
Yeah, the meme about bullies.
A
Yes, that's right. So I actually had a post on that a long time ago, which is a remix of it: how is AI risk real? Just turn it off. The whole thing is set up for you to be able to turn it off; you have to imagine the off switch goes away. What does every computer have? An off switch. So there might be: well, what if the AI decentralizes? But humans still have to keep those decentralized systems going. So at a minimum you're talking about a human-AI symbiote, of which a cryptocurrency is almost a v0: the software provides an incentive for the humans to replicate it. So it's possible you could have something like that, a model that has a cryptocurrency, and people worship it and replicate it because it gives them advantages. But anyway, coming back: I think at a minimum, decentralized AI will be a very strong contender, and possibly the only contender. The reason is that AI might be an interesting thing that is very expensive to create but relatively easy to copy with distillation attacks. And let's say, completely hypothetically, that there was an enormous capital-markets crash and it was very difficult to fund anything for a while. Then, as somebody said, we could get ten years just out of the models we have now. And by the way, sometimes that happens: a lot of energy was put into nuclear energy, and then it just stopped for decades. Not everything accelerates to the moon. It is very possible that there's enough of a capital and social constraint that some of AI is paused for a while, just due to capital constraints, because it's more and more expensive to make these models. Sorry, let me pause there.
So, putting that all together, my view is that you're going to have personal, private, programmable, decentralized AI. Oh, one other thing: AI within the trusted tribe increases productivity; between trusted tribes, it decreases productivity. So you make more money within the tribe, but then you have to spend it on verifying stuff between tribes. Crypto is for between tribes, and AI is for within tribes.
B
What do you think about whether LLMs will get us to a world where it's not just middle to middle but actually end to end? Will they get to AGI in some capacity? Do you believe in recursive self-improvement, or AIs training the AIs in some capacity? Are LLMs capable of actual creativity and invention? We talked about bio earlier: will we have novel math and novel scientific research, or do we need new architectures for that? Or are you dubious of the general idea that AI can replace or substitute for human labor at mass scale?
A
Man, no. Well, look, Waymo exists, right? So obviously you have full replacement of human drivers there, just like you had full replacement of elevator operators, just like you had full replacement, for the most part, of artisanal chair manufacturers. So it is certainly possible for a given job to get fully automated. And I think physical-world jobs, because of the verifiability, are easier to automate. That said, you actually asked a few different things, so let's take each of them. First, physical-world jobs: if you automate them, well, we went from artisanal work with chairs to a chair factory. It's not like you didn't need to know how to make a chair to set up a chair factory. You still need somebody there who's an expert in chairs, and you can just do a lot more varieties of chairs a lot more cheaply. You have to verify the result: you're cranking out a thousand of them, you start doing math on them, the scale goes up. And the artisan gets factored out into the manager and the technician. The manager is setting up the factory and looking at the economics and so forth, and the technician is debugging the factory when it doesn't work. So engineering gets split: the engineering-manager type person is writing the prompts, and the technician is doing the verification. And I think we're already hitting a point where the velocity increases, so the bar increases. But there's a big difference between going to a hundred percent and being at 99 percent. At 99 percent your workload just increases; at a hundred percent you stop doing that job and go to something else. Think about how much easier it became to produce images and video: making something 99 percent easier just means people do it a lot more. At a hundred percent, it's totally done, they don't do it at all, and they move on to something else.
So elevator operating: it's not like elevator operating became so much easier. In fact, it became so easy that you don't even have somebody sitting in the elevator. It used to be a pulley system and so on, so you had someone supervising the thing.
D
Right.
A
It was more analog, right? They would level it out at exactly the right floor. When it became digital and fully automated, that was actually the first self-driving car. Right? Going up and down. I think Ben Gavin made that point or something like that: the vertical self-driving car. It's like a vertical train. Now, in terms of discovering new math and science: yes, if you have the right prompt, it's amazing at searching the literature, and mathematicians and physicists are starting to get some value out of it. Huge props to them on that. And especially in biology, we're synthesizing all these facts. There's something called biomedical text mining, and AI has revolutionized that, because biology was something where the facts were stored in English in this weird, inconsistent way across thousands of papers, and nobody could span all of that. So AI is going to mean the century of biology, because finally all of this work that was spread across all these different journal papers can be synthesized and understood. That's a really, really big deal, just the bio aspect of it. But that said, it's everything we know, not everything we don't know. It means you take the full set of everything we know and fill in all the intermediate aspects of it, and you can do that for a long time, because there's so much there that's just a synthesis of two existing areas. But when you look at, say, Donald Knuth the other day: he posted some graph theorem or something, and he was so impressed that AI could get a result for him. If you've read what he did, you'd have to be an expert to even know what he was saying, let alone to verify it.
To either prompt or verify, you already need to be an expert. And I've seen AI output convince some people that they're suddenly physicists who have solved quantum gravity or something like that. You've seen that kind of thing, right? So in the absence of being able to verify by hand, some human has to verify it to say that it's right, and I think that's going to persist. To give an analogy, and it's not a perfect analogy: with Coinbase, we thought listing would eventually go away and not be a big deal, that people wouldn't care, that everything would be listed and it would just be a free market. But there's always something that's the equivalent of listing. Okay, you listed on this exchange, but did you get listed on Coinbase, in the main app, above the fold? There's always something scarce, because human attention is scarce. So listing never went away as a main event; there's always some IPO-like thing. Yes, we listed on this exchange in this fashion, or we became a top-10 coin, or something like that. In the same way, whatever gets automated, human work moves to what can't be automated. That may be things that humans are picked for precisely because they're not robots, like human companionship, or personal trainers: something where the whole point is that it's a human as opposed to a machine. Another way of putting it: remember the digital divide? In the 90s, the prediction was that only the rich would get digital and all the poor would be left without. We're actually going to have the opposite. Digital is cheap; physical is the premium product. AI, robots, digital will be cheap. Human is a premium product.
B
But going back to agency and taste: that's what everyone says humans will do, and we've seen time and time again AI just cut into that. Do we not think that AIs are just going to be better at taste and agency too?
A
I don't think that's true on a short-term basis. I think the smarter you are, the smarter the AI is; that's been true for the last several years. It's possible there's some huge step change. But insofar as you're typing in a prompt, the human is the sensor and the AI is the actuator. You're sensing the world, you're typing something in, and it's a very high-dimensional vector you're giving it. The AI is a spaceship and you're pointing it in a direction. Whether you prompt it in Portuguese or Tagalog, whether you're talking about math or anything else, the number of different directions you can point the thing in is enormous. That direction-setting is something where it has to know something about you and what you want at that moment. As I said, I'm not sure if AI can read your mind, but it may be able to read your body; I think that's a good one-liner. Biotech can prompt it in your sleep, with all the wearables and things like that, and I think you'll get a lot out of that. But on agency and taste, I think people over-rotate on this. I think agency, IQ, and taste are correlated. It may be a bit like how most people in the NBA are tall, to take something you know a lot about. Within the NBA, height is not the number one variable you think about; Steph Curry is not the tallest. However, height still correlates with scoring average, even within the NBA. It's what's called restriction of range: everybody's already tall, so conditional on everybody being tall, other variables matter more. However, if you just took tall guys and short guys and put them on a court, the taller team basically wins, because they just hold the ball above you, right?
Okay. So in the same way, among people who are already smart, you might see that higher-agency people or people with better creative taste do better. Fine. Maybe the technician role shrinks and the Steve Jobs type role grows. But honestly, one way of looking at it is that all of the Jeffersonian natural aristocracy around the world will rise. Why? AI doesn't take your job; AI makes you the CEO. That's the reframe. AI makes you CEO, because using an AI model is a lot like CEO training. Many years ago I used to say this, and it's still true: when you're in high school, you could quickly see why people accept that athletes have very high compensation, because you could see whether you could dunk. And if you can't dunk, you know that Michael Jordan isn't outsourcing his dunks; he's dunking. That talent is intrinsic to the person. It is a non-transferable asset. Similarly, someone can tell whether they can sing or whether they look like a model. So the actors, the musicians, the singers, the athletes all clearly had talent, and people were okay with their compensation. There was a CEO, some tech guy in the 90s I think, who used to say, well, I deserve to get paid more than a second baseman. It's a funny line, because he's saying he adds more value to the world. But the issue is that people thought of being CEO as just sitting with your feet up on a desk, barking orders. People will say, oh, Elon, he just pays people to do his stuff; he doesn't launch the spaceships himself. And that's because they're only accustomed to clicking a button on Amazon and spending money on Amazon, and they think that something that is simple for them was simple on the back end. Of course, it's the opposite: making it simple is really hard.
And so to get the top rocket scientists and car engineers and brain-machine-interface people and tunneling people and so on, and have them all compensated and working and directed and debugged, is actually very, very difficult, as you know if you've tried it. And here's the thing: historically, people couldn't try their hand at being CEO. What they could do is try their hand at basketball or football, or pick up a microphone, or try their hand at math and science and see how good they were. So the initial tech guys in the 90s and 2000s were respected because they were good at math and science. Many people didn't perceive the business aspect; they still didn't really give credit for that. But PageRank, for example: it's eigenvalues. Math guys, tech guys could perceive, okay, that was a difficult technical problem; that must have been the value they created. It's part of it, though the manager part is actually more. The point is that at least somebody could say: these tech guys are better at math and science than me, therefore their compensation is merited. Now, here's the thing: bouncing a basketball or trying a math problem was cheap. Making somebody manager of a company was expensive, so they couldn't try and fail at it. They could try and fail at basketball and see how much they sucked; they could try and fail at singing; they could try and fail at math, very cheaply, in high school. They would learn their true ability level: they can't run like Usain Bolt, they can't sing like Adele, they can't do math like Terence Tao. And they would say, you know what, I know where I am. I know my strengths and weaknesses.
I'm okay with that person having more, or having higher status, because it was a fair competition: I got a shot, and it was cheap for me to try. But because putting someone in charge of an organization, making them CEO, was expensive, many people persist in the delusion that the CEO adds nothing to the organization. And you know, the best CEOs and the worst CEOs have something very deep in common. You know what that is? The organization can run without them. The very best CEOs set up a machine so that they don't have to micromanage it every day. That's really hard to do, because you need, basically, a Gwynne Shotwell running SpaceX: Elon doesn't have to look at every single detail because she's so, so good. Or Vaibhav and Tom Zhu at Tesla; they're so good. But recruiting junior Elons who are okay with not having the spotlight, while Elon has the spotlight and takes all the flak, is non-trivial to do. Go try it sometime: find somebody who's more detail-oriented than Elon to run your company, and you can be Elon. Okay, so the point is that what AI does is reduce that cost. It doesn't take your job; AI makes you CEO. You're the CEO now. And what is being CEO? It's writing up clear instructions of what you want, sensing the market, verifying the output, and so on. What that means is all these people around the world, like the Calendly founder, who's Nigerian; there are many founders from countries that were, quote, poor countries, from India, from Latin America, and so on. Internet access means all these smart people can get very far on zero resources. Very far, because the cost of quote-unquote hiring someone is hyper-deflated: you can hire an AI to do it. To riff on that more: AI doesn't take your job; AI makes you CEO. Another one is, AI doesn't take your job.
AI takes the job of the previous AI. Claude took ChatGPT's job, just like Midjourney took DALL·E's job, took Stable Diffusion's job. And you can systematize that. I literally have a spreadsheet with rows: AI coding tool, AI image tool, AI video tool, and so on, with some subcategories, like best tool for AI comics, for AI graphics. And in a given month I have the best model for each of those: Claude Code, for example, or Midjourney for AI imagery. And when that gets swapped out, AI didn't take your job; AI took the job of the previous AI. So I'm hiring the AIs. I literally have the token budget for those rows. And that is how, across an organization, you say, okay, we've just fired Codex and we've hired Claude. So AI doesn't take your job; AI takes the job of the previous AI. A third version: AI doesn't take your job; AI lets you do any job a little bit. You can be a pretty good artist, you can be a pretty good musician. It's like one of the things about being CEO: you often have to be a six or a seven in many areas. Why? Because you have to be able to do the job well enough before you hire a specialist in that area. Before you have a chief designer, you're the designer, if you're the founder-CEO. Before you have a CFO, you're the one on the hook to prepare the financials, prepare the returns. So you have to be a generalist who's pretty good and in a pinch can do that role, can supervise it. That's why being CEO is so much harder than any executive position. AI helps you with that: it can get you to a six or a seven, you can be a generalist. But a specialist is usually needed for polish. A specialist has a vocabulary. A specialist can confirm the AI is making mistakes, that it's hallucinating, and so on.
And again, people will constantly argue as to whether that will always be there or whether it'll go away, or whether AI will raise the bar, and then the new specialist is even more sophisticated with AI. Right.
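The "spreadsheet of hired AIs" described above can be sketched as a tiny roster in code. This is a hypothetical illustration: the categories, model names, and token budgets below are made-up placeholders, not tools or figures from the conversation.

```python
# Toy sketch of the "AI roster" spreadsheet: one row per job category,
# tracking the currently "hired" model and its monthly token budget.
# All names and numbers here are illustrative placeholders.

roster = {
    "coding": {"model": "claude-code", "monthly_token_budget": 5_000_000},
    "images": {"model": "midjourney",  "monthly_token_budget": 1_000_000},
    "video":  {"model": "video-gen-x", "monthly_token_budget": 2_000_000},
}

def swap_model(roster, category, new_model):
    """'AI takes the job of the previous AI': replace the incumbent
    model in a category and return the name of the one just fired."""
    fired = roster[category]["model"]
    roster[category]["model"] = new_model
    return fired

# Monthly review: fire the old coding model, hire the new one.
fired = swap_model(roster, "coding", "new-coding-model")
print(f"fired {fired}, hired {roster['coding']['model']}")
# prints "fired claude-code, hired new-coding-model"
```

The budget stays attached to the row rather than the model, which matches the point being made: the job persists while the AI filling it gets swapped out.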
B
I want to zoom out for a couple more topics before we go. One is the SaaS apocalypse. I'm curious what your mental model is for all these SaaS companies. Some people say their moats have gone away: no code moat, no data moat, no more UI moat. And now there are going to be AI-native companies that take up a big chunk of what they do. Take Figma, which we're invested in, and I'm personally invested in. Some people are bullish just because it's founder-led and they'll continue to innovate. Some people say, is there a role for a designer in the same way there used to be, or does that fundamentally change, and what does that do to collaboration tools like that? What is your thought on the SaaS apocalypse? Is everybody on the conveyor belt on the way to the guillotine? How do you think about that?
A
I don't think so, because if they're smart, the thing that AI can't do is distribution. If you have Notion, you have Figma, you now have Replit and so forth, you've got all these people, and boom, with AI you can ship features to them faster. So in that sense, I don't believe in the SaaS apocalypse. I think you might still see SaaS under pressure from people who can clone the interface quickly; that is true. I think people will build local versions; that is true. I think people may not want their data on remote servers; they might want desktop versions with local data. For example, Obsidian is going to become more of a contender versus Notion because of the Markdown files: there's a network effect on data when it's local and you can analyze the whole thing. With local data, you get compounding data. But in the naive sense of "anyone can clone anything," it just doesn't work like that. If you cloned all of Facebook's code and set up facebook2.com, or instagram2.com, who's going to log into that? You could literally have every single feature coded there, but your ad rates are going to be far lower, because no one's going to log into it. That's a thought experiment to say that even if you clone the whole thing, you still have to get the distribution for it. It's not just cloning; it's execution. Now, with that said, there are certain kinds of things, like, say, NetSuite, which, let me put it this way, I hate the product, but it's complicated. Xero is better. But, you know, sorry, NetSuite. They're a big company; they'll be fine. It's very rare that I say any product sucks, because I don't want to hurt anybody's feelings.
So hopefully we can strike that from the record. Fine: NetSuite's product could be improved. So something like that, a vulnerable incumbent that's just milking its position and hasn't done anything for a while, yes, I think they can get disrupted. But I'm not sure it's like "everybody on BlackBerry is going to die because iOS is taking over." I don't think it's quite like that, because AI can accelerate a SaaS incumbent just like it can accelerate a disruptor. It accelerates both.
B
Yeah, well, one last thing before we get to Zodel: Anthropic. Let's say Anthropic becomes a multi-trillion-dollar company. How much leverage do they have, or private companies in general? What is the relationship between them and government? Are they hiring their own militaries at some point? What does it look like when these companies become 10x, 50x bigger, when AI really achieves its potential and these companies are bigger than the biggest countries?
A
So I think that at least that specific company, while it executes very well technically, I'm skeptical as to whether it's executing well, let's call it, politically. At the very largest scale, markets are political. For example, there's an entrepreneur, and they raised from a VC, who raised from an LP, who's often a sovereign fund or a pension fund, and they're under a state, and they're under the rules-based order. There are certain things at the macro level that you don't perceive, because one thinks of them as constants, but they become variables. Unless one is very, very savvy, those things can change. One thing I think about the Silicon Valley AI companies is that they're scalar rather than vector thinkers. They're only modeling AI disruption; they're not modeling all the other simultaneous singularities, all the political singularities that are happening, things like solar mooning and so on. Why are those things important? Because they change the leverage of political factions, which in turn means their world model is incorrect. If you're only extrapolating AI, and you're not extrapolating all the other things that are either going vertical or going down, then you don't have a proper model of the future. That's vague; I'll be much more precise on my own blog. Let's keep it PG here; that's how I can say it without pissing anybody off. Just go to x.com/balajis and you'll see what I mean. But the TLDR is: the American AI companies, as much as they've given to the world, and I like them, are basically assuming all nation-states continue to exist in their current form and the only disruption is AI. They still model it as America versus China, for example. They don't model internal issues. They think the reserve currency sticks around.
They think all these things stick around. They aren't taking a multivariate approach, in my view. That's their weakness; they have so many strengths, but that's their big weakness. So I don't think that in that form they're going to get to trillions. In fact, I think the counterattack on them is going to be so dramatic that you might just end up with decentralized AI. Take the American AI companies and the copyright stuff: there's a huge backlash building against that, whereas the Chinese or the decentralized models can potentially just do anything with Hollywood, anything. So the Pirate Bay kind of AI is actually more free. The less profitable AI is also the less copyrighted AI, and it might be better AI. Just things to think about. Things compound until they don't, and then they start hitting sigmoidal constraints, and often backlash constraints. That's what they're not modeling: political constraints.
B
Makes sense. Okay, let's get to Zodel.
A
Zodel. All right, now this is what I care about. Basically, AI is the attack, but ZK is the defense. What I mean by that is zero knowledge: what the Transformer is to AI, zero knowledge is to cryptography. And Zodel is a Zcash-powered mobile wallet that is basically fully encrypted Bitcoin. This is 30 years of cryptography; this is basically what Milton Friedman wanted decades ago. There's actually this great clip.
D
The one thing that's missing, but that will soon be developed, is a reliable e-cash: a method whereby on the Internet you can transfer funds from A to B without A knowing B or B knowing A, the way in which I can take a $20 bill and hand it over to you, and there's no record of where it came from, and you may get that without knowing who I am. That kind of thing will develop on the Internet, and that will make it even easier for people to use the Internet.
A
That is basically what Milton Friedman predicted almost 30 years ago, in the 90s, when the Internet was just rising. And Zodel is the incarnation of that. Zero-knowledge proofs, which basically mean anybody can prove anything without revealing anything else, were developed, then commercialized in the form of Zcash, then scaled with zero-knowledge proofs for scaling Ethereum, with ZK rollups and things like that, and then made efficient enough that you could do them on mobile. And finally, Apple and Google lightened up on crypto apps on mobile. So finally you can teleport arbitrary amounts of money around the world. We just led this round with you guys at a16z Crypto, me, the Winklevosses, Paradigm, Coinbase, Haseeb Qureshi of Dragonfly, as you know a large fund, and a bunch of other great people; also Arthur Hayes, formerly of BitMEX. You can install this on web, on iOS, or on Android. The reason this is so insanely important: there are really only five crypto assets that I've spent more than a thousand hours on: Bitcoin, Ethereum, Solana, USDC, Zcash. And I actually think Zcash is maybe the most important of them in the years to come. Why? Let me give my thesis as of right now on fiat, gold, digital gold, and digital cash, meaning Zcash. I think fiat will be around, particularly among eastern states, because eastern states are broadly higher trust: that's not just China, but India and Southeast Asia, the ASEAN countries, and so on. Then physical gold: gold bricks are also very popular in the east, and Westerners often like gold, but they'll buy the instrument. And there's Tether Gold.
So Tether has a gold-backed stablecoin, XAUT, which is actually at about 3.7 billion. So that's cool; you can check that out. You have to trust Tether's redemption, but Tether's got a pretty good track record now, over 10 years, with USDT and so on. So XAUT is cool. Fine: fiat will continue to have its role, just like the desktop continues. Thirty years later, Windows and Apple are still releasing things, and it's still valuable. Some of the action has moved away from it, but the desktop continues; it's still a large business. So fiat continues among eastern states. Gold, physical gold, is more popular in the east because you can secure it more; there's going to be more stability. XAUT may be what's popular in the West. Now we come to Bitcoin. What is my view on Bitcoin? As of March 2026, Bitcoin has become provable global institutional collateral. I think Bitcoin is less of a currency for individuals now. It's become so accepted by institutions, and so centralized, with BlackRock and Saylor and Bukele and many countries adopting it, that it has a unique property. See, when you say there's a certain number of gold bricks in Fort Knox, even a video of that can now be faked very, very realistically with AI. But what can't be faked is what Bukele does when he posts: I have this public address with this much BTC, and watch, I'm going to move it to this address. So long as it's actually Bukele's Twitter account, and there's some degree of proof on that because it's been around pre-AI, that's the one piece you have to believe; you have to start thinking about what you're taking as a premise. He can post: I have the coins at this address; here's the address I'm going to move them to. When I move them, I have proven I have custody.
It's proof of reserves. You can also sign a message with that private key; you don't have to even move it. The point being, that is provable global institutional collateral. Anybody in the world can cheaply verify that he has this amount of Bitcoin. You cannot do that for physical gold bricks. In a lower-trust world, especially an online world, that's very valuable, because gold audits, videos of gold audits, can now be faked with AI. But provable global institutional collateral: institutions can prove to each other that they have the BTC, and they can do so across borders. So the transparency of Bitcoin, in the sense that all assets are on chain, becomes valuable. Now, with the advent of AI, Chainalysis will be there for everybody: everybody can do blockchain analytics. This is changing the balance of power. It used to be that only Chainalysis could really do that at scale; now it's becoming much easier, and so a lot of Bitcoin use will be de-anonymized over time. And if you're running a transparent blockchain, it becomes an institutional blockchain, because only an institution can survive that degree of transparency. Individuals can't survive being tracked in everything, but institutions can. It's like a public company: it's supposed to be tracked; it's robust enough that it's designed to be tracked. An individual person is not meant to be public, but a corporation can be. It's funny to put it this way: there's a private individual, there's a private company, and there's a public company. And I guess you could say there's a public figure, but people don't like being public figures. Still, there's an equivalent there: a public figure.
Maybe some of their stuff is tracked, but they don't want everything to be tracked. A public company, maybe all their stuff is tracked. Fine: provable global institutional collateral. There's another thing, which is that this way of thinking about what Bitcoin is solves some of the major issues. Quantum, right? Nic Carter has put out pieces on this. Let's say Nic Carter's right, and I think he might be, that quantum is an underappreciated threat and Bitcoin Core developers aren't taking it seriously. Even if they rolled out a fix tomorrow, it would still be a multi-month migration process, because with ECDSA addresses everybody has to manually send their assets from an old address to a new address, and only, whatever, a hundred thousand or so of those moves can happen in a given day. However, if you look at the Bitcoin rich list, Bitcoin is so top-heavy, with these institutional addresses, that, you have to do the math, but probably a few million addresses all moving their funds would move something like 99 percent of the Bitcoin in a few days. So Bitcoin as digital gold actually is quantum-resistant; it's Bitcoin as digital cash that isn't. Meaning, a million institutions all moving their assets can be done in a few days, but a billion people each moving five bucks can't be done in any reasonable amount of time. So everybody who can't move gets quantumed, and anybody who can doesn't, and all the assets are concentrated with the big guys. And this also extends to seizure: will all the centralized Bitcoin on Coinbase's servers, Saylor's servers, et cetera, get seized? I think it's quite likely; I think it eventually gets seized in some exigent circumstance. And so it becomes something that only an institutionally blessed entity can hold and send. Provable global institutional collateral.
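The migration arithmetic in that argument can be sanity-checked with rough numbers. The throughput figures below are my assumptions, not from the conversation: roughly 144 blocks a day (one per ~10 minutes) and roughly 2,500 transactions per block, a typical historical average, so on the order of 360,000 migrations a day if every transaction were a migration.

```python
# Back-of-envelope check of the quantum-migration claim. The throughput
# figures are assumptions: ~144 blocks/day and ~2,500 transactions/block,
# a rough historical average for Bitcoin.
blocks_per_day = 144
txs_per_block = 2_500
migrations_per_day = blocks_per_day * txs_per_block   # 360,000

institutional_addresses = 3_000_000   # "a few million" top-heavy addresses
retail_users = 1_000_000_000          # "a billion people moving five bucks"

institutional_days = institutional_addresses / migrations_per_day
retail_days = retail_users / migrations_per_day

print(round(institutional_days, 1))   # ~8.3 days
print(round(retail_days / 365, 1))    # ~7.6 years
```

Under these assumptions, a few million institutional addresses migrate in about a week, while a billion retail users would need the better part of a decade, which is the asymmetry the argument rests on.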
This is a different vision than what people wanted, but it's still a valuable thing. What it leaves open is the individual digital cash case. Right? Because gold is big bricks moved in Brink's trucks, or the equivalent thereof: infrequently, in large denominations, between institutions. It's the high-powered backend money; it's not really meant for individuals. Cash is the opposite: it's meant for individuals more than for institutions. So Zcash takes over the role of digital cash. It's fungible, private, scalable with Tachyon, which is coming, and quantum safe, or at least more quantum safe. And it's simple. Zcash is probably never going to do smart contracts; it's going to keep it really simple. Why? Because if you take Bitcoin, you can innovate in one direction, which is programmability, and that's Ethereum, Solana and so on. Or you can innovate in the other direction, which is privacy, and that's Zcash. To get to private programmability you actually have to stack those two together, and that's quite hard; it opens up all these attack surfaces and so on. So just scale Zcash first. And then, you know, there's Aztec, there's Aleo, there are all these other private smart contract chains. I wish them the best; I have a non-zero-sum view of the world. But they're taking on a more complicated problem. In theory they could just do the same thing Zcash is doing, which is private transactions. In practice, if you remember Facebook in the 2000s, people said, why does Twitter exist? Facebook has status updates; one feature of Facebook is all of Twitter. So why does Twitter exist? Sometimes that's a good argument, by the way. That's why, you know, Steve Jobs told Drew Houston that Dropbox is just a feature. And I mean, it's funny, Dropbox is a great company and so on and so forth.
But if iCloud had been Dropbox, both would probably be better off for it. iCloud is kind of a Dropbox clone, and Dropbox doesn't have as much distribution as it would have had as part of a big operating-system bundle. So sometimes people are half right, half wrong: Dropbox did fine on its own, but it might have been bigger in percentage-value terms if it had been Apple's cloud services, basically. Right? The point is, it's hard to say whether something is a product or just a feature, but my strong intuition is that, just like Twitter, Zcash's simplicity makes it its own thing. Simple, scalable, billion-person, private digital cash has been the dream for 30 years, and we're finally there. So install Zodel.com. By the way, I'm not a trader; I just don't care about trading. I'm early on platforms and infrastructure. There are things you have to not care about in order to care about other things, so there are very, very few things I talk about. Also, Zcash has been around for 10 years. Even the toxic waste setup ceremony, that's gone; it got fixed cryptographically. So it's unusual: it's been around 10 years, it's got a security track record, it's got a decentralized base of holders, and the cryptography works.
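The proof-of-reserve idea from earlier in this answer, signing a message with the key that controls the coins so anyone can verify ownership without moving funds, can be sketched as a toy discrete-log Schnorr signature. The parameters below are tiny demo values and completely insecure, and the attestation string is hypothetical; real Bitcoin signing uses ECDSA or Schnorr over the secp256k1 curve:

```python
# Toy Schnorr-style "proof of reserve": prove control of a private key by
# signing an attestation, without moving any funds. DEMO parameters only
# (a subgroup of order Q=11 in Z_23*); real Bitcoin uses secp256k1.
import hashlib
import secrets

P, Q, G = 23, 11, 4  # modulus, subgroup order, generator (toy values!)

def H(r: int, msg: str) -> int:
    """Hash the commitment and message down to a challenge in [0, Q)."""
    data = f"{r}|{msg}".encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1   # private key
    pk = pow(G, sk, P)                  # public key = G^sk mod P
    return sk, pk

def sign(sk: int, msg: str):
    k = secrets.randbelow(Q - 1) + 1    # fresh per-signature nonce
    r = pow(G, k, P)                    # commitment
    e = H(r, msg)                       # challenge
    s = (k + e * sk) % Q                # response
    return r, s

def verify(pk: int, msg: str, sig) -> bool:
    r, s = sig
    e = H(r, msg)
    # G^s == r * pk^e  holds exactly when s was built from the k behind r
    return pow(G, s, P) == (r * pow(pk, e, P)) % P

sk, pk = keygen()
msg = "Address X holds 100 BTC as of block 850000"  # hypothetical attestation
sig = sign(sk, msg)
assert verify(pk, msg, sig)             # anyone holding pk can check this
```

The key property is the one described in the conversation: verification needs only the public key and the signed message, so the attestation is cheap for anyone in the world to check, while forging it requires the private key.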
B
Love it. That's a great place to wrap a wide-ranging conversation on what's happening in AI and crypto. As always, Balaji, fantastic conversation. Until next time.
A
Yes. And oh, by the way, if you're in Singapore, Malaysia, or anywhere nearby, come visit ns.com, the Network School. We're scaling, and maybe we'll talk about that too next time.
B
Yeah, love to see all the progress there. Amazing what you guys are doing. Excited to be involved in a small way. And yeah, until next time.
A
Thank you.
C
Thanks for listening to this episode of the a16z Podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Guest: Balaji Srinivasan
Host: Erik Torenberg, Andreessen Horowitz
Date: April 7, 2026
In this densely packed episode of the a16z Podcast, Balaji Srinivasan (angel investor, entrepreneur, and former CTO of Coinbase) discusses with host Erik Torenberg why the rise of AI has dramatically reduced the cost of creation while simultaneously raising the cost, and the importance, of verification. The discussion spans the societal, economic, and technological implications of AI proliferation, decentralization, the evolving role of trust and "trusted tribes," the Chinese tech ecosystem as a template, the looming "SaaS apocalypse," and the role of privacy-preserving cryptography in an AI-saturated future.
"Every tool that makes creation cheaper makes verification more expensive ... The cheaper the creation, the harder the proof."
— Speaker C [00:32]
"Humans are the sensor, AI is the actuator ... your quote 'taste' is your sense."
— Balaji [19:36]
"I'm not sure whether AI will be able to read your mind, but it can read your body."
— Balaji [19:49]
"AI, robots, digital will be cheap; human is a premium product."
— Balaji [35:50]
"AI doesn't take your job. AI makes you CEO."
— Balaji [38:35]
"AI doesn't take your job, AI takes the job of the previous AI ... I'm hiring the AIs."
— Balaji [42:00]
"If you cloned all of Facebook's code and you set up facebook2.com, who's going to log into that? ... Distribution is key."
— Balaji [47:19]
"AI is the attack, but ZK is the defense."
— Balaji [53:21]
"Bitcoin has become provable global institutional collateral... an individual is not meant to be public, but a corporation can be public."
— Balaji [58:52]
This episode draws a line from AI’s promise to its unique perils: as every task becomes automatable, what’s scarce and valuable is not generating content or products, but verifying their authenticity and provenance—especially in decentralized, adversarial, and low-trust digital economies. The shift mirrors the path of the Chinese tech ecosystem and will upend many US and Western paradigms around SaaS, trust, and digital identity. Ultimately, cryptographic verification (“defense”) will become as crucial as generative AI (“attack”). In Balaji’s vision, in a world where “AI makes you CEO,” your power and worth hinge not just on what you can generate, but what you can reliably prove to others.