A (23:29)
Right, so of course we're not there yet, but that's where we're going. And so why is this useful? So coming back to scaling, I said that there are basically three main elements of scaling: there's the bandwidth, the I/O, and then the actual compute. Now the amazing thing about a real-time ZKVM is that it's the core of a broader transition. The way I would say it is: it helps us scale all three of these, but not just on its own; it's the unlocking piece that enables a broader transition that addresses all of these elements of scaling. And so that's why, when we talk about the ZKVM, to me it's the most exciting element of this broader change. And that's why, when you said at the top of the podcast that this might be the biggest change ever, I would agree, but not just because of the ZKVM itself. We'll talk in a second about statelessness, about data availability sampling; all these things come together to unlock this. So let's take it step by step. Of those three constraints, the one immediate impact you get is on the compute side, because that's the nature of ZK proofs: with very little compute effort on the verification side, you're able to verify arbitrary-length execution. So no matter how much you fill the block (of course we can talk about constraints; there's still block building, some node somewhere needs to do that, so it doesn't give you literally infinite throughput), basically whatever length of computation you have, you can compress it down into a constant-size proof and then verify that with just very little compute. So compute scaling, that's in a way the easiest one; that's the one you get very easily. Now you look at the other two and you ask, okay, how does it impact I/O?
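The cost asymmetry described here can be sketched in a few lines. All names and numbers below are illustrative stand-ins, not a real verifier API: the point is only that re-execution cost grows with the block while proof verification cost stays constant.

```python
# Illustrative sketch of compute scaling via succinct proofs.
# Numbers are arbitrary units; the shapes of the two cost curves
# (linear vs. constant) are the actual point.

VERIFY_COST = 1_000  # constant verification cost, regardless of block size

def reexecution_cost(num_txs: int) -> int:
    """Cost of checking a block by re-running every transaction: O(n)."""
    return num_txs * 50  # grows linearly with the block

def proof_verification_cost(num_txs: int) -> int:
    """Cost of checking the same block via a constant-size ZK proof: O(1)."""
    return VERIFY_COST   # num_txs deliberately unused

# A 10x bigger block costs 10x more to re-execute...
assert reexecution_cost(10_000) == 10 * reexecution_cost(1_000)
# ...but exactly the same to verify with a proof.
assert proof_verification_cost(10_000) == proof_verification_cost(1_000)
```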
So historically, traditionally, when you execute an Ethereum block, you start executing, you do some compute, and at some point you want to load some state. Actually, already at the beginning of a transaction you need to load your account, and you need to load the account that you're calling into or sending ETH to. So you immediately need to go to disk, right? You have this entire intermixing: sometimes you go to disk and load values, sometimes you do some compute, then you go to disk again. One actual change to Ethereum that we're already making before the zkEVM is called the Block-Level Access List. It adds annotations to a block saying: this is the data you'll need. So what happens now is that you go to disk at the very beginning, you bring in all the data, and then you can do the execution. But you still have this element of having to go to disk both before the block and then again after the block, because we have to update all the values and then also compute the new state root. So how does it look with a ZKVM? Well, there are a few things that are fundamentally improved by the ZKVM. The important part is that the ZKVM basically already takes this in as part of the claim: hey, assuming the blockchain was in this state and I apply these transactions, now the next state is this. So you no longer need to go and load the values from disk; you're saving this I/O on the load side naturally. And then the thing that you'd normally still have to do is go and write the updates, right? You still have the state of Ethereum, so after you verify the block you still have to go and say, okay, these values changed, and apply that change. But one: that's no longer on the critical path.
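The shape of that claim can be sketched as a small data structure. These types and the toy prover/verifier pair are hypothetical, purely to show the interface idea: the verifier checks the proof against three commitments and never touches the underlying state on disk.

```python
# Sketch (hypothetical types, not a real ZKVM API) of the claim
# "assuming the chain was at pre_state_root and I apply this block,
# the next state root is post_state_root".

from dataclasses import dataclass

@dataclass(frozen=True)
class BlockClaim:
    pre_state_root: str    # commitment to the state before the block
    block_hash: str        # identifies the block being applied
    post_state_root: str   # claimed commitment to the state after

def fake_prove(claim: BlockClaim) -> bytes:
    """Stand-in prover, only so the sketch runs end to end."""
    return repr(claim).encode()

def verify(claim: BlockClaim, proof: bytes) -> bool:
    """Stand-in verifier: looks only at the proof and the commitments
    in the claim -- no disk reads of actual account state."""
    return proof == fake_prove(claim)

claim = BlockClaim("0xabc", "0xblock1", "0xdef")
proof = fake_prove(claim)
assert verify(claim, proof)
# A tampered post-state root is rejected.
assert not verify(BlockClaim("0xabc", "0xblock1", "0xbad"), proof)
```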
So you can do that after you've already finished verification. If you're a validator, you can already vote, you can say: ah, this block was valid, and then afterwards go and actually apply the updates. So in terms of, say, what is the current price of this Uniswap pool, or what's the balance of this account, I might only update that on disk after I already know the block is valid. So that's a natural benefit you get. But if you want to push it further, and this is what I was saying, this is one of those changes that is enabled by the ZKVM but is its own change: stateless Ethereum, or partially stateful Ethereum. So what does that mean? Well, today any node in the Ethereum network basically has to hold the full state, and with re-execution that is unavoidable, because if you want to verify a block, you have to go and load all the data; you have to have it all locally. Once you have the zkEVM, that becomes optional, because you don't actually need the data locally to double-check the validity of the block. So what you could do, in principle, is throw away the entire data: you can keep just this root commitment, and you can just always update the root commitment, and that's it. In practice, because Ethereum nodes have multiple functions (they also operate the Ethereum mempool, they have to understand the validity of transactions in flight, all these kinds of things), you don't want to run fully stateless; you want to run in what we're calling partial statelessness. For example, there's this proposal called VOPS, Validity-Only Partial Statelessness. It means you specifically keep a subset of the state, and that subset can be defined by several different rules.
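The fully stateless extreme mentioned here, keeping only the root commitment and moving it forward, can be sketched as follows. The class and its methods are hypothetical, a minimal model of the idea rather than any real client.

```python
# Sketch (hypothetical) of a fully stateless node: it stores no account
# data at all, only the current root commitment, and advances it once a
# block's proof checks out. Voting can happen before any disk write.

class StatelessNode:
    def __init__(self, genesis_root: str):
        self.state_root = genesis_root   # the ONLY state kept locally

    def on_block(self, claim: dict, verify) -> bool:
        """Accept a block if its proof extends our current root."""
        if claim["pre"] != self.state_root:
            return False                 # proof is about some other state
        if not verify(claim):
            return False                 # invalid proof
        self.state_root = claim["post"]  # just move the commitment forward
        return True

node = StatelessNode("r0")
assert node.on_block({"pre": "r0", "post": "r1"}, verify=lambda c: True)
assert node.state_root == "r1"
# A block building on the wrong pre-state is rejected, root unchanged.
assert not node.on_block({"pre": "r0", "post": "r2"}, verify=lambda c: True)
assert node.state_root == "r1"
```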
It can be, say, the balances of all the accounts, or, I don't know, if you're specifically interested in some state that belongs to you as the user, you can define what state you're interested in. But basically, now you can keep a subset of the Ethereum state, and that's totally safe because of the ZKVM, right? And you only have to apply the diff; you only have to go to disk, you only have the I/O overhead of updating, for that subset. So that's the second piece: you have the zkEVM for compute, and now you have partial statelessness for more optimized I/O, and also, by the way, for keeping your disk size contained. We'll talk about state growth maybe towards the end, but basically you don't have to have a huge disk. And then that leaves the third one, which is bandwidth, right? How do you actually keep scaling the chain with the ZK system while keeping bandwidth requirements the same or even reducing them? Well, that's yet another separate trick, again enabled by the zkEVM, but separate: you no longer actually need to download the full block. And that makes sense, right? Because you get the ZK proof, you download the proof, and the proof tells you: hey, assuming there is a block with this hash, once I apply the block, this is the result, and that's proven. So the only thing you need to know about the block is that it exists. And that's a bit of a nuanced thing: why do you even need that? I mean, someone clearly must have created it, otherwise they could not have created the ZK proof. So why do you have to verify that it exists? Well, for the nuanced reason that otherwise you can withhold the data. That's also why, for example, we even have blobs in the first place; for L2s it's the same story: you have to publish, you have to basically prove that the block was published.
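The partial-statelessness pattern, keep a declared subset, trust the proof for overall validity, and only write the slice of each block's diff that touches your subset, can be sketched like this. The rule ("balances of accounts I watch") and all names are hypothetical, loosely following the VOPS description above.

```python
# Sketch of partial statelessness: after a block is PROVEN valid, a node
# applies only the part of the state diff that intersects the subset of
# state it chose to keep. Everything else is safely ignored, because the
# ZK proof already covered it.

def apply_diff(local_state: dict, watched: set, block_diff: dict) -> int:
    """Apply only the watched part of a verified block's state diff.
    Returns how many writes we actually performed."""
    writes = 0
    for key, value in block_diff.items():
        if key in watched:
            local_state[key] = value
            writes += 1
    return writes

# e.g. we only track the balances of two accounts we care about
watched = {"alice.balance", "bob.balance"}
local = {"alice.balance": 10, "bob.balance": 5}
diff = {"alice.balance": 7, "carol.balance": 99, "pool.price": 123}

writes = apply_diff(local, watched, diff)
assert writes == 1                    # only alice's balance touched our subset
assert local["alice.balance"] == 7
assert "carol.balance" not in local   # untracked state never hits our disk
```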
So anyone can get access to the transactions that were applied. But what you can do, and that's again where the synergy with the L2s is just a beautiful story: we have already built out specialized functionality for verifying the existence of data very efficiently without downloading it all. It's called data availability sampling; it's called blobs. Right. So what we will do is take the Ethereum blocks and basically become our own rollup, in a sense: we're putting the data into the blobs. It's called the blocks-in-blobs EIP. And with that, all an Ethereum node has to do is sample the data. And we are in the process of making that more and more efficient, because we want to provide more and more data for our L2 partners. And that now naturally also benefits ourselves, because now you can have bigger and bigger blocks while keeping the footprint in terms of bandwidth very constrained. So now, coming back: we have the ZKVM, we have partial statelessness, and we have blocks-in-blobs with data availability sampling. Together they scale bandwidth, they scale I/O, and they scale compute. And that is how you use all of these elements to scale the blockchain. And then there are some nuances; you don't get everything for free. You have state growth, which we have to address separately; we can talk about that. And you have things like being able to efficiently sync an Ethereum client, or being able to efficiently run an RPC node, like what Infura is doing, these kinds of things. So there's more to scaling than this. But the core story is that you have these three constraints, and the zkEVM directly and indirectly addresses all three.
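The sampling intuition behind data availability sampling can be sketched numerically. This is a deliberately simplified model, not the real protocol: assume the block data is erasure-coded so that any half of the chunks suffices to reconstruct it. Then an adversary who withholds enough to make the block unrecoverable must withhold a large fraction of chunks, so a light node downloading only a handful of random chunks catches the withholding with overwhelming probability.

```python
# Simplified model of data availability sampling. Assumption: erasure
# coding means any 50% of chunks can reconstruct the block, so an
# unrecoverable block has well under half its chunks available. Each
# random sample then hits a missing chunk with high probability, and
# the chance that ALL samples succeed drops exponentially.

import random

def sampling_catches_withholding(available: set, total: int,
                                 samples: int, rng: random.Random) -> bool:
    """Return True if at least one sampled chunk turns out to be missing."""
    for _ in range(samples):
        if rng.randrange(total) not in available:
            return True  # withholding detected
    return False

rng = random.Random(0)
TOTAL = 512
# Adversary publishes only a quarter of the chunks: block unrecoverable.
available = set(range(TOTAL // 4))

caught = sum(
    sampling_catches_withholding(available, TOTAL, samples=20, rng=rng)
    for _ in range(1000)
)
# Per trial, all 20 samples miss the withheld chunks with probability
# (1/4)**20, roughly 1e-12, so every one of the 1000 trials catches it.
assert caught == 1000
```

The design point mirrored from the text: the node never downloads the full block, only a constant number of samples, yet gains near-certainty that the data was actually published.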