Jonathan (20:49)
Yep. All right, well, let's talk about, interestingly, something else that AMD has done. This is Linux-adjacent, but I figured our audience would be very interested in it. And that is EntrySign. This is a vulnerability in AMD processors that was discovered by researchers at Google, Tavis Ormandy being one of them. And I've just sort of learned over the years that any time Tavis's name is attached to something, it's going to be wonderfully done, well written, and really impressive. And this is absolutely the same. So, EntrySign. It's all about microcode updates for AMD processors, particularly the Zen line, so all of the Ryzen and all of the EPYC processors.

Microcode. So we have to step back and say that Intel has not made an actual native x86 processor since the 90s, and AMD hasn't since like 2004. What they actually make are RISC processors, reduced instruction set computer processors, and then there's this little shim of firmware on top, which is microcode, that essentially emulates the x86 and x86-64 instruction set. Right? So that's the background here: we're not literally running x86-64 assembly directly on the real CPU; there's this microcode in the middle. That microcode sits in sort of God mode on your processor, and it makes everything work. And because it is essentially software, it does need to get updated from time to time. Both AMD and Intel push out updates for their microcode, and in the Linux world, the kernel will actually load those microcode updates for you during boot, which is pretty cool.

As you might imagine, AMD and Intel are particularly interested in making sure that you only run legitimate microcode on your processor. They don't want people running their own custom microcode, for security reasons if nothing else, like being able to trust Secure Encrypted Virtualization and some of those things. And AMD has this really interesting method of making sure you're running signed microcode. The way it works is that a public key is included right inside the microcode binary blob, and the blob itself is signed with the matching private key. So if the signature checks out against that included public key, you know the blob was signed by whoever holds that key pair.

But then you might ask, well, how does your processor know that that public key is the one it should trust? How do we know this is actually signed by AMD and not just signed by some random key? Well, the scheme AMD used is that they take that public key, a 2048-bit RSA key, and they hash it down to a 128-bit value that the processor can check. On one hand, you might look at that and go, oh my goodness, you're losing so many bits, you're reducing the security so much. But because it's RSA, that's not the same as 2048 bits of, say, an AES key; bits are not one-to-one in how much security they give you. A 2048-bit RSA key is generally rated at only around 112 bits of security, so hashing it down to a 128-bit value, so long as you're doing the hashing securely, does not lose you anything. There's a rough sketch of that whole verification flow right after this.
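For the visual thinkers, here's a minimal sketch of that two-step check in Python, using the pyca/cryptography library. To be clear, this is my illustration of the scheme as Jonathan describes it, not AMD's actual update format: the function name, the stand-in fused values, and the signature padding and hash choice are all assumptions made for the sake of the sketch.

```python
# Illustrative sketch only -- not AMD's real format. The fused values and
# the padding/hash choices below are stand-ins for whatever the hardware does.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import cmac, hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers import algorithms

CMAC_KEY = bytes(16)          # stand-in for the AES key burned into the CPU
FUSED_PUBKEY_MAC = bytes(16)  # stand-in for the fused 128-bit "hash" of AMD's key

def verify_update(pubkey_der: bytes, signature: bytes, body: bytes) -> bool:
    # Step 1: "hash" the embedded public key with AES-CMAC and compare it
    # against the value fused into the processor at the factory.
    c = cmac.CMAC(algorithms.AES(CMAC_KEY))
    c.update(pubkey_der)
    if c.finalize() != FUSED_PUBKEY_MAC:
        return False
    # Step 2: check the blob's RSA signature using that same embedded key.
    pub = serialization.load_der_public_key(pubkey_der)
    try:
        pub.verify(signature, body, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

The important shape here is that step one is the only thing tying the embedded key back to AMD, and as we're about to see, step one is the weak link.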
All right, now we have to talk about hash functions, and what makes a good one. Well, first, it's one-way. It takes a whole bunch of data on one side, you run all that data through the hash, and you get a small value on the other side. Every time you run the same data through, you should get the same output. And it should be essentially impossible to engineer a scenario where you put two different inputs in and get the same output. There shouldn't be any way to reverse the process, and there shouldn't be any way to game the process. In fact, if anyone ever discovers a collision for a given hash, that is, two different inputs that give the same output, then that hash algorithm is basically busted, broken, you should not use it anymore. It's done, for what we call a cryptographic hash. And for this sort of use case, those are the assurances you want your cryptographic hash to give you.

AMD used a construction built on AES, the Advanced Encryption Standard. That's the encryption primitive the US government has signed off on, and lots and lots of people around the world have looked at it and come to the conclusion that yes, what AES is doing is secure. But there are a handful of different modes and constructions built on top of AES, because you can do different things with it, and one of those is the Cipher-based Message Authentication Code, AES-CMAC. What this does is you give it an input and you give it a key, and it gives you an output tag. And the assurance, the thing it actually guarantees, is that an attacker who does not know the key cannot change the input message and still get the same output.

Well, AMD used this to do that public key hashing step. The problem is that for the hashing to happen on your CPU, the key for this CMAC also has to be burned into the CPU, and the researchers from Google were able to reverse engineer that and pull the key out. It so happens that it was the example key from NIST, the National Institute of Standards and Technology. Interestingly enough, that shouldn't matter for what they're doing, but it kind of does.

Remember I mentioned this idea of what assurance a primitive gives you? The assurance for AES-CMAC is that if you don't know the key, you can't create collisions. But because of the way AES-CMAC works, if you do know the key, you can run the calculation forward over most of the message, run AES backward from the output you want, and solve directly for the final input block that connects the two. You can make any message produce any tag you like; there's a little runnable demonstration of exactly that trick right after this. And so, because of the hashing algorithm that was used, it's actually possible to generate an RSA public key that matches AMD's real key when you run it through this hashing algorithm, because hashing is just not what CMAC is intended for.
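Here's that demonstration, again with pyca/cryptography. The key is the AES-CMAC example key from NIST SP 800-38B, which is reportedly the one the researchers recovered from the hardware; the prefix and target tag are values I made up for the demo.

```python
# With the CMAC key known, forging a message for any target tag is a few
# lines of arithmetic. Key taken from NIST SP 800-38B's worked example.
from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")

def aes_enc(block: bytes) -> bytes:
    e = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    return e.update(block) + e.finalize()

def aes_dec(block: bytes) -> bytes:
    d = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
    return d.update(block) + d.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def dbl(b: bytes) -> bytes:
    # GF(2^128) doubling, used to derive CMAC's final-block subkey K1
    n = int.from_bytes(b, "big") << 1
    if n >> 128:
        n = (n ^ 0x87) & ((1 << 128) - 1)
    return n.to_bytes(16, "big")

prefix = b"forged microcode"  # exactly one 16-byte block; could be anything
target = bytes.fromhex("00112233445566778899aabbccddeeff")  # tag we must hit

# Run the CBC-MAC chain forward over the prefix...
state = bytes(16)
for i in range(0, len(prefix), 16):
    state = aes_enc(xor(state, prefix[i:i + 16]))

# ...then run AES backward from the target and solve for the last block.
k1 = dbl(aes_enc(bytes(16)))
last_block = xor(xor(aes_dec(target), state), k1)

forged = prefix + last_block
c = cmac.CMAC(algorithms.AES(KEY))
c.update(forged)
assert c.finalize() == target  # the forged message CMACs to the tag we chose
print("forged message hits the target tag")
```

No brute force anywhere in there: knowing the key turns "find a collision" into solving one equation for one block.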
Now, for those of you that are security nerds, you're thinking about this and going, wait, wait, wait. Pseudo-randomly generating an RSA public key doesn't give you anything, because you still don't have the private key that goes along with it. And that was my thought process too. Like, wait a second, this idea of randomly generating a public key just doesn't even make sense. It turns out that it kind of does. The way you get a valid RSA key is you generate two very large primes and multiply them together, and that product, the modulus, is essentially your public key. So a good, secure RSA public key is the product of exactly two large primes. But if you're just generating that number pseudo-randomly, it almost certainly is not the product of two large primes; it's the product of lots of smaller primes. And a number like that you can actually factor, and once you have the factors, you can compute a matching private key. Which means that from a bad public key, you can derive a perfectly working private key, which means you can sign your own microcode update with that bad key pair. And because it's a public key you generated specifically to collide under this not-quite-right hashing algorithm, you can generate valid AMD microcode updates. There's a toy version of that factoring trick at the end of this segment.

And the thing that's really scary about that is, potentially, if you're on a hypervisor, you could use this to break the Secure Encrypted Virtualization stuff, right? For the vast majority of us, we don't have to care about this; it's not going to affect us. But there are some enterprise use cases where this is a big deal. Either way, the way the Google researchers got there was so fascinating to me, I thought hopefully everybody else would enjoy hearing the story too.
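To make that last step concrete, here's a toy version of the factoring trick in Python, with sympy doing the factoring. It uses a 64-bit "modulus" so the factorization is instant; the idea is that the real attack plays the same game at 2048 bits, with moduli deliberately shaped to be easy to factor. The constants and message are made up for the demo, and this is my sketch, not the researchers' actual tooling.

```python
# Toy demo: a randomly chosen RSA modulus usually splits into many small
# primes, and once you can factor it, the private exponent falls right out.
import random
from math import gcd, lcm
from sympy import factorint

random.seed(42)
e = 65537

while True:
    # A random odd 64-bit number standing in for a "generated" public modulus
    n = random.getrandbits(64) | (1 << 63) | 1
    factors = factorint(n)                        # tiny n, so this is instant
    if any(exp > 1 for exp in factors.values()):  # keep it square-free for simplicity
        continue
    lam = lcm(*(p - 1 for p in factors))          # Carmichael lambda of square-free n
    if gcd(e, lam) == 1:
        break

d = pow(e, -1, lam)            # the matching private exponent
msg = 0xC0FFEE % n
sig = pow(msg, d, n)           # "sign" with the key pair we conjured up
assert pow(sig, e, n) == msg   # and it verifies against the fake public key
print(f"n = {n}, factors = {dict(factors)}")
```

The real attack, as described, just adds one constraint: the forged public key has to be shaped so it both factors easily and CMACs to the fused value, and that's the part the researchers' tooling searches for.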