Steve Gibson (118:33)
Oh, we're going to find out. Last Wednesday, Let's Encrypt republished a letter from Let's Encrypt's executive director, Josh Aas. The letter originally appeared in their 2024 annual report. I've grabbed four interesting and important successive paragraphs from their executive director's letter. They read: Next year is the 10th anniversary of the launch of Let's Encrypt. Internally, things have changed dramatically from what they looked like 10 years ago, but outwardly our service hasn't changed much since launch. That's because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API, pretty much as many as you need. More than 500 million websites benefit from this offering today, and the vast majority of the web is encrypted. Our long-standing offering won't fundamentally change next year, but we're going to introduce a new offering that's a big shift from anything we've done before: short-lived certificates, specifically certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event. Because we've done so much to encourage automation over the past decade, most of our subscribers aren't going to have to do much in order to switch to shorter-lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20 times as many certificates as we do now. Of course, that's because if they expire more quickly, you've got to issue them more often. He says it's not inconceivable that at some point in our next decade we may need to be prepared to issue 100 million certificates per day. You know, okay. And they're not getting paid per certificate, so... okay. Anyway, he says: That sounds sort of nuts to me today. Uh huh. But issuing 5 million certificates per day would have sounded crazy to me 10 years ago. 
Here's the thing, though, and this is what I love about the combination of our staff, partners and funders: whatever it is we need to do to doggedly pursue our mission, we're going to get it done. It was hard to build Let's Encrypt. It was difficult to scale it to serve half a billion websites. Okay, so this raises so many questions. The first biggie is: is website certificate theft and abuse somehow a far larger problem than anyone knows? We, and many of our podcast listeners, track security news quite closely. One of the long-time benefits of our listener feedback is that I'm always receiving pointers to news that I may have missed. But as far as I know, there have been exactly zero instances of website certificates being stolen and abused. I can't recall a single instance of this occurring during the entire life of this podcast. Yes, it would be very bad if that happened, and we want to take measures to assure that it doesn't and can't, or that if it does anyway, that we're somehow able to respond quickly enough to minimize any damage. Certificate revocation is the classic way that this has been handled. And we know from our recent coverage that the industry is moving back toward the use of browser-side CRLs, certificate revocation lists, based on Bloom filter technology. The industry had tried OCSP, the Online Certificate Status Protocol, and decided that, despite the total solution offered by server-side stapling of OCSP responses, not enough web servers had chosen to staple OCSP responses to their certificates. That resulted in a privacy threat to users, whose web browsers were therefore forced to query the certificate authorities for the current status of certificates, thus leaking information about the sites they were visiting. 
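The browser-side revocation lists Steve mentions lean on Bloom filters: a compact bit array that can say "definitely not in the set" or "possibly in the set," which is exactly the shape a browser wants for a fast local revocation check. Here's a minimal sketch of that core idea, not Mozilla's actual CRLite implementation; the class, sizes, and serial numbers are invented for illustration.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: space-efficient set membership with rare false
    positives but no false negatives -- the property browser-side
    revocation lists exploit."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive several bit positions from independently salted hashes.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # False means definitely not revoked; True means possibly revoked,
        # so a real system layers extra filters or queries the CA to be sure.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Revoked certificate serials are folded into the filter the browser ships.
revoked = BloomFilter()
revoked.add(b"serial-deadbeef")
print(revoked.might_contain(b"serial-deadbeef"))   # True: flagged as revoked
print(revoked.might_contain(b"serial-cafef00d"))   # expected False (false-positive odds are negligible)
```

The win is size: the browser can carry the revocation status of millions of certificates in a few megabytes and answer lookups locally, with none of OCSP's privacy leak.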
Now, the Heartbleed flaw, which threatened to leak web server certificates, truly upset everyone with the possibility that snapshots of a web server's RAM could be remotely obtained that might, and in a few verified instances did, contain the web server's private key. So the entire industry scrambled around and quickly got that resolved. But even then, while Heartbleed was known and unpatched, there were no known instances of actual website spoofing through the use of stolen certificates. Not one. It's important to remember that just having a website's stolen certificate does not automatically mean that the website can be spoofed. A web browser which knows where it wants to go first uses DNS to determine the current IP address of that website's domain. It then initiates a TCP/TLS connection to that remote IP, asserting in the TLS handshake the web domain it wishes to connect with. That's when the remote site returns the certificate to the browser which asserts the site's identity. What this means is that any site that intends to spoof another site's identity must not only be in possession of a valid and trusted identity certificate for that spoof target site, but also, before that stolen certificate even has the opportunity of coming into play, the attacker must somehow arrange for the victim's browser to believe it is connecting to the real web server, when in fact it's connecting to the attacker's server. There are two ways this can be done. The first is to somehow poison the victim's DNS lookup to cause it to obtain the attacker's IP address rather than the authentic web server's IP. This is why poisoning DNS has always been another real hot button for the industry. 
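The identity check described above, where the browser compares the hostname it asked for against the name the certificate presents, can be sketched as a simplified matcher. This is only an illustration of the idea, not the full RFC 6125 rule set real browsers implement (no IP addresses, no internationalized names, and real rules restrict wildcards to the leftmost label).

```python
def hostname_matches(cert_name: str, requested_host: str) -> bool:
    """Simplified check of a certificate's presented name against the
    host the browser asked for: exact and single-label wildcard
    matches only."""
    cert_labels = cert_name.lower().split(".")
    host_labels = requested_host.lower().split(".")
    # A wildcard label matches exactly one label, so counts must agree.
    if len(cert_labels) != len(host_labels):
        return False
    for c, h in zip(cert_labels, host_labels):
        if c == "*":
            continue               # wildcard covers any single label
        if c != h:
            return False
    return True

print(hostname_matches("www.example.com", "www.example.com"))   # True
print(hostname_matches("*.example.com", "api.example.com"))     # True
print(hostname_matches("*.example.com", "a.b.example.com"))     # False
```

This is why a stolen certificate for one domain is useless for impersonating a different domain: the name check fails before anything else happens. The stolen certificate only helps if the victim's browser is already asking for the very name it contains.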
Back in 2008, Dan Kaminsky realized that poorly randomized query IDs and ports for queries which were being made from the Internet's big DNS name servers meant that attackers could predict the exact replies those name servers were expecting and inject their own false replies onto the Internet as a means for poisoning the caches of those name servers. While those faked replies remained cached, bogus IP addresses would be returned to anyone on the Internet who asked. Once again, the Internet had a meltdown and quickly worked in a rare, concerted effort to update all name servers at once. And because this promised to take some time, I quickly created GRC's online DNS spoofability test to allow anyone to determine whether the name servers they were using had been updated and were now safe for them to use. I said there were two ways to divert a user to a malicious machine. The second way is by physically intercepting and manipulating the user's traffic. This could be done at scale by attacking and manipulating BGP, the Border Gateway Protocol, which is used to synchronize the routing tables of the Internet's big iron traffic routers. We've covered various mistakes in BGP routing through the years, and also some mysteries that may or may not have ever been malicious. The main problem with doing this is that it's an extremely visible attack, and also that there have been so many innocent mistakes made, where all of the Internet's traffic is suddenly rerouted through Moldova or wherever, that the Internet's routers have acquired much better defenses through the years against blindly believing whatever routing instructions are received. If it's no longer feasible to get the Internet itself to reroute traffic bound for one IP to another, what's left is intercepting traffic by getting close to either of the endpoints. 
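Backing up to the Kaminsky attack for a moment: the weakness comes down to predictability. A resolver accepts whichever reply arrives first bearing the transaction ID it expects, so sequential IDs hand the race to an off-path forger. Here's a toy model of that behavior; the class, names, and addresses are all invented for illustration, not real DNS machinery.

```python
import random

class ToyResolver:
    """Toy DNS resolver that caches the first reply whose transaction
    ID matches the one it sent -- the behavior the Kaminsky attack
    exploited when IDs were predictable."""

    def __init__(self, predictable: bool):
        self.predictable = predictable
        self.counter = 0
        self.cache = {}

    def next_txid(self) -> int:
        if self.predictable:
            self.counter += 1           # sequential IDs: trivial to guess
            return self.counter
        return random.randrange(65536)  # random IDs: 1-in-65536 guess

    def query(self, name, replies):
        # 'replies' are (txid, ip) pairs in arrival order, racing to
        # be the first one whose ID matches.
        txid = self.next_txid()
        for reply_txid, ip in replies:
            if reply_txid == txid:      # first matching reply wins the cache
                self.cache[name] = ip
                return ip
        return None

# The attacker observed the last ID was 41, so a forged reply carries 42
# and arrives just ahead of the authentic answer.
victim = ToyResolver(predictable=True)
victim.counter = 41
forged = (42, "6.6.6.6")
authentic = (42, "93.184.216.34")
victim.query("example.com", [forged, authentic])
print(victim.cache["example.com"])      # prints 6.6.6.6 -- cache poisoned
```

The 2008 fix was exactly what the `predictable=False` branch hints at: randomize the transaction ID, and randomize the source port too, so a blind attacker has to guess both.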
If an attacker can get near enough to the web server's Internet connection to divert the traffic bound for it to somewhere else, then an illegitimate certificate for the diverted web server would finally be both useful and required to complete the ruse. Or if an attacker wished to selectively target a specific individual user or group, then being near enough to the user's or group's Internet connection to interfere with it directly could also accomplish the same task, though only for those users who were downstream of the traffic interception. My intention here has been to create a bit of a reality check. Just obtaining a valid and not yet expired or revoked web server certificate is not the end of the challenge. It's just the beginning. Most bad guys who obtained someone else's web certificate, if they somehow could, might think, well, that's nice, now what? Because as I've just demonstrated, a stolen web server identity certificate may be cool to have, but it's quite difficult to actually use it to spoof the stolen site's identity. There's a lot more involved. That being the case, it's probably less surprising to note that, to the best of our knowledge, this has never actually happened. It's not a big problem. In fact, it's not even a small problem. Remember that we used to have certificates that lasted five or 10 years, while at the same time we had a completely broken and non-functional certificate revocation system, and it still never happened. Okay, so today Let's Encrypt's ACME protocol certificate-issuing automation is creating 90-day certificates and there are no problems. Just as there were also no problems with everyone else's one-year certificates. Just as there weren't when certificates lasted two years and three years or more. Meanwhile, the browser side of the industry is gearing up to solve the problem that isn't actually a problem by finally making certificate revocation lists work. 
Yet for some reason that I'm at a loss to understand, Let's Encrypt is announcing that they're voluntarily going to make their job 20 times more difficult by shortening the lifetimes of their certificates from 90 days, which is not a problem, to just six days, which will only be a problem for them. There is, however, one potentially monumental problem that has not been talked about, as far as I can tell, anywhere. It's the reason GRC will be sticking with the longest-life web server certificates DigiCert will offer. Having all of those 500 million websites using Let's Encrypt's free six-day certificates means that not one of those websites will be providing a certificate with a longer than six-day life. I know that seems obvious, but think about that. Having all of those 500 million websites using Let's Encrypt's free six-day certificates means that not one of those websites will be providing a certificate with a longer than six-day life. After all, that's the entire point of having websites use six-day certificates. If one gets stolen, it won't be usable after an average of three days from the time of its theft. Right? Because if certificates have a six-day life and you did a random sampling, on average you'd catch them with three days of life remaining. But now consider that this in turn makes those 500 million websites, among which, as I said, will not be GRC, totally dependent on Let's Encrypt's service being continuously available. This creates a single point of failure for those 500 million websites, which among other things is completely contrary to the fundamental and deliberately distributed design of the Internet. We are creating a single point of failure for no reason. We saw what happened recently when the Internet Archive came under sustained DDoS attack and was forced offline for days. If Let's Encrypt's services were to ever come under a similar sustained attack, the consequences for the Internet would quickly be devastating. 
With websites using six-day certificates, on average half of those will have expired after three days. Put another way, there are 144 hours in six days. If a concerted DDoS attack were to be launched at Let's Encrypt, then for every hour of the attack's duration, on average 3.47 million websites would lose their identity certification. 3.47 million websites per hour of a DDoS attack on Let's Encrypt. They would not be offline, because the attack would not be at them. But these days they might as well be. And if an attack could be prolonged through all 144 hours of those six days, by the end of that time every one of those 500 million websites using Let's Encrypt would have lost their certification. We know that while we're sitting in front of our web browsers, it's usually possible to force a browser to accept an expired certificate. Sometimes it's not simple, and I've seen instances where it doesn't seem possible. It depends entirely upon the browser. And most people wouldn't anyway. We've seen how adamant and frightening web browsers have become about insisting upon HTTPS. But forcing a web browser to open a web page wouldn't work anyway, because a great many HTTPS TLS connections have no user interface. The only thing we're able to force our browser to open is the primary web page of a site. All of the HTTPS links modern web pages depend upon behind the scenes would fail, scripts would not load, and sites would not function. And why? For what? Because this solves some great problem with certificates so pressing that it's necessary for the secure connectivity of 500 million websites to all be put at risk at once? No. As we've seen, both theoretically and practically through history, there's no problem that this solves. The industry has never had any problem with stolen certificates. It's a made-up problem. So in conclusion, I cannot find any need for Let's Encrypt to move their current 90-day free certificates to just six days. It makes no sense. 
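The back-of-the-envelope numbers above check out. Assuming the 500 million sites renew on a uniformly spread six-day cycle, each hour of CA outage silently expires an equal slice of the population:

```python
SITES = 500_000_000          # websites relying on Let's Encrypt
LIFETIME_HOURS = 6 * 24      # six-day certificates -> 144 hours

# With renewals spread uniformly, each hour of CA outage lets one
# 1/144th slice of the population expire with no replacement issued.
expired_per_hour = SITES / LIFETIME_HOURS
print(f"{expired_per_hour:,.2f} sites lose certification per outage hour")
# -> 3,472,222.22, i.e. roughly 3.47 million per hour

# And at any instant, the average remaining lifetime across a uniformly
# distributed population is half the full lifetime:
avg_remaining_days = (LIFETIME_HOURS / 2) / 24
print(f"average remaining life: {avg_remaining_days} days")   # -> 3.0
```

The uniform-renewal assumption is the sketch's one simplification; in practice renewal times would cluster somewhat, which would make the hourly losses lumpier but no smaller in total.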
Not only is there no demonstrated problem with the current 90-day certificates, but the web browsers really are finally going to be bringing working certificate revocation technology online. And that technology will be able to selectively revoke certificates in minutes or hours, rather than waiting for them to expire in days. Josh's letter said, because we've done so much to encourage automation over the past decade, most of our subscribers aren't going to have to do much in order to switch to shorter-lived certificates. Now, it's not clear from this, and perhaps I'm grasping at straws here, but it might be possible to read this as Let's Encrypt subscribers will be given a choice. So perhaps super-paranoid sites will elect to use super-short-lifetime certificates, whereas others will choose to remain with 90-day certificates, if they're permitted to do so. It's not clear at this point. Josh's letter also claimed, quote, this is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event. Well, okay, yeah. This is a bit like saying we're switching from 4096-bit public keys to 10-times-longer 40,960-bit keys because these will be so much more secure than keys which are only one tenth as long. Sure, okay, technically that's true, but there's already no problem whatsoever with 4096-bit keys, which no one is able to crack, and which all the cryptographers agree will be completely secure for another several decades at least. Josh says that it minimizes exposure time during a key compromise event, except that we don't actually have key compromise events, and browsers equipped with CRLite Bloom filter certificate revocation will be able to respond in minutes rather than days. And what's more, Let's Encrypt is actively feeding their certificate revocations to the industry's CRLite project. So Let's Encrypt is already depending upon browser-side revocation. 
The bottom line for me is that I'll be steering clear of Let's Encrypt's automation for as long as DigiCert is able to offer longer-life certificates. Taking a few minutes once every year to update certificates is not a problem for me, for our listeners, or for the 70% of the Internet's websites that are currently using Let's Encrypt certificates. You know, it's been a terrific service so far. I mean, it has achieved what Josh says it has. But all I see is downside with the move to six-day certificates. If you have the choice, I'd suggest remaining with the longest-life certificates you can.