Transcript
A (0:00)
Cost is always the burning topic when it comes to building in the cloud. Luckily we've been treated to quite a few cost reductions from AWS over the years, and now and then we get cost increases too. Today we're going to talk about some new updates from AWS and zoom in on some recent cost increases as well as decreases. I'm Eoin, I'm here with Luciano, and this is AWS Bites. AWS Bites is brought to you by fourTheorem. Stay tuned to the end of the episode so we can tell you a lot more about that. Now, whenever we talk about AWS Lambda, we discuss its cost model. With Lambda, you pay for the number of requests you make, which is generally the much smaller component, but you also pay for the execution duration, which is billed in 1 millisecond chunks, and the cost is proportional to the amount of memory you allocate for that function. Now, there are a few different things that can affect this cost. Further back, in 2022, tiered pricing was introduced, and that lets you save up to 20% if you've got high volumes of Lambda usage. You've also got Compute Savings Plans, which don't just apply to containers and EC2 but to Lambda as well, and they let you commit to a certain spend and save up to 12%. And of course you've also got provisioned concurrency. That lets you keep functions warm, which means you're paying as long as they're provisioned and ready to process a request, but the rate you pay is less than the standard on-demand rate. Now, for a long time there has been a bit of a free lunch when it comes to Lambda billing. We're not just talking about the pretty nice free tier here, but about the cold start duration, also known as the INIT phase or the INIT duration. This is the time your function takes to load the runtime and the handler code before it actually passes the event to your handler. A lot has been written about this; we'll link to a nice article by Luc van Donkersgoed, which I think we've mentioned before.
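The cost model described above (per-request charge plus duration billed in 1 ms chunks, scaled by memory) can be sketched as a quick back-of-the-envelope calculation. This is a minimal Python sketch using illustrative us-east-1 on-demand prices; the constants are assumptions, so check the current AWS pricing page before relying on the numbers:

```python
# Rough sketch of Lambda's on-demand cost model.
# Prices are illustrative (us-east-1 style figures), not authoritative.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # dollars per request
PRICE_PER_GB_SECOND = 0.0000166667     # dollars per GB-second of duration

def lambda_monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate on-demand cost: a request charge, plus duration
    (billed in 1 ms increments) scaled by allocated memory."""
    request_cost = requests * PRICE_PER_REQUEST
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    duration_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + duration_cost

# Example: 10M invocations, 120 ms average duration, 512 MB allocated
# -> duration dominates: ~$10 of duration vs ~$2 of request charges
print(round(lambda_monthly_cost(10_000_000, 120, 512), 2))
```

Even this toy calculation shows why duration is usually the bigger component, and why memory allocation matters so much: doubling the memory doubles the duration cost for the same runtime.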
Up until now, the initialization phase was not billed for managed runtimes. Those are the official runtimes like the Python, Node.js, Java, .NET, and Ruby runtimes. By the way, if you want to learn how a runtime works, we have a whole episode dedicated to that topic. That's episode 104, and you'll find the link in the description. Now, this free-lunch free INIT phase was never something you could avail of if you had a custom runtime, so if you were doing something like C, Go, Rust, or your very own custom runtime, that was never something you got. Similarly with container image packaging, which we use quite a lot now, especially for large dependencies: that never had this free INIT phase, and provisioned concurrency never had it either. But there were a couple of benefits. So not only was it free, you also got two vCPUs during this INIT phase, regardless of the amount of memory you allocated for your function. Normally the vCPUs are tied to, and scale linearly with, the amount of memory you allocate, so if you wanted two vCPUs you'd have to allocate at least 3,538 megabytes of memory to get those two cores. So you had a little bit of a performance boost in that INIT phase to get things up and running. The maximum execution time of that phase, by the way, is 10 seconds, and if your initialization exceeds this, the function times out and gets retried. But you could get a lot done potentially in those 10 seconds. So AWS has potentially spotted an issue there. Do you want to tell us more about that and what they've done about it?
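The INIT phase discussed above is simply the code that runs at module scope, once per execution environment, before the handler sees its first event. Here's a minimal Python sketch showing the split; the config value is hypothetical, just to mark the kind of work (imports, clients, config loading) people push into init to take advantage of the early vCPU boost:

```python
# INIT phase: everything at module scope runs once during initialization,
# before the first invocation, historically with up to 2 vCPUs regardless
# of the configured memory size.
import json
import time

INIT_STARTED = time.monotonic()
# Hypothetical example of work done at init: loading config, creating
# SDK clients, warming caches, etc.
CONFIG = {"table": "example-table"}

def handler(event, context):
    # Invoke phase: this is what runs per request, billed in 1 ms chunks
    # proportional to the allocated memory.
    return {
        "statusCode": 200,
        "body": json.dumps({"config": CONFIG, "echo": event}),
    }
```

The practical upshot: heavy one-off setup placed at module scope ran on the faster (and, for managed runtimes, previously unbilled) INIT phase, while per-request work in the handler was always billed.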
