
Nina Polshakova is a software engineer at Solo.io, where she's worked on Istio and API Gateway projects. She's been part of the Kubernetes release team since v1.27 and is currently serving as the release lead for v1.33.
Kaslin Fields
Hello and welcome to the Kubernetes Podcast from Google. I'm your host Kaslin Fields.
Abdel Sghiouar
And I am Abdel Sghiouar.
Kaslin Fields
This week we're diving into Kubernetes 1.33 with the release lead Nina Polshakova. Stay tuned to get highlights on new features, deprecations and removals.
Kaslin Fields
But first let's get to the news.
Abdel Sghiouar
Google Cloud Next took place April 9 to 11 in Las Vegas, Nevada. Google announced 229 new things, ranging from agentic AI to Tensor accelerators and AI infrastructure, to the 10-year anniversary of GKE and the Inference Gateway. We left a link in the description if you want to read more.
Kaslin Fields
At KubeCon EU 2025, Google announced the public preview of MCO, or Multi-Cluster Orchestrator, a new service for managing workloads across clusters. MCO acts as a recommendation engine that recommends which cluster should host which application based on capacity and availability constraints. We had an opportunity to discuss the new feature with the folks who work on MCO, and we'll publish the episode soon.
Abdel Sghiouar
The CNCF announced two new certifications in its portfolio: the Golden Kubestronaut, which recognizes individuals who have obtained all 13 certifications offered by the CNCF in addition to the Linux Foundation Certified Linux Administrator, and the Cloud Native Platform Engineering Associate, which is an entry-level certification for platform professionals.
Kaslin Fields
The Kubernetes community introduced the Kube Scheduler Simulator, a web UI tool that lets users create Kubernetes resources and observe how the scheduler and its plugins make scheduling decisions. The tool is meant to help users understand the internals of the Kubernetes scheduler's decisions and how constraints influence them.
Abdel Sghiouar
At KubeCon EU in London, Mirantis announced they will be donating two Kubernetes projects to the CNCF: k0s, which is a lightweight Kubernetes distribution, and k0smotron, which is a cluster management tool. Both projects joined the CNCF ecosystem at the sandbox stage. And that's the news.
Kaslin Fields
Welcome to our Kubernetes 1.33 release episode. I'm excited today to be speaking with the release lead, Nina Polshakova, who is a software engineer at Solo.io, where she's worked on Istio and API Gateway projects. She's been part of the Kubernetes release team since 1.27 and is currently serving as the release lead for 1.33. Welcome, Nina.
Nina Polshakova
Hey, thanks for having me.
Kaslin Fields
Thank you very much for the bio. I feel like we went over a good amount of your history there, but is there anything else you want to elaborate on about how you got to where you are now?
Nina Polshakova
Yeah, I think when I first joined Solo, the fact that Solo has open source projects, where you can go on GitHub and see in the repo what people have contributed, was definitely one of the selling points. Before I joined Solo, I hadn't really done much open source, but it was an opportunity to work in the open source space and contribute to code that other people could see, which is kind of scary, working in public. I remember my first issue was a community issue, actually. Somebody was asking us to add support in our API gateway for the Istio integration to work with Istio revisions. We'd talked about it offline with my team, and it was my first time responding to a GitHub issue, and I had to tell him that we couldn't support revisions, but we could expose this other field, and hopefully that was okay. I was so worried that for some reason I'd get a negative response online, but it really wasn't that scary. I think most open source communities are very welcoming and encourage collaboration, so I've only had positive experiences commenting on GitHub issues. Even as a member of the Kubernetes release team, sometimes you have to tell people that their enhancement isn't going to make the cut, like, please file an exception. And even there, people are always very welcoming and understanding. I think it's a good community to be a part of because people are nice online. It's not really like social media; it's a project you're all working together to build and make better.
Kaslin Fields
Yeah. One of my favorite comments that I always love to pull out is from the book Working in Public by Nadia Eghbal, where she says that the thing that keeps most people from working in open source is not actually technical competency, it's the fear of committing a social faux pas. So people usually are pretty nice in these communities.
Nina Polshakova
Yeah, that was definitely my biggest fear. I was talking to my manager like, oh, I sent him the message before that I was going to send on GitHub, and he's like, yeah, that sounds fine, just post it. But once you get over that initial hurdle, I think it's actually a very welcoming space and you get to work on cool stuff.
Kaslin Fields
Granted, there are exceptions, and if you want to be amused by them, Dims and Tim Hockin did a talk at KubeCon a couple of years ago where Kubernetes maintainers read mean tweets. Very good.
Nina Polshakova
On GitHub, there's a professional level of communication. Yes, you can downvote someone's comment, but it's not as mean as Twitter per se.
Kaslin Fields
This is true. They have some GitHub comments in there too that are just spicy.
Nina Polshakova
A little spicy, not super spicy.
Kaslin Fields
But anyway, let's bring it back to Kubernetes 1.33. As the release lead, you are of course very familiar with what's going on in this release. Doing three releases a year is a lot, so let's help folks keep up with what's going on. What have we got going on in 1.33?
Nina Polshakova
Yeah, I think this is a pretty exciting release, because we have 64 enhancements going in, which is a pretty big jump from previous releases. In 1.32 we had 44, which is a pretty good number too.
Kaslin Fields
I thought that was impressive.
Nina Polshakova
64 is definitely a significantly larger release, and we have a lot of exciting features moving to stable in this release, like sidecars, which has been a very long-awaited one. So native sidecar support is now stable. The other one people like mentioning is multiple Service CIDR support. So yeah, a lot of cool stuff going into stable in this release, but we also have some fun, exciting hot-topic features: dynamic resource allocation has six new features in this release, all related to DRA. So I think it's a good mix of both stability and new and exciting things coming on the horizon.
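For readers who want to try the multiple Service CIDR feature mentioned here, extending a cluster's Service IP space comes down to creating a ServiceCIDR object. A minimal sketch (the object name and CIDR value are illustrative, and on clusters older than 1.33 the API group version may still be a beta one):

```yaml
# Adds an additional Service IP range to the cluster (illustrative values).
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
    - 10.112.0.0/16
```

Once applied, new Services can get ClusterIPs allocated from the extra range when the default range runs out.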
Kaslin Fields
Let's dive into some of these. There are a bunch of blogs that come out around the release, so if you want to dive into details on these things, definitely check out the blogs. I've got a list here from the highlights blog that came out before the release, and there's one deprecation mentioned, which is the Endpoints API. It was stable and is being replaced by the EndpointSlices API, which I believe makes some improvements to the way that endpoints are actually used. The Endpoints API was a bit overly simple, right?
Nina Polshakova
Yeah, exactly. One thing to call out is that deprecated means marked for removal, so features will continue to function until they're actually removed; in Kubernetes, that's at least one year. So anything that's deprecated is still going to function. The Endpoints API specifically is getting deprecated in favor of EndpointSlices to make sure it's possible to run clusters without the Endpoints controller. EndpointSlices have effectively replaced Endpoints since 1.21, and several new Service features, like dual-stack and topology, are implemented only for EndpointSlices, not Endpoints. So the direction of the community is also moving toward EndpointSlices, not really developing many new features on top of Endpoints. Another thing to call out is that kube-proxy doesn't even use Endpoints anymore, and the Kubernetes Gateway API conformance tests are also using EndpointSlices rather than Endpoints. So in general, the Kubernetes community is moving on to EndpointSlices. What this KEP means is it's mostly about documentation and tests; it's not actually deleting or modifying the Endpoints API. That's not a goal of the KEP, it's actually explicitly listed as a non-goal. It will improve the end-to-end tests and documentation to show that Kubernetes is moving towards a world where most users run Kubernetes with the Endpoints and EndpointSlice mirroring controllers disabled, more in line with the direction the community's going.
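As a point of comparison, an EndpointSlice is a standalone object tied to its Service by a label, which is what lets it scale better and carry richer per-endpoint data than the old Endpoints API. A rough sketch (names and addresses are illustrative):

```yaml
# An EndpointSlice backing a Service named "example" (illustrative values).
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # This label is what associates the slice with its Service.
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
```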
Kaslin Fields
Very cool. Thank you for that additional context and for explaining the deprecations and removals bit; always something good to bring up. I also want to mention that in the highlights blog post there's a really nice description at the top about the rules of how deprecations work for different levels of features. If it was an alpha feature versus a beta feature, the rules for how the deprecation works are a little bit different, and I thought it was a really good explanation.
Nina Polshakova
Yeah, definitely.
Kaslin Fields
And so that's the one deprecation that was mentioned in the highlights, but there are three removals that I want to call out. Two of them were in the blog post, then we added one more. So there's the removal of kube-proxy version information in node status, the removal of host network support for Windows pods, and the gitRepo volume removal. Let's start off with the kube-proxy one.
Nina Polshakova
So this is a field on nodes which is removed in 1.33. The field actually wasn't accurate, because it's set by the kubelet, which doesn't actually know what kube-proxy version is running, or even if kube-proxy is running at all. So it's not a very useful field; it was marked as deprecated before, and now it's getting removed in 1.33.
Kaslin Fields
Makes sense. I've heard people talk before about how that field is inconsistent anyway, so you probably shouldn't be using it if you are. And the next one is host network support for Windows pods. This one was interesting: the implementation faced unexpected containerd behaviors which limited its usefulness, and apparently alternative solutions were available. So host network support for native Windows pods is being removed in Kubernetes.
Nina Polshakova
Yeah, exactly. It aimed to achieve feature parity with Linux and provide support there, but the original implementation landed in alpha in 1.26 and faced some challenges, and SIG Windows decided to withdraw support for it; in 1.33 it's getting removed. One thing to call out, because there was a question in the SIG Windows Slack channel: this doesn't affect HostProcess containers. HostProcess containers are a special type of Windows container that run directly on the host, and this KEP does not remove that. It was only aimed at providing host networking, and again, it was never stable. Because of the issues it faced, SIG Windows decided to remove it.
Kaslin Fields
Yeah, it sounds like it was always unstable, so it makes sense to remove it. And the last one we were going to talk about is the gitRepo volume. This is something that's been deprecated apparently for seven years, and there were security concerns. This removes in-tree driver code, and because it's about git repos, it makes sense to be removing it.
Nina Polshakova
Yeah, it's been marked as deprecated since 1.11, so seven years; like you said, it's been a long time coming. And since it's been marked deprecated, there have been security concerns, like you mentioned, because the gitRepo volume can be exploited in some ways to get remote code execution as root. So not ideal. In 1.33, the in-tree driver code support is getting removed.
Kaslin Fields
Makes sense. And generally, in-tree things that are associated with a vendor or a third-party system outside of Kubernetes itself, we're moving in the direction of removing those, so it follows the trend. And so that's the deprecations and removals to be highlighted. Beyond that, there are feature improvements, new things folks can look forward to in 1.33, and we've got quite a few. I mean, you've got 64 enhancements, so there's certainly more than we'll point out, but we've got a few on our list to go over. Starting off: support for user namespaces within Linux pods. This was one I hadn't really heard about. Apparently it was alpha in 1.25, beta in 1.30, and 1.33 makes it enabled by default. So what does this one do?
Nina Polshakova
Yes, so this has been years in the making. The KEP number is 127; if you look at other KEP numbers, they're in the 5,000 range. This is a very early KEP, open since 2016. One of the reasons it took a while for this KEP to mature is that it required a lot of changes across different projects, not just Kubernetes. The KEP details highlight that it needed changes in Kubernetes, containerd, CRI-O, runc, crun, and the kernel to actually make it happen. It's a very exciting security feature in Kubernetes because it allows developers to isolate user IDs inside containers from those on the host, which is great if you want to reduce the attack surface in case your container gets compromised. It's specifically a big win for multi-tenant Kubernetes systems, where you have shared clusters with different teams and different organizations deploying workloads, because if one tenant's workload gets compromised, it potentially doesn't affect the other tenants or the host system. So it really aligns nicely with the principle of least privilege that is so important for security. As you mentioned, it's still beta in 1.33, but it's now on by default, so you can try it out in 1.33 and see what you think.
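Trying the feature out is a one-line change on the pod spec: setting `hostUsers: false` asks the kubelet to run the pod in its own user namespace, so UIDs inside the container map to unprivileged IDs on the host. A minimal sketch (assumes a 1.33 cluster with a runtime that supports user namespaces):

```yaml
# Runs the pod in a new user namespace; in-container UIDs are remapped on the host.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
```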
Kaslin Fields
Some of the Linux sysadmins out there might be excited about this one, folks who have been looking into Kubernetes for multi-tenant use cases for a long time. I love that call-out for the use case. It makes a lot of sense to be able to separate the user IDs on the system, so that if you were to escape the container, you could maybe shut down those user IDs and it wouldn't affect the rest of the system, I would imagine. I'm very curious about the details on that one. Now, another feature improvement in 1.33 that I want to go over is in-place resource resize for vertical scaling of pods. This is something that I've been talking with folks about a lot: in-place VPA, in-place vertical scaling, in-place resize. The ability to change the resource allocations associated with your pod without restarting the pod is very exciting. This has been in alpha since 1.27, and it's beta in 1.33.
Nina Polshakova
Yep. And it's another oldie; it's been open since 2019, I think. So a very old feature that, again, has been long awaited, and it's going to beta in 1.33. Like you mentioned, it allows changes to the resource allocation: before, when you had to change CPU and memory requests, you had to restart the pod, but now you can do it, as the name implies, in place. This is really great for stateful workloads like databases, ML training jobs, and inference servers, because you can't really disrupt them while they're running, but you might need dynamic resource tuning based on usage. Basically anything that can't be scaled horizontally or disrupted in execution can now benefit from this feature.
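As a sketch of how this looks in practice, a container can declare a `resizePolicy` stating that a resource may change without a restart, and in 1.33 the change itself goes through the pod's `resize` subresource (all values below are illustrative):

```yaml
# Pod whose CPU and memory can be resized in place, without a restart.
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
    - name: app
      image: nginx
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired   # resize CPU without restarting
        - resourceName: memory
          restartPolicy: NotRequired
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
```

A later resize would then be something like `kubectl patch pod resize-demo --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"}}}]}}'`.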
Kaslin Fields
Which is huge with AI workloads, which are so resource hungry; this makes it a lot easier to run your systems efficiently while giving those AI workloads the resources that they need, when they need them. I've talked to a lot of folks about vertical pod autoscaling, and how most folks don't really use it because it'll restart your pod if you have it in the mode where it does that. There's another mode where the vertical pod autoscaler will just tell you how you should set your resource requests and limits, so that you can do it yourself and manage the disruption that way. But being able to do it in place is going to unlock a lot of potential, I think, especially for those resource-hungry workloads. Maybe Java too; I've heard some interesting rumblings about Java workloads in this. Moving on, another very AI-related area: dynamic resource allocation. So DRA, you said, has a bunch of new features in this release, right?
Nina Polshakova
We actually have a section called DRA Galore in our blog.
Kaslin Fields
I love that.
Nina Polshakova
A lot of them are relatively small dynamic resource allocation improvements. I guess taking a step back, dynamic resource allocation is the newer API in Kubernetes for requesting and sharing resources between pods, for things like GPUs, TPUs, and FPGAs. You can adjust the requests for those resources, and third-party resource drivers are usually responsible for tracking and preparing those resources, but the allocation of the resources is handled by Kubernetes with structured parameters. This was something that was added in 1.30, and in 1.33 there's a bunch of, like I mentioned, small improvements that make the user experience better. There's additional support for partitioned devices; there are DRA device taints, very similar to node taints, except your cluster admin can now taint devices to limit their usage; and the DRA prioritized list, which defines how a request can be satisfied in different ways. So as you can see, it's a lot of improvements to the user experience of using DRA, filling in some of the gaps that were there in 1.30 but are now things people need when using these features.
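To make the DRA model discussed here concrete: a workload asks for a device through a ResourceClaim that references a DeviceClass, and the pod then references the claim. A hedged sketch (the DeviceClass name is a placeholder normally supplied by a vendor driver, and the `resource.k8s.io` API version varies across recent releases):

```yaml
# A claim for one device from a vendor-provided DeviceClass (illustrative names).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: example.com-gpu   # placeholder DeviceClass
---
# A pod that consumes the claim.
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      resources:
        claims:
          - name: gpu
  resourceClaims:
    - name: gpu
      resourceClaimName: gpu-claim
```

The 1.33 additions, like device taints and prioritized lists, layer on top of this same claim/class structure.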
Kaslin Fields
We had an episode earlier this year on this podcast where we talked about Working Group Device Management, which is doing a lot of work on dynamic resource allocation. It was very interesting to hear some of the history of it in that episode as well. With any big technical architectural adventure, there are going to be challenges, so I know there were some debates about how to do various things for DRA, and seeing a bunch of those come to fruition in 1.33 is really exciting.
Nina Polshakova
Yeah. And I know there was a classic DRA implementation.
Kaslin Fields
Yeah, that was something we talked about.
Nina Polshakova
And it rolled back, and now this is the new version. So yeah, DRA has had a long history in Kubernetes in different forms, but the new form now has more features for you to play around with. That's exciting.
Kaslin Fields
Something I always think about with stuff like that is, when we talked with folks about the Gateway API, one thing we always like to mention is that it's the new version of how you manage ingress traffic in your Kubernetes clusters, rather than the original Ingress API that came out with Kubernetes. The original Ingress API was very theoretical; it was like, here's what we think people are going to need. But now, since we have all these use cases and users to learn from, we have a better idea of what people actually need, so we've created the Gateway API. And I feel like dynamic resource allocation is in one of those spots where we had an implementation of it, but now the use cases are very real and very prominent, and that's driving this innovation of figuring out how to actually make it work better for what people need it for.
Nina Polshakova
Yeah, I think it's always easier to design when you have a specific real-world use case in mind, because then you can actually meet users where they are, rather than designing for a theoretical user: oh, it would be nice to configure this, we should just make it configurable for whatever users need. You either limit the scope too much, or it's too open ended, so people don't know what the best practices are. So I think it takes a couple of iterations to get to a place where it's something people will actually use and that makes sense for modern use cases.
Kaslin Fields
And I hope that's something folks can get out of these release episodes that we do: we get to see Kubernetes changing over time. We always have the release lead on, but Abdel and I have also done a bunch of these release interviews, so we have the context of how these things have changed over time. And I hope we go over use cases that help make these things more attainable for folks, rather than just reading the release notes, which are great and contain all the technical information, but are kind of hard to put into context without a little bit more pizzazz.
Nina Polshakova
This release specifically builds on the previous couple of releases very well, because you see a lot of the usual suspects reappear: sidecars graduating to stable, dynamic resource allocation again, which 1.31 really highlighted and which now has more features building on top of it. Every release is building on top of the other releases; it's not a zero-sum game. The project just has more and more things building on top of it in the ecosystem. It's not like one release owns a specific feature.
Kaslin Fields
Yeah, when you have three releases a year, it's not like the release cycle ever really stops. Folks are often working on features that don't make it into the release because they're not done in time, or for various reasons of reviewing, or something's just not there, and so they end up in the next release. So it's always rolling.
Nina Polshakova
Yeah, exactly.
Kaslin Fields
Coming back to the feature improvements in 1.33, there are a couple more we wanted to go over. One is ordered namespace deletion. I love it when I see the word "ordered" in any new feature in Kubernetes. That's something I hear from users a lot: they want more control over the order in which Kubernetes does things. So, ordered namespace deletion in this case.
Nina Polshakova
Yeah. This one, honestly, I was kind of surprised didn't exist already in Kubernetes, but it's going directly into beta in 1.33 and is actually getting cherry-picked all the way back to 1.30, for a good reason. Ordered namespace deletion introduces deletion priority for your namespace. Currently the deletion order is semi-random, which can result in not-great behavior. If a network policy is deleted before your pod, that's kind of a security gap, right? You want to make sure the pod gets deleted and cleaned up first, and then the resources the pod's logic and security depend on get cleaned up after, so there's no window of time where you don't have a network policy in place but the pod still exists.
Kaslin Fields
Yeah, you don't have access to it, but the pod is there running whatever was on it.
Nina Polshakova
Exactly. So, not great. But again, this introduces order, so you have more control over how things get deleted, and it's no longer whatever happens to come first; there's actual thought put into what the deletion order should be.
Kaslin Fields
Yeah, very exciting to see. And the last one I wanted to go over was enhancements for indexed job management. The couple of things mentioned in the highlights were per-index backoff limits for indexed jobs, and then, as I put it in my notes as a quote, "define conditions for marking an indexed job as successfully completed when not all indexes have succeeded." There are a lot of interesting cases with jobs, around determining whether they're done or not, that I think become even more important in an AI-based world, and I've seen some interesting talks on that. So it's good to see some more features going into jobs to make them more robust.
Nina Polshakova
Yeah, the job success policy was another one I was kind of surprised Kubernetes didn't really support. For PyTorch workloads, often only a specific leader index determines whether the job actually succeeds or not, but the current behavior in Kubernetes is that all indexes have to succeed for the job to be marked as complete. That's a limitation, because you might want a specific index, like the leader, or a specific count to determine success. Now you have that flexibility: you can either set the success count, how many job indexes have to succeed for the job to be considered a success, or which specific indexes must succeed in order to mark it as successful.
Kaslin Fields
Excellent. So that's all the feature improvements I had on my list to go over. I'm looking forward to seeing more of those release feature blogs, and the release blog itself, to get more in depth with all of these. Are there any other deprecations, removals, or feature improvements that we didn't go over that you wanted to bring up?
Nina Polshakova
I think sidecars are another one.
Kaslin Fields
That is a good one.
Nina Polshakova
Working on Istio, it's important in that space. It's also personal: when I joined the release team, I think it became alpha in either 1.27 or 1.29, and it was my first time either being on the release team or being the enhancements lead; I'm forgetting which cycle it was. But I followed that one all the way through, and it's also related to my work, so it's a nice parallel between open source work and my day job. The sidecars follow me around everywhere.
Kaslin Fields
That's what they're supposed to do.
Nina Polshakova
Exactly. Now they follow me around correctly.
Kaslin Fields
Yeah, now they follow you around natively.
Nina Polshakova
Exactly. So the sidecar is a common pattern in Kubernetes, often used by service meshes, like I've been implying. Istio and Linkerd have a sidecar mode where you get a sidecar container injected next to the application container, and that enables the service mesh to do all the observability, connectivity, and security functionality, because it abstracts that away from the main container and the sidecar handles all of it. And although it's a very common pattern, it wasn't natively supported: you couldn't really coordinate the sidecar with the main container natively in Kubernetes until now. Now, if you set this restartPolicy field, sidecars are guaranteed to start before and terminate after the main container. That reduces the friction of sidecar adoption and improves the reliability of how the sidecar's lifecycle actually works. So I'm pretty excited about that, just because I saw the enhancement through my career on the release team, and I think it's very important for users in the community who are using sidecars.
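In manifest terms, a native sidecar is an init container with `restartPolicy: Always`, which is what gives it the start-before, stop-after lifecycle Nina describes. A minimal sketch (the proxy image is a placeholder):

```yaml
# The init container with restartPolicy: Always runs as a native sidecar:
# it starts before the app container and is terminated after it.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
    - name: proxy
      image: example/proxy:latest   # placeholder sidecar image
      restartPolicy: Always         # marks this init container as a sidecar
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```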
Kaslin Fields
And the reason it's so common and popular is that a sidecar is just a container that exists alongside the container running your application in a pod in Kubernetes. Since pods are the single unit that Kubernetes manages, both of those containers share networking, they share storage, they share basically any resource that Kubernetes manages for them. And since it's shared, they both have access to those things, which means you can do really cool things, especially the things service meshes do: intercepting traffic, ensuring it adheres to mTLS, and all sorts of stuff you can do with sidecars. That also makes them dangerous as a tool if you wanted to try to hack containers, which is why it's important to have robust support for the pattern, to make sure you're creating those sidecars in a way that makes sense and managing them throughout the lifecycle of the pod. Wonderful. So we've gone over the features, removals, and deprecations in 1.33. Let's get to the fun part. I don't know, a lot of the listeners might debate me that that's the fun part, but I always enjoy hearing about the release theme. What have we got for 1.33?
Nina Polshakova
So the theme for 1.33 is Octarine. If you're familiar with Terry Pratchett's Discworld series, in it, octarine is the color of magic. On a personal note, I love Terry Pratchett's Discworld series; he's one of my favorite fantasy authors of all time. It's a very long series, so similar to the release cycles in Kubernetes, there are a lot of different books in Discworld. But for this release specifically, there were a couple of things that reminded me of Discworld and octarine as a theme. The first was that KubeCon EU was in London, and Terry Pratchett is a British author. And a lot of the conversation around KubeCon was on the magic of AI: AI, observability, things like that. Everyone keeps using the word magic alongside AI, but I think Kubernetes is also pretty magical. It enables a lot of the magic we see running workloads across different industries, and even nowadays running AI workloads. In one of his series about a teenage witch, the Tiffany Aching series, the magic's not very flashy like Harry Potter; it's just day-to-day, average magic. One of his quotes is, "It's still magic even if you know how it's done." I think that's very applicable to Kubernetes: it might be infrastructure, just running things, but if you take a step back, it's pretty magical that this open source project exists, it's run in so many different industries, there are so many contributors around the world contributing to it, and every release has more and more enhancements going into it. So on a personal note, it fits with my favorite author, but I think it's also a reflection of the Kubernetes community and the Kubernetes magic that enables that community.
Kaslin Fields
I love that. I have been interested in the Discworld series for a long time. I've heard great things about it and I've never read it. So maybe this is my sign.
Nina Polshakova
Yeah. Everyone keeps asking for recommendations on what book to start with, and you can always start at the very beginning. But I feel like Terry Pratchett, kind of like Kubernetes, I think his early works are a little rougher than his later works. So if you want a one-off book to try instead of sitting down and reading the whole series, I think Guards! Guards! is a good one. The logo of 1.33 is actually based on that book; it has a little nod for everyone to see with the Ankh-Morpork wizard tower. That one's very fun, and you can read it as a standalone. Another good one is Going Postal, if you like more satirical works. It's a satirical view of the postal system in this magical world, like, if magic ran the postal service, how would it work? So those are the two I would highlight as good starting points if you want an introduction to Terry Pratchett.
Kaslin Fields
I feel like the post office one also might have some very interesting parallels to Kubernetes. Networking is at the core of all that Kubernetes does.
Nina Polshakova
Yeah, a lot of message passing, clacks towers that send signals. So yeah, it fits the theme.
Kaslin Fields
Interesting. All right, I've got new books on my list from this one, fantasy books, which is a genre that I love. So we've talked about 1.33, we've talked about the release theme. Let's get back to you. You are the release lead for 1.33, and you've been involved since 1.27. What was your path to becoming the release lead like?
Nina Polshakova
Yeah. So the Kubernetes release team has this shadowing program that's really great, because you get introduced to different teams on the release team as a shadow, and then you can decide if you want to switch teams or become a sub-team lead. I hadn't had much experience with Kubernetes other than doing bumps in our git repos. And I remember being burnt by one deprecation, where Kubernetes removed the cluster name label. And I remember being like, why was this removed? I don't get it.
Kaslin Fields
Of all the things that could have thrown you off. All right, very good.
Nina Polshakova
Tiny, but yeah. So that got me reading the release notes, and then I saw there was this shadow program you could apply to, and I was like, I kind of want to see how the sausage is made. How do people determine what gets in a release? How do deprecations get into a release? So you fill out a Google Form and write why you want to be a member of the release team, and I'm sure I mentioned how the cluster name scarred me and that I wanted to make sure no one was ever scarred again, something like that. But I was lucky enough to get chosen as a shadow on 1.27 for Enhancements, and I really enjoyed the people I got to work with. I actually attended KubeCon that year in Amsterdam and met a lot of people who were part of the release team, and I think that community made me feel like I wanted to continue contributing. So I came back for another shadow experience on Enhancements and then eventually led it, then jumped around to the Release Notes team, was a release lead shadow, and eventually became the release lead. I was very lucky this cycle, I think, because I knew most of the people who were either my release lead shadows or the sub-team leads. So I had a trusting relationship with everyone working with me, and I felt comfortable reaching out to them if things weren't going so well or if I was concerned about a hiccup on the release path. I knew the people I had with me would be able to step in and help me handle it. So yeah, it started off as "Kubernetes bumps are miserable, I want to see how this is made" and kind of turned into "this is a great, welcoming community and I want to keep being a part of it."
Kaslin Fields
I bet a lot of our listeners out there can relate to that. I love that you were burned and then got involved as a way to better understand that feeling and maybe address it for others. I'm sure a lot of folks listening out there have been burned by Kubernetes releases. My proposal: get involved.
Nina Polshakova
I think we have a really good shadow program, so if you're interested in Kubernetes, if you use Kubernetes, definitely apply. And there are a lot of ways to get involved that don't involve writing big enhancement design docs. You can improve documentation, or you can help out as a release shadow on the Comms team, helping review the blogs that get written. There are a lot of ways to get involved and learn what's happening in a new release without committing that much time to writing the code and the design docs that you might think are the main contributions.
Kaslin Fields
Excellent. So make sure you check out the release shadow application when it opens next time. We always try to post about it on social media, and there's the Kubernetes developers Google Group, which is the primary method contributors use to communicate with each other. I don't know that we post anything about it on kubernetes.io, but we definitely do on social media and on the Google Group.
Nina Polshakova
The GitHub issues and the Google Group will definitely have links. And we also try to post it in the Kubernetes Slack, in the SIG Release channel, usually with a link to the Google Form where you can apply.
Kaslin Fields
Yeah. So keep an eye on those things if you're interested in being part of the next release and preventing, for others, the pain you experienced. I want to wrap this up with a couple of last items. One is: any advice for anyone who does take your advice and fills out that shadow form?
Nina Polshakova
Yeah, I think the biggest advice is don't be afraid to ask questions and get involved. Just because we're doing things a certain way doesn't mean we have to keep doing them that way. When I joined Enhancements, that was the first cycle where we moved away from the Google Sheets tracking doc to actually using a GitHub project board, and that saved so many hours of work. It was great. Ideas like that come from shadows who join, take a look at the project, and want to automate processes or make improvements that make other shadows' lives better. So it's not like this is the way it always has to be. It's an evolving process, we're always open to improvements, and asking questions is a good thing. That's how you learn.
Kaslin Fields
It's always wonderful to have fresh eyes on a process, and since we do this three times a year, we've got lots of opportunities for feedback. We just need folks who are willing to give it.
Nina Polshakova
And another thing to call out this release cycle is that we've consolidated the Release Notes team and the Docs team into one team. So there are only four teams on the release team: Enhancements; CI Signal, which tracks the state of CI; Comms; and Docs. But there are other ways you can get involved. If you're interested in a specific SIG, you can always join that SIG's meetings and comment on the KEPs, or shadow other roles in the release process; there's branch management, and a lot of different ways to get involved. The release team is a great way to start, and it's how I personally started, and I love the people I get to work with, but that doesn't have to be the way you get involved in the community. If there's a specific area of Kubernetes you're interested in, just attend some meetings and get to know people, or attend...
Kaslin Fields
New Contributor Orientation, which we hold once a month on the third Tuesday of the month, and we have a playlist of recordings. So if you're worried about where to start, or worried about committing a social faux pas, as many of us are when we join open source projects, make sure you check out New Contributor Orientation. We'll give you the guide on how the whole community works and how to find a place that works for you. I love that call out. And to close things up, I always like to ask this question of the release lead. Upgrading can be very painful, as you know from personal experience. So why should listeners upgrade to 1.33?
Nina Polshakova
Well, I think we highlighted a lot of really cool features, and some that we didn't even get a chance to mention that you can read about in the blog, like nftables support is here, stable multiple Service CIDR support is here. So if you read the blog and are excited about some features, I think that's a good sign that maybe it's time to look into upgrading. I do think if I had read the blog and the release notes, I would have found the cluster name issue back when I was doing those bumps.
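As a rough illustration of the multiple Service CIDR support Nina mentions (a sketch based on the networking.k8s.io ServiceCIDR API; the object name and IP range below are made up for the example), an administrator can add an extra Service IP range to a running cluster with a manifest along these lines:

```yaml
# Hypothetical example: extends the cluster's Service IP space without
# recreating the cluster. The name and CIDR range are illustrative only.
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-range
spec:
  cidrs:
    - 10.100.0.0/16   # additional range Services can allocate ClusterIPs from
```

Once applied, new Services can receive ClusterIPs from the added range, which is useful when the original Service CIDR chosen at cluster creation runs out of addresses.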
Kaslin Fields
So make sure you read those.
Nina Polshakova
This is important, yeah. Use those to find anything that might affect you. But I do think it's important to keep up with the latest features, and we have a lot of exciting things this release. There are 64 enhancements, and they range from things that improve user experience, like the kuberc feature that we didn't really get a chance to talk about, a really cool feature going into alpha; things that improve stability, like the sidecar support we talked about; and really new, exciting things like dynamic resource allocation. So this release kind of has it all. Check it out, read the blog, and hopefully it inspires you to upgrade.
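For readers curious about the kuberc preferences feature mentioned above: it is an alpha, opt-in configuration file for kubectl, kept separate from kubeconfig, for user preferences like aliases and default flags. A sketch of what such a file might look like follows; the exact v1alpha1 schema may differ, and the alias and flag choices here are purely illustrative, so check the kubectl documentation before relying on it:

```yaml
# Hypothetical ~/.kube/kuberc (alpha feature; field names are a sketch
# of the kubectl.config.k8s.io v1alpha1 Preference schema).
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
aliases:
  - name: getn             # illustrative alias: `kubectl getn`
    command: get
    appendArgs:
      - namespaces
defaults:
  - command: apply
    options:
      - name: server-side  # default `kubectl apply` to server-side apply
        default: "true"
```

The appeal is that user-level ergonomics live in one place and no longer need to be encoded in shell aliases or per-cluster kubeconfig files.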
Kaslin Fields
And with that, thank you very much, Nina. I've really enjoyed learning about 1.33 from you.
Nina Polshakova
Yeah, thanks for having us. Or me. Sorry.
Kaslin Fields
I've had the whole release team on through you.
Nina Polshakova
Yes. Vicariously channeling, that's the job of the release lead.
Kaslin Fields
Right.
Nina Polshakova
You're the avatar of the next release.
Kaslin Fields
And I feel like this fits into the fantasy theme as well. I'm sure there's some of that in Discworld, right?
Nina Polshakova
I don't think there are any avatar references, but yeah, I mean, that's okay.
Kaslin Fields
It's in the genre.
Nina Polshakova
It's in the genre.
Kaslin Fields
But thank you very much.
Nina Polshakova
Yeah, thanks for having me.
Kubernetes Podcast from Google – Episode Summary: Kubernetes v1.33 Octarine with Nina Polshakova
Release Date: April 24, 2025
Hosts: Kaslin Fields & Abdel Sghiouar
Guest: Nina Polshakova, Release Lead for Kubernetes v1.33
In the latest episode of the Kubernetes Podcast from Google, hosts Kaslin Fields and Abdel Sghiouar kick off with a roundup of significant announcements from recent events:
Google Cloud Next (April 9-11, Las Vegas): Google unveiled 229 new innovations, including advancements in agentic AI, Tensor accelerators, AI infrastructure, the 10-year milestone of Google Kubernetes Engine (GKE), and the introduction of the Inference Gateway. Abdel mentions, “[...] we’ve left a link in the description if you want to read more.” [00:33]
KubeCon EU 2025 Highlights: Google introduced the public preview of Multicluster Orchestrator (MCO), a service designed to manage workloads across multiple Kubernetes clusters. MCO functions as a recommendation engine, optimizing application placement based on cluster capacity and availability. Kaslin teases an upcoming discussion on MCO with its development team. [00:53 – 01:16]
CNCF Certifications: The Cloud Native Computing Foundation (CNCF) expanded its certification portfolio with two additions: the Golden Kubestonaut, a recognition for individuals who hold all 13 CNCF certifications plus the Linux Foundation Certified Linux Administrator, and the Cloud Native Platform Engineering Associate, an entry-level certification for platform engineering professionals. [01:14]
Kube Scheduler Simulator: Kaslin highlights the Kube Scheduler Simulator, a new web UI tool that allows users to create Kubernetes resources and visualize how the scheduler makes decisions based on various constraints. This tool aims to demystify the scheduler's internal workings. [01:42]
Mirantis Contributions: At KubeCon EU in London, Mirantis announced the donation of two Kubernetes projects to CNCF's sandbox stage: k0s, a lightweight Kubernetes distribution, and k0smotron, a cluster management tool. [02:00 – 02:18]
Nina Polshakova shares her journey into the Kubernetes release team:
Initial Involvement: After joining Solo.io, Nina was attracted to the open-source nature of its projects. She recounts her first experience responding to a GitHub issue, expressing initial fears about negative feedback. However, her experience was overwhelmingly positive, emphasizing the welcoming nature of open-source communities. [02:48 – 05:12]
Community Support: Nina highlights a key insight from Nadia Eghbal's Working in Public: "the thing that keeps most people from working in open source is not actually technical competency, it's the fear of committing a social faux pas." This resonated with her initial hesitations. [04:33 – 04:55]
Becoming Release Lead: Through the Kubernetes release team's shadowing program, Nina progressed from an Enhancements shadow in v1.27 to release lead for v1.33. Her growth was supported by a trusting, collaborative community. [34:53 – 37:34]
Nina delves into the specifics of the Kubernetes v1.33 release, highlighting its significance:
Enhancement Count: With 64 enhancements, v1.33 marks a substantial increase from previous releases, emphasizing both stability and innovation. [06:06 – 06:21]
Key Features: Notable enhancements in this release include:
Dynamic Resource Allocation (DRA): Introduces six new features enhancing DRA, an API for managing resources like GPUs and TPUs. Improvements focus on user experience, including device taints and prioritized resource requests. [17:25 – 21:02]
In-Place Resource Resize for Vertical Scaling: This feature, now in beta, allows for the adjustment of pod resources without restarting them, crucial for stateful and resource-intensive workloads like databases and AI applications. [15:24 – 17:25]
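To make the DRA item above concrete, here is a rough sketch of a ResourceClaim using the resource.k8s.io v1beta1 API that this release iterates on. The device class name is hypothetical and would in practice come from a DRA driver installed in the cluster:

```yaml
# Hypothetical ResourceClaim requesting one device from a
# driver-provided device class; names are illustrative only.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: gpu.example.com
```

A pod would then reference this claim through its spec.resourceClaims list, letting the scheduler and the DRA driver negotiate which concrete device backs the claim.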
Notable Quote:
"The ability to change resource allocations without restarting the pod is a game-changer for stateful workloads," Nina explains. [16:31]
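As a sketch of what in-place resize looks like in a pod spec (the image and resource values are illustrative), a container declares per-resource restart behavior via resizePolicy:

```yaml
# Illustrative pod spec: CPU can be resized in place without a container
# restart, while a memory change triggers a container restart.
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: RestartContainer
      resources:
        requests:
          cpu: 500m
          memory: 128Mi
```

The resize itself is then requested against the pod's resize subresource (for example via kubectl patch with the resize subresource) rather than by recreating the pod.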
Nina outlines significant deprecations and removals in v1.33:
Deprecated:
Removals:
Notable Quote:
"Deprecated means marked for removal; features will continue to function until they're actually removed," Nina clarifies. [07:40]
The release theme for Kubernetes v1.33 is Octarine, inspired by Terry Pratchett's Discworld series, where Octarine is the color of magic.
Symbolism: The theme reflects Kubernetes' "magical" capabilities in orchestrating complex workloads across diverse environments. Nina draws parallels between the subtle, dependable magic in Discworld and Kubernetes’ foundational role. [30:55 – 32:58]
Personal Touch: Nina recommends starting with standalone books like Guards! Guards! or Going Postal for those new to Discworld, linking the theme to Kubernetes' networking and communication prowess. [33:06 – 34:10]
Notable Quote:
"It's still magic even if you know how it's done," Nina relates this Discworld philosophy to Kubernetes, highlighting the seamless orchestration it provides. [31:34]
Nina emphasizes the importance of community involvement:
Shadowing Program: Encourages listeners to apply for the shadowing program to gain insights into the release process and contribute meaningfully. [37:56 – 38:33]
Diverse Contribution Paths: Contributions aren't limited to code; improving documentation, assisting with communication, and participating in various SIG meetings are valuable. [38:33 – 39:10]
Continuous Improvement: Stresses that Kubernetes' processes are evolving, welcoming fresh perspectives to enhance efficiency and effectiveness. [39:30 – 40:33]
Notable Quote:
"Don’t be afraid to ask questions and get involved. Fresh eyes can really improve the process," Nina advises aspiring contributors. [39:30]
Nina outlines compelling reasons to upgrade:
Feature Rich: With 64 enhancements, including impactful features like dynamic resource allocation and in-place pod resizing, v1.33 offers significant improvements in functionality and efficiency. [42:07 – 43:25]
Security and Stability: Deprecations and removals, such as the deprecation of the Endpoints API in favor of EndpointSlices and the removal of insecure features, bolster Kubernetes' security posture. [07:08 – 12:54]
Enhanced User Experience: Features like ordered namespace deletion provide greater control and predictability in cluster management. [24:10 – 25:44]
Notable Quote:
"This release has it all—user experience improvements, stability enhancements, and new features like dynamic resource allocation," Nina summarizes. [42:07]
The episode wraps up with a spirited endorsement of the Kubernetes v1.33 release. Nina Polshakova’s insights shed light on the substantial advancements and thoughtful deprecations that position Kubernetes for continued leadership in container orchestration. The discussion not only highlights the technical enhancements but also underscores the vibrant, supportive community that drives Kubernetes forward.
Thank you, Nina, for sharing your expertise and experiences. Listeners are encouraged to explore Kubernetes v1.33, contribute to the community, and embrace the magical orchestration that Kubernetes offers.
Connect with Hosts and Guests:
For more details on the discussed features and updates, refer to the Kubernetes v1.33 release notes.