
Jordi
Podman is an open source container management tool that allows developers to build, run, and manage containers. Unlike Docker, it supports rootless containers for improved security and is fully compatible with standards from the Open Container Initiative, or OCI. Brent Boddy is a senior principal software engineer at Red Hat, where he works on Podman. In this episode, Brent joins the show to talk about the project. This episode of Software Engineering Daily is hosted by Jordi Mon Companys. Check the show notes for more information on Jordi's work and where to find him. Hi Brent, welcome to Software Engineering Daily.
Brent Boddy
Ah, thank you. I appreciate that.
Jordi
So, why don't you introduce yourself?
Brent Boddy
Yeah, my name is Brent Boddy and I currently work for Red Hat as the architect of Podman. I've been with Red Hat about 11 years. Prior to that I was with IBM for 17 and change, and I'm located in the United States, in the great state of Minnesota.
Jordi
Beautiful state. So we're here to talk about Podman, as you mentioned as part of your title. But there is a GitHub organization, github.com/containers, that contains, no pun intended, a few projects, and you probably don't maintain, and certainly are not the architect of, the others. Can you give us a sense of the overarching theme of all the projects hosted within the containers org?
Brent Boddy
The obvious answer here is they all have something to do with containers. Many of them are maybe library oriented; they're brought into larger applications that you might know, even if you don't know the libraries. We do prefer some affiliation with OCI and OCI standards, and there are certainly outliers on those kinds of ideas. But that's the general theme throughout: container related, OCI related. And they're either going to be small applications or libraries that build up much larger applications.
Jordi
What is the Open Container Initiative like? Could you describe the standards there? What is the difference without it? What came before, maybe, and how did it come about?
Brent Boddy
The OCI standards give us a roadmap, if you will, or a skeleton against which we can program and have a standard that will work, for example, with an application like Podman, but also could work with CRI-O, which is another one of those projects in the containers area, so that we handle containers, and images in particular, in the same manner.
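For listeners who want to see what those standards look like on disk, here's a hedged sketch; it assumes podman is installed and the image name is just an example. `podman save` can export any image into the standard OCI directory layout that every compliant engine understands:

```shell
# Illustrative only; assumes podman and network access to docker.io.
podman pull docker.io/library/alpine:latest

# Export the image in the OCI image-spec layout: an index.json plus
# content-addressed blobs holding the manifest, config, and layers.
podman save --format oci-dir -o /tmp/alpine-oci docker.io/library/alpine:latest
ls /tmp/alpine-oci    # index.json  oci-layout  blobs/
```

Because the layout is standardized, the same directory could be consumed by CRI-O, Buildah, or skopeo without conversion.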
Jordi
Yeah. Is it a way of making container images agnostic of the container runtime? Right? Of the multiple runtimes, as long as...
Brent Boddy
We follow the standard, of course.
Jordi
Yeah, exactly. So the brunt of the work is on the container runtimes, on project maintainers like yourself.
Brent Boddy
Yep.
Jordi
So how does Podman fit into this containers org? Like, is it the central project, or what is it?
Brent Boddy
It's certainly the most active and popular. I believe that it's actually one of the more popular in GitHub, period. But yeah, Podman fits into that just in sheer popularity. Like I said earlier, there are a bunch of libraries in that organization that make up big parts of Podman, like its storage subsystem, how we handle container images, how our network stack works. All those are sort of modularized into various libraries and GitHub repos and applications under there.
Jordi
Yeah, I'm looking at the OSS Insight site, which is one of the places that I go to analyze the success and contributions of certain projects. And Podman is very close to 30k stars. I mean, I know that there are other metrics that measure project success, but blimey, that's a good one. So yeah, before we dive into Podman itself, which we will delve into considerably today, give us a sense of the history. Like, where does it come from? What was the need for this? What were the design decisions? What was the vision? In a way, I guess, why did you get involved in this?
Brent Boddy
Podman has an interesting origin story, as I think it's popular to say now. Initially we were part of the CRI-O project as a small utility called kpod at the time. And the idea was, with CRI-O being for Kubernetes, they needed a utility that you could run locally on those nodes and kind of get a feel for what might be going on if you're maybe debugging or you're just trying to work with something. And that was really the role of kpod initially. And once we began to work on it, it became clear to us that this could be much larger scoped, not even a utility, more of a tool or an application. And we came upon this idea where we kind of looked at containers, their security context, and how they should be run a little bit differently than what was going on, maybe, with some of the other projects, including Docker, out there. So we dropped out of CRI-O, so to speak, into our own repository, formed a small team around that, that kind of did some skunk works initially to make sure we were on the right track. And of course, at some point you have to rename it, and we renamed it to libpod, actually, is what it was called, because we were still viewing it largely as a library. And then shortly thereafter we renamed it again to Podman, which stands for pod manager. Not a superhero, even though we have an origin story. Yes.
Jordi
Okay, but now. So I assume that at that point it was Red Hat only. I mean, it was open source since the get-go, but it was sort of living in the Red Hat ecosystem only. But now it's been contributed to the CNCF. So how did that come about and why did that happen?
Brent Boddy
Probably similar reasons that anything else would get donated to the CNCF. I will say, even the CNCF has told us we were doing it right. A lot of projects go to the CNCF because they need structure; they maybe need CI and automation, and they maybe need help with the structure in terms of the contributing rules and maintainership rules and things like that. We were largely doing everything right. We were always very transparent. If someone asked us where we were going with something, there's zero reason not to just be truthful. Our submissions process was very lightweight. Even though our CI can be difficult at times for someone to contribute, we tried to make it as easy as possible, and I think we were a very well behaved upstream project. But I think there was also an impression that it was largely a one-vendor solution, and that's just simply not true. We certainly have had contributions from others and other companies and just random flybys. So this just takes it to the next level of saying we are committed to open source, we are transparent, and we'll remain that way if you decide that you want to contribute. And also, it's a little bit of honey to attract more contributors. And that's our whole goal.
Jordi
Tell us about the product itself. What actually has been contributed? What is the state of the product itself? Because it has evolved considerably from a component in CRI-O to what is a standalone container runtime, a container management system in itself. So let's actually describe it, and then we'll go into the particular core capabilities of it.
Brent Boddy
Yeah, it's an interesting question, because I do periodically get asked, is it production ready? Which I always sort of smile about, because there was a day when we would say, use at your own risk. That's certainly well into the past now. The very fact that it ships as the single container runtime in RHEL, you know, Red Hat's flagship, means it's production ready. It's more than that. That's not to say it's without bugs, but who isn't? So to me, it's production ready. I'm always amazed, and you'll probably hear me say that a couple of times today, at how people have chosen to use it and implement around it, or write a wrapper for it, or embed it in their actual products. It always catches me off guard. I suppose I should know better by now.
Jordi
So I think what comes to mind when someone mentions Podman in any kind of conversation is the daemonless architecture, right? This is probably the thing that stands out the most. There are more things; we'll touch upon those later. But I guess, why was it designed as a daemonless application? And what is it that makes it daemonless? Obviously the absence of a daemon, but how does that affect the usage of the product?
Brent Boddy
I suppose there's a good argument to be made that it reduces attack vectors, that being one of the key things, but it also allows resources to be freed up. In the case of a daemon, typically that daemon runs and takes up resources even when it's just listening. In the case of Podman, when you run it, it doesn't have a daemon. Podman itself runs, and it does, I'll say, all the setup for the container to be able to run. We then call a runtime like crun to actually run the container. And then we also have a small utility called conmon, which is short for container monitor. And conmon is actually then the only thing that's left running at the end of this. And it just does all the logging for the container. It keeps standard in and out open and allows us to exec back into the container, if you want to kind of join that container namespace and take a peek at what's going on. And that's a very, very small C program. It takes very little memory to run. And then its final job, when the container exits, is to report what the exit code was and whether it was normal or not.
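To make the daemonless flow concrete, here is a hedged sketch; it assumes podman and the alpine image are available, and the container name is illustrative:

```shell
# podman sets the container up, asks a runtime (e.g. crun) to start it,
# then exits; only conmon stays behind, one per container.
podman run -d --name demo docker.io/library/alpine sleep 300

# There is no engine daemon to find; the per-container monitor is conmon.
ps -o pid,comm -C conmon

# conmon keeps stdio open, so we can exec back into the namespace.
podman exec demo echo "hello from inside"

podman rm -f demo
```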
Jordi
So I guess it was a performance decision, in a way, to make Podman really performant.
Brent Boddy
There's a little bit of everything sprinkled in there, yes, indeed. And also, having a service open is like having a door open or a window partly open, or however you want to put it, in terms of being taken advantage of. Also, there's an advantage to daemons, without a doubt, but there are also disadvantages. If the daemon goes down, it tends to be catastrophic. Not to say that we're perfect either, but it's more of a one-for-one loss. If something is catastrophic, you lose maybe one container, unless it's something like systemd running around trying to handle things, or the kernel has decided it's out of memory and it's going to start choosing. So there are some trade-offs there as well that I think influence things. And then finally, there are the rootless portions. A daemon has to be run as a particular user. Podman can be run either rootful, as a privileged user, or as an unprivileged user, and we don't need to start and stop a daemon to do so.
Jordi
Yeah, exactly. Could you expand on that, on the rootless container execution? Because that seems like every admin's... I mean, I'm exaggerating by saying every admin's nightmare, but it seems like when you're managing containers, creating them and spinning them up, you would rather have all the privileges possible, because you'd rather be able to tear down a whole cluster whenever you need to. But that also comes with serious trade-offs. So Podman, by design, I guess, is designed to manage rootless container execution. Is this correct?
Brent Boddy
Yeah, and the idea behind it, I guess, starts with, you said the word nightmare. Our nightmare is container escape. Someone being able to escape the container and get on the host. One way to mitigate a considerable portion of that is to ensure that even if they did escape, they had the minimum amount of privilege possible. In other words, not root. But the fact is, as you kind of pointed out, the less privilege that a user needs to run its job, probably the better, because again, it's about reducing the size of that vector and the landscape around it, to reduce any opportunity for something bad to happen. And I think in the ideal world, an administrator might admit that they would like every single job to run with the least amount of authority and the most isolated that it could possibly be. And in this case we're using a lot of the kernel, which is very well trusted, on managing privileges in that way. So we're simply following along with the Linux kernel and using some of the best practices they've developed and offer as functionality, to stay within that circus ring, if you will.
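A small, hedged illustration of the user-namespace remapping Brent describes; it assumes podman is configured for rootless use, and the exact UID ranges depend on your /etc/subuid:

```shell
# Outside any namespace, root is simply UID 0 on the host.
cat /proc/self/uid_map

# Inside podman's rootless user namespace, "root" (UID 0) is remapped to
# your own unprivileged UID, and further container UIDs map to a subuid
# range, so an escaped process is still unprivileged on the host.
podman unshare cat /proc/self/uid_map

# A rootless container believes it is root, but only inside its namespace.
podman run --rm docker.io/library/alpine id -u
```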
Jordi
And I think that removing, so to speak, the debugging capabilities that a privileged user may have, as opposed to someone without those, puts the burden rather on build trust. Like, if you've packaged the correct things, if you're able to repackage the container image and rebuild it again, easy: you just don't care about debugging it, you just remove it and probably spin it up again with the amended, rebuilt image. I was going to ask you, how does that process of building images work with Buildah, one of the other projects in the containers org? I'm not sure if I'm pronouncing it correctly, by the way. I'm particularly interested in multi-stage builds, but in general, give us a sense of what that process looks like.
Brent Boddy
Yeah, Buildah, and I can't do it properly either, is meant to be said with a strong Boston accent. Oh, almost like "build-ah."
Jordi
Oh, okay.
Brent Boddy
In the way that a Bostonian would do that. Interestingly enough, it was named by Dan Walsh, I believe, who I know is a Bostonian.
Jordi
Okay, makes sense.
Brent Boddy
But nevertheless, Buildah is both an application and a library in the relative world of Podman. Buildah is a standalone application designed specifically for building, experimenting, and kind of putting together images and manifests. However, for the command podman build, Podman actually pulls in Buildah as a library, and we use the exact same code that Buildah uses. It's actually a subset, because there are a lot of complex features that Buildah does offer that aren't available as part of podman build. So we see a mix of people using both. If purely all they do is build images, they're very likely using Buildah. If they're more of a developer, where they're building, tearing down, following the ephemeral nature of containers, then they tend to be using Podman. There's no set rule. The one thing that maybe differentiates them is that Podman does have a RESTful API backend, a pretty well established API to interact with it. So if you're doing builds with something that needs that, you tend to drift that way.
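Since multi-stage builds came up, here is a hedged example of how one looks with podman build; the image tags and file names are illustrative, and the same Containerfile works with Buildah:

```shell
# Stage 1 compiles with the full toolchain; stage 2 ships only the artifact.
cat > Containerfile <<'EOF'
FROM docker.io/library/golang:1.22 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

FROM docker.io/library/alpine:latest
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

# podman build drives Buildah's library code under the hood.
podman build -t myapp:latest .
```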
Jordi
Going back to the daemonless nature of Podman, was it more difficult to build? Let's imagine that we could go back in time and you could become one of the architects of Docker, of a daemon-based container engine. Would it have been easier to build that, as opposed to Podman? Did it come with unique challenges to build it daemonless?
Brent Boddy
Yeah, it would be easier. It would have been easier.
Jordi
So what are those challenges then?
Brent Boddy
Basically, over the years, the first few for certain, we had to develop some strategies to sort of compensate for what you don't get with a daemon. And it could be as simple as there not being an all-knowing source of truth. Whereas with a daemon, you can always ask it, what's the source of truth? And it knows, for everybody. So dealing with state, or dealing with conflict over locking or racing or things like that, becomes paramount. And we developed a number of techniques to really learn how to deal with that. File locks, shared memory locks for various things, but also the order in which a container's lifetime proceeds. If you look at how it starts as an image and rolls up and goes all the way down to an exit code, we had to be able to know, at every portion of that, what its state is, so we didn't have somebody starting and stopping the same container at the same time. Somebody needed to win, and then somebody needed to wait. And that's really what the challenge has been. And we still do find, admittedly, very small, thin use cases where it's like, yep, missed it. And they tend to be harder and harder to deal with. I suppose on the daemon side, you have the inverse problem as well, which is that you have the source of truth for everything, but then you have all kinds of things trying to ask a single source, what's the source of truth? A lot of people don't know: in general, we have a default SQLite database riding underneath all this, helping to manage that. And one of the locks that we implement is right through there. And the state and status at a given time in a container lifecycle may be in there as well.
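The general idea behind coordinating daemonless processes with file locks can be sketched in plain shell. This is only an analogy for the technique, not Podman's actual implementation: two workers contend for the same lock, and the second must wait for the first to finish, so only one of them can "own" the container state at a time:

```shell
lockfile=$(mktemp)
out=$(mktemp)

# Worker 1 grabs the lock on fd 9 and holds it for a second.
(
  flock 9
  echo "first: start" >> "$out"
  sleep 1
  echo "first: done" >> "$out"
) 9>"$lockfile" &

sleep 0.2   # give worker 1 time to acquire the lock

# Worker 2 blocks on the same lock, so it runs only after worker 1 exits.
(
  flock 9
  echo "second: start" >> "$out"
) 9>"$lockfile"

wait
cat "$out"    # first: start / first: done / second: start
```

No process here knows about the other in advance; the kernel-mediated lock is the only coordination point, which is exactly the property a daemonless design needs.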
Jordi
Okay, that was my next question. If you could describe the typical container lifecycle, is there any sense you could give us of that?
Brent Boddy
If we step back a little bit, kind of off the cuff here, it starts with identifying a container image that you want to run with. So whether that's a Fedora or Alpine or an Ubuntu, it really doesn't matter. And it should be clear to our folks that if you're running a Fedora workstation, you can run an Ubuntu container perfectly well. I don't know how well understood that is at times; it's completely indifferent. So it's identifying the image. And if that image out of the box is not good enough, then it's likely, as we've talked about, you need to build it for your purpose. So if you're using an Ubuntu image and you're trying to run, I don't know, a LAMP or an NGINX service, you need to install those applications in the container image so that you can use them. And then there's probably some configuration and some other very custom things that you want added. And that's sort of the third step of the build in my mind. After that, the next piece is assembling how the user wants to run it. Do they want to run it in the background? Do they want to run it in the foreground? What security options do they want on or off, or networks do they want to join? All that information is assembled. That's generally the role of Podman then. And that's all kind of brought together: all right, this is how the user wants to run it. And then when it actually gets run, typically it's executed by a container runtime like crun, runc, or krun, and that does the namespacing and a lot of that related stuff. And then something has to hold that process inside the container, what we like to just call PID 1. Something's got to hold that open and keep it alive. In our case that's conmon. In other cases it's something else. The container runs for its duration, and then there's an exit strategy for that, and off it goes. And typically then users want to know whether it was a good exit or a bad exit, and if it was bad, what kind of bad exit was it?
And depending on how you ran it, it may completely clean up behind itself, or it may leave it able to be run again. That to me is not Podman specific. That's in general how containers kind of go.
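The lifecycle walked through above can be sketched end to end; this is a hedged example that assumes podman, and the image and tool names are just illustrations:

```shell
# 1. Identify a base image.
podman pull docker.io/library/alpine:latest

# 2-3. Build it for your purpose: install what you need, add config.
cat > Containerfile <<'EOF'
FROM docker.io/library/alpine:latest
RUN apk add --no-cache curl
EOF
podman build -t mytool .

# 4. Assemble run options; podman hands off to crun/runc, conmon holds PID 1.
podman run --rm mytool curl --version

# 5. Exit handling: inspect the exit code, then clean up or rerun.
podman run --name oneshot mytool true
podman inspect --format '{{.State.ExitCode}}' oneshot
podman rm oneshot
```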
Jordi
APIs are the foundation of reliable AI, and reliable APIs start with Postman. Trusted by 98% of the Fortune 500, Postman is the platform that helps over 40 million developers build and scale the APIs behind their most critical business workflows. With Postman, teams get centralized access to the latest LLMs and APIs, MCP support, and no-code workflows, all in one platform. Quickly integrate critical tools and build multi-step agents without writing a single line of code. Start building smarter, more reliable agents today. Visit postman.com/sed to learn more. Are there any other specific features that you feel particularly proud of, that the community has praised, or that were tremendous hard work to get into Podman?
Brent Boddy
Yeah, they're all sort of like children, babies. So there are quite a few. Are you wanting to know some that are unique to Podman, maybe, or any...
Jordi
That you particularly are fond of?
Brent Boddy
Yeah. Where to start? I'm quite fond of our implementation of Kubernetes. And to be clear, we don't interact with or orchestrate, but what we can do is generate Kubernetes YAML, which is the backbone of how to orchestrate containers on a large scale. And we can generate that from a running container, meaning go ahead and figure out how you want to run the container, and then once you do have that, you can snapshot it, if you will. You can replay it locally, or you can pitch it off to a Kubernetes orchestrator and say, run this. That was one of those back-of-the-envelope, super fast, let's see what we can make, is this even a thing? And oh, it is. Okay, so let's make it real. And it's really one of the key areas in Podman where the community really helps, tells us, you know, this option needs to be implemented. And so the community has really steered that to success. Quadlets are another huge one for us. That was actually contributed to us by a Red Hatter, but someone not on the team. And then the Podman machine, which is akin to Docker Desktop, is, personally for me, something to behold. And we've kind of gone through two iterations of it now, but that has been wildly successful. And again, another one of those I-can't-believe-how-this-thing's-being-used kinds of technologies.
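The snapshot-to-YAML flow described here looks roughly like this; a hedged sketch that assumes podman and a registry-hosted nginx image, with illustrative names:

```shell
# Figure out how you want to run the container first...
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# ...then snapshot it as Kubernetes YAML.
podman generate kube web > web.yaml

# Replay it locally, or hand web.yaml to a real Kubernetes cluster.
podman rm -f web
podman play kube web.yaml
```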
Jordi
When you say akin to Docker Desktop, you mean that it's a GUI that runs locally.
Brent Boddy
It's not the GUI; the GUI is now provided by Podman Desktop, another project. It's all the underpinnings. We can run on a Mac without the GUI. We don't bundle it all together. So you can run Podman with the CLI just like you would in Linux. You've just got to initialize and have running what we call a machine, which is essentially just a really special custom virtual machine underneath the covers. So it's all the underpinnings, and we can do all the CLI-related stuff and still have that function. Or you can add Podman Desktop and get a really nice GUI that does have a lot of value added itself.
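On Mac or Windows, the machine workflow is roughly as follows; a hedged sketch assuming the podman CLI is installed, with an example image:

```shell
# Create and boot the special-purpose Linux VM that backs podman.
podman machine init
podman machine start

# From here the CLI behaves as it does on Linux; commands are forwarded
# to the VM transparently.
podman run --rm docker.io/library/alpine echo "hello from the machine"
```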
Jordi
Podman is currently the only container management system within the CNCF. Is this correct?
Brent Boddy
No, I think Containerd is there.
Jordi
Oh, right, yes, you're right.
Brent Boddy
CRI-O is certainly there. I don't know about others offhand. I know those guys are there. Yeah, absolutely.
Jordi
I don't know how much you can talk then about, because I'm going to ask about the future, like the roadmap and what it looks like. And obviously this is a question for the community, I guess, but since you're so involved in it, and the contributions of your team are probably among the biggest, can you talk to us a bit about what's coming up in Podman's future?
Brent Boddy
Sure. And again, I can be absolutely transparent about it, because on our GitHub repository, that would be github.com/containers/podman, as a part of our CNCF activities, we actually publish a roadmap now. It can be kind of generalized, but I just published what we're doing for Q2 this year, so April to June. And I'll just step back: as a project, we do a quarterly priority analysis. So for the most part, a couple weeks before the quarter is going to end, I scoop up all the stuff that's been asked of us that we had to defer, bugs that have been reported, or features that have been asked for. I scoop all that up and we review it as a team, and we have a ranking process, and from that we derive what we're going to do the next quarter. So some of the things that have come out for the next, say, three months, but they're themes moving forward, even well beyond that, would be significantly increasing our scope and support for OCI artifacts. We currently don't support them on Mac and Windows. I think by the end of the quarter we will have all that for the Mac clients. We continue to have improvements in our rootless networking via a project called pasta. There'll be some integration of Quadlets directly into Podman, whereas today they're sort of somewhat unlinked. You'll see that in Podman we'll be able to have a view of what's going on with Quadlets, and for example get a list of them and what their status is and things like that. You'll see an increased adoption of composefs, in particular for edge deployments and other special utility projects. And we have been, for the last six months, looking at push and pull speeds.
So for example, you have a local container image and you want to push it up to a registry like quay.io or docker.io, or vice versa, pulling it down, and we're looking at ways that we can minimize the amount of network traffic or improve speed. We've been putting a lot of time into something called partial pulls, where we pull down just the portions that are needed at the time, as well as using the standard compression around all that. So quite a bit. We have a good development team, not even counting the external contributions, so quarter to quarter now we're able to advance quite well.
Jordi
Partial pulls sound like the shallow clones that Git added years ago, kind of, to not pull the whole tree of blobs and just get the latest, or if you...
Brent Boddy
Yeah, I think there are several sort of derivatives of that model, even just in file systems. And I guess I would say file systems are not an area of expertise for me at all. But standing outside and kind of looking at those guys, they wouldn't be redoing it if they were satisfied with it. So there must be some weaknesses that we're still trying to iron out.
Jordi
So moving up a bit, in the sense of getting a bird's-eye view of the market in general. I know that open source projects don't necessarily compete against each other, but it is true that projects want attention, contributions, engagement, and all that. So with that in mind, with containerd in mind, with CRI-O in mind, with Docker in mind, with all the, I guess, competitive alternatives in that space, does the Podman team address a specific audience? Does it want to tackle a specific use case? Do you guys see your audience, the global audience, sort of divided into camps, in a way? I guess that is my question to you.
Brent Boddy
Definitely. Although I would be a little hesitant to say that we all agree on what that actually is and what that looks like. But my view is, Docker, CRI-O, or just Kubernetes in general, containerd, Podman, they were all made because they have a strength. And so that strength does tend to dictate where they, I'll say, best play, at least in perception, and probably in reality as well. So Podman, we pretty much can run anywhere or anything that Docker does. But our sweet spots, let's just say in the last year, are around the developer environment. So the single-node developer that needs to develop using containers, you know, they're writing something around containers, or they're, for example, writing code and they need containers to, say, run their regression tests or something like that in various environments. So that's certainly a large one. The other one where we're really starting to stick out, particularly with the use of Quadlets, is being able to run containers as a service in conjunction with systemd. So edge applications tend to be embracing this model of using a Quadlet and systemd to manage a service. You could almost say, well, what is Podman even doing in that, other than running the container? And that's somewhat true. We sort of step into the background and allow systemd and the Quadlet to do their thing. That has really seemingly come out of the woodwork, if you will. And the piece that has complemented that well has actually been our Kubernetes ability. So they use Kubernetes YAML to describe the service in a very sort of meaningful way, as opposed to a lengthy run command or something like that. People do this with Docker Compose as well. So we've got a nice descriptor, and then, you know, the Quadlet can ingest that and use Podman to run it. And it's very repeatable.
And of course the added advantage is you could then, you know, in an emergency or whatever, kick it to Kubernetes and scale it out or whatever. So I think we also play well in terms of that pass-off from single-node developer to, I want to scale. And we really believe in that pass-off. That's something that someone else does better than us. And of course we have an opinion on that too, but, you know: here, that's your world, you guys do that best, and we play within our sandbox. So to me that's kind of how things are evolving. If you'd asked me that question five years ago, you'd get an entirely different answer.
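A minimal Quadlet unit for the containers-as-a-service pattern might look like this; a hedged sketch where the image and paths are illustrative, and which assumes a recent podman with the Quadlet systemd generator:

```shell
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=nginx managed by systemd via Quadlet

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

# Quadlet generates web.service from the .container file.
systemctl --user daemon-reload
systemctl --user start web.service
```

From systemd's point of view this is just another service; Podman steps into the background, exactly as described above.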
Jordi
Yeah, I agree. So I'm taking for granted that most of the people that are listening to us are more familiar with Docker. Of the projects that I've mentioned, it's the oldest, probably (I could have that wrong), but it's probably the most popular, very likely. So those that are local-first, those developers that are listening to us that, as you say, are playing with one node, running stuff locally, and are familiar with Docker Compose, but at the same time want to dabble in Podman and try it: can they play around with Docker Compose and Podman? How would you onboard those guys?
Brent Boddy
Yeah, and it's a reality. Docker Compose has a large following; people, companies, development teams have invested a fair amount of time fine-tuning those files. The question makes perfect sense. Our answer is that you can take your existing Docker Compose file and Podman will honor it. It takes a little bit of an explanation on how it actually works, but Podman being daemonless doesn't exactly jibe with Compose out of the box, because Compose is expecting a RESTful service to be answering. So what we do is run, usually, a socket-activated systemd service that will start the RESTful portion of Podman. And then what we tell you is, just use Docker Compose against that. And it has worked quite well. Not every single use case is covered. There are some very specific things, like some new build-related API stuff that Docker's done that's in their API; it's not in ours yet. And we've even gone to the point that if you have the Docker Compose binary, this would be the v2 Docker Compose, if you have that binary in PATH, in other words it's easily found, you can actually type podman compose and it'll behave just like Docker Compose. And we do that just for convenience for people that have scripted on top of Docker Compose. Maybe they've written a Bash script, or maybe they've written a PowerShell script, and they don't want to go in and change all the things. So that's one thing that we've done, and we've had very good success with it. And we talk a lot, because of questions, about how do I get from Docker Compose all the way over to Kubernetes? And Podman can kind of do that for them. They can run their workload against Podman with Docker Compose, they can snapshot that and generate YAML, they can stop all that, replay it, make sure it's doing what they expect, and then they can throw it out to an orchestrator.
And that has been sort of interesting, that idea of an intermediary to get to where they want to be. So I think that's been interesting. Yes, that's how we deal with Docker Compose. And it was, I will say, back... so we started honoring Docker Compose in Podman 3. We're at Podman 5 and change now. That was one of those decisions where you look over the cliff and you say, yeah, this is possible. Do you want to do this? Because it's going to come with baggage. And it did, but the conversation was pretty much, can we not do this? And back then that was Docker Compose v1, which was a little bit of a simpler approach. It was all Python based, so it was a little easier to deal with. I think we're in a good spot with that now. And we certainly see people bumping back and forth between Docker and Podman and Podman to Docker. That to me is just the open source freedom that allows you, and not wanting to lock people into specific vendor solutions, which is what the CNCF is about, and frankly, what we're about, it's true.
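The socket-activation setup described here can be sketched like this; a hedged example that assumes podman's systemd units and Docker Compose v2 are installed:

```shell
# Start podman's Docker-compatible REST API on demand via systemd.
systemctl --user enable --now podman.socket

# Point Docker Compose at podman's socket instead of dockerd's.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Existing compose files then work unchanged; `podman compose` wraps
# the same flow if the compose binary is on PATH.
docker compose up -d
```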
Jordi
I wish Compose worked a bit better with Kubernetes myself; that's my personal opinion, and I think others share it. But you're right, there are alternatives: you can move on to other projects that work with Kubernetes really well, like Podman, and just launch yourself into the CNCF ecosystem. Tell us about projects running live in production that are based on Podman. Has the community come back with real-world examples of, hey, we're running Podman for this and that and it's amazing?
Brent Boddy
Yeah, absolutely. I mean it is amazing, particularly when you think about the first line of code going in and where we are now. I guess to me the most impressive and successful ones are, obviously, the ones we can't share.
Jordi
Yeah, that's usually the case.
Brent Boddy
Yeah, I mean that is the case. That said, and this is maybe a touchy topic these days, we have a lot of government agencies using it. The HPC community has definitely accepted it, and they're doing the HPC thing of pushing it as hard as they can and squeezing everything out of it that they can.
Jordi
HPC. Could you elaborate on that? What does that look like?
Brent Boddy
Well, HPC environments typically operate at massive scale, or they care about latency, or about launching as many containers as possible, for example. It's probably not a great example, but those are the things they're concerned about. Whereas the design point of this was a person sitting on a laptop: if it took one and a half seconds versus one second, we don't even notice that; to HPC, that's important. We've seen very, very broad acceptance in banking and financial services; for some reason it just seems to have really stuck there. I think the primary driver behind that has been getting off of privileged solutions like virtual machines and some of the other runtimes. We have also seen, as I mentioned, a real embracement from edge-related things, where we have some interesting functions like auto-update: if a new image is released, Podman can detect that and update to it, and if it doesn't work, it'll roll it back. So that satellite system sitting on an iceberg somewhere, floating around, doesn't go down and nobody has to go visit it, that kind of mentality. We've certainly seen edge embrace that. And then one that just befuddles us: Mac and Windows. It's been a totally unexpected embracing of that, and obviously an interesting thing coming from Red Hat. There are a lot of Macs being used at Red Hat, so there's some familiarity around that. I don't think it's a trade secret that IBM itself has adopted Podman and Podman Desktop, and there are tens of thousands of users of that going on there.
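The auto-update-with-rollback behavior Brent mentions is driven by a container label plus a systemd-managed service. A minimal sketch, assuming a rootless setup; the image name and service name here are hypothetical, while the label and commands are Podman's documented auto-update mechanism:

```shell
# Sketch of Podman auto-update for an edge-style deployment.
# quay.io/example/edge-app is a hypothetical image.

# Run a container opted in to registry-based auto-update.
podman run -d --name edge-app \
  --label "io.containers.autoupdate=registry" \
  quay.io/example/edge-app:latest

# Generate a systemd unit so the service is managed (and can be rolled back).
podman generate systemd --new --name edge-app \
  > ~/.config/systemd/user/edge-app.service
systemctl --user daemon-reload
systemctl --user enable --now edge-app.service

# Check registries for newer images and update; a failed update is rolled back.
podman auto-update
```

In practice `podman auto-update` is typically run on a timer, so unattended systems pick up new images without manual intervention.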
Jordi
So someone like you that knows the nooks and crannies of every single place and component of Podman, are there any quirky features, any quirky design decisions that you're particularly fond of because it's just fun, or it provides a cute experience, so to speak? Is there anything that stands out in that sense?
Brent Boddy
There's probably a handful. The first one is in our name: people forget that we run pods. And there are advantages to running containers in pods, there just are. Networking is greatly simplified, and you can have shared namespaces between containers for things like IPC and otherwise. If you sit back and think about them, these are really important to how you may construct your application, and I'm surprised it hasn't caught on more. People do use it, and certainly all the K8s stuff we do has to use it, because that's all based on pods. The other one is just a little quirky one called run label. You can type podman container runlabel, I think; I forget off the top of my head because I don't use it as much as I should. It basically allows you to have a label attribute on the OCI image that says, this is how I want to run this image. If you've ever run a Docker or Podman command line, there are something like 300 options that can be passed in, in various incantations. Well, that can be embedded in the image as a label, and then the user doesn't have to type out a 100-character how-to-run-this; it's just bang, and it goes. It's somewhat Compose-like, in that territory, with maybe a lot less orchestration, but simple. We've also got a number of advanced network scenarios that we support because people have come and asked us, could you do this? So we've got some kind of quirky ones there, in particular about network isolation, again building on the reduced-attack-vector story: if your container doesn't need a network, or your containers only need to talk to each other, then do that.
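The run label feature above can be sketched with a tiny image build. The image name and port mapping here are hypothetical; the LABEL/runlabel mechanics are Podman's, and Podman expands variables like $IMAGE in the label at execution time.

```shell
# Sketch: embed the run command in the image itself via a label.

cat > Containerfile <<'EOF'
FROM docker.io/library/nginx:alpine
# $IMAGE is substituted with the image name when the label is executed.
LABEL run="podman run -d -p 8080:80 --name web $IMAGE"
EOF

podman build -t example/web .

# Execute the command stored in the image's "run" label;
# the user never types the full incantation.
podman container runlabel run example/web
```

The same idea extends to other labels (an "install" or "uninstall" label, for example), so an image can carry its own lifecycle commands.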
Jordi
Yeah, nice. Well, Brent, anything else that we didn't touch upon about Podman that we should have or the audience should know?
Brent Boddy
I think that, for outsiders, watching our journey through the CNCF will be something to follow. We will likely have to move our repository as part of that; that would also include Buildah, and there's a command-line utility called Skopeo that would likely go too. All three are kind of in the same basket. Watch us continue to mature and smooth out the rough edges on our Windows and Mac support, particularly around GPUs and alternative architectures; there'll continue to be movement there. And contribute, that's what we're here for. Nothing makes us happier than someone who files a request for enhancement, an RFE, on our issues page, describes it, several of us comment, yeah, that's a great idea, and then a pull request shows up to add it, with tests. That's just awesome; it's like a shot in the arm. With every contribution you wonder, is this going to be a flyby and now we're going to be left holding it, or is this somebody who wants to be part of this? It's an ongoing effort, but regardless, contributions are great, because they're a way for the community to tell us where to go. And I love that, because otherwise we're always trying to figure it out ourselves, and it's extra work, and you do get it wrong sometimes.
Jordi
Well, Brent, this has been fantastic. I wish the Podman project the best, that it graduates and becomes a widely used project within the CNCF and elsewhere. Thank you for being with us, Brent.
Brent Boddy
It was my absolute pleasure. Thank you.
Podcast Summary: Software Engineering Daily – Podman with Brent Boddy
Release Date: August 12, 2025
In this engaging episode of Software Engineering Daily, host Jordi welcomes Brent Boddy, a Senior Principal Software Engineer at Red Hat, to discuss Podman, an open-source container management tool. Brent delves into the intricacies of Podman, its evolution, and its place within the container ecosystem.
Brent begins by outlining Podman as an open-source container management tool that empowers developers to build, run, and manage containers. Unlike Docker, Podman supports rootless containers, enhancing security, and maintains full compatibility with the Open Container Initiative (OCI) standards.
“Podman itself runs. It does some of the setup for the container to be able to run. We then call a runtime like Crun to actually run the container.”
— Brent Boddy [10:17]
Podman adheres to OCI standards, ensuring interoperability across different container runtimes. Brent explains that OCI provides a “roadmap” or “skeleton” allowing developers to maintain consistency and compatibility within the container ecosystem.
“The OCI standards give us a roadmap, if you will, or a skeleton in which we can provide program and be able to have a standard that will work...”
— Brent Boddy [02:20]
Podman originated from the CRI-O project, initially named Kpod. It was designed as a local utility for Kubernetes nodes to debug and manage containers. Recognizing its potential beyond a small utility, the team expanded its scope, leading to the creation of Podman (short for Pod Manager).
“...Podman has an interesting origin story... we formed a small team around that, that kind of did some skunk works initially to make sure we were on the right track.”
— Brent Boddy [05:47]
One of Podman's standout features is its daemonless architecture. Unlike Docker, which relies on a long-running daemon, Podman operates without one, reducing attack vectors and conserving system resources.
“A daemon runs and takes up resources even when it's just listening. In the case of Podman, when you run it, it doesn't have a daemon.”
— Brent Boddy [08:57]
This design choice enhances security and performance, allowing Podman to be more lightweight and resilient. If the daemon were to fail in Docker, it could be catastrophic; Podman's approach mitigates this risk.
Podman’s support for rootless containers is a significant security advantage. Running containers without root privileges minimizes the risk of container escapes and reduces the overall attack surface.
“Our nightmare is container escape. Someone being able to escape the container and get on the host. One way to mitigate that is to ensure they have the minimum amount of privilege possible.”
— Brent Boddy [11:47]
By leveraging Linux kernel features, Podman ensures that containers operate with the least necessary privileges, aligning with best security practices.
Podman integrates seamlessly with Buildah, another project within the Containers GitHub organization. Buildah specializes in building container images, and Podman leverages its capabilities to handle complex build tasks.
“Podman build ... pulls in Buildah as a library and we use the exact same code that Buildah uses.”
— Brent Boddy [14:03]
This integration allows developers to utilize Podman for both building and managing containers, providing a cohesive workflow.
Building a daemonless system presented unique challenges, primarily in managing container state and avoiding conflicts without a central authority.
“Dealing with state, or dealing with conflict over locking or racing, things like that become paramount.”
— Brent Boddy [15:43]
To address these, Podman employs techniques like file locks and shared memory locks, ensuring reliable container lifecycle management without a daemon.
Brent outlines the typical container lifecycle managed by Podman, noting that the flow is common across runtimes:
“...this is not Podman specific. That's in general how containers kind of go.”
— Brent Boddy [20:13]
Podman excels in generating Kubernetes YAML from running containers, facilitating easy transitions to orchestration platforms. This feature bridges the gap between single-node development and scalable deployments.
“We can generate kubernetes YAML, which is the backbone of how to orchestrate containers on a large scale.”
— Brent Boddy [21:10]
Additionally, Podman complements Kubernetes by allowing seamless scalability and integration with orchestration tools.
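The snapshot-to-YAML round trip described above looks roughly like the following sketch; the pod name, image, and file name are hypothetical, while the commands are Podman's kube subcommands.

```shell
# Sketch: generate Kubernetes YAML from a running Podman pod, then replay it.

# Build a small workload: a pod with one container in it.
podman pod create --name demo -p 8080:80
podman run -d --pod demo docker.io/library/nginx:alpine

# Snapshot the running pod as Kubernetes YAML.
podman kube generate demo > demo.yaml

# Tear everything down and replay the same workload from the YAML,
# verifying it behaves as expected before handing it to an orchestrator.
podman pod rm -f demo
podman kube play demo.yaml
```

On older Podman releases the same operations are spelled `podman generate kube` and `podman play kube`.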
Podman offers robust support for Docker Compose, enabling developers to migrate existing Docker Compose workflows with minimal friction. By emulating Docker Compose commands, Podman allows users to leverage their existing scripts and configurations.
“You can take your existing Docker Compose file and Podman will honor that.”
— Brent Boddy [31:52]
Furthermore, Podman provides podman compose, ensuring familiarity for those transitioning from Docker.
While specific production use cases remain confidential, Brent highlights Podman's adoption across various sectors, including government agencies, HPC (High-Performance Computing) communities, banking, and financial services. These industries appreciate Podman's security, performance, and flexibility.
“...we have a lot of government agencies using it. HPC community has definitely accepted...”
— Brent Boddy [36:02]
Notably, HPC environments benefit from Podman's ability to handle massive scales and low-latency container operations.
Looking ahead, Brent highlights items on Podman’s roadmap:
“We see an increased adoption of composefs in particular for edge deployments...”
— Brent Boddy [24:18]
These developments aim to bolster Podman's performance, usability, and integration within diverse environments.
Brent shares some quirky and unique features of Podman that enhance user experience:
Run Label: Allows embedding run configurations within OCI image labels, simplifying container execution without extensive command-line options.
“You can have a label attribute on the OCI image that says this is how I want to run this image.”
— Brent Boddy [39:12]
Pods Management: Emphasizing the importance of managing containers in pods, simplifying networking and shared namespaces.
“People forget that we run pods. And there are advantages to running containers in pods.”
— Brent Boddy [39:12]
Podman Desktop Integration: Provides a GUI for managing containers on Mac and Windows, complementing the CLI-based operations.
“...you can run podman with the CLI just like you would in Linux. You just got to initialize a machine and have it running...”
— Brent Boddy [22:58]
These features showcase Podman's versatility and user-centric design.
The episode wraps up with Brent expressing enthusiasm for community contributions and the ongoing evolution of Podman within the CNCF ecosystem. He emphasizes the importance of transparency and collaboration in driving Podman's success.
“Nothing makes us happier than someone who files a request for enhancement, an RFE, on our issues page, describes it, and several of us comment like, yeah, that's a great idea.”
— Brent Boddy [41:23]
Jordi thanks Brent for his insights, wishing Podman continued success as it matures and gains wider adoption.
For those keen on exploring Podman, this episode provides a comprehensive overview of its capabilities, design philosophy, and future directions, making it a valuable resource for both newcomers and seasoned professionals in the container space.