Kubernetes Podcast from Google: Episode Summary - "Linkerd, with William Morgan"
Release Date: January 29, 2025
Hosts: Abdel Sghiouar and Kaslin Fields
Guest: William Morgan, CEO of Buoyant (the company behind Linkerd)
1. Introduction
In this episode of the Kubernetes Podcast from Google, hosts Abdel Sghiouar and Kaslin Fields engage in an in-depth conversation with William Morgan, CEO of Buoyant, the company responsible for developing Linkerd. The discussion delves into Linkerd's architecture, use cases, and the intricate balance between maintaining an open-source project and sustaining a viable business model.
2. Understanding Linkerd
Overview of Linkerd
William Morgan begins by explaining Linkerd's core functionality:
“It's an open source project and it's a service mesh... the idea is that you add Linkerd to a Kubernetes cluster or a set of clusters, and it provides this application layer of networking.”
— William Morgan [02:24]
Linkerd focuses on managing HTTP requests between pods, offering retries, timeouts, request-level load balancing, cross-cluster communication, and mutual TLS for secure, encrypted connections. The goal is to integrate with existing Kubernetes applications without requiring configuration changes, so teams gain these capabilities with little extra effort.
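As a rough sketch of what this looks like in practice (not something walked through in the episode, and details vary by Linkerd version), meshing a workload typically amounts to annotating its namespace so Linkerd's injector adds the proxy automatically:

```yaml
# Hypothetical namespace manifest; the linkerd.io/inject annotation asks
# Linkerd's proxy injector to add the sidecar to pods created here.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                 # illustrative namespace name
  annotations:
    linkerd.io/inject: enabled
```

Existing Deployments in the namespace pick up the proxy on their next rollout; the application itself is untouched, which is the "no configuration changes" point Morgan emphasizes.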
3. Differentiating Linkerd from Other Service Meshes
Philosophical Approach to Simplicity
Morgan emphasizes Linkerd's commitment to simplicity and operational ease:
“Our angle for Linkerd has always been around maximizing simplicity and especially operational simplicity.”
— William Morgan [04:06]
Unlike some other service meshes that may introduce significant complexity, Linkerd strives to align closely with Kubernetes' native operations. This philosophy ensures that even during high-stress scenarios, such as 3 AM outages, operators can quickly understand, diagnose, and resolve issues within Linkerd.
Comparison with Istio and Envoy
The discussion highlights the differences between Linkerd and other service meshes like Istio, particularly regarding their control plane architectures and proxy implementations. While Istio leverages Envoy as its proxy, Linkerd opts for a more lightweight, Rust-based microproxy tailored specifically for Kubernetes environments.
4. Evolution and Adoption of Linkerd
From Linkerd 1.0 to Linkerd 2.0
Initially, Linkerd utilized heavyweight JVM-based proxies written in Scala, deployed on a per-node basis. However, operational challenges and security concerns led to a significant architectural shift:
“We moved to sidecars because ... if that proxy died ... some random section of your application is not working.”
— William Morgan [11:00]
The transition to Linkerd 2.0 introduced a Rust-written microproxy deployed as a sidecar, enhancing security and reducing the operational blast radius by containing proxy failures within individual pods.
Multi-Cluster Enhancements
Morgan discusses the evolution of multi-cluster features in Linkerd, reflecting changes in Kubernetes adoption patterns:
“Now there's a planned adoption versus ad hoc adoption. So you just see things like that... the way that people are adopting Kubernetes ends up influencing the Linkerd roadmap.”
— William Morgan [22:23]
As organizations increasingly deploy numerous Kubernetes clusters for high availability, latency optimization, and resource accessibility, Linkerd has adapted to support federated services, allowing seamless load balancing and failover across extensive cluster infrastructures.
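As an illustrative sketch of the multi-cluster pattern (using Linkerd's service-mirroring convention; the service name and namespace are made up), exporting a service so that linked peer clusters can discover and route to it is usually just a label on the Service:

```yaml
# Hypothetical Service in one cluster; the mirror.linkerd.io/exported label
# tells linked peer clusters to mirror it so remote callers can reach it.
apiVersion: v1
kind: Service
metadata:
  name: orders               # illustrative service name
  namespace: shop
  labels:
    mirror.linkerd.io/exported: "true"
spec:
  selector:
    app: orders
  ports:
    - port: 8080
```

The federated-services behavior Morgan alludes to builds on this kind of cross-cluster mirroring, with Linkerd balancing traffic across local and mirrored endpoints and failing over when a cluster becomes unavailable.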
5. Technical Choices in Linkerd
Rust-Based Microproxy
A pivotal decision in Linkerd's development was adopting Rust for its proxy implementation:
“The core insight was the data plane is kind of the heartbeat of... robust security. Rust prevents you from misusing memory in unsafe ways.”
— William Morgan [14:57]
This choice was driven by the need for a secure, efficient proxy capable of handling sensitive data without the vulnerabilities inherent in languages like C. Writing the proxy in Rust allowed Linkerd to eliminate a class of security issues, ensuring reliable and safe communication within Kubernetes clusters.
Sidecar Deployment Model
Linkerd's move to a sidecar model was driven by the necessity to minimize resource consumption and isolate proxy failures:
“It was really annoying operationally because you'd have this one proxy responsible for a random selection of pods... It defeats containerization and isolation.”
— William Morgan [12:26]
The sidecar approach ensures that each pod benefits from proxy features without introducing single points of failure or excessive resource overhead.
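To make the blast-radius point concrete, here is a simplified, hypothetical view of what an injected pod ends up containing (the injector normally adds this automatically; names and image tags are illustrative):

```yaml
# Simplified pod after injection: the application container plus a per-pod
# linkerd-proxy sidecar, so a proxy failure is contained to this one pod.
apiVersion: v1
kind: Pod
metadata:
  name: orders-7d4f9c-abcde          # illustrative pod name
  annotations:
    linkerd.io/inject: enabled
spec:
  containers:
    - name: orders                   # the application container
      image: example.com/orders:1.2.3
    - name: linkerd-proxy            # Linkerd's Rust microproxy
      image: cr.l5d.io/linkerd/proxy:example-tag   # illustrative tag
```

Contrast this with the per-node model of Linkerd 1.x, where a single proxy failure affected every pod scheduled on that node.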
6. Balancing Open Source and Business
Sustainable Open Source Practices
A significant portion of the conversation centers on the challenges of maintaining an open-source project while ensuring its financial viability:
“Open source costs money. You need people to build and maintain this stuff.”
— William Morgan [49:16]
Morgan recounts Buoyant's strategic shift in February 2024, when the company changed how Linkerd's stable release artifacts were distributed. Instead of being freely available, stable releases with semantic versioning guarantees became a paid offering for larger companies. This change was essential to fund the project's maintenance and growth:
“We changed the way we were providing stable release artifacts... if you're a big company... you got to find a way to pay for it.”
— William Morgan [35:58]
This move underscores the necessity of aligning open-source endeavors with sustainable business models to ensure long-term project health and reliability for users.
Industry-Wide Open Source Challenges
The discussion extends to broader industry issues, such as the sustainability of open-source projects within the CNCF ecosystem. Morgan highlights instances where projects faced abandonment or license changes due to financial strains, emphasizing the need for transparent and honest approaches to open-source funding and support.
7. The Future of Service Meshes
Necessity and Adoption Trends
Morgan posits that while small-scale Kubernetes deployments might not require a service mesh, its importance grows with complexity and scale:
“Linkerd is no longer cool... If you have 200 clusters, you kind of need a plan for that... it's part of why our focus has been on providing these features without changing code.”
— William Morgan [25:19]
As Kubernetes adoption becomes more sophisticated, with multi-cluster deployments and stringent security requirements, service meshes like Linkerd become indispensable for managing inter-service communications efficiently and securely.
Beyond mTLS: Expanding Capabilities
While mutual TLS (mTLS) is a prominent feature of service meshes, Morgan points out that Linkerd offers a broader suite of functionality for modern, distributed applications, including load balancing across clusters, dynamic service discovery, and enhanced observability, all without imposing significant configuration burdens on developers.
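As a hedged example of the request-level behavior described above, a ServiceProfile is one way Linkerd lets operators declare per-route retries and timeouts without touching application code (the service and route below are illustrative, not from the episode):

```yaml
# Hypothetical ServiceProfile: marks an idempotent route as retryable and
# gives it a per-request timeout, applied by the sidecar proxies.
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: orders.shop.svc.cluster.local   # matches the target Service's FQDN
  namespace: shop
spec:
  routes:
    - name: GET /orders/{id}
      condition:
        method: GET
        pathRegex: /orders/[^/]+
      isRetryable: true
      timeout: 300ms
```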
8. Conclusion
The episode concludes with reflections on the evolving landscape of service meshes and open-source sustainability. Morgan's insights highlight the critical balance between advancing technical capabilities and ensuring the financial stability of open-source projects. Hosts Abdel Sghiouar and Kaslin Fields commend Morgan for his thorough explanations and candid discussions about the intersection of open source and business dynamics.
“We did it in a nice way... we're [Buoyant] profitable. We're adding more maintainers to the project...”
— William Morgan [35:58]
This candid conversation offers valuable perspectives for both developers and organizations navigating the complexities of adopting and sustaining service meshes within the Kubernetes ecosystem.
Key Takeaways:
- Linkerd's Philosophy: Focus on simplicity and operational ease, minimizing configuration overhead while enhancing Kubernetes' native capabilities.
- Technical Innovations: The transition to a Rust-based microproxy deployed as a sidecar enhances security and reduces resource consumption.
- Open Source Sustainability: Balancing open-source contributions with viable business models is essential for long-term project health.
- Service Mesh Necessity: As Kubernetes deployments scale and become more complex, service meshes like Linkerd become crucial for managing inter-service communications effectively.
Notable Quotes with Timestamps:
- William Morgan [02:24]: “... you add Linkerd to a Kubernetes cluster... it provides this application layer of networking.”
- William Morgan [04:06]: “Our angle for Linkerd has always been around maximizing simplicity and especially operational simplicity.”
- William Morgan [12:26]: “... it was really annoying operationally because you'd have this one proxy responsible for a random selection of pods...”
- William Morgan [14:57]: “Rust prevents you from misusing memory in unsafe ways.”
- William Morgan [25:19]: “Linkerd is no longer cool... If you have 200 clusters, you kind of need a plan for that...”
- William Morgan [35:58]: “... we changed the way we were providing stable release artifacts...”
- William Morgan [49:16]: “Open source costs money. You need people to build and maintain this stuff.”
This summary captures the discussions and insights William Morgan shared on the "Linkerd, with William Morgan" episode of the Kubernetes Podcast from Google. Whether you're a Kubernetes enthusiast, a service mesh user, or someone interested in the dynamics of open-source business models, the episode offers valuable perspectives and actionable knowledge.
