Linkerd 2.2 introduces two experimental features and graduates auto-inject to a fully-supported feature
Linkerd 2.2 represents the result of months of work, so it's no wonder that this release is packed with features. Plus, auto-inject is no longer experimental!
Linkerd 2.1 was released in December 2018, so we were eager to see what goodies the next version would bring.
Linkerd 2.2 introduces several important features, including automatic request retries and timeouts, and graduates auto-inject to a fully-supported feature. Furthermore, there are a few new CLI commands (including endpoints) which offer diagnostic visibility into Linkerd's control plane.
Last but not least, the newest version of Linkerd brings two experimental features, namely a cryptographically-secured client identity header, and a CNI plugin that avoids the need for NET_ADMIN kernel capabilities at deploy time.
Let’s have a closer look at the highlights!
Retries and timeouts
You won't have to worry as much about partial failures affecting your application's success rate, because Linkerd 2.2 can now automatically retry failed requests, according to the blog post announcing the new version. Building on top of the service profiles model introduced in the previous release, Linkerd allows you to configure this behavior on a per-route basis.
Retries are only safe to use if you can control when they happen. That's no longer a problem: Linkerd 2.2 allows you to mark which routes are idempotent (isRetryable), limit the maximum time spent retrying an individual request (timeout), and configure the percentage of overall requests that can be retried (retryBudget). This way, retries happen safely and don't escalate issues in an already-failing system.
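As a sketch, these settings live in a ServiceProfile resource. Here is an illustrative (hypothetical) profile for a books service, assuming the Linkerd 2.2-era linkerd.io/v1alpha1 API; the service name, route, and values are made up for the example:

```yaml
apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  # Named after the fully-qualified service DNS name (illustrative)
  name: books.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true   # mark this route as idempotent, so retries are safe
    timeout: 300ms      # cap how long Linkerd waits on an individual request
  retryBudget:
    retryRatio: 0.2           # retries may add at most 20% extra load
    minRetriesPerSecond: 10   # floor so low-traffic services can still retry
    ttl: 10s                  # window over which the ratio is calculated
```

The retry budget is the key safety valve here: rather than a fixed retry count per request, it bounds the retry load across the whole service, which prevents retry storms against an already-failing backend.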
Auto-inject graduates to a fully-supported feature
And now, some good news! Auto-inject is now a fully-supported (non-experimental) feature. If you're not yet familiar with it, it allows a Kubernetes cluster to automatically add ("inject") Linkerd's data plane proxies into application pods as they're deployed. By moving proxy injection out of the client and onto the cluster, you ensure that all pods are running the proxy uniformly, regardless of how they're deployed.
What's more, Linkerd 2.2 also switches auto-inject's behavior to be opt-in rather than opt-out. Therefore, once you enable it, only namespaces or pods that have the linkerd.io/inject: enabled annotation will be auto-injected.
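To illustrate the opt-in model, a namespace can be annotated so that every pod created in it gets the proxy; the namespace name below is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo   # illustrative namespace name
  annotations:
    linkerd.io/inject: enabled   # opt this namespace in to proxy auto-injection
```

The same annotation can also be set on an individual pod template if you want to opt in a single workload rather than a whole namespace.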
Last but not least, for client-side (non-auto) injection, Linkerd 2.2 improves the linkerd inject command to upgrade the proxy versions if they're already specified in the manifest, and introduces a linkerd uninject command for removing Linkerd's proxy from a given Kubernetes manifest.
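Both commands read a manifest and write a modified one to stdout, so they compose naturally with kubectl. A typical (illustrative) invocation, with deployment.yml standing in for your own manifest:

```shell
# Add (or upgrade) the Linkerd proxy sidecar in a manifest, then apply it:
linkerd inject deployment.yml | kubectl apply -f -

# Remove a previously injected proxy from the manifest:
linkerd uninject deployment.yml | kubectl apply -f -
```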
Better NET_ADMIN handling with a CNI plugin
There's a new, experimental CNI plugin that performs network configuration outside of the security context of the user deploying their application. This makes Linkerd better suited for multi-tenant clusters, where administrators may not want to grant kernel capabilities (specifically, NET_ADMIN) to users.
This plugin was contributed by Nordstrom Engineering and was inspired by Istio’s CNI plugin.
A cryptographically-secured client identity header
The goody bag is filled to the brim this time; next on the list is a new, secure mechanism for providing client identity on incoming requests. When --tls=optional is enabled, Linkerd adds an l5d-client-id header to each request. Application code can use this header to implement authorization, e.g. requiring all requests to be authenticated or restricting access to specific services.
Even though this header is currently experimental, it represents an important milestone towards providing comprehensive authentication and authorization mechanisms for Linkerd. By the way, the roadmap for securely providing both identity and confidentiality of communication within a Kubernetes cluster will be published shortly so stay tuned!
What’s next for Linkerd
As far as Linkerd 2.x releases are concerned, users should expect more features around reliability, traffic shifting, and security (especially around identity and confidentiality of communication). In the medium term, the team aims to reduce Linkerd’s dependence on Kubernetes.