Digging deeper into Kubernetes 1.12 – Two of the most promising features so far
When Kubernetes 1.12 was released, we gave a quick overview of the main changes, improvements, and additions featured in the latest release. This time around, we are revisiting two of the most important features of Kubernetes 1.12 and taking a closer look at what they bring and what the future prospects are.
Less than a month ago, we welcomed Kubernetes 1.12, the newest release, stuffed with new features, important updates, and changes. However, in our overview of the new release, we decided to focus only on the most important highlights, since the official changelog was truly extensive.
This time around, I am happy to revisit two of the most important features of Kubernetes 1.12 and take a closer look at what’s under their hood and what the future prospects are.
Let’s dig in!
Alpha feature: RuntimeClass
RuntimeClass is a new cluster-scoped resource that surfaces container runtime properties to the control plane. It was created in response to the issues that arose from runtimes targeting many different use cases, and it was released with Kubernetes 1.12 as an alpha feature.
RuntimeClass aims to tackle problems like:
- How do users know which runtimes are available, and select the runtime for their workloads?
- How do we ensure pods are scheduled to the nodes that support the desired runtime?
- Which runtimes support which features, and how can we surface incompatibilities to the user?
- How do we account for the varying resource overheads of the runtimes?
According to the Kubernetes blog post, “the RuntimeClass resource represents a container runtime supported in a Kubernetes cluster. The cluster provisioner sets up, configures, and defines the concrete runtimes backing the RuntimeClass. In its current form, a RuntimeClassSpec holds a single field, the RuntimeHandler. The RuntimeHandler is interpreted by the CRI implementation running on a node and mapped to the actual runtime configuration.”
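As a rough sketch of how the pieces fit together, a RuntimeClass and a pod referencing it might look like the manifests below. The handler name `gvisor` and the resource names are made-up examples; the handler must match whatever the CRI implementation on the node is actually configured with, and in 1.12 the alpha `RuntimeClass` feature gate has to be enabled for any of this to work.

```yaml
# Hypothetical example: a RuntimeClass backed by a CRI handler named
# "gvisor" (an assumption; the name must match a handler configured in
# the node's CRI implementation, e.g. containerd or CRI-O).
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: gvisor
---
# A pod selects the runtime via the (alpha) runtimeClassName field.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: nginx
```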
The RuntimeClass resource can be considered a significant foundation for surfacing runtime properties to the control plane, but we are far from reaching the finish line with its development. Among the extensions that have been proposed so far are:
- Adding NodeAffinity terms to the RuntimeClass, to ensure pods are scheduled to nodes that support the desired runtime.
- Surfacing optional features supported by runtimes, and better visibility into errors caused by incompatible features.
- Automatic runtime or feature discovery, to support scheduling decisions without manual configuration.
- Standardized or conformant RuntimeClass names that define a set of properties that should be supported across clusters with RuntimeClasses of the same name.
- Dynamic registration of additional runtimes, so users can install new runtimes on existing clusters with no downtime.
- “Fitting” a RuntimeClass to a pod’s requirements: for instance, specifying runtime properties and letting the system match an appropriate RuntimeClass, rather than explicitly assigning one by name.
Beta feature: Topology-aware dynamic provisioning
With topology-aware dynamic volume provisioning, storage resources can now understand where they live! According to the official blog post, “in multi-zone clusters, this means that volumes will get provisioned in an appropriate zone that can run your pod, allowing you to easily deploy and scale your stateful workloads across failure domains to provide high availability and fault tolerance.” This feature also includes beta support for AWS EBS and GCE PD.
Running stateful workloads with zonal persistent disks in multi-zone clusters has been quite a challenge so far, and this has been the main driver behind the development of this feature. Among other things, topology-aware volume provisioning aims to address issues that resulted in unschedulable pods; namely, volumes were provisioned in zones that:
- did not have enough CPU or memory resources to run the pod
- conflicted with node selectors, pod affinity or anti-affinity policies
- could not run the pod due to taints
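The mechanism behind this is delayed binding: a StorageClass can set `volumeBindingMode: WaitForFirstConsumer`, so volume provisioning waits until a pod using the claim is actually scheduled, and the volume is then created in a zone compatible with the scheduler's decision. A minimal sketch (the class name is illustrative, and GCE PD is just one of the supported provisioners):

```yaml
# Topology-aware provisioning: WaitForFirstConsumer delays volume
# creation until a pod using the PVC is scheduled, so the volume ends
# up in a zone that satisfies the pod's resource and affinity needs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard   # illustrative name
provisioner: kubernetes.io/gce-pd  # beta support also covers AWS EBS
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard
```

A PersistentVolumeClaim referencing this class stays in `Pending` until its consuming pod is scheduled, which is the expected behavior rather than an error.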
So, what does the future hold for this feature? The team is actively trying to improve topology-aware dynamic provisioning to support:
- more volume types, including dynamic provisioning for local volumes
- dynamic volume attachable count and capacity limits per node
If you are interested in diving even deeper into this feature’s specifics, make sure you check out the official documentation.