Kubernetes 1.11 brings improved flexibility for container orchestration
Kubernetes 1.11 is here! This is the second release of the container orchestration manager for 2018 and it’s a big one, with improved scalability, flexibility, and all-around performance enhancements. Two of the biggest changes are IPVS-based in-cluster service load balancing and CoreDNS.
Just three months ago, Kubernetes released v1.10 to general appreciation. This week, the popular container orchestration manager released v1.11, improving scalability, flexibility, and overall performance.
What’s in store for developers with Kubernetes 1.11? This release focuses on the maturity of Kubernetes, with a host of features meant to address some of the fundamental networking and storage requirements. It’s getting to the point where developers can plug almost any infrastructure into Kubernetes, whether it’s in the cloud or on-premises.
IPVS-based in-cluster service load balancing
Kubernetes 1.11 has graduated the IPVS-based in-cluster service load balancing feature to stable. This change is meant to improve network performance and scalability. The IP Virtual Server (IPVS) takes care of in-kernel load balancing. Its programming interface is simpler than iptables, making it easier for developers to take advantage of higher performance and scalability for the Kubernetes Service model.
Currently, IPVS is not the default option. However, developers can begin to use it for their clusters’ production traffic.
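Switching kube-proxy from iptables to IPVS is done through its configuration file. A minimal sketch, assuming the IPVS kernel modules are loaded on the node (the round-robin scheduler is an illustrative choice, not a requirement):

```yaml
# kube-proxy configuration fragment (kubeproxy.config.k8s.io/v1alpha1).
# Setting mode to "ipvs" switches Service load balancing from iptables
# rules to the in-kernel IP Virtual Server.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS also offers least-connection and others
```

If the required kernel modules are missing, kube-proxy falls back to iptables mode.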
Looking for a DNS server? CoreDNS is now the default option for kubeadm and can be added on as an option for your cluster DNS needs. Written in Go, this DNS server is lighter than ever, with fewer moving parts than its predecessor: CoreDNS is a single executable and a single process. Flexible and extensible, it even supports custom DNS entries.
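CoreDNS is configured through a Corefile, which in a cluster lives in a ConfigMap. A rough sketch of adding a custom DNS entry with the `hosts` plugin; the `legacy-db.internal` name and its IP are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
        }
        # Custom DNS entry (hypothetical host); unmatched queries fall through.
        hosts {
            10.0.0.42 legacy-db.internal
            fallthrough
        }
        proxy . /etc/resolv.conf
        cache 30
    }
```

Each plugin in the server block is a separate concern, which is what keeps CoreDNS a single lightweight process.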
Custom resource definitions
Previously, custom resource definitions were restricted to defining a single version of the custom resource. Kubernetes 1.11 lifts this restriction, making it possible for developers to define multiple versions of a resource. Currently, custom resource authors can “promote with safe changes” and make a migration path for resources that have changes.
This feature also supports “status” and “scale” subresources, which helps developers integrate custom resources with monitoring frameworks. In particular, this change makes it even easier to run cloud-native applications in production with Kubernetes.
While this feature is still in beta, future versions are intended to support automatic conversions.
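A hedged sketch of what a multi-version definition looks like; the `CronTab` resource and group name are the hypothetical examples commonly used for CRDs, not anything shipped with Kubernetes:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com      # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:             # 1.11 allows more than one version to be listed
  - name: v1beta1
    served: true
    storage: false
  - name: v1
    served: true
    storage: true       # exactly one version is persisted in etcd
  subresources:
    status: {}          # enables the /status subresource
    scale:              # enables the /scale subresource for autoscalers
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
```

Since automatic conversion is not yet supported, the listed versions must share a compatible schema; the storage version is simply the one that gets persisted.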
Storage, storage, storage
Want to hang onto more of your stuff? The 1.11 release has a lot of options for developers, including a number of new storage features.
Online resizing of Persistent Volumes is now an alpha feature. Users can now increase the size of their PVs without needing to first get rid of pods or unmount the volume.
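A sketch of how a resize is requested, assuming the alpha feature gate is enabled and the storage backend supports expansion; the class name is hypothetical and the provisioner is just one example:

```yaml
# The StorageClass must opt in to expansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable                  # hypothetical name
provisioner: kubernetes.io/gce-pd  # example provisioner
allowVolumeExpansion: true
---
# Resizing is a simple edit to the claim: raise the storage request
# (here from a previous 10Gi to 20Gi) while the volume stays mounted.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: resizable
  resources:
    requests:
      storage: 20Gi
```

Note that shrinking a volume is not supported; the request can only grow.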
Additionally, dynamic maximum volume count is also an alpha feature. In-tree volume plugins are now able to specify the maximum number of volumes that can be attached to a given node, and that limit can vary with the type of node.
StorageObjectInUseProtection is a stable feature which prevents the removal of Persistent Volumes and Persistent Volume Claims that are in use. This keeps developers from making any awkward mistakes and deleting a PV or PVC that is tied to an active pod.
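The protection is implemented with finalizers. Inspecting a claim would show something along these lines (a fragment, not a full manifest):

```yaml
# The control plane adds this finalizer to claims automatically.
# A delete request only marks the PVC for deletion; removal is deferred
# until no pod is using it. PVs get kubernetes.io/pv-protection likewise.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  finalizers:
  - kubernetes.io/pvc-protection
```

So an accidental `kubectl delete pvc` against an in-use claim no longer pulls storage out from under a running pod.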
Get Kubernetes 1.11
Want to upgrade? Better take a look at the urgent upgrade notes (yes, really!) or suffer the consequences. There are two things that developers need to be aware of before they move on to the next Kubernetes version.
- JSON configuration files that contain fields with incorrect case will no longer be valid. You must correct these files before upgrading. Keys are case sensitive when specifying JSON resource definitions during direct API server communication, and the 1.11 API server enforces the correct case.
- Pod priority and preemption is now enabled by default, for pods in any namespace. The feature can be disabled, but doing so limits critical pods to the kube-system namespace.
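With the feature on by default, priorities are expressed through the beta PriorityClass API; a minimal sketch, with the class name, value, and pod purely illustrative:

```yaml
# A cluster-scoped priority class (scheduling.k8s.io is beta in 1.11).
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority       # hypothetical name
value: 1000000              # higher value = scheduled (and preempted) first
globalDefault: false
description: "For latency-critical services."
---
# A pod in any namespace can now reference the class.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  priorityClassName: high-priority
  containers:
  - name: web
    image: nginx
```

When the scheduler cannot place a high-priority pod, it may evict lower-priority pods to make room, which is exactly why the default-on behavior is called out in the upgrade notes.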