Kubernetes 1.6 focuses on scale and automation
Kubernetes 1.6 has just been released. 5,000 node clusters are now supported and dynamic storage provisioning has been moved to stable. Let’s take a look at the highlights.
Kubernetes 1.6 is here. According to the release notes, 1.6 encourages etcd3, and switching from etcd2 to etcd3 involves a full migration of data between different storage engines. Users must stop the API server from writing to etcd during an etcd2 -> etcd3 migration. Furthermore, HA installations cannot currently be migrated using the official Kubernetes procedure.
Before updating to 1.6, users are strongly advised to back up their etcd data.
Kubernetes 1.6 also defaults to protobuf encoding when using etcd3, and this change is irreversible. Since 1.5 does not support protobuf encoding, rolling back after the switch forces users to restore from a backup made before the protobuf/etcd3 migration, and any changes made since that backup will be lost. After converting to protobuf, users should thoroughly validate the correct operation of their cluster before returning it to normal service.
Kubernetes 1.6 — Overview
Scale and federation
According to the blog post announcing the new release, Kubernetes now supports 5,000-node (150,000-pod) clusters while still meeting its stringent scalability SLO. This 150 percent increase in total cluster size (powered by etcd v3 from CoreOS) will come in handy if you are deploying applications such as search or games which can grow to consume larger clusters.
If you want to scale beyond 5,000 nodes or spread across multiple regions or clouds, federation allows you to combine multiple Kubernetes clusters and address them through a single API endpoint. In 1.6, the kubefed command line utility graduated to beta – with improved support for on-premise clusters. It now automatically configures kube-dns on joining clusters and can pass arguments to federated components.
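As a sketch of the workflow, an additional cluster can be joined to an existing federation with the kubefed utility (the cluster and context names below are placeholders for your own):

```shell
# Join "mycluster" to the federation whose control plane runs in
# the "host-cluster" kubeconfig context (names are illustrative)
kubefed join mycluster --host-cluster-context=host-cluster
```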
Dynamic storage provisioning
In Kubernetes 1.6, StorageClass and dynamic volume provisioning are promoted to stable. The design allows cluster administrators to define and expose multiple flavors of storage within a cluster, each with a custom set of parameters. The release also ships with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications: Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere by default. Note that this changes the default behavior of PersistentVolumeClaim (PVC) objects on these clouds.
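To illustrate, an administrator might define a storage flavor and let claims request it by name (the class name and the AWS gp2 parameter here are example values):

```yaml
# Admin-defined StorageClass with custom parameters (example values)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# A claim requesting that class; a matching volume is provisioned
# dynamically when the claim is created
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: fast
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```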
Advanced scheduling
Kubernetes 1.6 adds a set of scheduling constructs that give users greater control over how pods are scheduled.
Node affinity/anti-affinity [currently in beta] allows users to restrict pods to schedule only on certain nodes based on node labels, according to the blog post announcing the release. Built-in or custom node labels will help users select specific zones, hostnames, hardware architecture, operating system version, specialized hardware, etc. Furthermore, the scheduling rules can be required or preferred.
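A minimal sketch of a required node-affinity rule (the disktype=ssd label is an illustrative custom node label, not a built-in one):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # "required" rules are hard constraints; "preferred" rules are soft
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # example custom node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx
```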
A related feature, called taints and tolerations [currently in beta], makes it possible to compactly represent rules for excluding pods from particular nodes. The feature makes it easy to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need the special hardware by excluding pods that don’t need it.
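For example, a node could be tainted with `kubectl taint nodes node1 dedicated=gpu:NoSchedule` (node and key names are illustrative), after which only pods carrying a matching toleration will be scheduled onto it:

```yaml
# Pod spec fragment tolerating the example "dedicated=gpu" taint
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
```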
Pod affinity and anti-affinity [currently in beta] allows users to set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).
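A hard anti-affinity rule for spreading replicas might look like this (the app=web label is an assumed example; kubernetes.io/hostname selects node-level topology):

```yaml
# Avoid scheduling onto a node already running a pod labeled app=web
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname   # node-level spreading
  containers:
  - name: app
    image: nginx
```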
Last but not least, multiple schedulers [also in beta] allow users to run their own custom scheduler(s) alongside —or instead of— the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods.
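A pod opts in to a particular scheduler via the schedulerName field in its spec (the scheduler name below is a hypothetical example); pods that omit the field are handled by the default Kubernetes scheduler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled
spec:
  schedulerName: my-custom-scheduler   # illustrative scheduler name
  containers:
  - name: app
    image: nginx
```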
For more information about Kubernetes 1.6, check out the release notes.