“Kubernetes: Not only microservices, but also high performance workloads”
We talked to Klaus Ma, one of the leaders for Kubernetes’ SIG-Scheduling. Ma explains how Kubernetes has become the dominant container orchestrator, what his favorite K8s feature is, and where it might go in the future.
JAXenter.com: Hi Klaus! Thanks for taking the time to answer these questions. You are a leader of the Special Interest Group (SIG) “Scheduling”. What is the group all about?
Klaus Ma: Yep, I’m glad to share my experience in the Kubernetes community. I’m the co-leader of SIG-scheduling; there’s also another leader, named Bobby, from Google :). SIG Scheduling is a really interesting group, which is responsible for the components that make Pod placement decisions. It interacts with several other SIGs on placement, e.g. sig-node and sig-apps.
We build Kubernetes schedulers and scheduling features for Pods. We design and implement features that allow users to customize the placement of Pods on the nodes of a cluster. These features include ones that improve workload reliability, make more efficient use of cluster resources, and/or enforce placement policies. We really look forward to having more developers involved.
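As a small illustration of the kind of placement customization SIG Scheduling maintains, a Pod spec can constrain which nodes it may be scheduled onto via node affinity. This is a minimal sketch; the node label `disktype=ssd` and the Pod name are made-up examples:

```yaml
# Hypothetical Pod that must be scheduled onto nodes labeled disktype=ssd.
# The scheduler rejects any node without that label at scheduling time.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx
```

The `requiredDuringScheduling…` form is a hard constraint; a `preferredDuringScheduling…` sibling exists for soft, weighted preferences.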
JAXenter: It seems that Kubernetes has won the “orchestration war” against Docker Swarm. Why do you think Kubernetes prevailed?
Ma: Oh, we do not say “war” in the community; both Swarm and Mesos are great communities, and I also contributed to Mesos around 2016. Personally, I think k8s does have some factors that made it more popular. For example, the API server of Kubernetes (a server for cluster metadata) decouples the scheduler, lifecycle manager and agents, so the community can set up several SIGs to focus on different areas and work together (e.g. in working groups) on advanced features. “Kubernetes is extensible by default” is another important factor: it lets vendors build customized features for their business scenarios, which is why more and more vendors want to leverage the Kubernetes core to build their own platform for their business.
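The “extensible by default” point can be made concrete with a CustomResourceDefinition, the mechanism vendors use to register their own API types with the API server. This is a sketch only; the group and kind names (`example.vendor.io`, `BatchJob`) are invented for illustration:

```yaml
# Hypothetical CRD: once applied, the API server serves a new
# BatchJob resource type that vendor controllers can watch and act on,
# without changing the Kubernetes core.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: batchjobs.example.vendor.io
spec:
  group: example.vendor.io
  scope: Namespaced
  names:
    plural: batchjobs
    singular: batchjob
    kind: BatchJob
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

A vendor then pairs a custom resource like this with a controller that reconciles the declared state, the same pattern the built-in controllers use.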
JAXenter: Kubernetes is more than just a tool for managing containers and clusters. What is your favorite feature that’s not that well-known?
Ma: In addition to the daily work in core, I’m also working on an incubator project named kube-arbitrator, which is an effort to build batch capability for Kubernetes. That’s my favorite feature. I’d like to see more workloads running on Kubernetes: not only microservices, but also high performance workloads.
JAXenter: K8s celebrated its 4th birthday recently. How has it changed over the years?
Ma: Yes, 4 years! It’s still young compared to other well-known communities like Linux, but it is growing quickly. There have been several changes over the years, but two have been very important, in my opinion.
The first one is that the community defined which parts/components are the core of Kubernetes. At the beginning, there were several components in the main branch on GitHub, including customized features from vendors. That made the code base of Kubernetes very large and hard for new contributors to pick up. Currently, Kubernetes defines which components are core, and provides extension points like CNI, CSI, and CRI for vendors to build customized features for their business.
Another important change is that it’s better organized right now :). There’s sig-architecture, the steering committee, and SIG leaders to guide the technical direction of the community; sig-release and PMs to track the progress of releases; and approvers/reviewers to ensure the quality of code. It’s also easier for new contributors to know where to get help.
JAXenter: The last GA version of Kubernetes was 1.10 – what do you think Kubernetes 2.0 will have in store for developers?
Ma: That’s really hard to say what we’re going to provide in 2.0, as there are so many great ideas here. In my opinion, these features should be included:
- Easier to use. In previous years, we made lots of enhancements that made k8s easier to use, like kubeadm. That will continue to be enhanced in 2.0.
- Easier to build customized features. As mentioned above, Kubernetes is extensible by default, but the SDK is not good enough right now. In my opinion, that will be improved in 2.0.
- Support more workloads. We have talked so much about microservices, but data centers run different workloads like ML and big data. In 2.0, there should be more enhancements for that; I think that’s what the “ML working group” and “sig-bigdata” are working on right now.
JAXenter: What will the next step in the evolution of container technologies look like? Or are we done innovating in this field?
Ma: Docker opened the door to lightweight runtime environments as compared to VMs, and there have been several implementations since then, like rkt. “Containers” are not as hot as before, because we need a platform like Kubernetes to support end-to-end scenarios; an application runtime alone is not enough. But that doesn’t mean the end of innovation; new “container” technologies are still being announced, like Kata and gVisor.
JAXenter: Serverless computing has definitely triggered a new trend. Do you think serverless will, at one point, render containers and the whole surrounding technology moot?
Ma: Serverless is great and it has triggered a new trend, as you said. But they’re at different layers: the “container” is the application runtime, Kubernetes is an orchestrator, and serverless is one of a variety of workload types.
I think Knative is a good example. Knative is a Kubernetes-based platform for serverless workloads: Knative is the framework/platform for the serverless workload, Kubernetes is the orchestrator of the data center, and the “container” is the runtime of the serverless function. The user doesn’t need to know how to use Kubernetes or containers, but they’re still there, used by the serverless framework.
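That layering shows up directly in how a Knative Service is declared. This is a minimal sketch; the Service name and image are placeholders:

```yaml
# Hypothetical Knative Service: the user declares only the container
# image to run. Knative manages revisions and scale-to-zero, Kubernetes
# schedules the underlying Pods, and a container is the actual runtime.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: example.com/hello:latest
```

Nothing in this manifest mentions Pods, Deployments, or autoscalers; those Kubernetes-level objects are created and managed by Knative on the user’s behalf.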