Using resource quotas and namespaces to manage multiple teams on your Kubernetes cluster

We’ve talked about how to control the resource usage of containers; now it’s time to learn how to use resource quotas and namespaces to manage multiple teams on your Kubernetes cluster.
In a previous blog post, we discussed how to use resource requests and limits in Kubernetes to constrain the resource usage of an individual pod or container. In this follow-up blog post, we look at the concept of resource quotas to better control the amount of resources assigned to a particular namespace.
Why use namespaces anyway?
When multiple teams work on a shared Kubernetes cluster (or OpenShift, or any other Kubernetes flavor), it is good practice to create a different namespace for each team. Namespaces allow you to subdivide your cluster into virtual clusters, one per team. This provides an isolated environment with a unique scope for naming resources, as well as for setting policies. Resources created in one namespace are hidden from other namespaces. Within each namespace, the cluster administrator can also control the amount of resources that each team has access to, to ensure that the cluster capacity is shared fairly.
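As an illustration of the naming scope, two teams can each run a service with the same name without conflict, because the in-cluster DNS name is qualified with the namespace (the team namespaces below are hypothetical):

my-service in namespace team-a → my-service.team-a.svc.cluster.local
my-service in namespace team-b → my-service.team-b.svc.cluster.local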
How do resource quotas work?
Resource quotas are set via the ResourceQuota object, which should be configured with the appropriate constraints per namespace. Constraints can be set on the number of objects (pods, services, persistent volume claims, etc.) in a namespace, as well as on the total amount of compute resources (CPU and memory) that can be used per namespace. Users can then create resources (pods, services, etc.) in the namespace, and the quota system tracks usage to ensure it does not exceed the constraints defined in the ResourceQuota. If a new pod (or other object) would exceed the resource quota, it will not be created.
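For instance, a ResourceQuota that only constrains object counts could look like this (a minimal sketch; the name and values are illustrative, while the compute-resource variant is shown further below):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-count-quota
spec:
  hard:
    services: "5"
    persistentvolumeclaims: "4"
    configmaps: "10"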
How to create a namespace?
The first step is to create a namespace to which you can assign a resource quota. This can be done using the kubectl create namespace command:
$ kubectl create namespace my-namespace
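Alternatively, if you manage your cluster configuration declaratively, the equivalent namespace manifest is (a minimal sketch; the file name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

$ kubectl create -f my-namespace.yaml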
How to set resource quotas?
As mentioned, a ResourceQuota object needs to be created for a particular namespace, with the constraints set in its configuration, as in the example below.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-example
spec:
  hard:
    pods: "8"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
This states that no more than 8 pods are allowed in the namespace, and that the total requests and limits over all containers in the namespace must remain below the set constraints. It’s important to note that this inherently also implies that every container must have requests and limits specified for CPU and memory. If you attempt to create a default pod with unbounded CPU and memory, it will be rejected if a resource quota has been set in its namespace (unless the administrator has set default values for requests and limits via a LimitRange object; a sketch of such a LimitRange follows below).
You can now save this YAML to a file and create the resource quota object for your namespace.
$ kubectl create -f quota-example.yaml --namespace=my-namespace
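If you want pods without explicit requests and limits to still be admitted under a quota, the administrator can define default values with a LimitRange in the same namespace. A minimal sketch, with illustrative name and default values:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits-example
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 256Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi

$ kubectl create -f limit-range-example.yaml --namespace=my-namespace

With this in place, containers that omit requests or limits get these defaults applied, so they can still be admitted against the quota.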
How to analyze resource quota usage?
Once the resource quotas have been set, you will want to analyze how they are being used. This can be done in the following way:
$ kubectl get quota --namespace=my-namespace
NAME            AGE
quota-example   30s
$ kubectl describe quota quota-example --namespace=my-namespace
Name:            quota-example
Namespace:       my-namespace
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     8
requests.cpu     0     1
requests.memory  0     1Gi
In this example, no pods have been created yet. As long as the resource requirements of new pods in the namespace stay below the resource quota, they will be created. Otherwise, an error message will be returned.
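For reference, a pod that fits within the quota above could look like this (a minimal sketch; the name, image and values are illustrative). With the quota in place, omitting the resources section entirely would get the pod rejected, unless a LimitRange supplies defaults:

apiVersion: v1
kind: Pod
metadata:
  name: quota-pod-example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi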
Instead of manually tracking resource quotas and their usage via commands, you ideally want to monitor this automatically and on an ongoing basis, to get alerted when you are close to exhausting your resource quotas or when new pods fail to be created. This is where a monitoring tool such as CoScale comes in handy.
Monitoring resource quota usage with CoScale
CoScale was built specifically for container and Kubernetes monitoring. It integrates with Docker, Kubernetes and other container technologies to collect container-specific metrics and events. That means that in CoScale you can also check the resource quotas of each namespace.
The dashboard below shows an overview of how many resources and pods are assigned and used per namespace. You can also drill into an individual namespace to get a better understanding of which containers and services are using the most resources.
CoScale collects this information on a continuous basis and allows you to report on real-time as well as historical data. In addition, you can also set alerts, for example when CPU usage has reached a certain fraction of the values specified in the resource quota. In the example below, we specify that we want to get an alert if the CPU usage within a namespace reaches 80% of the CPU limits value. CoScale even supports forecasted alerts, so you can get notified ahead of time.
Besides getting early warnings like the alert above, you might also want to get an immediate alert if a pod cannot be created because of an exceeded resource quota. Thanks to its deep Kubernetes integration, CoScale automatically captures all Kubernetes events and messages. Then it’s just a matter of alerting on the right message, as shown in the screenshot below.
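For reference, these failures also surface as standard Kubernetes events; for example, a Deployment whose pods are blocked by a quota records FailedCreate events, which you can inspect manually (the exact output depends on your Kubernetes version):

$ kubectl get events --namespace=my-namespace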
Conclusion
In the previous blog post, we discussed how to control resources for individual pods using limits and requests. In this blog post, we discussed how you can assign resource quotas to namespaces to ensure fair resource usage of a Kubernetes cluster shared between different teams. Together, these two concepts allow you to efficiently manage how resources are being utilized on your cluster. In addition to taking advantage of these concepts, it is equally important to monitor and alert on resource usage relative to these constraints. Whether you are a cluster administrator or user, this helps you ensure that capacity is used as planned and lets you quickly intervene and make changes as needed.
This post was originally published on The Container Monitoring Blog.