Turbocharging your ML models with GPUs

Running ML workloads in the cloud: GPUs now available in Google Kubernetes Engine

Jane Elizabeth

After the successful launch and positive reception of the NVIDIA Tesla GPUs for Google Kubernetes Engine, the folks at Google Cloud Platform have introduced GPUs in Kubernetes Engine for a wider audience. We take a look at what this leap forward means for machine learning.

Last year, Google Cloud announced support for NVIDIA Tesla GPUs in Container Engine. After all, one of the most popular workloads on the platform was training machine learning models for better predictive analysis. Additional GPUs speed up training time, so it was only natural for Google Cloud to look in this direction. Since their debut, the GPUs have been a success.

Now, no longer limited to experimentation in alpha clusters, GPUs in Kubernetes Engine have made it to beta and are available for wider use along with the latest Kubernetes Engine 1.9 release.

GPUs in Kubernetes Engine

Using GPUs in Kubernetes Engine is like adding a little bit of nitrous to your machine. They serve as a turbocharger for computationally intensive workloads in your clusters, such as machine learning, image processing, and financial modeling.

Right now, the list of compatible GPU cards is not exceptionally long. As of the initial beta release, NVIDIA Tesla P100 and K80 GPUs are compatible with Kubernetes Engine. However, Google Cloud promises that V100s are on their way shortly.

While Kubernetes Engine does handle a lot of the automation, it also exposes a number of metrics to help users understand how their GPUs are being used. If you are running your GPU workloads in containers, Kubernetes Engine creates metrics for how busy the GPUs are, as well as memory usage, availability, and allocation. These metrics can even be visualized with Stackdriver.
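For context, a container consumes a GPU by requesting it as a schedulable Kubernetes resource. Here is a minimal sketch, assuming `kubectl` is already configured against a GPU-enabled cluster; the pod name and container image are illustrative, not part of the announcement:

```shell
# Schedule a pod that requests one NVIDIA GPU.
# nvidia.com/gpu is the standard Kubernetes resource name for NVIDIA GPUs;
# the pod name and image below are illustrative placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-pod
spec:
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu   # illustrative GPU-ready image
    resources:
      limits:
        nvidia.com/gpu: 1   # request one GPU for this container
EOF
```

Kubernetes will only schedule this pod onto a node that actually has a free GPU, which is what makes the autoscaling behavior described below possible.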


The addition of GPUs makes Google Cloud a flexible, high-performance platform for running ML workloads in the cloud.

It’s easy to create a cluster with GPUs in Kubernetes Engine. On the “Creating Kubernetes Cluster” page in the Cloud Console, expand the machine type section and select the type and number of GPUs.
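The same thing can be done from the command line. A minimal sketch, assuming the beta-era `gcloud` CLI with GPU support; the cluster name and zone here are illustrative:

```shell
# Create a GPU-enabled cluster from the command line instead of the
# Cloud Console. Cluster name and zone are illustrative placeholders;
# the --accelerator flag selects the GPU type and count per node.
gcloud beta container clusters create my-gpu-cluster \
  --zone us-central1-a \
  --cluster-version 1.9 \
  --accelerator type=nvidia-tesla-k80,count=1
```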

It’s also fairly simple to add nodes with GPUs to an existing cluster using the Node Pools and Cluster Autoscaler features. Node pools let your cluster use GPUs whenever you need them. The autoscaler automatically creates nodes with GPUs whenever GPU-requesting pods are scheduled, and scales back down when the GPUs are no longer used by any active pods.
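As a sketch of that workflow, the following adds an autoscaled GPU node pool to an existing cluster; the pool name, cluster name, zone, and node limits are all illustrative assumptions, not values from the announcement:

```shell
# Add an autoscaled GPU node pool to an existing cluster.
# All names and limits below are illustrative placeholders.
gcloud beta container node-pools create gpu-pool \
  --cluster my-existing-cluster \
  --zone us-central1-a \
  --accelerator type=nvidia-tesla-p100,count=1 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3
```

With `--min-nodes 0`, the expensive GPU nodes only exist while pods are actually requesting GPUs, which keeps costs down between training runs.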

Basically, the introduction of GPUs on Google Kubernetes Engine means faster, more efficient training of machine learning models. Which is something everyone can get behind!

How do I get started?

The complete documentation for GPUs on Kubernetes Engine is available here. Since GPUs on Kubernetes Engine are still in beta testing, you need to apply for a GPU quota through your Google Cloud Platform account. Right now, Google Cloud’s free trial includes $300 in credits that can be put toward GPUs.
