Kubernetes for data science and machine learning applications: Benefits

Terry Shea

What is the driver behind the growing interest in using Kubernetes for data science and machine learning applications? Terry Shea of Kublr explains why you shouldn’t avoid Kubernetes if you have ML and data science projects.

At Kublr we’ve been talking with customers and the community about the workloads they plan to run using containers and Kubernetes. We’re seeing a rapid uptick in interest in using Kubernetes for data science and machine learning applications.

Frameworks from MapReduce to Hadoop to Spark have created parallel processing capabilities that leverage clusters to speed up processing tasks. These clusters have frequently been managed with a framework’s own cluster manager (e.g., Spark’s standalone mode) or with Apache YARN or Mesos with Marathon.

Recent developments in Kubernetes for data science and machine learning include the 2.3 release of Apache Spark with “native” Kubernetes support. Mesosphere, the commercial company behind Marathon, announced its own support for Kubernetes at the end of last year. Google has developed, and of course open-sourced, Kubeflow, “A composable, portable, scalable, machine learning stack for Kubernetes”, for their popular TensorFlow machine learning framework.

There are tutorials and GitHub repositories for running Hadoop on Kubernetes. There’s a Special Interest Group (SIG) in the Kubernetes community on big data. There was a hackathon on managing data in cloud native environments and data science at the KubeCon conference in Copenhagen last week. And this list just scratches the surface of the activity.

SEE ALSO: Kubernetes installation and deployment: Key capabilities checklist

We think that one of the reasons for the increase in this activity around Kubernetes for data science and machine learning is that it enables IT to better support these applications. Having a common orchestration layer for all containerized applications has several benefits:

  • Better resource utilization through centralized scheduling of data science and other containerized applications,
  • Potential portability for workloads,
  • A single scheduling solution for multiple environments, on-premises or across multiple clouds,
  • The ability for IT to create self-service environments for data scientists and other data users.
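
As a sketch of that last point, IT might hand a data science team its own namespace with a resource quota. The names and limits below are illustrative assumptions, not a Kublr recommendation:

```yaml
# Hypothetical self-service sandbox: a per-team namespace plus a
# ResourceQuota capping CPU, memory, GPU, and pod counts.
# The name "ds-team" and all limits are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: ds-team
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ds-team-quota
  namespace: ds-team
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
    requests.nvidia.com/gpu: "4"
    pods: "50"
```

Data scientists can then deploy freely inside the namespace while the quota keeps them from starving other workloads on the shared cluster.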

Kubernetes can also support GPUs to speed up parallel processing, and it offers auto-scaling in environments that support it. In fact, Kubernetes provides two types of auto-scaling: pod auto-scaling, where more pods are automatically created in a cluster based on scaling rules, and cluster auto-scaling, where more nodes are added to a cluster based on flexible rules. With the addition of custom metrics, Kubernetes can now use finer-grained scaling rules, such as the number of tasks in a queue, instead of the earlier scaling metrics that revolved only around CPU and memory.
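
A pod auto-scaling rule driven by a custom metric might look like the following HorizontalPodAutoscaler sketch. The deployment name and the queue-depth metric are hypothetical, and the example assumes a metrics adapter is exposing the custom metric to the autoscaler:

```yaml
# Hedged sketch of pod auto-scaling on a custom metric (autoscaling/v2beta1).
# "queue-worker" and "queue_tasks_pending" are illustrative names.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metricName: queue_tasks_pending   # served by a custom metrics adapter
      targetAverageValue: "10"          # scale until ~10 pending tasks per pod
```

Here the autoscaler adds or removes worker pods to hold the average queue depth per pod near the target, rather than reacting only to CPU or memory pressure.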

The “native” integration with Spark really is something new. Spark assumes that the Kubernetes cluster already exists and provides a method for creating container images that can be deployed to it. Spark-submit can be used to submit an application to Kubernetes. In Kubernetes, one or more containers are placed in a pod.

Multiple pods are scheduled per node, and two types of nodes exist: master nodes and worker nodes. With native integration, Spark creates a Spark driver in a Kubernetes pod. The driver creates executors that also run in Kubernetes pods, then connects to them and executes the application. A complete description of the Spark on Kubernetes capabilities can be found on the Apache Spark project site.
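
For illustration, a spark-submit invocation against a Kubernetes master follows the pattern below (loosely modeled on the Spark 2.3 documentation; the API server address and container image are placeholders for your own environment):

```shell
# Illustrative only: submit a Spark application to a Kubernetes cluster.
# Replace the <placeholders> with your API server endpoint and image.
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

The `k8s://` master URL is what triggers the Kubernetes scheduler backend: the driver pod is created first, and it in turn requests the five executor pods.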

SEE ALSO: JAX Magazine is out: Machine learning from A to Z

This integration takes advantage of Kubernetes’ resource management capabilities while keeping Spark as the application-level scheduling mechanism. The roadmap for Spark on Kubernetes includes integration with advanced Kubernetes features such as affinity/anti-affinity pod scheduling parameters. More information on advanced scheduling can be found in the Kubernetes documentation.
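
As a hedged sketch of what such scheduling parameters look like, an anti-affinity rule that keeps replicas of the same app off a single node could be expressed in a pod spec like this (the label key and value are hypothetical examples):

```yaml
# Illustrative pod anti-affinity: spread pods labeled app=spark-exec
# across nodes so no two land on the same host.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values: ["spark-exec"]
      topologyKey: kubernetes.io/hostname
```

Affinity rules work the other way around: the same structure under `podAffinity` co-locates chatty pods to reduce network hops.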

So, where’s all this heading? Most likely towards supporting new use cases that may leverage Kubernetes’ broad extensibility. IoT use cases are often data science-related on the back-end. At the edge, they may require pre-processing of data, the ability to run nodes on ARM processors, handling low bandwidth and limited connectivity, and automated software deployments.

From a sensor or device to an edge node, the communication may use MQTT or a similar messaging protocol, but from the edge node back to the data center or cloud, IoT is really a streaming data application.

So, the potential exists for Kubernetes to provide application abstractions that simplify a broad range of use cases while enabling self-healing infrastructure management. At Kublr, we’ve been working on an architecture that supports this broad range of use cases. Feel free to download Kublr-in-a-Box to set up a Kubernetes cluster and then test Spark on Kubernetes.


Terry Shea

Terry Shea is the Chief Revenue Officer for Kublr, a comprehensive Kubernetes platform for the enterprise.
