GraphPipe makes deploying machine learning models simple
Oracle has just released a new open source tool: GraphPipe is designed to simplify and standardize the deployment of machine learning models. We talked to Vish Abrams, Architect, Cloud Development at Oracle, about the new tool, its benefits, its challenges, and more.
“GraphPipe is also a standard protocol”
JAXenter: What is GraphPipe and who should use it?
Vish Abrams: GraphPipe is a set of tools and a protocol for efficiently serving and querying machine learning models. It is ideal for people looking to move their machine learning models into production. Model serving should be simple!
JAXenter: With all the frameworks (and tools in general) out there, one could think that having a working machine learning model is a piece of cake but it really isn’t. What are the pitfalls one could fall into while trying to create a machine learning model?
Vish Abrams: The tooling and documentation for building machine learning models has gotten incredibly good over the past few years. That means that the difficult parts of machine learning applications have changed. The first challenge is trying to figure out how to use all of the fancy tooling to actually solve a business problem. There is no magic for solving that one: it requires a mix of engineering and experimentation. The second challenge is how to get the machine learning model you built into production.
JAXenter: How does GraphPipe plan to solve these challenges?
Vish Abrams: GraphPipe is focused on the deployment challenge; it makes it easy to deploy a machine learning model. It provides simple model servers that are straightforward to deploy, as well as standard clients, so that anyone can efficiently communicate with a model deployed via GraphPipe. In addition, the entire protocol is open, so it's easy to build new clients and servers and make it better for everyone.
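As a rough illustration of that client side, the sketch below follows the usage pattern from the project's published Python examples: `remote.execute` is the documented client call, while the server URL and the `make_batch` helper are hypothetical stand-ins introduced here for illustration.

```python
# Sketch of querying a GraphPipe-served model from Python.
# Assumes the `graphpipe` client package is installed and a model
# server is listening at SERVER_URL -- both are assumptions based on
# the project's published examples, not details from this interview.
import numpy as np

SERVER_URL = "http://127.0.0.1:9000"  # hypothetical local model server


def make_batch(samples, features):
    """Build a float32 input batch (illustrative helper)."""
    return np.random.rand(samples, features).astype(np.float32)


def classify(batch, url=SERVER_URL):
    """Send a batch to a GraphPipe server and return its predictions."""
    from graphpipe import remote  # client API from the GraphPipe docs
    return remote.execute(url, batch)
```

With a server running locally, `classify(make_batch(4, 10))` would return the model's output as a NumPy array, regardless of which framework the model was trained in.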
GraphPipe can actually help make its “competitors”, the existing model-servers, better!
JAXenter: Is GraphPipe a better TensorFlow?
Vish Abrams: Definitely not! GraphPipe is a protocol for serving models built with any framework. So it plays very nicely with all of the existing training frameworks, including TensorFlow.
JAXenter: There’s also a plug-in for TensorFlow which allows the inclusion of a remote model inside a local TensorFlow graph. What was the motivation behind this decision?
Vish Abrams: We see GraphPipe as unlocking latent potential for new model architectures. One of these architectures is a hybrid that does some computation locally and some remotely. You could imagine a local image recognition model combining the results of multiple remote state-of-the-art models, for example. The TensorFlow plug-in is one way that you could accomplish this.
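To make the hybrid idea concrete, here is a framework-agnostic sketch: a small local model produces features, and a remote GraphPipe-served model refines them. Everything except the `remote.execute` call (the documented client API) is a hypothetical placeholder; the TensorFlow plug-in would instead wire this kind of remote call directly into a local graph.

```python
# Conceptual sketch of the hybrid local/remote architecture described
# above. `local_embed`, REMOTE_URL, and the mean-pooling "model" are
# illustrative placeholders, not part of GraphPipe.
import numpy as np

REMOTE_URL = "http://models.example.com:9000"  # hypothetical remote server


def local_embed(image):
    """Stand-in for a cheap on-device model: mean-pool pixels to a vector."""
    return image.mean(axis=(0, 1)).astype(np.float32)


def hybrid_predict(image):
    """Compute features locally, then query a remote model to refine them."""
    features = local_embed(image)[np.newaxis, :]  # add a batch dimension
    from graphpipe import remote  # client API from the GraphPipe docs
    return remote.execute(REMOTE_URL, features)
```

The same pattern would let a lightweight local model fall back to, or ensemble with, larger remote models without caring which framework serves them.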
JAXenter: What does GraphPipe have that its competitors don’t?
Vish Abrams: It is clear that model serving has not received nearly the attention that model training has received in the various machine learning libraries. There isn’t anything out there that is as simple and efficient as our example model servers. But the best news is that GraphPipe is also a standard protocol. The absence of a standard means that each server has a custom (and in many cases, inefficient) API. This means that GraphPipe can actually help make its “competitors”, the existing model-servers, better! That’s the magic of open source.
JAXenter: How can developers get started with GraphPipe?
Vish Abrams: You can find documentation, links, and examples at our landing page on GitHub.
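For instance, the project's GitHub examples show an example TensorFlow model server being launched as a Docker container; the image name, demo model URL, and flags below are taken from those published examples and may change over time.

```shell
# Launch the example GraphPipe TensorFlow model server (per the
# project's README), serving a demo SqueezeNet model on port 9000.
docker run -it --rm \
    -p 9000:9000 \
    sleepsonthefloor/graphpipe-tf:cpu \
    --model=https://oracle.github.io/graphpipe/models/squeezenet.pb \
    --listen=0.0.0.0:9000
```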