Machine learning breaks free of the stacks of servers

TensorFlow Lite makes ML even more mobile-friendly

Jane Elizabeth

The most popular machine learning project becomes even more mobile-friendly with the introduction of TensorFlow Lite. Designed to be lightweight, cross-platform, and fast, TensorFlow Lite makes it even easier to deploy machine learning models on mobile or embedded devices.

TensorFlow is the most popular open source machine learning project on GitHub by a clear mile. And now it’s even more mobile-friendly, with the developer preview of TensorFlow Lite!

To be fair, developers have been able to deploy TensorFlow models on mobile and embedded devices since the 1.0 release, thanks to the TensorFlow Mobile API. However, TensorFlow Lite is specifically designed to be lightweight and fast, making it a natural fit for on-device machine learning.

TensorFlow Lite

As machine learning has grown in popularity, it’s become increasingly necessary to deploy ML models on mobile and embedded devices.

TensorFlow Lite is designed to be lightweight, with a small binary size and fast initialization. It also supports a variety of platforms, including Android and iOS. And to make the mobile experience better, it’s optimized for mobile devices, with improved loading times and support for hardware acceleration.

Since TensorFlow Lite supports the Android Neural Networks API, it can take advantage of new mobile devices with custom-built ML hardware. However, it falls back on optimized CPU execution if accelerator hardware isn’t available. So it doesn’t matter whether your mobile device is specifically enabled for ML: your models will run regardless.

SEE MORE: What’s new in TensorFlow 1.4?

Architecture

Here’s a closer look at the structural design of TensorFlow Lite. It is made up of:

  • A trained TensorFlow model saved on disk
  • A TensorFlow Lite Converter program that converts the model into the TensorFlow Lite file format
  • A TensorFlow Lite Model File, optimized for maximum speed and minimum size
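
Before the model ever reaches a device, it has to pass through that converter. As a minimal sketch of the step, here is the conversion using the Python converter API as it appears in later TensorFlow releases (the exact entry point in the developer preview may differ); the paths are placeholders:

```python
import tensorflow as tf

# Load a trained model from disk and convert it to the TensorFlow Lite
# file format. "saved_model_dir" and "model.tflite" are placeholder paths.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

# The resulting .tflite file is what gets bundled into the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```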

Once converted, the model file is deployed within the mobile app, with a:

  • Java API, which acts as a convenience wrapper around the C++ API on Android
  • C++ API, which loads the model file and invokes the interpreter. This library is available cross-platform, for similar usage on both Android and iOS.
  • Interpreter, which executes the model using a set of operators. It supports selective operator loading: without operators it is only 70KB, and with all of them it is 300KB. That’s a serious reduction from the 1.5MB required by TensorFlow Mobile. (A sketch of driving the interpreter follows below.)
  • If an Android device has custom-built ML hardware, the Interpreter can use the Android Neural Networks API; if not, it defaults to CPU execution.

Of course, developers can also define custom kernels through the C++ API, and the Interpreter can make use of them.
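
The Java and C++ APIs are what you’d use on the device itself, but the same interpreter is also exposed through Python in later TensorFlow releases, which makes for a quick way to sanity-check a converted model on a desktop before bundling it into an app. A minimal sketch, assuming a model.tflite file produced as above:

```python
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its input/output tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input that matches the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)

# Run inference and read back the result.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```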

SEE MORE: ML design is not unbiased: “Algorithms are neither neutral nor value-free”

Model support

Since TensorFlow was already venturing into the realm of mobile-friendly ML, a number of models have already been trained and optimized for this format. Here are a few that are already good to go for developers:

  • MobileNets is a family of mobile-first computer vision models for TensorFlow, designed to effectively maximize accuracy while being mindful of the restricted resources of an on-device or embedded application. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings, and segmentation, similar to how other popular large-scale models, such as Inception, are used.
  • Smart Reply is an on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
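
As an illustration of how ready-to-go these models are, here’s a sketch that pulls a pre-trained MobileNet from tf.keras.applications and converts it for on-device use. Note that this relies on TensorFlow 2.x APIs, newer than the developer preview covered here:

```python
import tensorflow as tf

# Download a MobileNet pre-trained on ImageNet (requires network access)
# and convert it to the TensorFlow Lite format. TF 2.x only.
model = tf.keras.applications.MobileNet(weights="imagenet")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("mobilenet.tflite", "wb") as f:
    f.write(tflite_model)
```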

SEE MORE: Open source speech recognition toolkit Kaldi now offers TensorFlow integration

Why switch from TensorFlow Mobile?

ML on mobile already exists, thanks to TensorFlow Mobile. Why switch? According to the TensorFlow team, TensorFlow Lite is the “evolution of TensorFlow Mobile”. As it matures, it will become the recommended solution for deploying models on mobile and embedded devices.

Right now, TensorFlow Lite is still in developer preview and under active development. Unlike TensorFlow Mobile, it isn’t yet meant to support production apps. But as the community grows and develops, it will begin to surpass TensorFlow Mobile.

In the meantime, you should head over to GitHub or the documentation pages to learn more about this exciting new approach to machine learning on mobile! I highly suggest checking out the overview and introduction in the documentation; they’re super helpful and informative. Contributions and suggestions are welcome.

Author
Jane Elizabeth
Jane Elizabeth is an assistant editor for JAXenter.com
