Another update for ML fans

TensorFlow 1.6: Increased support and bug fixes

Jane Elizabeth

The internet’s favorite open source machine learning project is back with another update. What’s in TensorFlow 1.6? We take a look at some of the major features and improvements, bug fixes, breaking changes, and other issues.

It’s only been two months since the last release from TensorFlow. However, they’ve certainly been busy. This update focuses mostly on bug fixes, API changes, and a few new features. We’re not holding that against them, though. Let’s take a look at what’s new in this ML favorite.

Major updates in TensorFlow 1.6

There’s not a lot of major changes in TensorFlow 1.6. This release focuses on improved support, documentation, and a few other API changes.

TensorFlow 1.6 adds a second version of the Getting Started document, aimed specifically at ML newcomers. It's an excellent resource if you're an absolute beginner at machine learning.

There’s a couple of API changes, including a prepare_variance Boolean with a default setting to false for backwards compatibility.

Other big changes:

  • New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer that access _beta[12]_power will need to be updated.
  • tf.estimator.{FinalExporter,LatestExporter} now export stripped SavedModels. This improves forward compatibility of the SavedModel.
  • FFT support added to XLA CPU/GPU.
  • Android TF can now be built with CUDA acceleration on compatible Tegra devices (see contrib/makefile/ for more information).
  • Added convolutional Flipout layers.
  • Added probabilistic convolutional layers.
  • Added a client-side throttle for Google Cloud Storage.

SEE MORE: TensorFlow 1.5: Streamlined execution, lightweight options for ML

Breaking changes, bug fixes, and other known issues

It’s not an update without any breaking changes. TensorFlow 1.6 is no exception. In this release, prebuilt binaries are now built against CUDA 9.0 and cuDNN 7. Also, the prebuilt binaries will use AVX instructions, which may break TF on older CPUs.

As for bugs, there’s a pretty big one regarding the tensorboard command. It occasionally goes missing after certain upgrade flows due to a pip package conflict. See the TensorBoard 1.6.0 release notes for the fix.

Additionally, using XLA:GPU with CUDA 9 and CUDA 9.1 produces garbage results or CUDA_ERROR_ILLEGAL_ADDRESS failures.

In December 2017, Google discovered that the PTX-to-SASS compiler in CUDA 9 and CUDA 9.1 does not properly compute the carry bit when decomposing 64-bit address calculations with large offsets into 32-bit arithmetic in SASS. As a result, these versions of ptxas miscompile most XLA programs that use more than 4GB of memory, leading to garbage results or CUDA_ERROR_ILLEGAL_ADDRESS failures.
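To see why a dropped carry bit matters, here is a plain-Python sketch of decomposing a 64-bit addition into 32-bit halves. This illustrates the arithmetic only, not ptxas itself; dropping the carry corrupts exactly those address sums that cross a 4GB boundary:

```python
MASK32 = (1 << 32) - 1

def add64_via_32bit(a, b, propagate_carry=True):
    """Add two 64-bit values using only 32-bit pieces."""
    lo = (a & MASK32) + (b & MASK32)
    # This carry from the low half is the bit the buggy compiler mishandled.
    carry = (lo >> 32) if propagate_carry else 0
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

base, offset = 0xFFFF0000, 0x20000  # the sum crosses the 4GB boundary
assert add64_via_32bit(base, offset) == base + offset
assert add64_via_32bit(base, offset, propagate_carry=False) != base + offset
```

Any address calculation whose low 32 bits overflow, which is routine once a program touches more than 4GB, lands on the wrong address when the carry is lost.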

A fix is expected in CUDA 9.1.121 in late February 2018; however, no fix is expected for CUDA 9.0.x. Right now, the only workarounds are to downgrade to CUDA 8.0.x or to disable XLA:GPU.
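For the second workaround, a sketch of keeping XLA's JIT off via the TF 1.x session config (this assumes you had been enabling JIT through ConfigProto; leaving the global JIT level at OFF means XLA:GPU is never invoked):

```python
import tensorflow as tf

# Workaround: keep the global JIT level off so XLA:GPU is never used.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.OFF
sess = tf.Session(config=config)
```

Graphs built with explicit per-op XLA annotations would also need those annotations removed.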

SEE MORE: Top 5 open source machine learning projects

Get it now

Interested in trying out this machine learning favorite? Upgrade to TensorFlow 1.6 from the project's website or via GitHub.

Jane Elizabeth is an assistant editor for
