PyTorch 1.1 improves JIT compilation and offers TensorBoard support
PyTorch version 1.1 arrives with new APIs, improvements, and features, including experimental TensorBoard support and the ability to write custom recurrent neural networks. The release also brings newly open sourced developer tools and machine learning offerings from the PyTorch team.
Several months after the release of PyTorch 1.0, the first feature update has arrived. PyTorch 1.1 brings new developer tools, experimental TensorBoard support, a few breaking changes, improvements, new features, and new APIs.
See what’s new in the deep learning platform’s latest release.
Experimental TensorBoard support
Version 1.1 supports TensorBoard for visualization and debugging. TensorBoard is a visualization toolkit made up of a suite of web applications.
This new implementation is currently experimental, so report any issues you catch and watch for future news and potential changes. To begin using TensorBoard, run from torch.utils.tensorboard import SummaryWriter.
The release notes on GitHub list just some of its use cases: “Histograms, embeddings, scalars, images, text, graphs, and more can be visualized across training runs.”
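As a minimal sketch of the scalar and histogram use cases mentioned above (assuming both torch and the tensorboard package are installed; the log directory and tag names are illustrative):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

# "runs/demo" is an illustrative log directory
writer = SummaryWriter(log_dir="runs/demo")

for step in range(5):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("train/loss", loss, global_step=step)
    writer.add_histogram("weights", torch.randn(100), global_step=step)

writer.close()  # flushes pending events to disk
```

The logged runs can then be viewed by pointing the TensorBoard web app at the log directory, e.g. tensorboard --logdir runs.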
Just-in-time (JIT) compilation
Version 1.1 introduces improvements to just-in-time (JIT) compilation. According to the release notes on GitHub, here’s what’s been modified:
- Attributes in ScriptModules: Assign attributes on a ScriptModule by wrapping them with torch.jit.Attribute. This update supports all types available in TorchScript. After assigning an attribute, PyTorch saves it in a separate archive in the serialized model binary.
- Dictionary and list support in TorchScript: Lists and dictionary types behave like Python lists and dictionaries.
- User-defined classes in TorchScript: Currently in the experimental phase, so as usual, be aware of potential future changes. TorchScript supports annotating a class with the torch.jit.script decorator.
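The first two bullets can be sketched together: a ScriptModule whose dictionary attribute is declared with torch.jit.Attribute, mutated from a compiled method just as it would be in Python. The module and attribute names here are illustrative, not from the release notes.

```python
from typing import Dict, List

import torch


class VocabCounter(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        # torch.jit.Attribute wraps the value with its TorchScript type;
        # on serialization, the attribute is saved in a separate archive
        self.counts = torch.jit.Attribute({}, Dict[str, int])

    @torch.jit.script_method
    def forward(self, words: List[str]) -> Dict[str, int]:
        # lists and dicts behave like their Python counterparts
        for w in words:
            if w in self.counts:
                self.counts[w] = self.counts[w] + 1
            else:
                self.counts[w] = 1
        return self.counts
```

Calling the module repeatedly accumulates counts in the attribute, exactly as the equivalent eager-mode Python would.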
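For the third bullet, a hedged sketch of a user-defined TorchScript class (the class and function names are illustrative): the class is compiled with the torch.jit.script decorator and can then be constructed and used inside other scripted code.

```python
import torch


@torch.jit.script
class Pair(object):
    def __init__(self, first: int, second: int):
        self.first = first
        self.second = second

    def total(self) -> int:
        return self.first + self.second


@torch.jit.script
def use_pair(a: int, b: int) -> int:
    # the scripted class behaves like a regular Python class here
    p = Pair(a, b)
    return p.total()
```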
Recurrent neural networks
The PyTorch team wrote a tutorial on one of the new features in v1.1: support for custom recurrent neural networks (RNNs).
According to the PyTorch Team:
Our goal is for users to be able to write fast, custom RNNs in TorchScript without writing specialized CUDA kernels to achieve similar performance. In this post, we’ll provide a tutorial for how to write your own fast RNNs with TorchScript. To better understand the optimizations TorchScript applies, we’ll examine how those work on a standard LSTM implementation but most of the optimizations can be applied to general RNNs.
Follow the tutorial to begin writing custom RNNs.
New tools & machine learning offerings
Alongside the release of 1.1, PyTorch also introduced new projects and tools for machine learning engineers.
View the full list of these new offerings on the Facebook for Developers blog.
These include the newly open sourced PyTorch BigGraph, which allows faster embedding of graphs where the model is too large to fit in memory. For demonstration, PyTorch released a public embedding of the full Wikidata graph, with 50 million Wikipedia concepts, for the AI research community.
The blog also highlights noteworthy open source projects from the PyTorch community, as well as new resources for the machine learning community. PyTorch continues to grow, even in academia, where it now finds a home in universities across the United States. A new Udacity course has also been added for learning outside the classroom.
These are just some of the highlights of what’s new in version 1.1. View the full release notes on GitHub and take note of the latest deprecations, bug fixes, and more.