Written like math, but with fewer quadratic formulas

Tile: A simple, compact language for describing machine learning

Jane Elizabeth

What’s cooler than machine learning? Machine learning that’s made by machines. In Tile, a new machine learning language from Vertex.AI, crucial support structures are automatically generated to save time and effort.

Recently, Vertex.AI released a new machine learning framework called PlaidML that is intended to make deep learning work everywhere. However, one of the big hurdles in scaling out to a wide variety of platforms is software support. It’s hard to bring deep learning to every developer if the framework isn’t compatible at its most fundamental level. And that brings us to Tile.

Kernels, the hand-crafted routines bundled into software libraries, are usually what bridge the gap between new frameworks and the underlying system. But Tile takes a different approach. While it does use kernels, they aren't coded by people. Instead, the kernels for new platforms are machine-generated.

It’s Tiles all the way down

Tile is a simple, compact language for describing machine learning operations. It’s an intermediate tensor manipulation language that is used in PlaidML’s backend to produce custom kernels for each specific operation on each GPU. That’s right, the kernels for your machine learning framework are themselves written by a machine.

Tile describes machine learning operations in a simple and efficient manner, making them easy to implement on parallel computing architectures. The automatically generated kernels make it substantially easier to add support for GPUs and new processors.

SEE MORE: Top 5 open source machine learning projects

More about Tile:

  • Control-flow & side-effect free operations on n-dimensional tensors
  • Mathematically oriented syntax resembling tensor calculus
  • N-Dimensional, parametric, composable, and type-agnostic functions
  • Automatic Nth-order differentiation of all operations
  • Suitable for both JITing and pre-compilation
  • Transparent support for resizing, padding & transposition

Tile’s syntax balances expressiveness with optimizability, covering the range of operations needed to build neural networks. Let’s take a look at one example, a matrix multiplication:

function (A[M, L], B[L, N]) -> (C) {
    C[i, j: M, N] = +(A[i, k] * B[k, j]);
} 
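
To make the contraction concrete, here is a minimal NumPy sketch of what that expression computes (purely illustrative; it shows the semantics, not how PlaidML actually executes the operation). Each output element C[i, j] is the aggregation, indicated by the +, of A[i, k] * B[k, j] over the shared index k, the same thing np.einsum('ik,kj->ij', A, B) would spell out.

import numpy as np

def tile_matmul_reference(A, B):
    # Reference semantics of C[i, j: M, N] = +(A[i, k] * B[k, j]):
    # sum over the shared index k for every output position (i, j).
    M, L = A.shape
    L2, N = B.shape
    assert L == L2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(M):
        for j in range(N):
            for k in range(L):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(tile_matmul_reference(A, B), A @ B)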

The Tile version certainly looks like something out of a high school math textbook. Besides giving us all flashbacks to Algebra 1, Tile fully supports automatic differentiation. It was also designed to be both parallelizable and analyzable: in Tile, it’s possible to reason about issues such as cache coherency, shared memory usage, and memory bank conflicts.
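
As a rough illustration of what that automatic differentiation has to produce for the example above, the gradients of a matrix multiplication are themselves just contractions. The NumPy sketch below (with dC standing in for the gradient flowing back into C) only spells out the math Tile derives on its own; it is not Tile or PlaidML code.

import numpy as np

def matmul_gradients(A, B, dC):
    # Gradients of C = A @ B with respect to its inputs.
    # dL/dA[i, k] = sum over j of dC[i, j] * B[k, j]
    # dL/dB[k, j] = sum over i of A[i, k] * dC[i, j]
    dA = dC @ B.T
    dB = A.T @ dC
    return dA, dB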

SEE MORE: What’s new in TensorFlow 1.4?

Tile also helps keep the Keras backend for PlaidML quite small. Since Tile is the intermediate representation, the entire Keras backend is written in fewer than 3,000 lines of Python, which allows new ops to be implemented quickly. In the future, the Vertex.AI team hopes to take the same approach to make PlaidML compatible with other popular ML frameworks like TensorFlow and PyTorch.
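
To see why such a backend stays thin, consider a hypothetical sketch of what a single op might boil down to: a short Tile source string handed to a compiler that generates the device-specific kernel. The names below (compile_tile, the dot wrapper) are illustrative stand-ins, not PlaidML’s actual API.

# Hypothetical sketch, not PlaidML's real API: a backend op reduces to a few
# lines of Tile source plus a call into the Tile compiler, which generates
# the kernel for whatever device is present.

MATMUL_TILE_SRC = """
function (A[M, L], B[L, N]) -> (C) {
    C[i, j: M, N] = +(A[i, k] * B[k, j]);
}
"""

def dot(compile_tile, a, b):
    # compile_tile is a stand-in for the framework's Tile compiler entry point.
    op = compile_tile(MATMUL_TILE_SRC)
    return op(a, b)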

The whole language is still quite new and subject to change, but it’s getting close to a formal specification. If you’re interested in a closer look at Tile and PlaidML, head over to Vertex.AI. There’s more information available on GitHub, as well as a tutorial on how to write Tile.

Author
Jane Elizabeth
Jane Elizabeth is an assistant editor for JAXenter.com
