Interview with the creators of Julia

“Julia is comparable to Python for simple machine learning tasks and better for complex ones”

Maika Möbus

The initial release of the Julia programming language was eight years ago, in 2012. We spoke to the four creators of the language, Dr. Viral B. Shah, Dr. Jeff Bezanson, Stefan Karpinski and Prof. Alan Edelman, to find out whether Julia has been able to live up to their high expectations. They also went into detail about the various use cases Julia is applied to today, how the language compares to Python, and where it is headed in the future.

JAXenter: When you created Julia, you stated your reasons for developing the language in a blog post from 2012. Looking at it now, has Julia lived up to—or even exceeded—your expectations?

Native Julia programs are often 10x-100x faster than similar programs in R, Python, Matlab, etc.

Julia team: We certainly believe that Julia has lived up to the expectations of the original blog post. Today, Julia achieves performance that is orders of magnitude better than other dynamic languages for technical computing. Native Julia programs are often 10x-100x faster than similar programs in R, Python, Matlab, etc. Here’s a recent machine learning example discussed on Twitter, where Julia is 10x-100x faster than Python.
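The performance claim rests on the fact that ordinary Julia code is compiled to native machine code on first call, so plain loops run at C-like speed without dropping into another language. A minimal sketch of our own (not from the interview) of what "native Julia" code looks like:

```julia
# An ordinary Julia loop: no vectorization tricks, no C extension.
# The function is JIT-compiled to machine code the first time it is called.
function mysum(v)
    s = zero(eltype(v))
    for x in v
        s += x
    end
    return s
end

mysum(collect(1.0:1000.0))  # == 500500.0
```

In R, Python, or Matlab a loop like this would typically need to be rewritten in C (or replaced with a vectorized library call) to be fast; in Julia the naive version is already the fast version.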

SEE ALSO: Julia: The programming language of the future?

At the same time, Julia is general purpose, and provides facilities for creating dashboards, documentation, REST APIs, web applications, integration with databases, and much more. As a result, Julia is now seeing significant commercial adoption in a number of industries. Data scientists and engineers across industries not only use Julia to develop their models, but are able to deploy their programs to production with a single click using Julia Computing’s products.

There’s still a lot to be done, naturally. While we now have a thriving ecosystem of over 3000 Julia packages and the ability to call any library written in Python, R, C, C++ or Java, our recent focus has been on improving Julia’s multicore, GPU and distributed computing capabilities.

JAXenter: What was the most difficult part in creating a new programming language—and would you approach it differently with the knowledge and experience you have today?

Julia team: Programming languages are one of the most general constructs in computing. They are fundamentally the interface between the programmer and the computer. Every small decision one makes as a designer affects the productivity of hundreds of thousands of programmers and researchers on a daily basis.

A programming language takes at least 10 years to reach critical mass – on both technical and social fronts.

One of the difficult parts is staying power. We started the project in 2009 and made a public open source announcement in 2012. It was clear in the early days that we could solve the difficult “two language problem” – having performance and productivity in the same language. It was unclear, though, whether the world at large would adopt it. While it is easy to work on a hobby for 2-3 years, it is hard to sustain such effort over a long period and put in the kind of serious work that takes it from being a hobby to the truly robust, industrial-grade system that it is today. A programming language takes at least 10 years to reach critical mass – on both technical and social fronts. We all did many different things in the early years. Viral worked for the Government of India’s Aadhaar project, Stefan was a data scientist at Etsy, and Jeff was working on a PhD at MIT. Over the years, we were able to keep our resolve and gravitate towards spending more of our time on Julia. About 5 years ago, we founded Julia Computing, in order to bring Julia to enterprises, and as a way for us to work on what we are passionate about full time.

We are incredibly lucky to be working on Julia for the last 10 years. The original founding team is not only together, but the core team that builds Julia has grown significantly. Looking back at things, while we don’t think we have done everything perfectly, we do not believe we have had any major missteps or setbacks.

JAXenter: In general, what use cases do you believe Julia is best suited for?

Julia team: Julia is well suited for all forms of technical computing. In data science and engineering workflows, performance is essential – whether it is to analyze extremely large datasets on the newest CPUs, run complex AI models at scale on GPUs, or build scientific simulations on supercomputers. In all these areas, Julia provides at least a 10x advantage over the competition. It empowers a small team of data scientists and engineers to achieve what would have otherwise needed a much larger team. It makes it possible to be first in the market with a new product, while doing it all at lower cost.

JAXenter: How does Julia compare to Python when used for machine learning tasks?

Julia team: Julia is comparable to Python for simple machine learning tasks and better for complex ones. When an existing Python library does not provide a ready-to-use function, you have to write it yourself, and in Python that is a challenge: you either live with poor performance or start writing extensions in C or Cython. In Julia, when you have to roll your own solution, it is not only simple to implement but often ends up even faster than pre-existing libraries. We notice that many users like Julia’s Flux and Knet packages for this reason – you just write normal code, you can differentiate it, and it is often already fast enough. For the same reason, many users also like Julia’s Turing.jl package for probabilistic programming.
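The “write normal code and differentiate it” idea can be sketched with Zygote.jl, the automatic-differentiation engine behind Flux. This is our own minimal illustration, not an example from the interview; it assumes Zygote is installed:

```julia
using Zygote  # the AD backend used by Flux

# An ordinary Julia function – no special tensor types, no graph-building API.
f(x) = 3x^2 + 2x

# gradient returns a tuple, one entry per argument.
g = gradient(f, 2.0)[1]  # == 14.0, since f'(x) = 6x + 2
```

The same mechanism differentiates through loops, user-defined types, and existing library code, which is what the Flux and Knet packages build on.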

Julia is comparable to Python for simple machine learning tasks and better for complex ones.

In systems such as TensorFlow, one is usually unable to use the capabilities of standard Python packages. Julia’s machine learning frameworks, by contrast, allow reusing existing Julia packages for things like file I/O, statistical distributions, distributed computing, image processing, etc. Deployment is simply a matter of compiling your Julia code like any other Julia program. The larger frameworks are paying close attention, and Julia’s tools have influenced next-generation redesigns like TensorFlow 2.0.

As a result, we’ve seen researchers doing many things that would be hard (or impossible) in other languages. For example, one can turn complex Julia packages like ray tracers into ML models and build computer vision or robotic control systems that train incredibly quickly. These ideas are very general and have been applied in areas as far out as quantum algorithms and quantum ML, Nordic energy trading, designing photonic chips, power flow in electrical circuits, medical imaging, exascale computing, infectious disease modelling, and even traffic management. You can even turn Julia’s ML packages back on themselves, and learn better training algorithms. It’s amazing how far people are pushing this system to do things we’d never have imagined ourselves.

In the area of scientific machine learning, Julia provides a complete SciML stack that is simply not available in Python or any other language ecosystem.

JAXenter: Julia 1.4 was released in March 2020. What are the features you are most excited about?

Julia team: Multithreading was a major capability that we announced in 1.3, and it has become even faster and more robust in 1.4. There are many other performance enhancements in Julia 1.4 as well. One of the major ones concerns the “time to first plot” issue: where Julia previously took over 30 seconds to generate the first plot after startup, it now takes 12 seconds. Reducing compilation latency is a top priority for the compiler team these days.
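The composable, task-based threading introduced in Julia 1.3 lets ordinary code spawn work onto whatever threads are available. A small sketch of our own (assuming Julia is started with multiple threads, e.g. `julia -t 4`; it also runs correctly, just serially, on one thread):

```julia
using Base.Threads: @spawn

# Recursively sum an array by spawning the left half as a task
# and computing the right half on the current task.
function psum(v)
    length(v) <= 1_000 && return sum(v)
    mid   = length(v) ÷ 2
    left  = @spawn psum(@view v[1:mid])      # runs on any free thread
    right = psum(@view v[mid+1:end])
    return fetch(left) + right
end

psum(collect(1:10_000))  # == 50_005_000
```

Because `@spawn` creates lightweight tasks rather than pinning work to specific threads, nested and library-level parallelism compose instead of oversubscribing the machine.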

Reducing compilation latency is a top priority for the compiler team these days.

Julia’s new BinaryBuilder for binary artifacts is also being used extensively in Julia 1.4 for the first time. This means that instead of trying to build binary dependencies on each client machine – which sometimes works, but can easily fail and be very frustrating – in 1.4, for binary dependencies that support it, they are just unpacked and can be used without any build step. This provides a much more reliable experience and means that complex stacks like GUI toolkits “just work” out of the box and do so in a consistent way across platforms. GPU capabilities also work out of the box when you install packages such as CuArrays.jl. It’s kind of a magical experience. The underlying tech for BinaryBuilder is pretty mind-blowing, but the real magic has been that so many people from the community have come together to create server-side cross-platform build recipes for various libraries. All of this technology also means that application deployment is super reliable, reproducible and portable.

SEE ALSO: Julia 1.4 adds new language features and library updates

JAXenter: What do you have planned for the future?

Julia team: There are several areas of work in the future – parallel computing, more compiler support for differentiable programming, support for other GPU accelerators in addition to NVIDIA (such as Intel and AMD), improvements to code-generation, improvements to debugging workflows, better error messages and an improved parser, better IDE integration, package statistics for package authors, and much more.

Also, at Julia Computing, we continue to focus on stability and reliability and building products for supporting enterprises with Julia. Towards this end, we are building JuliaTeam (to make it easy for a team of developers to build Julia applications) and JuliaRun (for effective scale-up and deployment of Julia applications).

JAXenter: Thank you for the interview!

Maika Möbus
Maika Möbus has been an editor for Software & Support Media since January 2019. She studied Sociology at Goethe University Frankfurt and Johannes Gutenberg University Mainz.
