Academic ingenuity gives Google a run for its money

Fast.ai is David to Google’s AI Goliath

Eirini-Eleni Papadopoulou
© Shutterstock / Leremy

Prepare to be dazzled by what this small team of academics managed to achieve. Fast.ai is an organization that aspires to make machine learning accessible to everyone. Their crowning achievement? Well, not much. Just outperforming the AI giant Google!

Fast.ai was created to “make neural nets uncool again”. Yeah, you read that right!

Put together by the entrepreneur, business strategist, developer, and educator Jeremy Howard and co-founder Rachel Thomas, fast.ai aims to make machine learning available to everyone.

According to their own interpretation of the organization’s slogan, “Making neural nets uncool again”, being cool is all about being exclusive. And fast.ai aims to achieve exactly the opposite.

Despite the indisputably noble cause of the organization, this is not why the team made headlines across the tech world. The fast.ai team managed to create an AI algorithm that outperforms code from Google’s researchers, according to DAWNBench.

Yup, you read that one right as well!

Let’s take a step back and see how that happened.

Goliath and David

It all started four months ago, when the fast.ai team, along with some of the students from their MOOC and the in-person course at the Data Institute at USF, achieved enormous success in the DAWNBench competition by winning the race for fastest training of CIFAR-10 overall, and fastest training of Imagenet on a single machine (a standard AWS public cloud instance).

SEE ALSO: The state of machine learning in 2018

The driver behind the team’s participation in this competition was the aspiration to show the world that “you don’t have to have huge resources to be at the cutting edge of AI research.”

An AI speed test shows clever coders can still beat tech giants like Google and Intel.

The outcome of the competition piqued the interest of many AI aficionados and experts. Among them was DIU researcher Yaroslav Bulatov, who asked the question: could you even beat Google’s impressive TPU Pod result? This led to Jeremy Howard, Yaroslav Bulatov, and fast.ai alum Andrew Shaw teaming up to achieve just that.

The results?

The team managed to train Imagenet to 93% accuracy in just 18 minutes, using 16 public AWS cloud instances, each with 8 NVIDIA V100 GPUs, running the fast.ai and PyTorch libraries.

This constitutes a new speed record for training Imagenet to this accuracy on publicly available infrastructure and is 40% faster than Google’s DAWNBench record on their proprietary TPU Pod cluster!

Are you dazzled yet?

Not only that, but the experiment used the same number of processing units as Google’s benchmark (128) and cost around $40 to run (including the cost of machine setup time)!
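
What does training across 128 GPUs actually look like in code? The sketch below is not the fast.ai team’s implementation (their blog post and GitHub repo have the real thing); it is only a minimal, hypothetical PyTorch illustration of the underlying technique, data-parallel training, where every GPU holds a copy of the model and gradients are synchronized across processes. The FakeData stand-in dataset and all hyperparameters are assumptions made purely so the example runs anywhere.

    # Hypothetical sketch of multi-GPU, data-parallel training in PyTorch.
    # This is NOT the fast.ai team's record-setting code; it only shows the
    # general pattern of spreading one training job across many GPUs.
    import torch
    import torch.distributed as dist
    import torchvision.models as models
    import torchvision.transforms as T
    from torch.nn.parallel import DistributedDataParallel
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler
    from torchvision.datasets import FakeData

    def main():
        # Each process drives one GPU; rank and world size are supplied
        # by a launcher such as `python -m torch.distributed.launch`.
        dist.init_process_group(backend="nccl")
        local_rank = dist.get_rank() % torch.cuda.device_count()
        torch.cuda.set_device(local_rank)

        model = models.resnet50().cuda(local_rank)
        model = DistributedDataParallel(model, device_ids=[local_rank])

        # Stand-in dataset so the sketch runs anywhere; a real job would
        # point the DistributedSampler at ImageNet so each process sees
        # its own shard of every epoch.
        dataset = FakeData(size=1024, image_size=(3, 224, 224),
                           num_classes=1000, transform=T.ToTensor())
        loader = DataLoader(dataset, batch_size=64,
                            sampler=DistributedSampler(dataset))

        criterion = torch.nn.CrossEntropyLoss().cuda(local_rank)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                    momentum=0.9)

        for images, targets in loader:
            images = images.cuda(local_rank)
            targets = targets.cuda(local_rank)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()   # gradients are averaged across processes
            optimizer.step()

    if __name__ == "__main__":
        main()

In the actual record-setting runs, this basic pattern was combined with further techniques such as progressive image resizing and carefully tuned learning-rate schedules, which Jeremy Howard’s blog post describes in detail.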

So, what does that mean? Being able to train on datasets of >1 million images has a significant impact on how AI is approached. More specifically, according to the official blog post by Jeremy Howard, the benefits include:

  • Organizations with large image libraries, such as radiology centers, car insurance companies, real estate listing services, and e-commerce sites, can now create their own customized models. While transfer learning with so many images is often overkill, for highly specialized image types or fine-grained classification (as is common in medical imaging) larger volumes of data may give even better results (a sketch of the transfer-learning pattern follows this list)
  • Smaller research labs can experiment with different architectures, loss functions, optimizers, and so forth, and test on Imagenet, which many reviewers expect to see in published papers
  • By allowing the use of standard public cloud infrastructure, researchers can get started on cutting-edge deep learning work without any up-front capital expense.
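
To make the first point concrete, here is a minimal, hypothetical transfer-learning sketch in plain PyTorch (the fast.ai library wraps this pattern at a much higher level). The ResNet-34 backbone and the five-class head are invented for illustration, not taken from the article.

    # Hypothetical transfer-learning sketch in plain PyTorch.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Start from an ImageNet-pretrained backbone.
    model = models.resnet34(pretrained=True)

    # Freeze the pretrained features so only the new head trains at first.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier for a custom task, e.g. five categories
    # of a company's own images (an invented example).
    model.fc = nn.Linear(model.fc.in_features, 5)

    # Only the new head's parameters are handed to the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

After the new head converges, the frozen layers are typically unfrozen and fine-tuned at a lower learning rate, which is where larger volumes of domain-specific images can pay off.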

Visit the original blog post by Jeremy Howard for more detailed information on the experiment infrastructure and results. You can also have a look at the GitHub repo of the fast.ai deep learning library.

Does size matter after all?

It is inevitable that big companies get most of the spotlight for their AI research. That does not mean, however, that only big companies have what it takes to compete in the big AI research arena.

For all the industry’s talk about democratization, it’s really the case that advantages accrue to people with big computers.

Jack Clark, Strategy & Communications Director at OpenAI

Big results do not necessarily need big compute, and fast.ai’s success confirms just that.

The ingenuity of a small group of academics produced an AI algorithm that outperformed the giant that is Google, with its enormous team, years of research, and vast resources.

Making deep learning more accessible has a far higher impact than focusing on enabling the largest organizations.

Jeremy Howard, founder of fast.ai

Not everyone seems to agree with this argument, though.

Matei Zaharia, a professor at Stanford University and one of the creators of DAWNBench, is an outspoken supporter of what fast.ai has achieved and of the benefits of its results. Even so, he believes that big compute remains key for many AI tasks.

SEE ALSO: Why are so many machine learning tools open source?

Where do you stand in this debate? It is safe to say that the fast.ai achievement is an extraordinary one. But does that mean that ingenuity, and ingenuity alone, makes big compute redundant for AI research? Let us know in the poll below.


“Big results need big compute”



Author
Eirini-Eleni Papadopoulou
Eirini-Eleni Papadopoulou is an assistant editor for JAXenter.com. She recently finished her master’s in Modern East Asian Studies and plans to return to her old hobby: computer science.
