Is it the future yet? Machine learning's real-world uses
Did you know you can count bees with AI? We take a look at some real-world use cases for machine learning that you might have missed.
By now, we are all familiar with how machine learning is changing our daily lives. Retail recommendations, voice recognition software, and search engine results are just some of the daily uses for machine learning that hog the spotlight.
While these are all life-changing use cases, what about some lesser-known ways machine learning is being used? It’s time to take a deeper look at some of the more creative and eclectic real-world innovations happening in the realm of ML.
Learn how to walk
DeepMind explored locomotion and created a stick figure that learned how to walk, run, and jump through reinforcement learning. Humanoid creations, alongside insect-like models, learned the delicate movements needed to scale staircases and maneuver through rough terrain.
While the results may look goofy at times, it shows that machine learning can tackle agile and flexible movements.
On their blog, DeepMind states, “Achieving flexible and adaptive control of simulated bodies is a key element of AI research. Our work aims to develop flexible systems which learn and adapt skills to solve motor control tasks while reducing the manual engineering required to achieve this goal. Future work could extend these approaches to enable coordination of a greater range of behaviours in more complex situations.”
Predict wine quality
Overwhelmed by wine choices? Let a robot decide. FreeCodeCamp’s tutorial on machine learning has everything you need to teach a machine about wine. (Bonus: the tutorial teaches alongside Game of Thrones memes. Soon you too will drink and know things.) The tutorial is a step-by-step walkthrough that even beginners can follow to create their own prediction model.
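To give a flavor of what such a prediction model involves, here is a minimal sketch, not the tutorial's actual code: a tiny logistic regression trained by gradient descent on made-up "wine" features (the feature names, data, and threshold are all illustrative assumptions).

```python
import numpy as np

# Toy stand-in for a wine dataset: two made-up features
# (think volatile acidity, alcohol) and a binary "good wine" label.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
# Invented ground-truth rule: more "alcohol" than "acidity" means good.
y = (X[:, 1] - X[:, 0] > 0).astype(float)

# Minimal logistic regression trained with gradient descent on log-loss.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "good"
    w -= 0.5 * (X.T @ (p - y) / n)          # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

Real tutorials swap the synthetic arrays for a genuine wine dataset and a library classifier, but the train-then-predict loop is the same shape.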
Find Waldo

Since 1987, children have been hunting for Waldo (or Wally, as he is known in the UK) in the popular picture books Where’s Waldo?. The red-and-white striped sweater is iconic and eye-catching, even for a robot. Redpepper developed a robot using Google’s AutoML Vision service to search for and spot Waldo in a matter of seconds.
Images of Waldo were fed into the machine so it could learn his features and recognize him in a crowd. The robot then searches through faces, comparing them to the images sent via Google AutoML Vision and looking for a confidence rating of over 95%. Once Waldo is found, a finger controlled by a Raspberry Pi device using the PYARM library lowers over the page and, bingo: there’s Waldo.
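The selection step can be sketched in a few lines. This is a hypothetical model of the logic, not Redpepper's code: the vision service's response is faked as plain tuples, and the label and field names are placeholders.

```python
# Hypothetical sketch: detections stand in for a vision API response,
# modelled as (label, confidence, x, y) tuples.
def find_waldo(detections, threshold=0.95):
    """Return the (x, y) of the highest-confidence Waldo match, or None."""
    waldos = [d for d in detections if d[0] == "waldo" and d[1] >= threshold]
    if not waldos:
        return None
    best = max(waldos, key=lambda d: d[1])
    return best[2], best[3]

faces = [("face", 0.88, 10, 20), ("waldo", 0.97, 42, 7), ("waldo", 0.91, 5, 5)]
print(find_waldo(faces))  # -> (42, 7): only the 0.97 match clears the 95% bar
```

The coordinates returned here are what the robot arm would be told to point at.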
SEE ALSO: The state of machine learning in 2018
Turn a horse into a zebra

Want to turn a summer heat wave into a chilly winter scene? How about turning a horse into a zebra, a cat into a dog, or a photograph into a classic Monet? Easy. A team from UC Berkeley used PyTorch to develop software that uses machine learning to translate one kind of image into another.
The project abstract states, “We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F: Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).”
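The cycle consistency idea in that abstract can be illustrated with a toy example. This is not the real CycleGAN networks; G and F are stand-in one-dimensional functions, chosen only to show that an exact inverse gives zero cycle loss while a sloppy one is penalized.

```python
import numpy as np

def G(x):            # stand-in generator, domain X -> Y
    return 2.0 * x + 1.0

def F(y):            # stand-in inverse generator, Y -> X
    return (y - 1.0) / 2.0

def cycle_loss(x):
    """L1 cycle-consistency penalty ||F(G(x)) - x||_1."""
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.linspace(-1, 1, 5)
print(cycle_loss(x))  # 0.0 -- F exactly undoes G

def F_bad(y):        # a sloppy inverse incurs a positive penalty
    return y / 2.0
print(float(np.mean(np.abs(F_bad(G(x)) - x))))  # 0.5
```

In the real project, G and F are neural networks and this penalty is added to the adversarial losses, pushing the pair toward being mutual inverses.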
Hide from your boss
Not every machine learning use case is going to be lawful good. Hiroki Nakayama created BossSensor to allow for easier slacking off at work.
The strategy is as follows: “First, let the computer learn the face of the boss with deep learning. Then, set up a web camera at my desk and switch the screen when the web camera captures his face.” Machine learning helped recognize the boss’ face: enough images were collected through video to recognize when he comes near. (The more images, the more confident the sensor will be at detecting him.) A machine learning model was then built. To trick the boss, the screen switches to a programming text editor whenever he approaches Nakayama’s screen.
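The switching step can be sketched separately from the face classifier. This is a hedged illustration, not BossSensor's actual code: the classifier is faked as a per-frame probability, and the threshold and frame count are invented values. Requiring several consecutive confident frames avoids flickering the screen on a single false positive.

```python
from collections import deque

def make_boss_detector(threshold=0.8, consecutive=3):
    """Fire only after `consecutive` confident frames in a row.
    Both parameter values are illustrative assumptions."""
    recent = deque(maxlen=consecutive)
    def update(boss_probability):
        recent.append(boss_probability >= threshold)
        return len(recent) == consecutive and all(recent)
    return update

detect = make_boss_detector()
probs = [0.1, 0.9, 0.95, 0.2, 0.85, 0.9, 0.92]
# Only the final frame completes three confident frames in a row.
print([detect(p) for p in probs])
```

In a real setup, each probability would come from running a webcam frame through the trained face model, and a `True` would trigger the decoy text editor.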
Go on, give it a try and check out the repo. All you need is a webcam, Python 3.5, OSX, Anaconda, and lots and lots of photos of your boss.
Generate cats

Nado used a Generative Adversarial Network (GAN) to generate new images that are similar to the ones fed into the system. On his blog, he explains how a GAN works as two inner programs:
- “the discriminator learns the flaws in the images the generator creates so that it can pick out real images from the fakes
- the generator learns how to generate images that are similar to the real ones so that the discriminator can’t tell the difference”
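The tug-of-war between those two programs can be made concrete with a tiny numeric sketch. This is not the cat generator itself: the "images" are single numbers, and the discriminator is a fixed toy function, chosen only to show how the two losses pull in opposite directions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(D, real, fake):
    # Discriminator objective: push D(real) -> 1 and D(fake) -> 0.
    return -(np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake))))

def g_loss(D, fake):
    # Generator objective (non-saturating form): push D(fake) -> 1.
    return -np.mean(np.log(D(fake)))

# Toy discriminator: 1-D "images" above 0.5 look real to it.
D = lambda x: sigmoid(4.0 * (x - 0.5))
real = np.array([0.9, 0.8, 1.0])
convincing = np.array([0.85, 0.9])   # fakes that resemble the real data
obvious = np.array([0.0, 0.1])       # fakes that clearly don't

# Better fakes give the generator a lower loss...
print(g_loss(D, obvious) > g_loss(D, convincing))   # True
# ...and a higher loss for the discriminator they fool.
print(d_loss(D, real, obvious) < d_loss(D, real, convincing))  # True
```

Training alternates gradient steps on these two losses, which is what slowly drags the generator's output toward the real distribution — here, toward believable cats.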
The results are as adorable as they are impressive. You can generate your own cats and play with the GAN numbers to make your dream Pusheens.
(I am quite fond of the two-headed creation in the top row.)
Classify drum samples

Peter Sobot combined his love for music and software engineering to create Machine Learning for Drummers, an app that takes an audio sample and determines whether it is a kick drum, snare drum, or other drum sample. It currently boasts 87% accuracy.
Training computers to classify sounds has been a problem in machine learning for some time. Sobot writes that although the human brain is good at classifying drum audio, AI requires intensive training. He created a decision tree that each sample runs through, branching on the sample’s loudness levels and frequency content.
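The loudness-and-frequency idea can be sketched as follows. This is a hand-written stand-in for the learned tree, not Sobot's model: the features, thresholds, and synthetic audio are all illustrative assumptions (kicks are low-frequency, so they cross zero rarely; snares are noisy and bright).

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 22050  # assumed sample rate

def rms(x):
    """Loudness proxy: root-mean-square level."""
    return float(np.sqrt(np.mean(x ** 2)))

def zero_crossing_rate(x):
    """Crude frequency proxy: fraction of samples where the sign flips."""
    return float(np.mean(np.abs(np.diff(np.sign(x))) > 0))

def classify(sample):
    # Hand-written stand-in for a learned decision tree.
    if zero_crossing_rate(sample) < 0.05:
        return "kick"    # low-frequency thump
    if rms(sample) > 0.1:
        return "snare"   # loud, broadband burst
    return "other"

t = np.arange(0, 0.2, 1 / sr)
kick = 0.8 * np.sin(2 * np.pi * 60 * t)   # synthetic 60 Hz thump
snare = 0.5 * rng.normal(size=t.size)     # synthetic noise burst
print(classify(kick), classify(snare))    # kick snare
```

A trained tree learns such thresholds (and many more of them) from labeled samples rather than having them picked by hand.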
Grab the code on GitHub!
Count bees

Yes, even beekeeping can benefit from machine learning. Mat Kelcey found a way to count bees coming and going from his beehive using a Raspberry Pi and a solar panel. He tracked each bee and saw what times they were most active and when they all returned home.
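The counting step alone can be sketched like this. This is a hedged illustration, not Kelcey's code: it assumes the detector has already produced a per-frame vertical position for one bee, and the entrance line, direction convention, and track values are all invented.

```python
ENTRANCE_Y = 100  # assumed pixel row of the hive entrance in the frame

def count_crossings(track, line=ENTRANCE_Y):
    """Return (entries, exits) for one bee's per-frame y positions."""
    entries = exits = 0
    for prev, cur in zip(track, track[1:]):
        if prev < line <= cur:
            entries += 1   # crossed the line downward: going into the hive
        elif prev >= line > cur:
            exits += 1     # crossed the line upward: leaving the hive
    return entries, exits

track = [80, 95, 102, 110, 98, 90, 105]
print(count_crossings(track))  # (2, 1): in, out, then in again
```

Summing these counts over every tracked bee, bucketed by time of day, is what yields the activity curves Kelcey describes.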
What’s next for this buzz-worthy tech? Kelcey plans to track bees over multiple frames with multiple cameras and port his project to the JeVois embedded camera (allowing for 120fps).
All the code is available on GitHub, just in case you want to keep an eye on your own hive.
Have a phone conversation
Let’s face it – no one likes having to make phone calls. For many people with a disability, it can even be impossible. Machine learning is changing the game for this real-world task.
Google Duplex is an AI system that helps make phone calls and sounds as natural as a human voice. It can help schedule appointments, such as a haircut and color, or reserve a table at a restaurant.
Google Duplex creates a natural-sounding conversation that isn’t easily recognizable as an AI voice. It inserts beats of silence, creates complex statements, adds filler sounds such as ‘umm’ and ‘hmm’, and has a natural-sounding tone. It has learned how to handle interruptions and how to elaborate when asked for clarification. Don’t take our word for it: give the sample calls a listen on the blog and welcome the future of AI.
Know any other gems in the world of machine learning? We’d love to hear about your favorites!