Diving deep into Machine Learning

Machine intelligence vs. machine learning

Matt Taylor

Machine learning may sound futuristic, but it’s not. Speech recognition systems such as Cortana, or search in e-commerce systems, have already shown us the benefits and challenges that go hand in hand with these systems. In our machine learning series we will introduce you to several tools that make all this possible. Third stop: HTM (Hierarchical Temporal Memory), Numenta’s machine intelligence technology.

This article is part of a Machine Learning series. Our third expert is Matt Taylor, the open source community flag-bearer for the Numenta Platform for Intelligent Computing. In this article, he talks about HTM (Hierarchical Temporal Memory), Numenta’s machine intelligence technology.

HTM: What’s under the hood

The idea behind HTM (Hierarchical Temporal Memory), Numenta’s machine intelligence technology, is to understand how intelligence works in the neocortex of the mammalian brain, and to build software on those same principles. Numenta’s goal is not necessarily to solve “machine learning” as it is understood today; rather, we think our approach will be able to solve a whole new class of problems, many of which we can’t even contemplate yet. All the HTM algorithms Numenta has created are open source, and we are actively trying to foster a community of passionate people working with HTM technology.

Numenta’s primary HTM implementation is called NuPIC (Numenta Platform for Intelligent Computing), which is written in Python and C++. But because all of our code is open source, we have several community-maintained HTM projects in different languages like Java and Clojure. Any of these implementations could potentially be used to solve similar problems.

Today’s HTM implementations are best at understanding fast-moving streams of data, just as your brain is very good at processing streams of neural activations from your senses. In HTM, input data is converted into Sparse Distributed Representations (SDRs), which in turn represent the states of neurons. The HTM system processes streams of SDRs, finding both spatial and temporal patterns. From these patterns, an HTM system can make predictions and identify anomalies. Learn more about SDRs in these videos from HTM School, an educational series we’ve produced.
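To make the idea concrete, here is a minimal sketch of an SDR as a sparse binary vector. The representation itself comes from HTM theory, but the code and names below are my own illustration, not NuPIC’s actual API:

```python
# Illustrative sketch of Sparse Distributed Representations (SDRs).
# An SDR is a large binary vector with only a small fraction of bits
# active (~2%); here we model one as a set of active bit indices.
import random

def random_sdr(size=2048, active_bits=40, seed=None):
    """Return a random SDR: `active_bits` active indices out of `size`."""
    rng = random.Random(seed)
    return set(rng.sample(range(size), active_bits))

def overlap(a, b):
    """Count of shared active bits; HTM uses overlap as a similarity measure."""
    return len(a & b)

a = random_sdr(seed=1)
b = random_sdr(seed=2)
print(overlap(a, a))  # an SDR overlaps fully with itself (40 bits)
print(overlap(a, b))  # two unrelated random SDRs share almost no bits
```

Because active bits are so sparse, two unrelated SDRs almost never collide, while similar inputs can be encoded to share many active bits — that is what makes overlap a meaningful similarity signal.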

You can see how well NuPIC can do in anomaly detection with real-world labeled data in the Numenta Anomaly Benchmark. We also have a partner using NuPIC to do IT analytics by finding anomalies in AWS servers.

Try HTM Studio, which allows you to process your own streams of temporal data. It will automatically model the data stream and identify any anomalies in it. Download it for free and read more about how to use it here.

If you are more of the developer type, here’s the typical example we give. We call it “Hot Gym”. In this example, the power consumption for an Australian gym is processed by NuPIC, and predictions are made about how much power the gym will consume in the future. There are two parts to this tutorial: one for making predictions, and one for extracting the anomaly score at each time step, which represents how unexpected the current state of the data is compared to patterns that have already been seen. The tutorial videos below build out the code pretty much from scratch.
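The raw anomaly score at each time step is simple to state: it is the fraction of currently active columns that the temporal memory did not predict at the previous step. Here is a minimal, self-contained sketch of that formula (my own illustration, not NuPIC’s actual implementation):

```python
# Sketch of HTM's raw anomaly score: the fraction of active columns
# that were NOT predicted by the temporal memory at the previous step.
def anomaly_score(active, predicted):
    """active, predicted: sets of column indices.

    Returns 0.0 (input fully expected) .. 1.0 (input fully surprising).
    """
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

print(anomaly_score({1, 2, 3, 4}, {1, 2, 3, 4}))  # 0.0: fully predicted
print(anomaly_score({1, 2, 3, 4}, {5, 6}))        # 1.0: completely novel
print(anomaly_score({1, 2, 3, 4}, {1, 2}))        # 0.5: half expected
```

In practice NuPIC post-processes this raw score (e.g. into an anomaly likelihood) to reduce noise, but the core signal is this simple set difference.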

Hot Gym Prediction Tutorial

Hot Gym Anomaly Tutorial

Plans for HTM

HTM theory is still evolving. Numenta is currently in a research phase, looking into how sensorimotor integration works in the neocortex. We will soon be putting together experimental research code (which is also transparent) to explore these new theories. As we build out more of the theory in software, we expect more capabilities to emerge. This next wave of work should open up some interesting applications in robotics, among other things.

Machine learning vs. machine intelligence

Machine learning tends to be focused on creating solutions for very specific problems. It requires machine learning experts, a huge amount of labeled data, and highly tuned solutions. The exciting thing about machine intelligence is that it is closer to generalized human intelligence. It does not require millions of data points for training, just as your brain learns quickly from only a few examples. An intelligent machine would adapt quickly to new information, without needing to go offline and be re-programmed. There are many potential applications for intelligent machines, such as monitoring all kinds of IoT sensors, medical diagnosis and treatment, and the exploration of challenging ideas and places.

The part of the human brain we are modeling is the neocortex, the “new brain”, as opposed to the old brain. The old brain is the part of us that has desires and needs, that wants power, that can get angry. We are modeling purely the analytical part of the brain, so unless we program our computer brain to have needs and desires, it simply won’t have them. In the meantime, by bringing better analytical capabilities to computing, we hope to serve humankind with all sorts of capabilities that better the human condition.

Machine intelligence: Resources for getting started

First, read On Intelligence, by our co-founder Jeff Hawkins. It was the book I read back in 2006 that got me interested in how intelligence works and how we could implement it in software. For an interesting fictional, utopian example of the type of future I imagine with truly intelligent machines, see the Culture series by Iain M. Banks.

Lastly, I have always liked the HAL 9000 portrayal of machine intelligence in 2001: A Space Odyssey for one reason: HAL was “born” and learned everything it knew over time, just like a human. In the movie, HAL mentions that its creator taught it a song when it was young. I believe this is the type of intelligence we will eventually create with HTM: one that is brought into the world knowing nothing, like an empty brain, and must interact with the world to learn what it is all about.

We asked Matt Taylor to finish the following sentences:

In 50 years’ time machine learning will be ubiquitous.
If machines become more intelligent than humans humanity will see the biggest and fastest technological advances in its history.
Compared to a human being, a machine will never be emotional.
Without the help of machine learning, mankind would never colonize Mars.



Matt Taylor

Matt Taylor is the open source community flag-bearer for the Numenta Platform for Intelligent Computing. He’s been working with and on open source projects for years, and he spends most of his time managing, encouraging, and interacting with the NuPIC OS community. Originally from a farming community in rural Missouri, Matt has been living in Northern California for over five years, and is finding it hard to leave.
