The inevitability of AIOps

How AI will earn your trust

Will Cappelli

To cope with modern IT systems, we’re going to need high dimensional probability and statistics, modelled in high dimensional geometry. This is why AIOps is inevitable. This article examines four reasons why people distrust AI, and the properties that define big data.

In the world of applying AI to IT Operations, one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than an intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow them down to four specific causes.

1. It’s all about the maths

The algorithms used in AIOps are fairly complex, even for an audience with a background in computer science. The way these algorithms are constructed and deployed is not typically covered in academia. Modern AI is mathematically intensive, and many IT practitioners have never even seen this kind of mathematics before. The algorithms sit outside the knowledge base of today’s professional developers and IT operators.


2. The intractability of AI

When you analyse the specific types of mathematics used in popular AI-based algorithms deployed in an IT operations context, the maths is essentially intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined, due to the very nature of the algorithms themselves.

For example, an algorithm might tell you that a number of CPUs have passed a usage threshold of 90%, which will result in end-user response times degrading. The implicit instruction, consequently, is to offload some of the servers’ workload. In this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you every rule it invoked, all the way back to the original premise. It’s almost like running a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.
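The traceability described above can be sketched in a few lines. This is a minimal illustration, not a real expert system: the rules, thresholds and labels are hypothetical, chosen to mirror the CPU example. The point is simply that when every fired rule is logged, the chain from conclusion back to premise can be replayed in reverse.

```python
# Hypothetical rule base mirroring the CPU example: each rule is a
# (premise, conclusion) pair. An expert system that logs every rule it
# fires can justify its conclusion by walking the log backwards.

rules = [
    ("cpu_usage > 90%", "server_overloaded"),
    ("server_overloaded", "response_time_degrading"),
    ("response_time_degrading", "offload_some_servers"),
]

def infer(fact):
    """Forward-chain from an observed fact, recording each rule fired."""
    trace = []
    current = fact
    for premise, conclusion in rules:
        if premise == current:
            trace.append((premise, conclusion))
            current = conclusion
    return current, trace

conclusion, trace = infer("cpu_usage > 90%")

# Replaying the trace in reverse reproduces the justification,
# step by step, back to the original premise.
for premise, concl in reversed(trace):
    print(f"{concl!r} because {premise!r}")
```

The contrast with a neural network is exactly the one the article draws: here the intermediate steps are stored and inspectable, whereas a trained model’s internal transformations leave no comparable log to replay.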

What happens in the case of AI is that things get mixed up and switched around, which means the links from the conclusion back to the original premise are broken. Even enormous computing power doesn’t help, because the algorithm loses track of its previous steps. You’re left with a general description of the algorithm and the start and end data, but no way to link them together. You can’t run it in reverse. It’s intractable. This generates a further, deeper level of distrust; it’s not just about being unfamiliar with the mathematical logic.

3. The fear of automation

Consider the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has bred resentment for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.

The way AI has been marketed, as both an intellectual goal and a meaningful business endeavour, lends credibility to that concern. This is when scepticism starts to shade into genuine distrust: not only is this a technology that may not work, it is also my personal enemy.

Of all the various enterprise disciplines, IT Operations is the one perpetually threatened with cost cutting and role reduction. So this isn’t just paranoia; there is a lot of justification behind the fear.

4. The false promise of AI

IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the Cold War, when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people now in senior enterprise positions were among the first wave excited about AI and what it could achieve. Unfortunately, AI didn’t initially deliver on expectations, so for these people AI is not something new; it’s a false promise. In many IT Operations circles there is, therefore, a lingering memory of previous hype: a historical reason for scepticism which is unique to the IT Ops world.

These are my four reasons why enterprises don’t trust AIOps, and AI in general. Yet despite these concerns, the use of AI-based algorithms in an IT operations context is inevitable.

Gartner and its 3Vs

Cast your mind back to a very influential Gartner definition of big data from 2001, when Gartner came up with the idea of the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties, or dimensions, of big data: volume refers to the amount of data, variety to the number of types of data, and velocity to the speed of data processing. At the time, the definition was very valuable and made a lot of sense.

The missing V (or D)

The one thing Gartner missed is the issue of dimensionality, i.e. how many attributes a data item has. A traditional data item has maybe four or five attributes. If you have millions of these data items, each with a few attributes, you can store them in a database, and it is fairly straightforward to search on key values and run analytics to obtain answers from the data.

However, when you’re dealing with high dimensions, where a single data item has a thousand or a million attributes, traditional statistical techniques suddenly stop working. Traditional search methods become ungainly. It becomes impossible even to formulate a query.
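One concrete way to see traditional techniques failing is the concentration of distances: in high dimensions, the nearest and farthest points in a dataset end up almost equally far away, so distance-based search and statistics lose their discriminating power. The sketch below uses synthetic uniform random data; the dimension counts and point counts are illustrative assumptions, not figures from the article.

```python
# Illustration of distance concentration: as dimensionality grows, the
# gap between the nearest and farthest point (relative to the nearest)
# shrinks towards zero, undermining distance-based queries.
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(n_dims, n_points=500):
    """(farthest - nearest) / nearest distance from the origin."""
    points = rng.random((n_points, n_dims))
    dists = np.linalg.norm(points, axis=1)
    return (dists.max() - dists.min()) / dists.min()

for d in (5, 100, 10_000):
    print(f"{d:>6} dimensions: contrast = {distance_contrast(d):.3f}")
```

With 5 attributes the contrast is large, so “find the nearest item” is a meaningful query; by 10,000 attributes the contrast has collapsed to a few percent, which is the practical sense in which formulating such a query becomes impossible.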

As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes, which leads me on to AI. Almost all of the AI techniques developed to date are attempts to handle high dimensional data structures by collapsing them into a smaller number of manageable attributes.
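That collapsing move can be sketched with one classical technique, principal component analysis via the singular value decomposition. This is only one representative of the family the article alludes to, and the data here is synthetic: 200 items with 1,000 attributes, secretly driven by 3 underlying factors, all figures chosen for illustration.

```python
# Sketch of dimensionality reduction: project 1000-attribute data items
# onto the 3 directions that explain most of the variance (PCA via SVD).
import numpy as np

rng = np.random.default_rng(1)

# 200 data items, each with 1000 attributes, driven by 3 hidden factors
# plus a little noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 1000))
data = latent @ mixing + 0.01 * rng.normal(size=(200, 1000))

# Centre the data, then project onto the top 3 principal components.
centred = data - data.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)
reduced = centred @ components[:3].T  # shape: (200, 3)

# Fraction of total variance captured by those 3 components.
explained = (singular_values[:3] ** 2).sum() / (singular_values ** 2).sum()
print(reduced.shape, f"variance explained: {explained:.3f}")
```

When the high dimensional structure really is driven by a few factors, as here, three manageable attributes retain almost all of the information, which is precisely the bet that AI techniques make about operational data.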

The role of high dimensional probability and statistics

Look at the leading universities: you’re seeing fewer standalone courses on machine learning, and more machine learning topics embedded into courses on high dimensional probability and statistics. What’s happening is that machine learning per se is starting to resemble practically oriented bootcamps, while the study of AI is increasingly focused on understanding probability, geometry and statistics in high dimensions.

How did we end up here? The brain uses algorithms to process high dimensional data, reducing it to low dimensional attributes, which it then processes to reach a conclusion. This is the path AI has taken: codify what the brain is doing, and you end up realizing that what you’re actually doing is high dimensional probability and statistics.


The inevitability of AIOps

I can see discussions about AI being repositioned around high dimensional data, which will provide a much clearer vision of what AI is trying to achieve. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain not only high volume, high velocity and high variety data, but also high dimensional datasets. In order to cope with this, we’re going to need high dimensional probability and statistics, modelled in high dimensional geometry. This is why AIOps is inevitable.


Will Cappelli

Will Cappelli is CTO EMEA and Global VP of Product Strategy for Moogsoft. Will studied math and philosophy at university, has been involved in the IT industry for over 30 years, and for most of his professional life has focused on both AI and IT operations management technology and practices. A former Gartner analyst, Will is widely credited for having been the first to define the AIOps market. In his spare time, he dabbles in ancient languages.
