Bringing everyone down

Messy architecture: Who’s causing your crashes?

Lucy Carey

Study reveals that almost one third of IT organisations are racking up huge volumes of downtime due to daily issues.

It seems like everything is getting more complicated
these days – and IT architectures are no different. Whatever
you’re dealing with, the more complex and knotty things get, the
more likely they are to cause snags down the line.

A recent US study by tech and market research company
Forrester has revealed
that almost one third of IT organisations are running into
issues on a daily basis. That’s a lot of downtime – and it can cost
businesses anything from $10,000 to $1 million per hour.

Billed as a look at the “State of IT Monitoring,”
the study focused on the tactics operations teams use to resolve
downtime, what matters most to them in a monitoring tool, and
where they currently stand on addressing the complexities of the
modern data centre.

Even as organisations drift towards virtualised or
public and private cloud environments, many operational
aspects remain firmly tethered to localised systems such as
mainframes and client-server setups, resulting in fragmented
infrastructures. This isn’t necessarily an issue in itself,
but, as Forrester’s Chris Smith points out, a fragmented
approach to managing these infrastructures can bring about all
manner of chaos.

According to Smith, a siloed approach to management
makes it impossible to establish context and correlate issues
back to their root cause. As he says, “it’s like looking
at a single piece of evidence instead of examining the entire crime
scene – you can’t solve the crime if you don’t understand the big
picture.”

He told JAXenter that “The more heterogeneous the
infrastructure, the more important it is to have a holistic
approach to management,” noting that the ideal is to manage
performance and availability in a unified way, regardless of the
type of infrastructure in place.
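
To make that idea concrete, a unified layer might look something like the sketch below: every silo – mainframe, client-server or cloud – feeds readings through one common interface, so a single pipeline sees them side by side and can correlate across tiers. This is only an illustration; the MetricSource and Reading names are invented here, not taken from the study or any particular product.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical abstraction: each infrastructure silo exposes readings
// through the same interface, so one pipeline can correlate across them.
interface MetricSource {
    List<Reading> poll(); // latest readings from this silo
}

// One reading from one silo, stamped with its source and time.
record Reading(String source, String metric, double value, Instant at) {}

public class UnifiedMonitor {
    public static void main(String[] args) {
        List<MetricSource> sources = List.of(
                stub("mainframe", "cpu-load", 0.91),
                stub("private-cloud", "request-latency-ms", 480.0));

        // A single loop over every silo: readings land in one timeline,
        // so a latency spike in the cloud tier can be read alongside the
        // CPU spike on the mainframe it depends on.
        for (MetricSource src : sources) {
            for (Reading r : src.poll()) {
                System.out.printf("%s  %s/%s = %.2f%n",
                        r.at(), r.source(), r.metric(), r.value());
            }
        }
    }

    // Stand-in collector that returns one canned reading.
    static MetricSource stub(String name, String metric, double value) {
        return () -> List.of(new Reading(name, metric, value, Instant.now()));
    }
}
```

The point of the design is that the correlation loop never needs to know which silo a reading came from – the “whole crime scene” view Smith describes.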

Another interesting point the survey brought to light
is that fewer than half of the 127 companies surveyed recognise
the strategic nature of their monitoring data – and even those
that do often fail to leverage it. More often than not,
organisations tend to view it as “a low-end function
that exists to support incident management, i.e. getting alerts
when an issue is detected.”

Part of this failure to make the most of monitoring
tools could be attributed to widespread distrust of these
technologies amongst organisations. Smith states that “A key
value of a monitoring tool is to generate an alert before the end
users notice a problem.” However, only 62% of organisations in the
study were getting this benefit; the remaining 38% had already
experienced an issue before their IT team was aware there were
gremlins in the machine.
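
What that proactive alerting boils down to can be sketched in a few lines: the monitor fires the moment a measured error rate crosses a threshold, rather than waiting for users to report failures. The 5% limit and the sample values below are invented for illustration.

```java
// Minimal sketch of threshold-based alerting: raise an alert the moment
// an error rate crosses a limit, instead of waiting for a user-facing
// outage. The 5% limit and the sample values are invented examples.
public class ProactiveAlert {
    static final double ERROR_RATE_LIMIT = 0.05;

    public static void main(String[] args) {
        double[] samples = {0.01, 0.02, 0.08}; // e.g. error rate per minute
        for (double errorRate : samples) {
            if (errorRate > ERROR_RATE_LIMIT) {
                // A real tool would page the ops team here; we just log.
                System.out.printf("ALERT: error rate %.0f%% exceeds %.0f%% limit%n",
                        errorRate * 100, ERROR_RATE_LIMIT * 100);
            }
        }
    }
}
```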

Unfortunately for companies already racking up costly
downtime, Forrester expects crashes to become even more
frequent as datacentres continue to evolve.

Smith says that, under the new paradigm of “hybrid
IT”, the relationships “between IT assets and the services
leveraging them are highly dynamic.” Without well-thought-out and
tightly executed infrastructure management, it’s extremely
difficult to quickly track down issues when they do arise and
identify the culprit. And without intervention, it’s company
funds that will continue to suffer.

Image by Sarabbit
