
Monitoring container-based applications: Why you need a different approach

Colin Fernandes

Companies are deploying cloud-native applications to meet their business requirements. Whether the goal is speed, scalability or new functionality, containers offer a route to achieving it faster and more efficiently, while managing the cost of change for developers and operational stakeholders. But are we as aware of the challenges as we are of the benefits?

Why containers are growing

Use of microservices-based technologies like Docker, for example, has grown from 18 percent in 2016 to 24 percent in 2017 in the EMEA region, according to our research. Kubernetes is also growing as organisations pursue the multi-cloud promise of easily porting and migrating workloads in the cloud; around nine percent of companies covered in the research had deployed it to run their containers.

Containers allow developers to quickly package applications with all their dependencies, running one or more processes abstracted and encapsulated from the host operating system. Because those dependencies ship inside the image, there are no host-level libraries or long-running installations to speak of, and application mobility becomes much easier to track and monitor. Continuous integration and continuous deployment become easier to automate and manage. This frees developers up to focus on building software services that open new business revenue streams.

While containers offer advantages for developers, challenges do exist and must be managed. Developers gain greater flexibility and scale, but other critical elements have to be considered around the long-term management of the infrastructure and how it is assembled. For the wider IT team, management and monitoring capabilities are essential for observing and predicting the plethora of potential problems that can lead to latency and poor performance.

Traditional software tools were built to monitor physical and virtual environments, and these approaches can be insufficient when applied to containerised environments. Container-based applications tend to be built out of more components than traditional applications, and each of these components can add more nodes to cope with additional demand. Each of these elements creates data for as long as it exists. These applications therefore generate a large volume of log data and time-series metrics at high velocity, which requires quick ingestion and analysis. The volume of data also varies wildly depending on how many nodes are active at any one time, and the underlying components tend to be far more ephemeral and immutable than their traditional counterparts.
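To make that telemetry concrete, here is a minimal sketch of what a per-container snapshot might look like, assuming the Docker SDK for Python (the `docker` package) and access to the local daemon socket; in practice this data would be forwarded to a log analytics service rather than printed.

```python
import json

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connects to the local Docker daemon

for container in client.containers.list():  # running containers only
    # One-shot snapshot of CPU, memory and network counters for this container.
    stats = container.stats(stream=False)
    # The most recent log lines; every restart or new replica adds another stream like this.
    recent_logs = container.logs(tail=50).decode("utf-8", errors="replace")

    record = {
        "container": container.name,
        "image": container.image.tags,
        "memory_usage_bytes": stats.get("memory_stats", {}).get("usage"),
        "log_sample": recent_logs.splitlines()[-5:],
    }
    print(json.dumps(record))  # in practice: forward this to a collector or agent
```

Even this toy snapshot makes the scale problem visible: every new replica adds another stats stream and another log stream, and all of them disappear when the container does.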

How will immutable infrastructures impact container deployments?

Market and customer appetite is growing for immutable infrastructures that speed up the deployment of containers and the applications those containers support. Using immutable infrastructure should help developers manage a higher frequency and volume of change, optimise those deployments, and resolve failure conditions quickly.

Immutable infrastructures and containers are becoming important architectural components for managing real-time changes to applications. They have to be pre-defined and configured so that no deviation or update is allowed to any underlying (immutable) object, which in turn allows these elements to be monitored for operational resilience, consistency and security. Rapid development and security-as-a-service architectures can therefore act as catalysts to rethink tools and processes for the future.

What does this mean in practice and what are the issues?

Ironically, the speedy nature of container development and deployment can be both a blessing … and a curse. The fast pace at which container technology is advancing creates challenges for effective monitoring and lifecycle management. Developers want to deploy and change containers at an unprecedented rate, and IT teams are already very familiar with virtual machine (VM) sprawl, where VMs are created for a specific task but never shut down when they are no longer needed.

Containers present a similar risk, as more containers can be spawned to respond to demand. Setting rules to scale deployments back over time will also be required. Importantly, developers will need more insight into their data across their entire container infrastructure, from individual components through to the overall application stack, in order to make better decisions. Using this data, developers can continuously observe, optimise and regulate resources to keep container sprawl in check. That insight depends on deeper tracing to gain visibility into container usage, behaviour and performance, regardless of the platform or cloud used.
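As a rough illustration of a scale-back rule, the sketch below (again assuming the Docker SDK for Python; the 24-hour threshold is purely illustrative) lists containers that have been around longer than a set age, making candidates for cleanup visible before sprawl takes hold.

```python
from datetime import datetime, timezone

import docker  # Docker SDK for Python: pip install docker

MAX_AGE_HOURS = 24  # illustrative threshold, not a recommendation

client = docker.from_env()

for container in client.containers.list(all=True):  # include stopped containers too
    # 'Created' is an ISO 8601 timestamp; keep only the date and whole seconds.
    created = datetime.fromisoformat(container.attrs["Created"][:19]).replace(
        tzinfo=timezone.utc
    )
    age_hours = (datetime.now(timezone.utc) - created).total_seconds() / 3600

    if age_hours > MAX_AGE_HOURS:
        # Flag only; a real policy would also check CPU/network activity and
        # ownership labels before scaling anything back or removing it.
        print(f"{container.name}\t{container.status}\t{age_hours:.1f}h old")
```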


When containers are ephemeral, how can continuous monitoring of the componentry be achieved?

In order to keep up with all these moving parts, monitoring must be embedded into container objects from inception. Traditional tools and private cloud IaaS were not built for multi-cloud, multi-instance, multi-tenanted computing. Analysing errors or faults, for example, can be more difficult when applications are deployed in a distributed fashion, yet quickly finding and resolving the underlying problem is essential for limiting downtime and the impact on customer experience.

How do you know which of these layers, or which specific version of an application component, is affected or at fault? How can you understand where internal services require fixes, or where third-party application components are misconfigured, especially when you have less insight into components you don't own? Log analytics and cloud monitoring are necessary to continually observe and deep dive into the application, so that the real problem affecting it can be found and fixed quickly.
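As an illustration of that kind of deep dive, the hedged sketch below (once more assuming the Docker SDK for Python; the error markers and log window are assumptions made for the example) tallies error lines per image tag, so a spike can be traced back to a specific component version.

```python
from collections import Counter

import docker  # Docker SDK for Python: pip install docker

ERROR_MARKERS = ("ERROR", "FATAL", "Traceback")  # illustrative patterns only

client = docker.from_env()
errors_by_image = Counter()

for container in client.containers.list():
    logs = container.logs(tail=500).decode("utf-8", errors="replace")
    hits = sum(
        1 for line in logs.splitlines() if any(marker in line for marker in ERROR_MARKERS)
    )
    if hits:
        # Key by image tag so a faulty component version stands out from its siblings.
        image = container.image.tags[0] if container.image.tags else container.image.short_id
        errors_by_image[image] += hits

for image, count in errors_by_image.most_common():
    print(f"{count:6d} error lines  {image}")
```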

In terms of risk mitigation, container monitoring also enables earlier diagnosis and problem detection. Continuous health checks, backed by good telemetry and feedback, mean problems can be identified and rectified before they fully manifest. Good monitoring practice prevents blind spots and helps teams deal with relevant compliance rules as well.
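A continuous health check does not have to be elaborate. The sketch below uses only the Python standard library to probe a hypothetical /healthz endpoint on a fixed interval and record status and latency; both the URL and the interval are assumptions for the example, and the output would normally feed a monitoring pipeline rather than stdout.

```python
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical endpoint for illustration
INTERVAL_SECONDS = 30


def probe(url: str) -> dict:
    """Hit the health endpoint once, returning status code and latency."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            status = response.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # the service responded but is unhealthy (e.g. 500/503)
    except (urllib.error.URLError, OSError):
        status = None  # unreachable counts as unhealthy
    latency_ms = (time.monotonic() - started) * 1000
    return {"url": url, "status": status, "latency_ms": round(latency_ms, 1)}


if __name__ == "__main__":
    while True:
        print(probe(HEALTH_URL))  # in practice: emit to your monitoring pipeline
        time.sleep(INTERVAL_SECONDS)
```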

Looking ahead, monitoring also allows companies to create new opportunities from their data. Using this monitoring data alongside other business data sources can demonstrate how the business is performing as a whole. With more companies embarking on digital transformation — driven in part by new applications based on containers — providing this link between application performance and customer experience can show how investments are paying off. Machine data can, therefore, play an important role in strategic business decisions, like budgeting and planning.

As organisations increase their adoption of containerisation, it is easy to focus on the benefits containers can bring. However, the additional complexity and the unforeseen unknowns should not be underestimated, and they need to be planned for. As more applications move to the cloud and into containers, developers must look at monitoring and troubleshooting through a “cloud-native lifecycle” lens. This can not only reveal the benefits being delivered around performance, but also help manage the potential challenges around the safety, privacy and security of your container estate.

Author

Colin Fernandes

Colin Fernandes is director of product marketing EMEA at Sumo Logic, a cloud-native machine analytics service. He leads the company’s education and marketing campaigns in Europe, helping companies understand the challenges and opportunities around modern application design and implementation at scale. Prior to Sumo Logic, Colin worked at VMware where he was responsible for the company’s operations around the telecoms and cloud management sectors.