You need to think differently about monitoring traditional and modern applications, containers and infrastructure

Monitoring serverless computing for modern business applications

Colin Fernandes

Following on from containers, serverless computing is the next wave in application deployment and software delivery. In this article, Colin Fernandes of Sumo Logic explains why investing in operational and security analytics solutions that understand serverless is a worthwhile step towards being successful with serverless.

In our recent report, The State of Modern Applications in the Cloud, companies in the EMEA region nearly doubled their use of the serverless platform AWS Lambda, from 12 percent of companies in 2016 to 23 percent in 2017.

Like most things cloud, the terminology developing around serverless can be esoteric – even the name can be seen as a contradiction, as the “serverless” platforms are still backed by infrastructure in a data centre somewhere. At its heart, serverless computing aims to make it even easier for developers to implement and run their applications by letting developers focus entirely on their custom code.

Rather than having to care about the IT operations side at all, developers expect the cloud platform that they are running on to scale and optimise resources automatically in real time. Serverless is a logical extension of the fundamental cloud value proposition, by abstracting everything away and letting the customer focus entirely on their essential intellectual property – their code.

Serverless computing models can offer advantages with containers

Serverless computing is part of the same technological revolution as container technology. However, it doesn’t replace the use of containers. Both technologies bring synergistic benefits, and both enable optimisation and efficiency across the delivery chain.

The Kubernetes community is addressing the growth of serverless by integrating serverless container infrastructure with higher-level Kubernetes concepts. The open source virtual kubelet project has taken a lead in advancing this discussion within both the Kubernetes node and scheduling special-interest groups (SIGs).

Containers enable developers to be more productive, but they also require internal infrastructure and security management. This means that the developer and operations teams need to manage the process of container creation, integration, testing and deployment. In contrast, serverless computing shifts this management overhead squarely onto the service provider.

A good example is AWS Lambda. It allows developers to write their code and then upload it to Lambda as a function. The function is then created within a container that includes everything it needs to run, and Lambda orchestrates the container instance. This frees the developer to concentrate on developing applications rather than provisioning and managing resources or management systems. Each function can be scaled automatically, with further capacity created, managed, restored and destroyed as needed.
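To make the model concrete, here is a minimal sketch of what a developer actually uploads to Lambda: just a handler function. The event shape and greeting logic are illustrative assumptions; everything around the function (container, scaling, teardown) is the platform's job.

```python
import json

def handler(event, context):
    # Lambda invokes this entry point with the invocation payload ('event')
    # and a runtime context object; no server or container code is written
    # by the developer.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Locally this is just a plain function, which also makes unit testing straightforward before deployment.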

The unforeseen challenges of serverless computing

Using serverless, application developers and site reliability engineers can now create high-performance, high-availability application run-time workloads at scale, in a very short space of time, using a combination of functions that are created, executed and managed by the cloud provider. The underlying infrastructure remains complex, but the management headaches are abstracted away from the development team.

While this is great to start with, it can lead to issues over time. For instance, without insight into how the serverless platform is running and consuming resources, it is difficult to control hidden costs. Alongside the lack of real-time insight into usage and consumption, monitoring performance for individual functions and across the entire application stack can be challenging. With serverless computing, when a problem arises, developers can lack visibility into every layer of the system. This poses a major issue for troubleshooting and can impact customer experience. Often the only way to effectively discover root causes in distributed cloud-based systems is to have access to the right data at the right time.

Continuous planning around serverless deployments is therefore important. Alongside composing functions, developers have to understand the business value models for their functions and how they are consumed and paid for over time. This can have a big impact on decisions made around how to solve problems. For example, a high-performance function billed on the volume of API calls it handles will have a very different cost profile from one billed on storage consumed or other metrics.
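A back-of-the-envelope comparison makes the point. The prices below are purely illustrative assumptions, not any provider's actual rates; check the current price list before relying on numbers like these.

```python
def request_based_cost(invocations, price_per_million=0.20):
    """Cost driven by call volume (hypothetical per-request pricing)."""
    return invocations / 1_000_000 * price_per_million

def storage_based_cost(gb_months, price_per_gb_month=0.023):
    """Cost driven by storage consumed (hypothetical per-GB-month pricing)."""
    return gb_months * price_per_gb_month

# A chatty, high-traffic function vs. a storage-heavy workload:
print(request_based_cost(50_000_000))  # 50 million calls -> 10.0
print(storage_based_cost(500))         # 500 GB-months    -> 11.5
```

Two workloads with similar monthly bills today can diverge sharply as traffic or data volumes grow, which is why the billing model belongs in the design discussion.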

SEE ALSO: Serverless’ greatest strength is also its greatest weakness

Understanding the financial value should encourage you to put some appropriate policies in place so automated scaling does not end up creating unexpected increases in service costs. This should also amplify the need for run-time observability. Log and time-series metric data from each function can provide a complete history of an event that happened within different domains of an application, while intelligent correlation and outlier detection can be combined to measure the health of the entire application throughout its lifecycle. Combining log and metric data with analytics can provide much more clarity and enables predictive root cause analysis.
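The outlier detection mentioned above can be illustrated with a minimal sketch: flag invocations whose duration deviates sharply from the rest of a sample, using a robust median-based score. Production analytics services do this continuously and at scale; the sample durations and threshold here are illustrative assumptions.

```python
import statistics

def flag_outliers(durations_ms, threshold=3.5):
    """Flag values with a large modified z-score (median/MAD based,
    so the score is robust to the outliers themselves)."""
    median = statistics.median(durations_ms)
    mad = statistics.median(abs(d - median) for d in durations_ms)
    if mad == 0:
        return []
    return [d for d in durations_ms if 0.6745 * abs(d - median) / mad > threshold]

samples = [101, 98, 103, 99, 102, 100, 97, 950]  # one anomalously slow call
print(flag_outliers(samples))  # [950]
```

Applied per function, a signal like this surfaces the one slow invocation worth investigating instead of burying it in averages.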

Manual review of large volumes of log files is neither exciting nor scalable, so putting a unified log and metrics analytics service in place from the start is essential. As with container technology, traditional IT and software monitoring tools built for on-premise or hybrid models were not designed for these highly dynamic architectures. It is therefore worth looking at analytics tools that understand these new architectures natively in order to get visibility into application performance where and when it matters.

Building business level insight on top of serverless

While it might be a relief to offload the provisioning and management of infrastructure, you still need to understand what’s happening across your distributed cloud deployments. While traditional IT operations approaches may be less relevant, the need to actively manage business SLAs and KPIs in order to maintain great customer experience is more relevant today than ever.

Developers aren’t the only teams that can use this data. For example, it can be used to track customer experience and increase the effectiveness of other teams like customer support and customer success. Similarly, getting insight into the real-time billable durations of function invocations can help developers manage their budgets over time, and avoid awkward questions from line of business teams and finance. Finally, for any developers handling customer data, serverless does not remove the need for trust, privacy and compliance around how data is secured and managed.
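As a sketch of the billable-duration point, the snippet below sums billed milliseconds from lines resembling the REPORT entries AWS Lambda writes to its CloudWatch logs. The sample lines are illustrative; verify the exact field layout against your own logs.

```python
import re

# Matches the "Billed Duration: 100 ms" field of a Lambda REPORT log line.
BILLED = re.compile(r"Billed Duration:\s*([\d.]+)\s*ms")

def total_billed_ms(log_lines):
    """Sum the billed duration across a batch of log lines."""
    total = 0.0
    for line in log_lines:
        match = BILLED.search(line)
        if match:
            total += float(match.group(1))
    return total

logs = [
    "REPORT RequestId: 1a2b Duration: 3.22 ms Billed Duration: 100 ms",
    "REPORT RequestId: 3c4d Duration: 187.5 ms Billed Duration: 188 ms",
]
print(total_billed_ms(logs))  # 288.0
```

Aggregated per function and per team, a running total like this is exactly the figure that answers finance's questions before they are asked.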

Serverless adoption continues to grow rapidly in Europe as developers want more flexibility and speed without having to rely on operations teams for compute, infrastructure and security. Your customers don’t care where your applications reside, but they do care about poor service levels or missing applications. It’s therefore essential to maintain a consistent customer experience by getting insight into your applications, preferably in real time. Investing in operational and security analytics solutions that understand serverless is a worthwhile step towards being successful with serverless.

Author

Colin Fernandes

Colin Fernandes is product marketing director, EMEA at Sumo Logic.
