5 predictions for serverless in 2019
More organizations are riding the wave of serverless and Kubernetes, and many are starting to see tangible results. Here are five trends in serverless that are sure to impact the way organizations develop and deliver software for years to come.
Continuing the trend from last year, in 2019 we see more organizations riding the wave of serverless and Kubernetes, and many are starting to see tangible results. The widespread adoption of these technologies, however, has only just begun. Below, we examine five trends in serverless that are sure to impact the way organizations develop and deliver software for years to come.
1. Serverless sees large-scale adoption – for enterprise applications, too!
2018 witnessed the emergence of Functions-as-a-Service (FaaS) and serverless computing. 2019 will be the year of large-scale adoption – for enterprise use cases, too. The growing adoption of serverless is fueled by the proliferation of container-based, cloud-native applications – the architecture that serverless requires.
In the evolution of modern software delivery, the versatility and power of containers have accelerated cloud-native development, both for greenfield applications and for modernizing legacy ones. This means that enterprise scenarios where cloud-native modernization was previously thought impossible – such as edge devices, data in transit, or stateful apps – are now going cloud-native. As containerized, cloud-native applications grow, developers take advantage of serverless functions to more easily perform a variety of tasks across a wide range of applications. We'll also see teams delivering large-scale microservices transition some of them to FaaS as a way of reducing application complexity.
Higher-end FaaS features such as workflows will make it easy to build more complex serverless applications in a modular, composable manner.
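To illustrate the kind of composability such workflow features aim for, here is a minimal Python sketch – the step names and event shape are invented for the example – that chains small, single-purpose functions into one pipeline, much as a FaaS workflow chains independently deployed functions:

```python
from typing import Any, Callable, Dict

Event = Dict[str, Any]

def validate(event: Event) -> Event:
    # Reject events that are missing the required field.
    if "order_id" not in event:
        raise ValueError("missing order_id")
    return event

def enrich(event: Event) -> Event:
    # Attach a derived field; in a real workflow this step
    # might call out to another service.
    return {**event, "status": "enriched"}

def store(event: Event) -> Event:
    # Stand-in for a persistence step.
    return {**event, "stored": True}

def make_workflow(*steps: Callable[[Event], Event]) -> Callable[[Event], Event]:
    # Compose independent steps into a single runnable pipeline;
    # the workflow layer only handles sequencing.
    def run(event: Event) -> Event:
        for step in steps:
            event = step(event)
        return event
    return run

pipeline = make_workflow(validate, enrich, store)
result = pipeline({"order_id": 42})
```

Each step stays independently testable and replaceable; only the sequencing lives in the workflow, which is what keeps the application modular.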
2. Serverless on Kubernetes becomes the standard, driving serverless on-premises and multi-cloud
In 2018, Kubernetes became the de facto standard for container orchestration across cloud providers and – in essence – is becoming the default operating system and the number one enabler for cloud-native applications. As Kubernetes becomes ubiquitous, it will also become the standard for running serverless applications. Kubernetes is well suited as serverless infrastructure: serverless apps can take advantage of its built-in features – the scheduler, cluster management, scaling, service discovery, networking, and more – all required for a serverless runtime, along with portability and interoperability across environments.
This standardization on Kubernetes as the infrastructure for serverless lets organizations run serverless applications in their own data centers or in multi-cloud environments, without being locked into a specific public cloud service or incurring additional cloud costs. Enterprises can benefit from the speed, cost savings, and improved utilization of serverless while leveraging their own data centers, and they can port serverless apps between environments or even to the edge. All of this increases enterprise adoption of serverless and makes it a compelling architecture – not just for accelerating development of new applications, but also as a pattern for modernizing brownfield, legacy apps.
As Kubernetes deployments around cloud-native architectures gain further refinement, expect Kubernetes-based FaaS frameworks to integrate with service meshes and chaos engineering concepts. In other words, if Kubernetes is the new Linux, then serverless is the new Java Virtual Machine.
3. Serverless will be applied to stateful and long-running apps as well
While serverless is still mostly used for short-lived, stateless workloads, we'll see growing adoption of serverless for stateful use cases – fueled by advances in both serverless technologies and Kubernetes-based storage solutions.
Examples of such workloads include testing and validating machine learning models, and applications that perform complex credit checks with wait states in between. Serverless workflows will be key to ensuring that such use cases not only perform well but also scale as needed.
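To make the wait-state idea concrete, here is a hedged Python sketch – the field names and scoring threshold are invented – that models a credit check as a generator suspending at each external wait state. A real serverless workflow engine would persist that suspended state between function invocations rather than keep it in memory:

```python
def credit_check(application):
    # Suspend until the credit bureau responds (first wait state).
    application["bureau_score"] = yield "await_bureau_response"
    # Suspend again until employment is verified (second wait state).
    application["employer_ok"] = yield "await_employer_verification"
    # Final decision once all external responses have arrived.
    application["approved"] = (
        application["bureau_score"] >= 650 and application["employer_ok"]
    )
    return application

def run_workflow(gen, responses):
    # Drive the workflow, feeding in each external response as it "arrives".
    result = None
    try:
        gen.send(None)        # advance to the first wait state
        for value in responses:
            gen.send(value)   # resume with the awaited response
    except StopIteration as done:
        result = done.value   # the workflow's final return value
    return result

outcome = run_workflow(credit_check({"applicant": "a-123"}), [700, True])
```

The generator captures the essential property of such workloads: the logic pauses for external events rather than running to completion in one short-lived invocation.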
4. Serverless tooling will enter an era of transformation
Immature tooling has been a persistent issue for serverless and FaaS. This includes developer tooling, operational tooling, and ecosystem support.
In 2019, the leading FaaS projects will begin to take an assembly-line view of tooling, with a vastly improved developer experience, unit testing, capabilities such as live reload, and smooth CI/CD pipelining. GitOps as a paradigm for FaaS development will also take off in 2019. It ensures that every artifact is versioned in Git and can be used for rollbacks or roll-forwards, solving the versioning challenge that bedevils fast-moving, frequently updated projects.
5. In 2019, serverless cost will become an issue
As more enterprises adopt serverless for large-scale, mission-critical applications, and as load increases, the cost of serverless offerings on public clouds – and the resulting cloud lock-in – will become a growing concern.
In 2019, companies will attempt to rein in cloud costs and ensure interoperability and portability by standardizing on open source serverless solutions and Kubernetes, by employing strategies to always use the optimal cloud provider without having to re-code an application, and by running serverless on their own private clouds. This last point will have a dramatic impact on the bottom line: improving resource utilization and leveraging existing investments in on-premises data centers to deliver the same developer and cloud-operations experience as the public cloud.
We expect these predictions to hold as markers of a greater serverless adoption wave, in which every application component is modeled as a service, executed on triggers, and run only for the duration needed to satisfy each request. When fully embraced end to end, this model will simplify not only what it takes to write software, but will also make it possible to write software guaranteed to run as fast as possible, at the lowest cost, and securely.