Ready, set, GO!

Serverless is ready for production

Nathan Taggart


Serverless is growing quickly in the cloud computing market. With major service offerings from Amazon, Google, and Microsoft, this technology is here to stay. In this article, Nathan Taggart of Stackery.io explains why serverless is ready for production.

In late 2014, Amazon Web Services (AWS) released a new product offering called AWS Lambda, a Function-as-a-Service (FaaS) offering colloquially known as “serverless.” What, at first glance, may have seemed a simple and curious little service has been quietly and quickly exploding in the cloud computing market. Lambda is at the forefront of a massive shift in cloud computing.

As one of over 100 products and services offered by AWS, you’d be excused for having glossed over it for the past few years. But, as Microsoft answered back in 2015 with Azure Functions, followed soon by Google Cloud Platform’s Cloud Functions, it’s rapidly becoming apparent that serverless is something to pay attention to.

Serverless

Serverless, at least when used in the FaaS sense, is fundamentally an on-demand, managed compute offering. It abstracts the underlying infrastructure away from software developers so they can focus on code. Developers connect their functions to event triggers (like an API call, a file upload, or a database transaction), and the cloud provider is responsible for ensuring that the code runs in response.
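To make the model concrete, here is a minimal Lambda-style handler sketch in Python. The function name and event fields are illustrative assumptions (an API Gateway-style trigger that carries an HTTP body in `event["body"]`); real payloads vary by event source.

```python
import json

def handler(event, context):
    # For an HTTP/API-style trigger, the request body typically arrives
    # as a JSON string in event["body"].
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    # The provider spins up the runtime, invokes this function, and tears
    # it down afterward; no server provisioning is involved.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The key point is that the developer writes only this function; wiring it to a trigger and scaling it per invocation is the platform's job.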

SEE MORE: “Serverless is a revolution of the cloud”

Beyond the nominal advantage of not managing infrastructure directly (at least, not the compute infrastructure), serverless offers pay-per-execution pricing, which can be substantially more cost effective than running traditional cloud servers by the hour. In addition, serverless enables a new, event-driven architectural model. Rather than forcing an entire application into a real-time request-response paradigm, event-driven architectures work well for asynchronous tasks, "bursty" and unpredictable traffic, and complex distributed systems. By many accounts, this will be the model of the new web.
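A back-of-the-envelope comparison shows why pay-per-execution can win for bursty workloads. All prices below are hypothetical placeholders, not actual vendor rates; the structure of the calculation (per-request fee plus per-GB-second compute fee, versus an always-on hourly instance) is what matters.

```python
# Hypothetical prices for illustration only -- not real vendor rates.
PRICE_PER_MILLION_REQUESTS = 0.20    # USD per 1M invocations
PRICE_PER_GB_SECOND = 0.0000166667   # USD per GB-second of compute
SERVER_PRICE_PER_HOUR = 0.10         # USD for a small always-on instance

def faas_monthly_cost(requests, avg_duration_s, memory_gb):
    """You pay only while code is actually executing."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

def server_monthly_cost(hours=730):
    """A traditional server bills for every hour, idle or not."""
    return hours * SERVER_PRICE_PER_HOUR

# A bursty workload: 2M requests/month, 200 ms each, 512 MB of memory.
faas = faas_monthly_cost(2_000_000, 0.2, 0.5)
server = server_monthly_cost()
print(f"FaaS: ${faas:.2f}/month vs. always-on server: ${server:.2f}/month")
```

Under these assumptions the function-based bill is a few dollars while the idle-capable server costs an order of magnitude more; the advantage shrinks, of course, as traffic becomes steady and sustained.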

Of course, as with any emerging technology, those at the forefront of adoption face a variety of implementation challenges. It's tempting, but inaccurate, to think that operations responsibility is offloaded to the cloud provider in a serverless model. Early adopters are quickly learning that this is not the case: because FaaS instances spin up and down within seconds – and only run as long as code execution lasts – lifecycle monitoring must be conducted in real time, with metrics, log collection, and error diagnostics extracted in the moment. In this new event-driven world, what's been missing are developer tools to handle boilerplate concerns such as dependency management, environment configuration, and CI/CD pipeline integration, to name but a few.

SEE MORE: The benefits and drawbacks of serverless computing

These operational problems are why my team built Stackery. We’ve spent more than a year tackling the toughest challenges with serverless, perfecting the best operational practices, and creating our Serverless Operations Console. After nine months of a production beta, with over 1000 AWS deployments, we’re releasing Stackery as a V1 publicly available release.

Stackery was built with one overarching goal: help organizations run serverless in production. We’ve been focused on the key development and operational obstacles that have left too many companies on the sideline of this technology trend.

Serverless is transforming cloud infrastructure. It’s backed by every major cloud provider, is significantly less expensive to run, and provides greater flexibility in how applications can be designed. We’re happy to say, serverless is ready for production.

Author

Nathan Taggart

Nathan Taggart is the CEO at Stackery.io. Find him on Twitter @ntaggart.

