“The serverless future will depend on the serverless orchestrator”
We spoke with Veselin Pizurica, CTO and co-founder of Waylay, about the serverless paradigm: what concerns enterprises have regarding serverless adoption and security, how they can achieve monitoring and observability of serverless applications, and how the world of serverless will evolve.
JAXenter: Hello Veselin and thanks for taking the time for our interview! Serverless is not the “new kid on the block” in the world of development anymore. Still, not that many enterprises have adopted the new paradigm already. Why is that?
Veselin Pizurica: That is correct: two distinct impediments are hampering serverless adoption. Both are confirmed and described in the O’Reilly survey on serverless: Concerns, what works, and what to expect.
Everything starts with the rather simple idea of developing microservices and connecting different cloud functions, but in most cases this leads to growing architectural complexity. We often see this manifest as problems with tracing and observability, and the architectural complexity is usually accompanied by a complex deployment model.
The second hindrance is related to non-technical concerns and fear: fear of losing control, fear of vendor lock-in, fear of weak security, fear of unpredictable cost, and so on.
Fear of losing ownership kicks in because you are delegating your application and business operation to someone else. Many big companies offload their workload to the cloud without knowing whether the total cost of ownership of the application will match their business goals. They are charged by volume of consumption, so their cost might skyrocket while their business is still in its early stages. And because the cost is driven by the use of cloud microservices, cloud-native architects have also become responsible for the OPEX cloud cost.
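To see why per-volume pricing can surprise an early-stage business, here is a back-of-the-envelope cost model. The rates and the `monthly_cost` helper are illustrative assumptions, not any vendor’s actual pricing:

```python
# Rough serverless cost model: pay per invocation plus per GB-second of compute.
# Both rates below are placeholder assumptions, not real vendor prices.
PRICE_PER_MILLION_REQUESTS = 0.20  # assumed, USD
PRICE_PER_GB_SECOND = 0.0000167    # assumed, USD

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate the monthly bill for one function."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 10M invocations/month at 200 ms and 512 MB is cheap...
low = monthly_cost(10_000_000, 0.2, 0.5)
# ...but the bill scales linearly with traffic, whether or not revenue does.
high = monthly_cost(1_000_000_000, 0.2, 0.5)
print(f"${low:.2f} -> ${high:.2f}")
```

The point of the sketch is the linearity: 100x the traffic means 100x the bill, with no fixed ceiling to plan around.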
JAXenter: You said that there are concerns regarding security. Can you give us a little insight into why the security aspect is problematic?
Veselin Pizurica: Attacks can come from all directions: by getting into the network via compromised devices, by hijacking the login credentials of a person with access to the application, or by exploiting a weak or outdated security stack in the application.
Therefore, each of these threats needs to be addressed carefully. Security and identity are extremely important aspects of any software solution. I mention identity separately, even though it is often seen as part of Identity and Access Management (IAM), because in the IoT world, proving the identity of a device is not trivial.
Having said that, LPWAN solutions such as LoRa and Sigfox are extremely robust in that sense. Another aspect of security is API integration with third-party systems. Trust and access scope are often established via OAuth2 integration or API keys, and in any case, very careful handling of these communication patterns and capabilities needs to be provided by SaaS integration platforms. Finally, analyzing usage patterns from the outside and the inside (by monitoring network traffic patterns, CPU load, etc.) should be an integral part of any SaaS solution.
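As a minimal sketch of the “careful handling” of API keys mentioned above, one common pattern is to sign each request with an HMAC so the shared secret itself never travels over the wire. The function names and the signing scheme here are hypothetical, assuming a shared secret between client and platform:

```python
import hashlib
import hmac

def sign_request(api_secret: bytes, method: str, path: str, body: bytes) -> str:
    """Client side: derive a signature from the request contents."""
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(api_secret, message, hashlib.sha256).hexdigest()

def verify_request(api_secret: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign_request(api_secret, method, path, body)
    return hmac.compare_digest(expected, signature)

secret = b"shared-api-secret"  # hypothetical shared secret
sig = sign_request(secret, "POST", "/v1/events", b'{"temp": 21}')
print(verify_request(secret, "POST", "/v1/events", b'{"temp": 21}', sig))
```

Because the signature covers method, path, and body, a tampered request fails verification even if the attacker captured an earlier valid signature.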
JAXenter: What about monitoring and observability of serverless applications? Aren’t there tons of tools available for this?
Veselin Pizurica: One might ask how we got here in the first place. If the promise of serverless is to make things simple, by creating and sharing small snippets of code and mixing them with other services, how is it that everyone talks about traceability, debugging, and observability as the major problems in serverless? The answer is deeply buried in the problem of stateless functions: for functions to be reusable, they must be stateless.
But the overall logic that uses these functions is not stateless; the context in which the functions are used needs to be captured. That is what we call the “stateful” part of the process. It might be object relations, or a “pointer” into the overall process flow.
That is where observability comes into the picture: making sense of all these moving parts. To solve this problem, we don’t need yet another, better observation tool. Before we even talk about observation tools, we need a good orchestration paradigm, something that can “guide” these code snippets and make sense of all the moving parts.
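To make the stateless/stateful split concrete, here is a toy orchestrator; the task names and context fields are invented for illustration. Each task is a pure, reusable function, while the orchestrator alone carries the shared context from step to step:

```python
# Stateless, reusable "functions": each reads from the context it is given
# and returns only its own result, never holding state of its own.
def read_sensor(ctx):
    return {"temperature": ctx["raw_value"] / 10}

def check_threshold(ctx):
    return {"alarm": ctx["temperature"] > 25}

def notify(ctx):
    return {"notified": ctx["alarm"]}

def orchestrate(tasks, initial_context):
    """The 'stateful' part: carry context across stateless steps."""
    ctx = dict(initial_context)
    for task in tasks:
        ctx.update(task(ctx))  # each step sees what earlier steps produced
    return ctx

result = orchestrate([read_sensor, check_threshold, notify], {"raw_value": 302})
print(result)
```

Observability of the process then means observing the orchestrator’s context, not instrumenting each individual function, which is exactly why the orchestration paradigm has to come first.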
JAXenter: Serverless in itself is an abstraction of cloud computing. With Waylay IO you now provide yet another layer of abstraction for serverless. What are the benefits of such a high level of abstraction from “what is going on” and what are the downsides?
Veselin Pizurica: Let’s start with the downsides. Experienced developers and architects are always skeptical when they hear about yet another model-based automation platform. They feel that soon they will discover “gotchas”: instead of frameworks helping them with implementation, they will need to work around the limitations of the framework imposed on them.
There is a term used in computability theory called Turing completeness. If somebody says “my new thing is Turing complete,” it means that, in principle, it can be used to solve any computational problem. Programming languages are Turing complete. When serverless hit the mainstream, it was widely accepted as the best candidate for the “low-code Lego brick approach.” And that brings me to the story of Turing-complete automation: if we are to use code snippets to implement our logic, we need an extremely powerful rules engine that can orchestrate these code snippets without resorting back to coding it all in a programming language; otherwise, what’s the point?
At Waylay, we have created an automation technology that is almost, if not fully, Turing complete and can orchestrate these snippets of code. That means you don’t have to code the lower layers. We also want to liberate developers from stitching all these microservices together, which means you don’t have to worry about infrastructure. Our ultimate goal is to free developers from getting bogged down in things that have nothing to do with the problems they are supposed to solve. The Waylay platform is a pre-made automation stack where API gateways, multi-tenancy, lambdas, databases, and all other services are embedded and pre-installed. Developers have nothing to set up and nothing to manage.
JAXenter: Serverless and Machine Learning – how are they connected, or rather: How CAN you connect these two?
Veselin Pizurica: There is already one very successful “marriage” in place, called machine learning pipelines, used for training and benchmarking machine learning models. In most cases, they are built using serverless triggers on new data arrival, for instance S3 object store triggers that invoke post-processing serverless functions, which might execute model training or model validation.
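A minimal sketch of such a trigger, assuming an AWS Lambda handler wired to S3 object-created events; the bucket name, object key, and the retraining action are placeholders for whatever the real pipeline would launch:

```python
def handler(event, context):
    """Hypothetical Lambda entry point for S3 object-created events:
    each new training file kicks off a retraining/validation step."""
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: a real pipeline would start a training job here,
        # e.g. by calling a batch service or another function.
        jobs.append({"bucket": bucket, "key": key, "action": "retrain"})
    return {"started": len(jobs), "jobs": jobs}

# A trimmed-down event of the shape S3 delivers to the function:
sample_event = {"Records": [{"s3": {"bucket": {"name": "training-data"},
                                    "object": {"key": "2024/batch-01.csv"}}}]}
print(handler(sample_event, None))
```

The pipeline itself stays simple precisely because the trigger carries all the context the function needs; the harder questions below start once the model’s behavior, not the data’s arrival, has to drive what happens next.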
In the wider context of automation, the path is less obvious. When the model starts drifting, do we retrain it or not? If the model was used for anomaly detection, what made it underperform? Is it a data problem, a model issue, or a real anomaly? When the same model is applied for prediction, say energy consumption prediction, we want to retrain it constantly, so that new data and model benchmarks stay as close as possible to reality.
When we use models for process optimization, we might need yet another feedback loop to decide what to do with the machine learning model’s outcomes. Going through this thought process, we realize that applying machine learning models is more than a simple pipeline problem. It eventually becomes a process automation and orchestration problem.
JAXenter: How will the world of serverless evolve in the coming years?
Veselin Pizurica: The future of serverless, beyond the use cases of pipelines and simple CRUD, infinitely scalable, HTTP-exposed functions, will depend on the serverless orchestrator. That is exactly what Waylay IO has to offer.