- The breaching of natural scaling limits
- Node.js and Docker
- … and of course Microservices.
That’s good news for startups: you get not only easy accessibility and high development speed, but also the necessary performance. Add to that support for asynchronous behavior and events, originally introduced for slow networks and GUIs, as well as rudimentary functional features that at first seemed an odd fit in the language design. All of this makes writing scalable software even easier.
JAXenter: Of course, a special role is played by Node.js – but even that has its natural limits. How far can you go down this road without aid?
Johann-Peter Hartmann: Of course, the initial approach of Node.js – back then a rather simple wrapper around V8, sitting on a port and listening for traffic in a continuous loop – didn’t scale at all. After all, only one CPU core is involved, and even in 2011 the average server offered much more than that. The rest just sat around unused… The catch here is what we call the Highlander factor of Google’s solid base technology: there can be only one, and that one lives forever.
Just think about the numbers for a moment: nowadays, a simple Node script can process up to 20,000 requests per second on a single core. With typical interaction patterns, that translates to 50,000 users online at the same time during peak hours. If you assume one visit per week as normal user behavior, a single core is already capable of serving 10 million active users.
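As a sanity check, the arithmetic behind these figures can be sketched in a few lines of JavaScript. The per-user request rate and the peak-to-active ratio are assumptions chosen to reproduce the claimed numbers, not measurements:

```javascript
// Back-of-envelope check of the figures above.
const reqPerSecPerCore = 20000;   // claimed Node throughput per core
const reqPerUserPerSec = 0.4;     // assumption: one request every 2.5 s per online user
const concurrentUsers = reqPerSecPerCore / reqPerUserPerSec;

const peakToActiveRatio = 200;    // assumption: with one visit per week,
                                  // roughly 0.5% of active users are online at peak
const activeUsers = concurrentUsers * peakToActiveRatio;

console.log(concurrentUsers); // 50000
console.log(activeUsers);     // 10000000
```

Under these assumed rates, one core covers the 50,000 concurrent and 10 million active users mentioned above.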
Even if this is just a theoretical scenario, the choke point is not to be found in Node, but rather in storage, the network, or the application’s logic. That is typically where the scaling problems arise. Nonetheless, in the form of its cluster module, Node itself already contains the ability to start child processes and distribute requests among them, thereby using all cores. The next step is to distribute across several servers, typically via a reverse proxy. Although the Node world provides its own solutions, developers usually resort to NGINX or HAProxy.
But yes, you can still do a lot with a plain Node infrastructure. It just gets tiresome at some point, for example when you need elasticity or geographical distribution.
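Fronting several Node instances with a reverse proxy, as mentioned above, might look like this minimal NGINX sketch; the hosts, ports and load-balancing policy are hypothetical:

```nginx
# Hypothetical upstream of Node instances on three app servers.
upstream node_app {
    least_conn;               # send each request to the least busy instance
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    server 10.0.0.3:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # keep WebSockets working
        proxy_set_header Connection "upgrade";
    }
}
```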
JAXenter: There are currently ways to break through these limits – you will present them in your Night Session “Drone Wars”. Can you briefly outline your solution?
Johann-Peter Hartmann: The tools for this already exist; they are a key component of cloud infrastructures and especially popular at Amazon, the pioneer in this field. But setting it all up has traditionally been a painstaking and time-consuming task: it takes a lot of specialized knowledge, brainpower and patience to operate this kind of infrastructure.
To be honest, the whole PaaS business of companies like Heroku and EngineYard is built on this complexity. They package their knowledge in a more accessible form, allowing us simple application developers to use it too.
It is only in the last two years that a slow process of democratization has taken place in this field. This wasn’t the result of a smart strategy; rather, it happened more or less by accident – again. Containers have existed in a wide range of variations for more than ten years: OpenVZ, Linux-VServer, even LXC is already eight years old. But as with Node, it took someone to build the proper easy-to-handle interface for the technology and a lightweight, open infrastructure around it: that’s right, I’m talking about Docker.
Suddenly, containers were easy to create, maintain and reuse. Like a Lego set, developers can now build their own personalized platform without tying themselves to the conventions and costs of a PaaS. The “container” as a unit can be put to good use on the infrastructure side, too. On top of that, a variety of mature, powerful tools like CoreOS, Kubernetes, Deis or Apache Mesos allow container infrastructures to scale in virtually unlimited ways.
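For illustration, a minimal Dockerfile for such a containerized Node.js microservice could look like this; the base image, port and entry-point file name are assumptions, not taken from the interview:

```dockerfile
# Hypothetical minimal image for a Node.js microservice.
FROM node:alpine                  # assumption: any maintained Node base image
WORKDIR /app
COPY package.json ./
RUN npm install --production      # install only runtime dependencies
COPY . .
EXPOSE 8000                       # assumed service port
CMD ["node", "server.js"]         # assumes the entry point is server.js
```

Once built, the same image runs unchanged on a laptop, a CI server, or any of the orchestrators mentioned above.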
This is the topic of my Night Session – I want to show how to benefit from this democratization and how easy it has become to get started. To keep it credible, I use a simple microservices structure to demonstrate that these setups are fully capable of reproducing more complex environments.
Node.js, Docker and Microservices to the rescue!
JAXenter: So again, containers play a major role. How difficult is it to bring Node.js and – for example – Docker together?
Johann-Peter Hartmann: Measured by the required knowledge – or, let’s say, the time necessary to acquire that knowledge – the answer is: there is not much difficulty to it at all. As I already mentioned, the underlying philosophies are very much alike – a powerful technology in its prime, combined with a simple interface and a high-quality infrastructure for sharing solutions. That way it takes minutes, or maybe a couple of hours, to get familiar with it.
Also, the easy access to many existing solutions satisfies the principle of “earning while learning”: I can acquire the necessary knowledge while using the tools. This is a clear contrast to the classical way of scaling out on Amazon’s cloud, and it comes close to elegant interfaces like Heroku’s.
JAXenter: Microservices in general have experienced downright hype, even though there are some critical voices, too. Are microservices really the last word on the subject?
Johann-Peter Hartmann: I for one don’t think that the discovery of microservices will end the software crisis we’ve been experiencing for some time now. I have seen that strange microservices happiness befall a team of six people who seriously considered replacing an application of manageable size with microservices, just because the existing monolith had become impossible to maintain. Obviously, that can’t work. If you fail when dealing with normal complexity, you can’t win by adding even more complexity in the form of microservices.
Of course this isn’t to say that these developers are incapable or that microservices are a faulty solution! But often there is an organizational problem lurking in the background that made the first piece of software so difficult to maintain in the first place. Consequently, no architecture can offer a solution by itself; instead, architecture and organization must be viewed and evolved as a whole. The “Inverse Conway Maneuver” – designing the organization in a way that leads to the right architecture – is very popular among the microservices crowd. It’s a somewhat naïve approach, but it goes in the right direction. Architecture and organization will fully merge in the foreseeable future, and microservices will stay with us in any case.
JAXenter: From your experience, where are the limits of scalability if all current technologies were combined?
Johann-Peter Hartmann: This may sound arrogant, but I think we technicians in web and mobile development have done quite a good job. Facebook serves roughly 14 percent of humankind every day, and there are approximately four WhatsApp messages per day per person. Before we start to scale even better, I’d say we need new problems first.