Interview with Johann-Peter Hartmann

JavaScript, Scaling and Microservices – a team that can’t be beaten

Gabriela Motroc
Johann-Peter Hartmann

JavaScript and Scalability – isn’t this just asking for trouble? Or are we already crossing the finishing line? And what role are Node and Docker playing in this epic tale?

Does Node.js scale? Of course it does. But what about JavaScript in general? What happens if you really want to scale – beyond the limits of a single server, or even of a whole cluster? These are fascinating and important questions, especially today, when the most successful applications are the ones that can adapt to ever-growing user numbers.

This brings us to Johann-Peter Hartmann’s Night Session titled “Drone Army – Scaling JavaScript”, which he will hold at the JavaScript Days 2016 in Munich. We couldn’t pass up the opportunity to shed some light on the upcoming session.

Enjoy our insights into – and outlook on – the world of scaling and JavaScript.

JavaScript and Scaling? Purely by accident!

JAXenter: Johann, today it’s all about scalability. An app capable of swiftly adapting to growing user numbers is most likely to become one of the frontrunners of the app game. How is JavaScript doing in this field?

Johann-Peter Hartmann: Historically, JavaScript got involved in this field purely by accident. The whole language design was conceived for an entirely different purpose, focusing far more on development speed, accessible functionality and the ability to wire things together as glue code. For really big platforms you would expect a strongly typed language, whose engine is spared continuously evaluating types and casting. A language like Java – with its solid type system and the far greater amount of thought put into its compiler and the JVM – is actually made for this kind of work.

Unfortunately – or fortunately, from today’s perspective – things changed for JavaScript. With Web 2.0, applications became a huge part of the web, making JavaScript the No. 1 GUI language. The browser war thereby turned into a JavaScript engine war, providing us with brilliant JavaScript engines like Google’s V8, Nitro and Chakra. And considering the already quite solid spread of asm.js support, we still haven’t reached the end of this development: asm.js code already reaches around 50 percent of the performance of optimized, compiled C.

That’s good news for startups: not only do I get easy accessibility and high development speed, but also the necessary performance. Add to that the support for asynchronous behavior and events, originally introduced for slow networks and GUIs, as well as rudimentary functional features, which at first seemed an odd fit in the language design. All this makes writing scalable software even easier.

JAXenter: Of course, a special role is played by Node.js – but even that has its natural limits. How far can you go down this road without aid?

Johann-Peter Hartmann: Picture Node.js as the striker scoring the goals with the balls passed to it by the JavaScript engine war. Node, npm and the already incredibly rich world of tools around them made JavaScript very easy and accessible on the server side. In addition, the success of Node shows how well today’s problems can be met with just small, flexible JavaScript combined with a powerful engine and an open infrastructure.

Of course, the initial approach of Node.js – back then a rather simple wrapper around V8, just sitting on a port and listening to traffic in a continuous loop – didn’t scale at all. After all, only one CPU core is involved, and even in 2011 the average server offered far more than that. The rest just sat around unused… The evil part is what we call the Highlander factor of the solid Google technology underneath: there can be only one, and that one lives forever.

Just think about the numbers for a moment: nowadays, a simple Node script can process up to 20,000 requests per second per core. With typical interaction patterns, that corresponds to 50,000 users online at the same time during peak hours. If you assume one visit per week as normal user behavior, a single core is already capable of serving 10 million active users.
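As a back-of-envelope check, those figures hang together under a particular interaction pattern; the parameters below (one request every 2.5 seconds per online user, one in 200 weekly actives online at peak) are our own assumptions, chosen to match the interview’s numbers:

```javascript
// Back-of-envelope sketch of the figures above; the interaction
// assumptions are ours, not from the interview.
const reqPerSecondPerCore = 20000;   // what one core can handle
const secondsBetweenRequests = 2.5;  // assumed user "think time"
const concurrentUsers = reqPerSecondPerCore * secondsBetweenRequests;

const peakToWeeklyRatio = 200;       // assumed: 1 in 200 actives online at peak
const weeklyActiveUsers = concurrentUsers * peakToWeeklyRatio;

console.log(concurrentUsers);   // 50000
console.log(weeklyActiveUsers); // 10000000
```

Change either assumption and the headline numbers move with it, which is exactly why this stays a theoretical scenario.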

Even if this is just a theoretical scenario, its choke point is not Node but storage, network or the application’s logic – that is typically where scaling problems arise. Nonetheless, with its cluster module, Node itself already includes the ability to start child processes and distribute requests among them, thereby using all cores. The next step is to distribute across several servers, typically via a reverse proxy. Although the Node world provides its own solutions here, developers usually resort to NGINX or HAProxy.

But yes, you can still get a lot done with a plain Node infrastructure. It just gets tiresome at some point, for example when it comes to elasticity or geographic distribution.

JAXenter: There are currently ways to break through these limits – you will present them in your Night Session “Drone Wars”. Can you briefly outline your solution?

Johann-Peter Hartmann: The tools for this already exist; they are a key component of cloud infrastructures and especially popular at Amazon, the pioneer in this field. But setting it all up has traditionally been a painstaking and time-consuming task – operating this kind of infrastructure takes a lot of specialized knowledge, brainpower and patience.

To be honest, the whole PaaS business of companies like Heroku and EngineYard is built on this complexity. They package their knowledge in a more accessible form, allowing us simple application developers to use it too.

It is only in the last two years that a slow process of democratization has taken place in this field. This wasn’t the result of a smart strategy; rather, it happened by accident – again. Containers had existed in a wide range of variations for more than ten years: OpenVZ, Linux-VServer, even LXC is already eight years old. But as with Node, someone first had to build the proper easy-to-handle interface for the technology, plus a lightweight, open infrastructure around it. That’s right, I’m talking about Docker.

Suddenly, containers were easy to create, maintain and reuse. As with a Lego set, developers can now build their own personalized platform without tying themselves to the conventions and costs of a PaaS. The “container” entity can also be put to good use on the infrastructure side. Plus, a variety of mature, powerful tools like CoreOS, Kubernetes, Deis or Apache Mesos allow container infrastructures to scale in virtually unlimited ways.
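For a flavor of how little ceremony is involved, a hypothetical minimal Dockerfile for a Node service could look like this; the file names and the base image tag are our assumptions, not from the interview:

```dockerfile
# Minimal sketch of a container image for a Node service.
FROM node:lts
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds.
COPY package.json ./
RUN npm install --production

COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

A `docker build` and `docker run` later, the same image runs identically on a laptop, a CI server or a cluster scheduler.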

This is the topic of my Night Session: I want to show how to benefit from this democratization and how easy it has become to get started. For credibility’s sake I use a simple microservice structure, to emphasize that such setups are fully capable of reproducing more complex environments.

Node.js, Docker and Microservices to the rescue!

JAXenter: So again, containers play a major role. How difficult is it to bring Node.js and – for example – Docker together?

Johann-Peter Hartmann: Measured by the required knowledge – or, let’s say, the time needed to acquire that knowledge – the answer is: there is not much difficulty to it at all. As I already mentioned, the underlying philosophies are very much alike: a powerful technology in its prime, combined with a simple interface and a high-quality infrastructure for sharing solutions. It takes minutes, or maybe a couple of hours, to get familiar with it.

Also, the easy access to many existing solutions meets the criterion of “earning while learning”: I can acquire the necessary knowledge while using the tools. This is a clear contrast to the classical way of scaling out on Amazon’s cloud, and it comes close to elegant interfaces like Heroku’s.

JAXenter: Microservices in general have experienced a downright hype, even though there are some critical voices too. Are microservices really the last word on the subject?

Johann-Peter Hartmann: I for one don’t think that the discovery of microservices will end the software crisis we’ve been experiencing for some time now. I have seen that strange microservices happiness befall a team of six people who were seriously considering replacing an application of manageable size with microservices, just because the existing monolith had become impossible to maintain. Obviously that can’t work: if you fail at normal complexity, you can’t win by adding even more complexity in the form of microservices.

Of course this isn’t to say that these developers are incapable or that microservices are a faulty solution! But often there is an organizational problem lurking in the background that made the first piece of software so difficult to maintain in the first place. Consequently, no architecture can offer a solution by itself; instead, architecture and organization must be viewed and moved as a whole. The “Inverse Conway Maneuver” – designing the organization in a way that leads to the right architecture – is very popular among the microservices crowd. It’s a quite naïve approach, but it goes in the right direction. Architecture and organization will fully merge in the foreseeable future, and microservices will stay with us in any case.

JAXenter: From your experience, where would the limits of scalability lie if all current technologies were combined?

Johann-Peter Hartmann: This may sound arrogant, but I think we technicians in web and mobile development have done quite a good job. Facebook serves roughly 14 percent of humankind every day, and there are approximately four WhatsApp messages per human per day. Before we start to scale even better, I say we need new problems first.


Gabriela Motroc
Gabriela Motroc was an editor of JAX Magazine. Before working at Software & Support Media Group, she studied International Communication Management at The Hague University of Applied Sciences.
