Four expert opinions on the future of container technology

All eyes on “Container 2.0”: What will be the next battle in the container revolution?

Hartmut Schlosser

Docker started a container revolution that has completely changed the way modern software is developed and operated. In this expert checklist, we focus on what lies at the heart of this revolution and what the next step of “containerization” might look like.

Welcome to the era of Container 2.0

Our discussion is based on the vision Mesosphere CEO Florian Leibert described in his blog post “Welcome to the era of Container 2.0”. Leibert argues that truly applying container technologies takes more than packaging and running code in containers (what he calls “Container 1.0”).

According to Leibert, the next phase, called “Container 2.0”, will be about orchestrating stateless and stateful services across a distributed infrastructure rather than running individual containers. At a higher level of abstraction, entire applications become coherent, deployable objects made up of many containers and infrastructure services such as databases or message queues.

At its simplest, Container 2.0 is the ability to run (and orchestrate) both stateless and stateful services on the same set of resources.

However, while this stateless-plus-stateful definition is accurate, easy to grasp and very powerful—especially for anyone who has built applications that connect to big data systems such as Kafka or Cassandra—even it is not comprehensive.

Realistically, delivering Container 2.0 means delivering a platform that can run application logic along with the gamut of backend services on shared infrastructure, combining all workloads onto a single platform that improves efficiency and simplifies complex operations. The collection of capabilities that modern applications require includes monitoring, continuous deployment, relational databases, web servers, virtual networking and more.
– Florian Leibert

We invited four experts to share their opinions on Leibert’s position and to offer insights into their vision of a world run by Container 2.0 technology. We also asked them about significant trends in the current container sphere that might lay the foundation for the next level of containerization.

Our container experts


Johannes Unterstein – Head of JUG Kassel and Distributed Applications Engineer at Mesosphere.


Rainer Stropek – Founder of software architects GmbH and MVP for the Windows Azure Platform.


Roland Huß – Principal Software Engineer at Red Hat and developer of the fabric8io/docker-maven-plugin.


Philipp Garbe – Docker Captain and Lead Software Developer at AutoScout24.

In your opinion: what needs to be part of a Container 2.0 world?

Container 2.0 addresses both stateful containers and their combination with “big data” frameworks

Johannes Unterstein: Persistence is one of the big challenges in today’s container landscape. Stateless containers are fairly easy to handle: after a failure, they can simply be restarted on any node. Stateful containers are not that easy. What happens when, after a network issue has been fixed, a container with persistent data comes back online while another instance has already been launched in its place?

Container 2.0 addresses both stateful containers and their combination with “big data” frameworks like Spark, Cassandra and Kafka, ideally running on the same cluster as the containers.

Rainer Stropek: My own work has given me a different perspective. Running a container infrastructure is not that important to me; I look at containers and Docker from a software developer’s point of view. How do containers fit into the software lifecycle? How do I build container-friendly architectures? How do I write code that deploys well in a container environment? How do I automate packaging software into images as part of a CI/CD pipeline? Those are the questions that concern me most.
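
As a rough illustration of that last question, here is a minimal sketch of a CI packaging step using the Docker SDK for Python. The interview doesn’t prescribe any particular toolchain, and the image name, tag and registry below are purely hypothetical.

```python
import docker

# Minimal sketch of a CI/CD packaging step with the Docker SDK for Python.
# Image name, tag and registry are placeholders, not taken from the article.
client = docker.from_env()

# Build the image from the project's Dockerfile and tag it with the build number.
image, build_logs = client.images.build(path=".", tag="registry.example.com/myapp:1.0.42")
for entry in build_logs:
    print(entry.get("stream", ""), end="")

# Push the freshly built image to the registry the deployment stage pulls from.
for line in client.images.push("registry.example.com/myapp", tag="1.0.42",
                               stream=True, decode=True):
    print(line)
```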

I’d rather stay away from operating containers myself; doing that in a truly professional way is not easy. PaaS in the cloud is my answer to that problem, with Azure App Service being a perfect example. Microsoft offers a Linux-based PaaS for PHP and Node.js, and the corresponding Dockerfiles are available on Docker Hub. That makes it possible to test locally with the exact images that will later run in the cloud, while Azure App Service takes care of the underlying infrastructure for me.

Roland Huß: The defining feature of container technologies is that all containers share a uniform format and can be handled in the same way, irrespective of their content. The definition of Container 2.0 quoted above drops this criterion altogether. Instead, it emphasizes the orchestration of “stateless” and “stateful” services (as opposed to the purely “stateless” Container 1.0).

I’m not disputing that stateful services with persistent data are one of the biggest challenges right now. There are different approaches to this, such as Kubernetes “PetSets” (now “StatefulSets”) and the two-level scheduler in Mesos, which can distribute specialized workloads and services.
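
For readers who haven’t seen one, here is an illustrative sketch of a StatefulSet submitted with the official Kubernetes Python client; the names, image and storage size are placeholders and not taken from the interview.

```python
from kubernetes import client, config

# Illustrative StatefulSet manifest: every replica gets a stable identity
# (cassandra-0, cassandra-1, ...) and its own persistent volume claim,
# which is what makes stateful services feasible to orchestrate.
manifest = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "cassandra"},
    "spec": {
        "serviceName": "cassandra",
        "replicas": 3,
        "selector": {"matchLabels": {"app": "cassandra"}},
        "template": {
            "metadata": {"labels": {"app": "cassandra"}},
            "spec": {
                "containers": [{"name": "cassandra", "image": "cassandra:3.11"}]
            },
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

config.load_kube_config()  # assumes a local kubeconfig is available
client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=manifest)
```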

However, if you ask me, this has little to do with a definition of “Container 2.0”: only some of the services managed by Mesos, for example, use a standardized container format.

There certainly are stages in the containerization of the world, but to me they are better characterized as a transition from local containers (Docker) to the orchestration of many containers across many nodes (Docker Swarm, Kubernetes and Mesos).

Philipp Garbe: It’s important to me that you can declaratively describe the needs of an application consisting of multiple containers, as well as the dependencies between individual containers. Such declarations are already partially possible, but there is no deployment mechanism that resolves those dependencies automatically. For example, I can specify that my container needs a volume, yet information about its size and required throughput is missing. So I have to build volumes and clusters manually and make them available to my container, which makes automated scaling impossible: I can start additional containers automatically, but I cannot create the necessary volumes that way.
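
To make the volume example concrete, here is a minimal sketch with the Docker SDK for Python (my choice of tool, not Garbe’s): a named volume can be created and attached, but there is no field for declaring its size or throughput, which is exactly the gap he describes. All names and images are illustrative.

```python
import docker

client = docker.from_env()

# Create a named volume; the API accepts a driver and driver options,
# but there is no standard way to declare required size or throughput.
client.volumes.create(name="app-data", driver="local")

# Attach the volume to a container; which host, disk or IOPS class backs it
# is decided outside of this declaration.
client.containers.run(
    "postgres:9.6",
    name="app-db",
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"app-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True,
)
```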

Which container initiative do you believe will shape the future of containerization?

Roland Huß: To me, the standardization efforts around the container format and container runtime by the Open Container Initiative (OCI) are really interesting. Docker did an excellent job of making OS-level virtualization usable by mere mortals, with an outstanding user experience (UX); one cannot thank Docker enough for that. The resulting popularity of the container format defined by Docker has made it the de facto standard.

However, Docker Inc., which as a company legitimately has a financial interest, does not seem all that interested in an open and independent standard. It will be interesting to see if and how the OCI manages to reconcile the interests of Docker with those of the other OCI members. Container orchestration platforms are another topic of major importance right now, with a great deal of energy being invested in them. In contrast to the local container sphere, which Docker dominates, there are many players in this field, above all Kubernetes, Docker Swarm and Apache Mesos.

Docker does not seem all that interested in an open and independent standard.

Yet this also illustrates the conflicts Docker faces as a company: the decision to ship Docker Swarm as part of every Docker installation resembles Microsoft’s aggressive strategy of integrating Internet Explorer 6 into Windows. One can only hope Docker won’t hit the same innovation dead end IE6 did (luckily, there is no reason to assume it will).

Rainer Stropek: As someone focused on the Microsoft development stack, I find Microsoft’s commitment to Docker very interesting. I’m not just talking about Docker-compatible containers on Windows: Microsoft also offers ready-made Linux images on Docker Hub for important technologies like .NET, PowerShell and Azure. A significant part of my world is Linux-based, and it’s nice to see Microsoft fully committed to that trend.

Things start to get interesting when deploying complex applications consisting of multiple containers

Philipp Garbe: Starting a container is not hard anymore. Things get interesting when you deploy complex applications consisting of multiple containers and have to take several constraints into account. Docker, Kubernetes and ECS each have their own approach to that problem, and it will be interesting to see where this goes.
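
For contrast, here is a minimal imperative sketch with the Docker SDK for Python of the kind of wiring (a shared network, start order, port mapping) that orchestrators such as Docker Swarm, Kubernetes or ECS let you express declaratively and then enforce, including the constraints Garbe mentions. Names and images are illustrative.

```python
import docker

client = docker.from_env()

# Hand-wire a two-container application: a user-defined network, a database
# started first, and a web container that reaches it by name. Orchestrators
# turn exactly this wiring, plus placement constraints, restarts and scaling,
# into a declarative description.
client.networks.create("shop-net", driver="bridge")

client.containers.run(
    "postgres:9.6",
    name="shop-db",
    network="shop-net",
    environment={"POSTGRES_PASSWORD": "example"},
    detach=True,
)

client.containers.run(
    "nginx:alpine",
    name="shop-web",
    network="shop-net",
    ports={"80/tcp": 8080},
    detach=True,
)
```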

Johannes Unterstein: There have been many interesting developments recently around shipping containers to production easily and running them reliably. What I find particularly interesting in the current discussion is how quickly the level of debate shifts: sometimes it is very fine-grained, as in the controversial discussion about the Docker runtime, and sometimes very abstract, addressing how to run hundreds or thousands of containers. It’s also worth keeping an eye on the development of standards like CNI for networking.

Quote Zone

Every developer/operator should look into container technologies because …

… they are tools no developer should forgo.
Rainer Stropek

… they make testing apps and tools locally easy. How else would it be possible to run an Elasticsearch cluster locally within minutes? – Philipp Garbe

… they will make their daily work considerably easier.
Johannes Unterstein

… in the future, most applications will be shipped in containers.
Roland Huß

Every manager/IT responsible should look into container technologies because…

… container technology massively influences how we develop software (in a positive way). I’m not just talking about technical aspects, but also organizational ones.
Rainer Stropek

… they increase developer productivity and make more efficient use of existing resources, whether in the data center or in the cloud.
Philipp Garbe

… they make life easier for their employees and help utilize available cluster resources in the best way possible, thereby saving money.
Johannes Unterstein

… they optimize processes and their parameters, such as cost and development time. Besides, it’s nice to see developers and administrators walking around the office smiling ;)
Roland Huß

 

Author
Hartmut Schlosser
Hartmut Schlosser is an editor for JAXenter and a specialist in Java enterprise technologies, Eclipse & ALM, Android and business technology. Before working at S&S Media, he studied computer science, music, anthropology and French philology.
