See Jussi Nummelin talk live at JAX DevOps on 4 April 2017

The automated container deployment pipeline

Jussi Nummelin

“Ship code at the push of a button!” In preparation for his talk at JAX DevOps, speaker Jussi Nummelin shows us how to keep up with contemporary software development: move fast, make rapid changes, and adapt.

Containers have brought great opportunities to organizations – big and small – to enable them to efficiently run their applications and services. As containers provide the same environment for the apps and services in any deployment target, they really sound like some pixie-dust ingredient that makes all your cooking taste wonderful. While containers really make some things easier and more manageable, the truth is that you still have to use them properly. Containers can really make a difference in the automated deployment pipeline.


No, we’re not talking about the highly controversial oil pipelines. 🙂 For us, the automated deployment pipeline means the capability to ship code at the “push of a button”. The button might be a real button in some UI, a git push, or, heck, I’ve seen demos of teams using real physical buttons in their coffee rooms to trigger deployments. I think Martin Fowler summarizes the subject really well in his article Continuous Delivery:

“…Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time.”

Arguably, continuous deployment goes a step further: in the continuous deployment model you deploy pretty much every single change to production. Naturally, that means you have to have the automation in place to handle this rapid deployment cycle.

Why should I use container pipelines?

Today’s business world is extremely fast-paced with new competing start-ups popping up like mushrooms. To be able to compete in this environment you need to be able to move fast, make rapid changes and adapt. In software development this really means that you need to be able to push the changes to production fast.

On the other hand, when you are pushing smaller changes the chances of introducing stop-the-press bugs are smaller. And when the whole pipeline is developed to such an extent that the testing process is fully automated, you have the confidence that every push meets the needed quality criteria.

How can containers help?

Teams were building delivery pipelines with various technologies long before the container hype. Now, with containers on the table, we see lots of potential benefits in using them to build the deployment pipeline, especially when combined with proper container orchestration tooling.

Firstly, and most importantly, containers provide the same environment for your application no matter what the deployment target environment is. We’ve all heard, and probably said, the infamous words: “works on my machine”. There are always slight differences between the testing environment and the production environment. Now when the application is packaged and run within a container, the container image is exactly the same in all the steps in the pipeline.

Secondly, containers provide a “standard” way to deploy and run any application or service of yours. With plain Docker for example, the process is always the same:

docker pull my-app:1.2.3
docker run my-app:1.2.3
# repeat on every machine running the app

And when you are running a container orchestration tool, such as Kontena, your cluster-wide deployment becomes really simple:

kontena stack upgrade my-app my-app.yml  

Container image as a build artifact

When you use a container image as a build artifact, you can ensure that in every step of the pipeline there is zero variance of the app. In practice this means that the container image built should be tagged with a build number, git commit hash or other similar identifier that uniquely identifies the version of the application (sources) being built.


Integration testing

When doing automated integration testing, there’s typically a truckload of dependencies your application needs. Those might be databases, or other downstream apps providing APIs that your application uses. How do you get those easily available during testing? Doh, with containers of course.

With the help of containers and tools like docker-compose it is almost trivial to spin up an integration testing environment with all the dependencies running. This also means that you don’t have to host and maintain dedicated integration environments as anyone can spin up the integration environment in seconds. You can even bake custom database images that contain proper data for testing purposes.

Application deployment

In the last steps of the pipeline, when doing the actual deployment you have two alternatives. One option is to use the same tagged image that was built during the previous steps in the pipeline. This means that you probably have to change the deployment descriptor to refer to this newly built image on-the-fly. With Kontena you’d be using some variable logic in your stack yaml file:

stack: jussi/web
description: My cool web app
version: 0.0.1
variables:
  git_commit:
    type: string
    from:
      env: GIT_COMMIT
services:
  web:
    image: my-app:${git_commit}
    ports:
      - 80:80

The benefit over the second approach is that it’s immediately clear which version of the app is running in production.

Another option is to use a special “deployable-version” tag for your images. In practice this means that you’d be tagging your built and tested image before deploying it to e.g. production with a special tag that is used in your deployment descriptor. With Kontena, the stack yaml would look something like:

stack: jussi/web
description: My cool web app
version: 0.0.1
services:
  web:
    image: my-app:production
    ports:
      - 80:80

And in the pipeline before the actual deployment, you’d have something like this executed:

docker tag my-app:$GIT_COMMIT my-app:production  

The benefit of this approach is that your deployment descriptors don’t change during the pipeline, but you lose some visibility into which exact image is actually running in production. It is still identifiable via the image SHA checksums, as all the layers are shared, but that requires some detective work.

When it comes to the actual application deployment, there’s really no better way than doing it with container orchestration tools. Out of the box, the orchestrator takes care of things like:

  • cluster-wide rolling deployments
  • dynamic load balancer configuration
  • scheduling


Containers are no magic button that’ll make your build and deployment pipeline sing but they can really provide a lot of help when used properly. And especially when the pipeline integrates with container orchestration tools, such as Kontena, you can pretty easily create fully automated cluster-wide deployment pipelines.

This post was originally published on the Kontena Blog.


Jussi Nummelin will be delivering one talk at JAX DevOps which will focus on the solutions and benefits of a deeply-integrated deployment pipeline using technologies such as container management platforms, Docker containers, and CI tooling.


Jussi Nummelin

Jussi Nummelin has orchestrated  and operated numerous software platforms and applications during his 15+ year career. Having worked for companies ranging from mobile operator Elisa, to telecom systems and mobile phone provider Nokia, to systems integrator Digia, Jussi has gained deep and wide experience on creating and running highly scalable fault-tolerant systems. Jussi is now one of the core engineers building container orchestration at Kontena, Inc.
