“It is crucial for Docker to be the industry-wide accepted standard”
Docker is revolutionizing IT — you’re probably hearing this phrase quite often. Still, these questions linger: If we were to look beyond the hype, what’s so disruptive about Docker technology? What are the differences between Docker and a virtual machine? What is hype and where does the real added value lie? We talked with Vincent De Smet about all this and more.
Docker manages to insert itself into all our conversations — why? Because it is extremely helpful and everyone loves it. There’s a lot going on in the Docker world (for example, the Docker platform and Moby Project are now integrating support for Kubernetes) but this is not why we’re doing this interview series with Docker Captains.
Don’t miss our Docker Captains interview series
We’d like to hear more about their love stories with Docker, their likes and dislikes, their battle scars and more. Without further ado, we’d like to introduce Vincent De Smet, DevOps Engineer for Honestbee.
JAXenter: Can you tell us a little bit about your first contact with Docker? Was it love at first sight?
Vincent De Smet: In mid-2013, I was running game servers in my free time on a hobby server in a co-lo and wanted a better way to share the server with friends running different workloads. Virtualization had too much overhead for game servers, so I started reading about LXC and discovered Docker, a hot new project that was rapidly gaining popularity.
After reading more about the concepts, a friend of mine approached me regarding a startup idea, so mid-2014, I saw an opportunity to use Docker to ship and run the backend side of that application. This gave me the chance to use Docker in a low-pressure environment (we never quit our main jobs and worked on the idea on weekends).
I also found that websites such as meetup.com — which Docker cleverly used to build communities around the world — were really great for sharing experiences. I became very active presenting what I learned at the Docker Saigon group, as well as getting feedback from other developers.
Given my background as a consultant, I focused more on staying up to date on the latest evolutions in the Docker landscape and presenting those findings in meetup groups. This prompted Docker to invite me to their Captain program in early 2016.
JAXenter: Docker is revolutionizing IT — that is what we read and hear very often. Do you think this is true? If we were to look beyond the hype, what’s so disruptive about Docker technology?
Vincent De Smet: Machine virtualization revolutionized IT first, by improving stability, manageability, and cost savings. Time-sharing of resources was a fundamental model in early computing (the 1970s), and many time-sharing concepts have been revived as cloud computing since the internet became prevalent.
Comparing Docker containers and VMs is like comparing apartments and houses.
When I saw the core concepts Docker introduced on top of LXC, a technology whose roots go back to 1979 (chroot — pun intended), I was sure this was going to be an important technology in the way we develop and deliver applications going forward. The core concepts introduced by Docker are:
- Immutable container images, which can be reproducibly built following an open standard across several platforms
- Central registries to share these images to and from, with strong governance over a strict interface for accessing and addressing these registries
- A container runtime engine with clearly defined responsibilities for setting up and starting containers from the container images
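To make these three concepts concrete, here is a sketch of the everyday workflow they enable (the image name and registry address are illustrative, not from the interview):

```shell
# 1. Immutable image: reproducibly built from a Dockerfile
docker build -t myapp:1.0 .

# 2. Central registry: images are addressed as registry/repository:tag
#    and shared via push/pull against a strict interface
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# 3. Container runtime: pulls the image and starts a container from it
docker run -d --name myapp registry.example.com/team/myapp:1.0
```

The same image that was built once can then be pulled and run identically on any host with a container runtime, which is what makes the image immutable and portable.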
Docker provided the first implementation of these concepts as open source in 2013 and was able to manage and grow a very large community of contributors (Red Hat, Microsoft, …) which ensured the industry centralized on the definition of these components and made sure they are here to stay.
JAXenter: How is Docker different from a normal virtual machine?
Vincent De Smet: There are many analogies — apartments (shared plumbing/facilities) versus stand-alone houses being a popular one. There is already a lot of material out there, such as this article written by Mike Coleman.
JAXenter: How do you use Docker in your daily work?
Vincent De Smet: Docker adoption started out mainly in the CI/CD pipeline and from there on through staging environments to our production environments.
At my current company, developer adoption (using containers to develop new features for existing web services) is still lacking, as each developer has their own preferred way of working. Given that containers are prevalent everywhere else and Docker's developer tools keep improving, it is only a matter of time before developers adopt them into their daily workflow.
Personally, as a DevOps engineer in charge of maintaining containerized production environments and improving developer workflows, I troubleshoot most issues through Docker containers and use containers daily.
JAXenter: What issues do you experience when working with Docker? What are the current challenges?
Vincent De Smet: A lot of integrated development environments still heavily rely on having everything locally on the machine, whereas with Docker you aim to keep everything in remote containers. On macOS there are also still some performance issues, because Docker requires a Linux kernel, which on macOS runs inside a virtual machine (xhyve / VirtualBox); depending on the type of workload, this sometimes introduces unacceptable overhead.
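For the filesystem part of that overhead, one mitigation available at the time of writing is relaxing the consistency guarantees on bind mounts: the `cached` and `delegated` flags were introduced in Docker for Mac 17.04 (the image name and path below are illustrative):

```shell
# Mount source code with relaxed host/container consistency to reduce
# the osxfs overhead on file-heavy workloads (e.g. compiles, test runs)
docker run -v "$(pwd)":/app:cached myimage
```

The trade-off is that the host's view of the mount may briefly lag behind the container's writes, which is usually acceptable for development workflows.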
Most of Docker's power is also exposed through terminal interfaces, and sadly not all developers are comfortable on the terminal, so this presents a significant challenge until they learn to love it.
More and more competing implementations have emerged and matured as potential replacements [for Docker].
JAXenter: Talking about the evolution of the Docker ecosystem: what is your take on Docker’s decision to donate the containerd runtime to the CNCF?
Vincent De Smet: It is crucial for Docker to be the industry-wide accepted standard — to maintain the power position they currently hold within the market they carved out. Companies such as Google have built on Docker’s work with container orchestration frameworks such as Kubernetes (which Red Hat and CoreOS built on to create OpenShift and Tectonic, respectively). Within these frameworks (which have gained a lot of market traction, more so than Docker’s own orchestration solution), every component has very clearly defined roles and responsibilities.
If these players are no longer happy with the implementation Docker provides (due to Docker’s fast iteration, its adoption of roles it shouldn’t have, or other concerns with Docker’s governance of the technology), they have alternatives: more and more competing implementations (such as rkt from CoreOS) have emerged and matured as potential replacements.
Docker themselves iterated on many of the core ideas from Kubernetes to rework their clustering offering, Swarm, introducing the new “Swarm mode” in last year’s Docker 1.12 release. This creates an interesting dynamic where each side improves on the weaknesses of the other offerings, driving every solution to become a complete one. (Docker has since also separated the company’s products, Docker CE / EE, from the core technology by creating the Moby project at April’s DockerCon.)
PS: I’d refer to the ecosystem as “The container ecosystem” rather than “Docker ecosystem.”
JAXenter: Is there a special feature you would like to see in one of the next Docker releases?
Vincent De Smet: Not that I can think of. I have taken a full-time position managing Kubernetes clusters running Docker containers in production, so I can no longer spend as much time learning and playing with the latest features. Given the speed at which the container ecosystem moves, any feature I could name might already have been implemented.
JAXenter: Could you share one of your favorite tips when using Docker?
Vincent De Smet: Make sure to follow the official Dockerfile best practices: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/.
This resource provides very good reasoning on why you should do things a certain way, and I see far too many existing Dockerfiles that do not follow it. Anyone slightly more advanced with Docker will also gain a lot from mastering the Alpine Linux distribution and its package manager.
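As a small illustration of those recommendations (the base images, paths, and the app itself are hypothetical, not from the interview), a multi-stage Dockerfile that builds on Alpine and uses its `apk` package manager might look like:

```dockerfile
# Build stage: compile a (hypothetical) Go binary in a full build image
FROM golang:1.9-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# Runtime stage: ship only the binary on a minimal Alpine base
FROM alpine:3.6
# Install runtime dependencies with Alpine's apk; --no-cache avoids
# leaving package-index files behind in the image layer
RUN apk add --no-cache ca-certificates
COPY --from=build /app /usr/local/bin/app
USER nobody
ENTRYPOINT ["app"]
```

Multi-stage builds (available since Docker 17.05) keep compilers and build tools out of the final image, which is one of the recurring recommendations in the best-practices guide.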
And if you’re getting started, training.play-with-docker.com is an amazing resource to start with.