Containers offer great power but they also demand operational responsibility
Implementing a continuous delivery pipeline is not trivial, and the introduction of container technology to the development stack can introduce additional challenges and requirements. We talked to Daniel Bryant, CTO of SpectoLabs and speaker at the upcoming JAX London conference, about the challenges, tradeoffs, and impact of bringing together containers and CD.
JAXenter: What impact do containers have on CD?
Daniel Bryant: That’s a great question. The top three takeaways from my talk relate exactly to this! In my experience, when we introduce containers into a continuous delivery build pipeline, the container image must become the ‘single binary’. This is the “unit of deployment” that gets built in the first stage of the pipeline and is the artifact that all tests are executed against. Adding metadata to container images is vital, but often challenging. We must also validate the non-functional requirements (NFRs) introduced by the underlying container infrastructure, such as the effects of running within a container with limited CPU or over a virtual network overlay.
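As a sketch of this “single binary” idea (the image name, registry, and label keys below are illustrative, not from the interview), the first pipeline stage might build the image once, attach build metadata as labels, and every later stage would test and promote that exact artifact rather than rebuilding it:

```shell
# Stage 1: build the image once and attach build metadata as labels.
docker build \
  --label "org.example.git-commit=$(git rev-parse HEAD)" \
  --label "org.example.build-date=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t registry.example.com/shop/orders:${BUILD_NUMBER} .

# Later stages run tests against the *same* image; they never rebuild it.
docker run --rm registry.example.com/shop/orders:${BUILD_NUMBER} ./run-tests.sh

# The metadata can be read back at any stage for traceability.
docker inspect \
  --format '{{ index .Config.Labels "org.example.git-commit" }}' \
  registry.example.com/shop/orders:${BUILD_NUMBER}
```

This requires a running Docker daemon, so treat it as a pipeline sketch rather than a copy-paste recipe.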
JAXenter: What is the difference between containers and virtualization technology?
Daniel Bryant: As the answer to this question could be quite long, I will instead recommend reading the article “Containers vs VMs: Which is better in the data center?”.
JAXenter: Container technology is hardly new. Still, Docker seems to be the most popular player. Are there other players that caught your attention?
Daniel Bryant: You’re right, container technology has been available for quite some time, for example Solaris Zones, FreeBSD Jails, and LXC. However, Docker was the first to provide a great developer experience with containers by creating good APIs and a centralized container image registry. They were also the leaders in marketing within this space.
There are plenty of other container technologies: CoreOS’s rkt and Canonical’s LXD are interesting in the Linux container space; Intel’s Clear Containers and Hyper.sh’s runv offer interesting hybrids between containers and VMs. The Docker story is also still evolving, what with the creation of the Moby project and the contribution of containerd to the CNCF.
In reality, this is probably only interesting if you are working in the infrastructure engineering space. Increasingly, container implementation details are being pushed away from the typical developer’s workflow, which is a good thing in my opinion. In addition, standardization is taking place throughout the technology stack, such as the Open Container Initiative (OCI) container image and runtime specifications and implementations like runc and CRI-O, and this will increase interoperability.
JAXenter: What trade-offs should we be aware of when using containers?
Daniel Bryant: The core advice is that while containers offer great power, they also demand operational responsibility. This especially applies to developers creating images. In my experience, although Docker enables rapid experimentation and deployment, developers often don’t have much exposure to operational concerns, such as provisioning infrastructure, hardening operating systems, or ensuring configuration is valid and performant. By packaging application artifacts within a typical container image, you will be exposed to these issues.
For example, I have heard of a worst-case scenario where production containers were deployed with an old, unpatched “full-fat” operating system, running an application server in debug mode with a wide range of ports exposed.
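By contrast, a minimal image keeps the attack surface small. A hedged sketch of the opposite of that worst case (the service name and paths are hypothetical), using a small Alpine base image as mentioned later in the interview:

```dockerfile
# Small, patched base image instead of a "full-fat" OS.
FROM alpine:3.6

# Run as an unprivileged user rather than root.
RUN addgroup -S app && adduser -S -G app app
USER app

# Ship only the application artifact, with no debug tooling.
COPY orders-service /usr/local/bin/orders-service

# Expose only the single port the service actually needs.
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/orders-service"]
```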
JAXenter: Do virtual machines offer better security than containers?
Daniel Bryant: Yes and no! Yes, in that application artifacts can now be isolated at a more granular level. This can strengthen core security principles such as “defense in depth”, via the appropriate use of network ACLs around each service, and the “principle of least privilege”, by tailoring kernel security configuration for each container using SELinux or AppArmor.
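For instance (the image tag and AppArmor profile name below are illustrative), Docker exposes several of these least-privilege controls directly on `docker run`:

```shell
# Drop all Linux capabilities, then add back only what the service needs.
# Combine this with a read-only root filesystem and a tailored
# security profile to narrow each container's privileges.
docker run -d \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --read-only \
  --security-opt no-new-privileges \
  --security-opt apparmor=my-app-profile \
  my-app:1.0
```

The AppArmor profile itself would need to be written and loaded on the host beforehand; this is a sketch of the shape of the command, not a complete setup.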
The answer is also no: because hypervisors provide stronger isolation guarantees closer to the hardware, traditional VMs don’t share the Linux kernel as containers do (and the shared kernel is the most common attack vector). However, in fairness, hypervisors have been around much longer than container technology, and vulnerabilities are still occasionally found there.
If people are interested in this space, then I recommend reading an article I wrote that summarizes Aaron Grattafiori’s excellent DockerCon 2016 talk on “High Security Microservices”. For operators looking towards the future, researching unikernels could also be interesting.
JAXenter: Are containers revolutionizing the IT infrastructure? How?
Daniel Bryant: Containers are part of the current revolutionary cycle, which includes the co-evolution of architecture in microservices, infrastructure like cloud and containers, and practices such as DevOps, continuous delivery and Infrastructure as Code.
JAXenter: What are your favorite container tools right now?
Bryant: I like a lot of the Cloud Native Computing Foundation (CNCF) technologies. The Governing Board and Technical Oversight Committee are doing great work. For example, Kubernetes for orchestration and Prometheus for monitoring (combined with gRPC and Linkerd) are firm favorites within the industry. Others I would like to mention include Weaveworks, who are producing a lot of great tooling around the cloud native continuous delivery experience; CoreOS, who are innovating with rkt and Kubernetes Operators; and Sysdig, who produce some great container debugging tools.
JAXenter: What are some of the common mistakes when introducing containers to the development stack?
Bryant: These are the ones I have seen (and made!) the most:
- Not deploying to a platform that properly supports containers
- Investing too much time and resource creating a platform
- Treating containers like VMs
- Packaging full operating systems into containers, rather than using something like Alpine, Container OS or RancherOS
- Packaging a deployment artifact late in the continuous delivery pipeline, i.e. not running tests against the containerized artifact
- Not understanding that some technologies don’t play well with container technology, e.g. several Linux tools (and the JVM) aren’t fully cgroup-aware, and generating entropy for security operations can be challenging on a host running many containers
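To illustrate the last point, resource limits are applied per container, but an older JVM sizes its heap from the host’s total RAM unless told otherwise. A hedged sketch (the image tag and limit values are illustrative; the JVM flags assume Java 8u131+, where they were introduced as experimental options):

```shell
# Limit the container to half a CPU and 512 MB of memory.
docker run --cpus 0.5 --memory 512m my-java-app:1.0

# Inside the container, ask the JVM to respect the cgroup memory limit;
# without flags like these, it sizes the heap from the host's total RAM.
java -XX:+UnlockExperimentalVMOptions \
     -XX:+UseCGroupMemoryLimitForHeap \
     -jar app.jar
```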
JAXenter: If you were to choose one area where containers can really make a difference, what would that be?
Bryant: In my opinion, the key advantage with containers that wasn’t quite realized with VMs is the ability to easily package application artifacts in an underlying platform agnostic way. This allows developers to run a more production-like configuration locally. It can also facilitate the transition of artifacts between Dev and Ops.
JAXenter: What can attendees get out of your talk?
Bryant: Hopefully, the answers above have provided some hints. Other than that, all I will say is that if you are looking to implement continuous delivery with containers, then you should definitely join me at JAX London!
Daniel Bryant will be a panelist at a keynote at JAX London on the future of Java SE, where the panel will discuss the increased release cadence and Oracle-produced OpenJDK builds. He will also deliver a talk on continuous delivery with containers.