Docker and Security: How do they fit together?
While Docker images are famously simple and practical, Docker security remains a tricky maze. Docker pros Dustin Huptas and Andreas Schmidt show us the essential security features you need to know for building a secure system with Docker.
More and more companies are adopting container technology and bringing solutions like Docker into their technology stacks. What started as a developer-centric approach will sooner or later reach production environments. Because there are many security-related concerns, operations teams and people with a DevOps background want to be prepared to run Docker in production. Luckily, the Docker engine offers plenty of knobs and levers for tuning security, and numerous blog posts explain how to “tighten the ship” – all of which helps to make a Docker environment secure. And yet one can easily get lost in the many options for configuring the Docker daemon, images and containers. What do we need to look at when building a secure system from scratch?
This article focuses on the server system and Docker-related security features (aspects concerning the network layer are equally important, but deserve a dedicated article of their own). We aim to provide several core security measures to be taken into account when working with Docker in production environments. We will start by looking at the host layer and generic hardening steps, then move on to securing the Docker daemon. One of the most popular aspects of Docker is the simplicity and ubiquity of the images provided by Docker Hub, so quite a number of best practices revolve around securing those. After that, we will look into running containers and improving security at runtime by tuning the available parameters. Finally, we will conclude the article with thoughts on how to monitor a running environment for the deployed security configuration.
Securing the host
First, the machine that hosts the containers should be secured and hardened. The goal is to minimize the attack surface of the host system itself, since a large number of different containers may be running on it at the same time. If containers belonging to different teams or customers run on the same host, breaking out of one of them through a flaw would risk exposing all the others. Hardening includes generic steps such as removing unnecessary services along with their packages, thereby reducing the attack surface, and further locking down user rights. Quite a number of tools exist for these tasks (e.g. OpenSCAP), and new tools are still in development (e.g. hardening.io for use with configuration management tools, or Lynis).
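As a sketch of what such an audit looks like in practice, the following commands run a quick scan with Lynis; the report path is the Lynis default and may differ on your distribution:

```shell
# Run a quick system audit with Lynis (run as root for full coverage)
lynis audit system --quick

# Findings are collected in the report file; list warnings and suggestions
grep -E '^(warning|suggestion)' /var/log/lynis-report.dat
```

The report entries map to concrete hardening steps (unneeded services, weak permissions, missing kernel settings) that can then be fed into a configuration management tool.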
All containers running on a host share the same kernel, so it makes sense to invest time into hardening the kernel itself. The Linux kernel is quite complex and includes a number of modules that are not necessarily required on a (possibly virtualized) server system.
Grsecurity offers patches for several kernel versions that harden various aspects of the kernel code itself. The patches are applied manually to a vanilla kernel (the unmodified code from kernel.org, without additions from Linux distributions) before the patched kernel is compiled and installed. For kernel-savvy administrators this provides a way to achieve a security-tailored runtime environment for containers. At the same time, it introduces an additional layer of dependencies that needs to be managed.
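A quick way to check whether a host already runs a grsecurity-patched kernel is to inspect the kernel release string and, if available, the build configuration; the "-grsec" suffix is a common convention rather than a guarantee, and the config path may differ on your distribution:

```shell
# Kernel release: grsecurity builds commonly carry a "-grsec" suffix
uname -r

# Look for grsecurity options (CONFIG_GRKERNSEC_*) in the build config, if present
grep -i 'CONFIG_GRKERNSEC' "/boot/config-$(uname -r)" | head
```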
Securing the Docker daemon
On top of the host layer sits the Docker daemon/API, through which an attacker could gain access to containers and/or the host. Securing communication and authentication on this layer reduces the potential attack surface. Probably the best starting point for securing a Docker setup is securing the use of registries. Traffic to registries is secured by HTTPS unless --insecure-registry is supplied to the Docker daemon, so an easy hardening step is ensuring that this switch is NEVER set. Additionally, when using a registry mirror with --registry-mirror, make sure to only allow queries over HTTPS.
The word has spread – users who have access to the REST API of a Docker instance can acquire root-like privileges. Docker makes it possible to start --privileged containers as root, and to remove containment from host resources such as the network and process space (e.g. --net=host, --pid=host). As long as the API does not include a fine-grained authorization model, restricting access to it is a good start. When using the socket option (/var/run/docker.sock), one should check the access rights (writable for the root user and the docker group only) and especially that the docker group does not contain unwanted users. Ideally, no users are assigned to the docker group, or socket access is disabled altogether. A good approach to accessing the API over the network is using TLS. This way, client-side TLS certificates and keys are distributed to the machines and users that should have access. These TLS keys should also be secured by restricting their file access rights. A TLS setup should NOT use only the --tls switch, which merely enforces TLS encryption, but the --tlsverify parameter, which additionally enables authentication. Certificates and keys need to be advertised to Docker as shown in the following listing, e.g. in /etc/default/docker. It is always a good idea to bind the service to a specific IP address rather than 0.0.0.0:
DOCKER_OPTS="$DOCKER_OPTS --tlsverify --tlscacert=/etc/docker-tls/cacert.pem --tlscert=/etc/docker-tls/certs/server-cert.pem --tlskey=/etc/docker-tls/private/server-key.pem -H=192.168.0.10:2376"
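On the client side, the matching certificates are supplied per command (or via the DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH environment variables); the paths and address below are illustrative. The same sketch also shows the socket and key permission checks mentioned above:

```shell
# Connect to the TLS-protected daemon (illustrative paths and address)
docker --tlsverify \
  --tlscacert=/etc/docker-tls/cacert.pem \
  --tlscert="$HOME/.docker/cert.pem" \
  --tlskey="$HOME/.docker/key.pem" \
  -H=192.168.0.10:2376 version

# Keep the client key readable by its owner only
chmod 0400 "$HOME/.docker/key.pem"

# Verify socket permissions: expect root:docker and mode 660
stat -c '%U:%G %a' /var/run/docker.sock

# Review who is in the docker group; remove anyone who should not be there
getent group docker
```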
An additional layer of isolation is provided by SELinux or AppArmor-enabled systems. Installations based on RHEL7 or Fedora >21 have SELinux turned on by default, and the Docker daemon usually runs in a specific SELinux domain. On Debian-based systems, an AppArmor profile is enabled by default and provides a similar layer. For example, on SELinux-enabled systems you can check if Docker runs in a confined domain by using the -Z option on some basic commands such as netstat and ps:
# ps -efZ | grep docker
system_u:system_r:docker_t:SystemLow root 1873 1 2 07:21 ? 00:00:00 /usr/bin/docker -d --selinux-enabled
The first column shows the security context. The docker_t part is a special domain type with restricted rights. Also note the --selinux-enabled parameter passed to the Docker daemon.
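To confirm that a mandatory access control layer is actually active, the following checks can be used; which one applies depends on the distribution:

```shell
# SELinux (RHEL/Fedora): expect "Enforcing"
getenforce

# AppArmor (Debian/Ubuntu): list loaded profiles
sudo aa-status

# Recent Docker versions also report the active security options
docker info | grep -i 'security'
```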
Securing images and image processing
Now the host and the Docker daemon are secured, and only legitimate access to those resources is granted. The same level of care needs to be taken when downloading and running images from untrusted or unknown sources – much like guarding against a Trojan horse. Creating a verifiable chain of trust during the image creation process ensures that a running container was actually spawned from the image you created and trust.
Each creation process for a Docker container usually starts with an image, typically downloaded from the public registry at Docker Hub. Since such an image represents a userland installation of an operating system, hardening topics apply here as well. This affects, for example, the number of packages installed – the fewer, the better. Recent images of e.g. Fedora do not even contain /bin/ip any more. Trusted repositories such as Ubuntu or CentOS maintain documentation on how they deal with updates to base images, e.g. how rolling updates are done and how patches flow into image tags.
When it comes to building images, a number of best practices are available on the Docker documentation website, giving hints for writing Dockerfiles that lead to more stable and secure images. Another good approach is to look at the Dockerfiles of trusted repositories, where security best practices are applied at large scale.
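One practical way to review how an existing image was assembled is to inspect its build steps layer by layer; `docker history` reconstructs the Dockerfile instructions behind any local image (the image name below is just an example):

```shell
# Show the full, untruncated build command behind each layer of an image
docker history --no-trunc ubuntu:latest

# A rough heuristic for image bloat: count the layers
docker history -q ubuntu:latest | wc -l
```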
In the end it requires trusting a certain party to offer a valid base image that is free of known vulnerabilities. Administrators who do not wish to rely on these images still have the opportunity to custom-build container images from scratch, using tools such as debootstrap or appliance-creator.
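A from-scratch base image can, for instance, be built with debootstrap and fed to `docker import`; the suite, mirror and image name below are illustrative and the first step needs root privileges:

```shell
# Build a minimal Debian userland into a scratch directory (run as root)
debootstrap --variant=minbase jessie /tmp/rootfs http://deb.debian.org/debian

# Pack it up and import it as a local base image
tar -C /tmp/rootfs -c . | docker import - example/debian-minbase:jessie

# Smoke test the result
docker run --rm example/debian-minbase:jessie cat /etc/debian_version
```

This way the complete contents of the base image are under the administrator's control, at the price of maintaining package updates for it.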
Alternatively, as a middle ground, there is content trust, available since Docker 1.8. This feature restricts the Docker client to image tags that were signed by their publishers before being pushed to the Docker registry. It can be enabled with the following environment variable:
# export DOCKER_CONTENT_TRUST=1
or directly passed to each command:
# env DOCKER_CONTENT_TRUST=1 docker pull (...)
As soon as content trust is enabled, pulling an image only works with a signed image tag; otherwise the user receives an error:
# env DOCKER_CONTENT_TRUST=1 docker pull dewiring/trustit:latest
no trust data available
After the publisher has signed the tag, the image can be pulled successfully (below). When running with content trust enabled, only signed image tags are visible to the Docker client. Signing is done with an offline key and tagging keys, which are created on the first push. The offline key is the publisher's master key, and an individual tagging key is created for each repository. All keys are stored client-side; only the timestamp and signature are stored as metadata alongside the image tags in the Docker registry. More on pushing a signed image can be found in the content trust documentation.
# env DOCKER_CONTENT_TRUST=1 docker pull dewiring/trustit:latest
Pull (1 of 1): dewiring/trustit:latest@sha256:c58ee9f9d1b1a0b59471cac2c089ac995dd559949ee088533fc6f4a0dcd2719f
sha256:c58ee9f9d1b1a0b59471cac2c089ac995dd559949ee088533fc6f4a0dcd2719f: Pulling from dewiring/trustit
2c49f83e0b13: Already exists
4a5e6db8c069: Already exists
88ab9df21bce: Already exists
2c900b53c032: Already exists
8d86df29cb44: Already exists
Digest: sha256:c58ee9f9d1b1a0b59471cac2c089ac995dd559949ee088533fc6f4a0dcd2719f
Status: Downloaded newer image for dewiring/trustit@sha256:c58ee9f9d1b1a0b59471cac2c089ac995dd559949ee088533fc6f4a0dcd2719f
Tagging dewiring/trustit@sha256:c58ee9f9d1b1a0b59471cac2c089ac995dd559949ee088533fc6f4a0dcd2719f as dewiring/trustit:latest
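For completeness, a publisher-side sketch of the first signed push: on the first push Docker prompts for passphrases for the newly created offline (root) and tagging keys, which are then stored in the client's trust directory. The repository name is the one from the example above:

```shell
# Enable content trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Tag and push; the first push creates the offline and tagging keys
docker tag trustit:latest dewiring/trustit:latest
docker push dewiring/trustit:latest

# Back up the private keys -- a lost offline key cannot be recovered
tar -czf docker-trust-backup.tgz -C "$HOME/.docker/trust" private
```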