How to share Docker volumes across hosts

Ian Miell and Aidan Hobson Sayers

Sharing data between Docker containers across different hosts is a tricky business. In this tutorial taken from “Docker in Practice”, we’ll examine one lightweight and one more involved way to share Docker volumes across different hosts.

This is an excerpt from Docker in Practice by Ian Miell and Aidan Hobson Sayers, a Manning publication available here. JAXenter readers can get a 39% discount by using the following code: jedocker

While sharing data between containers on the same host is made easy with volumes, sharing Docker volumes across hosts is trickier. Data containers are one approach to this problem, but can become clumsy if the data is frequently changing or particularly large.

We’re going to examine two solutions to this problem. The first is a lightweight distributed solution using the BitTorrent protocol that requires only Docker to be installed. The second is a more involved solution that uses NFS and introduces the concept of infrastructure containers.

Technique #1: Distributed volumes with BitTorrent Sync

When experimenting with Docker in a team, you may want to share large quantities of data among team members, but you may not be allocated the resources for a shared server with sufficient capacity. The lazy solution is to copy the latest files from other team members whenever you need them – this quickly gets out of hand for a larger team!

The solution to this is to use a decentralised tool for sharing files – no dedicated resource required.

Problem

You want to share volumes across hosts over the internet

Solution

Use a BitTorrent Sync image to share a volume

Discussion

The figure below illustrates the setup we’re aiming to end up with:

Figure 1: setup

#A The BTSync server is a Docker container that owns the /data volume we are going to share

#B A container is set up on the same host that mounts the volumes from the BTSync server

#C On another host in a separate network, the BTSync server generates a key that clients can reference to access the shared data via the BitTorrent protocol

#D The BTSync client, which sits on another host, mounts the volume and synchronises the /data volume with the first host’s BTSync server

#E Containers mount the volumes from the BTSync client

The end result is a volume – /data – that is conveniently synchronised over the internet without requiring any complicated setup.

On the first host, run these commands to set up the containers:

[host1]$ docker run -d -p 8888:8888 -p 55555:55555 --name btsync ctlc/btsync  #A
$ docker logs btsync  #B
Starting btsync with secret: ALSVEUABQQ5ILRS2OQJKAOKCU5SIIP6A3  #C
By using this application, you agree to our Privacy Policy and Terms.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
total physical memory 536870912 max disk cache 2097152
Using IP address 172.17.4.121
[host1]$ docker run -i -t --volumes-from btsync ubuntu /bin/bash  #D
$ touch /data/shared_from_server_one  #E
$ ls /data
shared_from_server_one

#A Run the published ctlc/btsync image as a daemon container called btsync, and open up the required ports

#B Get the output of the btsync container so we can make a note of the key

#C Make a note of this key – it will be different for your run

#D Start up an interactive container with the volumes from the btsync server

#E Add a file to the /data volume

On the second host, open up a terminal and run these commands to synchronise the volume:

[host2]$ docker run -d --name btsync-client -p 8888:8888 -p 55555:55555 \
  ctlc/btsync ALSVEUABQQ5ILRS2OQJKAOKCU5SIIP6A3  #A
[host2]$ docker run -i -t --volumes-from btsync-client ubuntu bash  #B
$ ls /data
shared_from_server_one  #C
$ touch /data/shared_from_server_two  #D
$ ls /data
shared_from_server_one  shared_from_server_two

#A Start a btsync client container as a daemon with the key generated by the daemon run on host1

#B Start an interactive container that mounts the volumes from our client daemon

#C The file created on host1 has been transferred to host2

#D Create a second file on host2

Back in host1’s running container, we should see that the second file has been synchronised across, just as the first one was:

$ ls /data
shared_from_server_one  shared_from_server_two

NOTE: The synchronisation of files comes with no timing guarantees, so you may have to wait for the data to sync. This is especially true for larger files.
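
If you’re scripting against the shared volume, one rough way to wait for a file to arrive is to poll for it (a sketch only, using the file name from the example above):

$ until [ -e /data/shared_from_server_one ]; do sleep 2; done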

WARNING: As the data is being sent over the internet and is processed by a protocol over which you have no control, don’t rely on this approach if you have any meaningful security, scalability or performance constraints.

Technique #2: Sharing data over NFS

In a larger company it’s highly likely that there are NFS shared directories already in use; NFS is a well-proven option for serving files out of a central location. For Docker to get traction, it’s usually fairly important to be able to get access to these shared files!

However, Docker does not support NFS out of the box and installing an NFS client on every container to be able to mount the remote folders is not considered best practice. Instead, the suggested approach is to have one container act as a translator from NFS to a more Docker-friendly concept – volumes!

Problem

You want seamless access to a remote filesystem over NFS

Solution

Use an infrastructure data container to broker access

Discussion

This technique builds on the data container technique we saw in chapter 4.

The figure below shows the idea in the abstract.

Figure 2: An infrastructure container that brokers NFS access

The NFS server exposes the internal directory as the /export folder, which is bind-mounted on the host. The Docker host then mounts this folder over the NFS protocol to its /mnt folder. A so-called infrastructure container is then created, which bind-mounts the /mnt folder as a volume.

This may seem a little over-engineered at first glance, but the benefit is that it provides a level of indirection as far as the Docker containers are concerned: all they need to do is mount the volumes from a pre-agreed infrastructure container, and whoever is responsible for the infrastructure can worry about the internal plumbing, availability, network etc.

A thoroughgoing treatment of NFS is beyond the scope of this book; however, we will go through the steps of setting up such a share on a single host (i.e. the NFS server’s elements are on the same host as the Docker containers). This has been tested on Ubuntu 14.04.

Imagine you want to share the contents of your host’s /opt/test/db, which contains the file mybigdb.db.

As root, install the NFS server and create an export directory with open permissions:

apt-get install nfs-kernel-server
mkdir /export
chmod 777 /export

Now bind mount the db directory to our export directory.

$ mount --bind /opt/test/db /export

You should now be able to see the contents of the /opt/test/db directory in /export:
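
$ ls /export
mybigdb.db

The exact output depends on what is in your directory; this listing assumes the mybigdb.db file from the earlier example.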

TIP: Persisting the bind mount
If you want this to persist following a reboot, add this line to your /etc/fstab file:

/opt/test/db /export none bind 0 0

Now add this line to your /etc/exports file:

/export       [#A]127.0.0.1([#B]ro,fsid=0,insecure,no_subtree_check,async)

#A For this proof of concept example we’re mounting locally on 127.0.0.1, which defeats the object a little. In a real-world scenario you’d lock this down to a class of IP addresses such as 192.168.1.0/24. If you really like playing with fire you can open it up to the world by using * instead of 127.0.0.1!

#B For safety we are mounting read-only here, but you can mount read-write by replacing ro with rw. Remember that if you do this, you will need to add a no_root_squash flag after the async flag – but think about security before going outside this sandpit!
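
For instance, a more locked-down but writable export for a local subnet might look like the line below (a sketch only; the network range is an assumption, so adjust it and the flags to your environment):

/export       192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)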

Export the filesystems we specified in /etc/exports, restart the NFS service to pick up the changes, and then mount the exported directory over NFS to the /mnt directory:

$ exportfs -a
$ service nfs-kernel-server restart
$ mount -t nfs 127.0.0.1:/export /mnt
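
As a quick sanity check, the mount point should now show the same contents as /export (again assuming the mybigdb.db file from earlier):

$ ls /mnt
mybigdb.db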

Now you’re ready to run your infrastructure container:

$ docker run -ti --name nfs_client --privileged -v /mnt:/mnt busybox /bin/true
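
This container exits immediately, since it only runs /bin/true; what matters is that it exists and holds the /mnt volume for other containers to mount. A quick way to confirm it is still registered:

$ docker ps -a | grep nfs_client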

And now we can access the directory we want, without needing privileged mode or any knowledge of the underlying implementation:

$ docker run -ti --volumes-from nfs_client debian /bin/bash
root@079d70f79d84:/# ls /mnt
mybigdb.db
root@079d70f79d84:/# cd /mnt
root@079d70f79d84:/mnt# touch asd
touch: cannot touch `asd': Read-only file system

TIP: Use a naming convention for operational efficiency
If you have a lot of these containers to manage, a naming convention makes them much easier to keep track of, e.g. --name nfs_client_opt_database_live for a container that exposes the /opt/database/live path.
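
As a sketch of that convention (the names follow the hypothetical /opt/database/live path from the tip, and we assume that remote export has been NFS-mounted at /mnt on the Docker host as above), the broker container and a consumer might look like this:

$ docker run -ti --name nfs_client_opt_database_live --privileged -v /mnt:/mnt busybox /bin/true
$ docker run -ti --volumes-from nfs_client_opt_database_live debian /bin/bash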

This pattern, in which a shared resource is mounted centrally with privileged access and then consumed by multiple containers, is a powerful one that can make development workflows much simpler.

Author

Ian Miell and Aidan Hobson Sayers

