Connecting Docker Containers, Part Two

This post is part two of a miniseries looking at how to connect Docker containers.

In part one, we looked at the bridge network driver that allows us to connect containers that all live on the same Docker host. Specifically, we looked at three basic, older uses of this network driver: port exposure, port binding, and linking.

In this post, we’ll look at a more advanced, and up-to-date use of the bridge network driver.

We’ll also look at using the overlay network driver for connecting Docker containers across multiple hosts.

User-Defined Networks

Docker 1.9.0 was released in early November 2015 and shipped with some exciting new networking features. With these changes, all that is now required for two containers to communicate is to place them on the same network or sub-network.

Let’s demonstrate that.

First, let’s see what we already have:

$ sudo docker network ls
NETWORK ID          NAME                DRIVER
362c9d3713cc        bridge              bridge
fbd276b0df0a        singlehost          bridge
591d6ac8b537        none                null
ac7971601441        host                host

Now, let’s create a network:

$ sudo docker network create backend

If that worked, our network list will show our newly created network:

$ sudo docker network ls
NETWORK ID          NAME                DRIVER
362c9d3713cc        bridge              bridge
fbd276b0df0a        singlehost          bridge
591d6ac8b537        none                null
ac7971601441        host                host
d97889cef288        backend             bridge

Here we can see the backend network has been created using the default bridge driver. This is a bridge network, as covered in part one of this miniseries, and is available to all containers on the local host.
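If you want to confirm the driver, and later see which containers are attached and what addresses they were given, docker network inspect is a handy optional check (the exact JSON layout varies between Docker versions):

$ sudo docker network inspect backend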

We’ll use the client_img and server_img images we created in part one of this miniseries. So, if you don’t already have them set up on your machine, go back and do that now. It won’t take long.

Got your images set up? Cool.

Let’s run a server container from the server_img image and put it on the backend network using the --net option.

Like so:

$ sudo docker run -itd --net=backend --name=server server_img /bin/bash

Like before, attach to the container:

$ sudo docker attach server

If you do not see a shell prompt, press the up arrow key.

Now start the Apache HTTP server:

$ /etc/init.d/apache2 start

At this point, any container on the backend network will be able to access our Apache HTTP server.

We can test this by starting a client container on a different terminal, and putting it on the backend network.

Like so:

$ sudo docker run -itd --net=backend --name=client client_img /bin/bash

Attach to the container:

$ sudo docker attach client

Again, if you do not see a shell prompt, press the up arrow key.

Now run:

$ curl server

You should see the default web page HTML. This tells us our network is functioning as expected.

As mentioned in part one of this miniseries, Docker takes care of setting up the container names as resolvable hostnames, which is why we can curl server directly without knowing the IP address.
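If you are curious how the client container sees that name, you can peek from the host in another terminal. Depending on your Docker version, the entries are either written into the container's /etc/hosts file or served by an embedded DNS resolver; the second command also assumes the base image ships the getent utility:

$ sudo docker exec client cat /etc/hosts
$ sudo docker exec client getent hosts server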

Multiple user-defined networks can be created, and containers can be placed in one or more of them, according to your application topology. This flexibility is especially useful for anyone wanting to deliver microservices, multi-tenancy, or micro-segmentation architectures.
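As a quick illustration of that flexibility, here is a minimal sketch that attaches our running client container to a second, hypothetical frontend network and then detaches and removes it again:

$ sudo docker network create frontend
$ sudo docker network connect frontend client
$ sudo docker network disconnect frontend client
$ sudo docker network rm frontend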

Multi-Host Networking

What if you want to create networks that span multiple hosts? Well, since Docker 1.9.0, you can do just that!

So far, we’ve been using the bridge network driver, which has a local scope, meaning bridge networks are local to the Docker host. Docker now provides a new overlay network driver, which has global scope, meaning overlay networks can exist across multiple Docker hosts. And those Docker hosts can exist in different datacenters, or even different cloud providers!

To set up an overlay network, you’ll need:

Hosts running Linux kernel version 3.16 or higher
A key-value store (e.g., etcd, Consul, or Apache ZooKeeper)
A cluster of hosts with connectivity to the key-value store
A properly configured Docker Engine daemon on each host in the cluster (a minimal sketch of the daemon options follows this list)
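On that last point, "properly configured" boils down to starting each Docker Engine daemon with a cluster store and a cluster advertise address. A minimal sketch, assuming Consul is reachable at a placeholder <consul-ip> on its default port 8500 and that eth1 is the interface the hosts use to talk to each other:

$ docker daemon \
    --cluster-store=consul://<consul-ip>:8500 \
    --cluster-advertise=eth1:2376

In practice these options live in the daemon's startup configuration rather than being typed by hand; the script we use below takes care of this for us.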

Let’s take a look at an example.

For the purposes of this post, I am going to use the multihost-local.sh script with Docker Machine to get three virtual hosts up and running.

This script spins up Virtual Machines (VMs), not containers. We then run Docker on these VMs to simulate a cluster of Docker hosts.

After running the script, here’s what I have:

$ docker-machine ls
NAME         ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS
mhl-consul   -        virtualbox   Running   tcp://192.168.99.100:2376
mhl-demo0    -        virtualbox   Running   tcp://192.168.99.101:2376
mhl-demo1    -        virtualbox   Running   tcp://192.168.99.102:2376

Okay, let’s rewind and look at what just happened.

This script makes use of Docker Machine, which you must have installed. For this post, we used Docker Machine 0.5.2. For instructions on how to download and install 0.5.2 for yourself, see the release notes.

The multihost-local.sh script uses Docker Machine to provision three VirtualBox VMs, install Docker Engine on them, and configure them appropriately.

Docker Machine works with most major virtualization hypervisors and cloud service providers. It has support for AWS, Digital Ocean, Google Cloud Platform, IBM Softlayer, Microsoft Azure and Hyper-V, OpenStack, Rackspace, VirtualBox, VMware Fusion®, vCloud® Air™ and vSphere®.

We now have three VMs:

mhl-consul: runs Consul
mhl-demo0: Docker cluster node
mhl-demo1: Docker cluster node

The Docker cluster nodes are configured to coordinate through the VM running Consul, our key-value store. This is how the cluster comes to life.
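If you would rather wire this up by hand than rely on the script, the broad strokes look something like the sketch below. It assumes the VirtualBox driver and the progrium/consul image for the key-value store; the script does essentially the same thing:

# Provision the key-value store host and run Consul on it
$ docker-machine create -d virtualbox mhl-consul
$ docker $(docker-machine config mhl-consul) run -d -p 8500:8500 progrium/consul -server -bootstrap

# Provision a cluster node whose Docker Engine points at Consul
$ docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip mhl-consul):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    mhl-demo0

Repeat the second step for mhl-demo1 to complete the cluster.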

Cool. Fast-forward.

Now, let’s set up an overlay network.

First, we need to grab a console on the mhl-demo0 VM, like so:

$ eval $(docker-machine env mhl-demo0)

Once there, run:

$ docker network create -d overlay myapp

This command creates an overlay network called myapp across all the hosts in the cluster. This is possible because Docker is coordinating with the rest of the cluster through the key-value store.
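If you want to double-check that an Engine is actually wired up to the key-value store, docker info is useful: on Docker 1.9 its output includes the cluster settings, with lines roughly like these (your addresses will differ):

$ docker info | grep -i cluster
Cluster store: consul://192.168.99.100:8500
Cluster advertise: 192.168.99.101:2376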

To confirm this has worked, we can grab a console on each VM in the cluster and list out the Docker networks.

Copy the eval command above, replacing mhl-demo0 with the relevant host name.

Then run:

$ docker network ls
NETWORK ID          NAME                DRIVER
7b9e349b2f01        host                host
1f6a49cf5d40        bridge              bridge
38e2eba8fbc8        none                null
385a8bd92085        myapp               overlay

Here you see the myapp overlay network.

Success!

Remember though: all we’ve done so far is create a cluster of Docker VMs and configure an overlay network which they all share. We’ve not actually created any Docker containers yet. So let’s do that and test the network.

We’re going to:

Run the default nginx image on the mhl-demo0 host (this provides us with a preconfigured Nginx HTTP server)
Run the default busybox image on the mhl-demo1 host (this provides us with a basic OS and tools like GNU Wget)
Add both containers into the myapp network
Test they can communicate

First, grab a console on the mhl-demo0 host:

$ eval $(docker-machine env mhl-demo0)

Then, run the nginx image:

$ docker run --name ng1 --net=myapp -d nginx

To recap, we now have:

An Nginx HTTP server,
Running in a container called ng1,
In the myapp network,
On the mhl-demo0 host (a quick sanity check follows below)
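Before we hop over to the other host, that optional sanity check: ask Docker for the address ng1 was given on the overlay. This assumes the inspect format available since Docker 1.9:

$ docker inspect --format '{{ .NetworkSettings.Networks.myapp.IPAddress }}' ng1

It should print an IP address from the overlay network's subnet.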
To test this is working, let’s try to access it from another container on another host.

Grab a console on the mhl-demo1 host this time:

$ eval $(docker-machine env mhl-demo1)

Then run:

$ docker run -it --net=myapp busybox wget -qO- ng1

What this does:

Creates an unnamed container from the busybox image,
Adds it to the myapp network,
Runs the command wget -qO- ng1,
And stops the container (we left our other containers running before)

The ng1 in that Wget command is the name of our Nginx container. Docker lets us use the container name as a resolvable hostname, even though the container is running on a different Docker host.

If everything is successful, you should see something like this:

Welcome to nginx!

Voila! We have a multi-host container network.
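When you are finished experimenting, you can tear everything down again. A rough cleanup sketch (remove the container and network from the relevant hosts first, then the VMs; newer Docker Machine releases may ask you to confirm the removals):

$ docker rm -f ng1
$ docker network rm myapp
$ docker-machine rm mhl-consul
$ docker-machine rm mhl-demo0
$ docker-machine rm mhl-demo1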

Conclusion

Docker offers the advantages of lightweight, self-contained, and isolated environments. However, containers must be able to communicate with each other, and with the host network, if they are going to be useful to us.

In this miniseries, we have explored a few ways to connect containers locally and across multiple hosts. We’ve also looked at how to network containers with the host network.