Now that you have a working Docker environment, it’s time to use it. You can launch almost any service in it - we’ll see that later. One of the most useful containers to launch is a reverse proxy, which receives incoming requests and routes them to the underlying containers.
There are many solutions around. One of them is Traefik, which is great and which we’ll cover later. What I’ll describe now is a different kind of setup, based on two containers, and it will help you understand some of Docker’s features. The first container, based on nginx, will be the edge container. Then we’ll launch a companion container that takes care of generating SSL certificates and keeping them up to date.
Since the official nginx-proxy container has some limits (no gzip support, small body size limit, etc.), I’m using a custom nginx-proxy container with some small modifications.
Let’s start with it:
docker run -d -p 80:80 -p 443:443 --restart=unless-stopped \
  --name nginx-proxy \
  -v /docker/nginx/certs:/etc/nginx/certs:ro \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true \
  dragas/nginx-proxy:gzip
What we’re doing here is launching a container that listens on host ports 80 and 443, stores the certificates in the host’s /docker/nginx/certs directory (so we’ll keep the certificates even when updating the container - more about this later), and declares some volumes to share with the Let’s Encrypt companion container. The label tells the companion container to treat this container as its “companion”.
Ok, now you have a container listening to ports 80 and 443 and waiting for connections. It’s ok for http (port 80) but what should we do for https?
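As a quick sanity check (the commands below are illustrative), you can verify that the proxy is up and bound to the right ports. Since no backend containers have been registered yet, nginx-proxy’s default server should answer any request with a 503:

```shell
# Check that the proxy container is running and publishing ports 80/443
docker ps --filter name=nginx-proxy

# Probe the host's port 80; with no VIRTUAL_HOST backends defined yet,
# nginx-proxy is expected to answer "503 Service Temporarily Unavailable"
curl -sI http://localhost/ | head -n 1
```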
Let’s launch the companion container. It will detect which containers need an SSL certificate (based on an environment variable that we’ll pass to the final containers), generate the certificate, hand it to the proxy container, and reload nginx.
docker run -d --restart=unless-stopped \
  --name nginx-letsencrypt \
  --volumes-from nginx-proxy \
  -v /docker/nginx/certs:/etc/nginx/certs:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion
This container will take the volumes from nginx-proxy (to know which virtual hosts are running) and store the certificates in a persistent storage, shared with nginx-proxy.
The "-v /var/run/docker.sock:/var/run/docker.sock:ro" volume is needed because both containers listen on the Docker socket to learn when another container, such as a web server, has been started.
Now that those two containers are running, nothing has apparently changed. To make them work, we’ll need to launch any kind of web container with some variables. Let’s try with a simple example:
docker run -d --restart=unless-stopped --name nginx \
  -e 'LETSENCRYPT_EMAIL=email@example.com' \
  -e 'LETSENCRYPT_HOST=www.myhostname.net' \
  -e 'VIRTUAL_HOST=www.myhostname.net' \
  nginx
This command launches a default nginx image. The “-e” variables tell the proxy that this container will respond when connections for “www.myhostname.net” arrive (VIRTUAL_HOST), that a certificate must be generated and kept up to date for the same hostname (LETSENCRYPT_HOST), and which e-mail address to register with Let’s Encrypt (LETSENCRYPT_EMAIL).
Everything is now up and running. If you connect to www.myhostname.net you’ll reach that container. Of course, you can launch as many (final) containers as you want, and all of them will automatically become reachable by their hostname thanks to the automatic configuration performed by nginx-proxy and nginx-letsencrypt.
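Once DNS for your hostname points at the host, you can inspect the certificate the proxy is actually serving (replace www.myhostname.net with your real, publicly resolvable hostname):

```shell
# Fetch the certificate from the proxy and print its issuer and validity dates;
# after the companion has done its job, the issuer should be Let's Encrypt
echo | openssl s_client -connect www.myhostname.net:443 \
  -servername www.myhostname.net 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```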
For more information about this setup, have a look at the official documentation.
LESSON IN LESSON:
Containers are stateless: any data stored inside one is lost every time you stop it and create a new one. Because of this, it’s important to store data in a volume, in external persistent storage, in network storage, etc., or to map a specific host directory to a container mount point. In this example, the host’s “/docker/nginx/certs” directory is mapped into /etc/nginx/certs of both the proxy and companion containers, so its contents persist even when the containers are destroyed and re-created. Also note that, for security reasons, the proxy container can only read from (not write to) that directory: it mounts the volume with the “:ro” (read-only) flag, while the companion mounts it “:rw”.
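A quick way to convince yourself of this (a sketch, reusing the paths and names from the examples above): destroy the proxy container and recreate it, and the certificates on the host survive untouched.

```shell
# Certificates live on the host, outside any container
ls /docker/nginx/certs

# Destroy the proxy container entirely...
docker stop nginx-proxy && docker rm nginx-proxy

# ...then re-run the same "docker run" command used earlier:
# the new container finds the existing certificates in
# /docker/nginx/certs and serves them immediately, with no re-issuing
```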
In the next tutorial we will see how to configure and run a web server, creating a fully featured web hosting platform.