Multiple Websites and Webapps Served with Nginx and Docker as a Non-Root User

Purpose and Summary

The purpose of this post is to describe how I set up my private servers (both virtual and home) and to lay a foundation for future posts describing how I implement pages and apps.

The figure below shows my basic setup.

I assume a basic understanding of the tools being used: a Debian Stretch virtual private server, Docker CE (community edition) version 18.09.3, Docker Compose version 1.18.0, and nginx (free edition) version 1.15.9. Feel free to ask questions about anything that is too vague or confusing.

Basic Setup

Though later posts will refer to a home server running on a laptop, this post will center on my virtual private server, hosted by the amazing and incredible Frantech.

Seriously these guys are fantastic. I have been with them for years and they have always provided top of the line support.

As you can see in the figure above, I use a single Docker container running the primary front-facing nginx server that accepts all incoming HTTP and HTTPS requests. That container then proxies those requests to the relevant secondary nginx container, which in turn serves the file or proxies to the requested app (green 1 through 5).

Creating a Hierarchy and Reducing the Risk of a Single Point of Failure

One reason for doing it this way is that, though I still technically have a single point of failure with the front facing container, I have reduced the risk of any single app taking down the entire system.

By spreading out the task that each container performs I distribute any load (though on my site there is pretty much none) and most importantly I create a hierarchy of systems/applications that makes it much easier to troubleshoot the inevitable issues that arise.

The task of the primary nginx container, then, is to redirect all requests from HTTP to HTTPS (which is why it holds the necessary security files) and then to proxy each request to and from the appropriate secondary nginx container.
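As a rough sketch of what that looks like per site (the server name, upstream container name, certificate paths, and ports here are placeholders, not my actual values; the non-standard ports 8080 and 8443 anticipate the non-root setup described below):

```nginx
# Redirect all plain-HTTP requests to HTTPS.
server {
    listen 8080;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Terminate TLS and proxy to the secondary nginx container.
server {
    listen 8443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        proxy_pass http://secondary-nginx:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```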

Creating and Running the Nginx Docker Container as a Non-Root User

Using Docker to handle any and all incoming traffic is a security measure in itself: any server I use that communicates with the outside, or depends on a server that does, runs in a container. An added layer of security is to run containers as a non-root user whenever possible. This is easy to do with nginx, and since this container accepts traffic on the standard ports 80 and 443, it just makes sense to take a couple of extra steps.

Since we need to modify a couple of files that will be passed to the image during setup, that will be the first step.

Modify nginx.conf and default.conf

First, modify nginx.conf by removing the user directive from the first line. Everything else stays the same. If you need to copy and paste, mine looks like this:
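A sketch based on the stock file shipped in the official nginx image, with only the leading `user` line removed:

```nginx
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}
```

Note that the paths nginx writes to (the pid file, logs, and cache) must be writable by the non-root user; the Dockerfile below takes care of that.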

Next we modify the default.conf file to listen on the non-standard port we will be using to accept traffic forwarded from port 80. Running as non-root, nginx cannot bind to any port below 1024, so if you leave the file as is, nginx will not start. We will accept on 8080 (mapped from 80), so the default.conf file will look like this:
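A sketch of the stock default.conf with the listen port switched to 8080:

```nginx
server {
    listen       8080;  # non-root nginx cannot bind to ports below 1024
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```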

Keeping this file will be useful later when we use Let's Encrypt to add SSL/HTTPS encryption to our sites.

If you want to get the two files yourself, just run an nginx container (which will pull the latest image, the same one we'll build our own image on) and copy the files out of it like so:

docker run --rm -d nginx
docker ps # verify it is running and check the name
docker cp <nginx-name>:/etc/nginx/nginx.conf .
docker cp <nginx-name>:/etc/nginx/conf.d/default.conf .
ls # verify you have the files
docker stop <nginx-name>

Build the Nginx-as-Non-Root Image

To build our image, which we will tag as nginxbase since it will serve as the base image for all our containers, use the Dockerfile below, substituting your own values in the marked areas.
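As a hedged sketch (the user name and UID are placeholders for your own values), the Dockerfile copies in the two modified files, creates an unprivileged user, hands it ownership of the paths nginx writes to, and switches to it:

```dockerfile
FROM nginx:latest

# Replace the stock config files with our modified copies.
COPY nginx.conf /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d/default.conf

# Create an unprivileged user (name and UID are placeholders)
# and give it ownership of everything nginx needs to write to.
RUN adduser --disabled-login --gecos '' --uid 1000 nginxuser \
    && chown -R nginxuser:nginxuser /etc/nginx /var/cache/nginx /var/log/nginx \
    && touch /var/run/nginx.pid \
    && chown nginxuser:nginxuser /var/run/nginx.pid

USER nginxuser

EXPOSE 8080
```

Build it with `docker build -t nginxbase .` from the directory containing the three files.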

You now have a ready-to-go base image to use as a forward-facing container. You could easily just run it with a simple run command:

docker run --name frontnginx -p 80:8080 nginxbase


But we want to be able to build out a fairly robust structure with this as a base, so we'll use docker-compose.

Using Docker-Compose to Create the Container

I prefer to use a docker-compose file because it is much easier to make changes to a structured file. Since we will be attaching multiple volumes (and possibly adding more in the future), as well as creating a new network on which this container communicates with its child containers, a compose file just makes things easier.

Mine looks something like this:
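A sketch using the names that appear throughout this post (frontnginx, frontnetwork, front-config, front-ssl); the mount points and the 443-to-8443 mapping are assumptions consistent with the non-root ports above:

```yaml
version: '2'

services:
  frontnginx:
    image: nginxbase
    container_name: frontnginx
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - front-config:/etc/nginx/conf.d
      - front-ssl:/etc/nginx/ssl
    networks:
      - frontnetwork
    restart: unless-stopped

networks:
  frontnetwork:
    external: true

volumes:
  front-config:
    external: true
  front-ssl:
    external: true
```

The network and volumes are declared external because we create them by hand first, as shown next.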

Prior to running this you will have to

docker network create frontnetwork # create the main network
docker volume create front-config # create the conf.d volume
docker volume create front-ssl # create the directory to put the ssl files in

Then you can run docker-compose -f docker-compose.nginxbase.yml up for the first time and connect to the site in your browser. This gives you a running output so you can troubleshoot initial issues. After that, run docker-compose stop and then docker-compose start, and the containers will come up detached.

Adding Config Files, Communicating with the Container, and What's Next

To add .conf files you can either use docker cp as above, swapping source and destination, or run a separate container attached to both $(pwd) and the front-config volume and copy files that way. I use the latter method with an image that I beefed up with vim-gtk, tmux, etc., for the sole purpose of editing files inside volumes.
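A sketch of the copy-via-helper-container approach (the .conf filename is a placeholder):

```shell
# Mount the current directory and the named volume side by side,
# then copy the new config file into the volume.
docker run --rm \
  -v "$(pwd)":/src \
  -v front-config:/dest \
  nginx cp /src/mysite.conf /dest/mysite.conf
```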

Note that when using the docker-compose setup above, you will have to put the new, altered default.conf file into the conf.d volume yourself, because once the persistent volume exists its contents mask the files baked into the image.

To communicate with the container use docker exec. If you need to do anything as root be sure to use docker exec -it --user root frontnginx /bin/bash.

In the next post I will talk about how I added HTTPS to all my sites and then I will describe my setup for this blog, which is built with the hexo static site generator.