The dragonhive/dergnz web stack

Setting up this website reminded me how happy I am with my current automation and web stack.

It just works. It's not flawless, but I barely ever have to touch it, and if I want to add something new, I just copy a folder with a template docker-compose file, customize it to what I need, and voilà: a new service is up and running, automatically updated on a daily basis, and backed up to an external server with some extra scripts (not in this tutorial).

Setting up something like that takes a little effort, especially the figuring-out part, but once you have it going, it is absolutely the most blissful and manageable way to deal with more than a few services, even though it is admittedly a bit much to set up if you only care about a single web app.

But how does it work?

The core idea

I like containers. Some people may prefer virtual machines, and that's fine too; you can even combine the two concepts if you like. But this setup is very easy to deploy, and I've been pondering turning it into a quick-deploy stack with some basic web interface for adding more containers and services, as it has proven very robust and very easy to use so far.

The core concept is to have a few simple containers that together form the base of a very easy-to-use web services host, while also being very easy to update, back up, and manage. It consists of the following base containers:

  • An nginx frontend webserver that listens on both ports 80 and 443; completely vanilla nginx, with its configuration folders exported out of the container.
  • jwilder's nginx-proxy/docker-gen container, which looks at a read-only copy of the Docker socket to query which containers are running and what their desired outside hostnames are, and generates configuration files for our frontend nginx webserver to use.
  • acme-companion, which requests Let's Encrypt certificates and cooperates with the docker-gen container to set up HTTPS.
  • Any number of other services, each with at the very least an environment variable for the VIRTUAL_HOST name (domain/hostname) and one for the LETSENCRYPT_HOST name (the HTTPS domain/hostname). In this case I will demonstrate it with Jekyll, which is what this website runs on :)
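
Put together, the pieces interact roughly like this (a simplified sketch; the arrows show who reads or writes what):

                 Internet (ports 80/443)
                          |
                    [nginx-proxy]
                    ^           ^
    writes configs, |           | writes certificates
    sends SIGHUP    |           |
               [docker-gen]   [acme-companion]
                    |           |
                    +-----+-----+
                          |
              Docker socket (read-only):
  watches containers for VIRTUAL_HOST / LETSENCRYPT_HOST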

A basic example

For our core we need to set up a container host program. There are a few options you could pick, but for simplicity's sake, and because it has the most widespread usage, I'll focus on Docker for now. One could also pick, for example, Podman, which should in theory also offer an API socket that docker-gen could use, but I have not gotten this to work myself, so YMMV.

One of the advantages (and disadvantages) of Docker is that it's centrally managed by a daemon, which you can query and command over a socket to do your bidding automatically, and we're going to use this socket. This has some downsides in terms of security, of course: if an attacker ever manages to compromise a container that has access to the Docker socket, they can read everything that's running on the Docker host. That's assuming the socket is mounted read-only; a writable socket is much more dangerous, as it allows the attacker to spawn new containers that could do anything. But it works exceptionally well for automation purposes, as long as you secure the machines that have access to it and realize that they are among your weak points.
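
To get a feel for what such a tool sees, you can query the same socket yourself with curl; a quick sketch using the standard Docker Engine API (jq is only there to make the output readable, and is an extra dependency):

# List the running containers straight from the Docker socket, the same way docker-gen does
curl --unix-socket /var/run/docker.sock http://localhost/containers/json | jq '.[].Names'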

jwilder's nginx-proxy/docker-gen

docker-gen is a tiny but very useful tool I found, which uses the Docker socket to see which containers are running, checks their environment variables, and uses those to automatically generate an nginx configuration. Optionally, one can add acme-companion (formerly letsencrypt-nginx-proxy-companion) to also generate matching Let's Encrypt certificates, to make everything nice and HTTPS-y.

  • More info about nginx-proxy can be found at https://github.com/nginx-proxy/nginx-proxy
  • More info about acme-companion can be found at https://github.com/nginx-proxy/acme-companion
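
To make the magic a little less magical: for a container that sets VIRTUAL_HOST=derg.nz and serves on port 4000, the generated default.conf ends up containing roughly the following (heavily simplified; the real template emits quite a bit more, and the container IP here is just an example):

upstream derg.nz {
    # The container's internal IP on the shared network, plus its VIRTUAL_PORT
    server 172.18.0.5:4000;
}

server {
    server_name derg.nz;
    listen 80;
    location / {
        proxy_pass http://derg.nz;
    }
}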

Setting up a full server and Jekyll

The best way to see how this works is to dive right in and set up the stack, with some test application like Jekyll.

  1. Install your favorite Linux distro. It doesn't matter much whether you use Fedora, Ubuntu, Debian, openSUSE or something else, as long as Docker is available. Note that some distros (like Fedora) have SELinux enabled, which can be painful to configure correctly during setup. If you have SELinux, I recommend using setenforce 0 to temporarily set it to permissive mode; you can always generate an SELinux policy from the logs it generates at a later stage.
  2. Install both docker and docker-compose
  3. Set up your files the way you like; be it some externally mounted disk on its own /serverfiles mount point, or somewhere in the existing filesystem, it doesn't matter too much what you pick. For the sake of this tutorial I'm going to use /serverfiles as the top-level directory for our stored files. You can search and replace this string with whatever you end up using in practice.
  4. Create a folder for your docker-compose files, and one for your containers' working files. You can put those in the same directory, but I recommend splitting them out for easier backups and management, although in the end you'll have to back up both if you want to keep your data, so it's up to you. I'm going to assume /serverfiles/compose-files for the compose files, and /serverfiles/data for the data that's mounted inside the containers at runtime. Docker also supports named volumes stored elsewhere, which are more compatible with Swarm, but for this tutorial I'll stick to the plain folder (bind) mount approach.
  5. Let's get the actual webserver running first. It won't do anything yet, but we'll need it in order to make Jekyll show up later on.
    • Make sure docker is running (systemctl enable --now docker or something similar for your distro)
    • Create a folder named something like /serverfiles/compose-files/nginx-frontweb and open it, then create a new docker-compose.yml file with the following contents:
version: '2'
services:
# First we set up an nginx container as our frontend proxy. We grab and export the config folders so we can control its behavior from jwilder's docker-gen container.
  nginx-proxy:
    image: nginx:alpine
    container_name: nginx-proxy
    environment:
      - DHPARAM_GENERATION=false
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /serverfiles/data/nginx-frontweb/conf:/etc/nginx/conf.d
      - /serverfiles/data/nginx-frontweb/vhost:/etc/nginx/vhost.d
      - /serverfiles/data/nginx-frontweb/html:/usr/share/nginx/html
      - /serverfiles/data/nginx-frontweb/certs:/etc/nginx/certs:ro
    privileged: true


# This container will automatically generate nginx configs based on the environment variable `VIRTUAL_HOST` of running containers. This allows you to add any number of services very easily without ever having to touch nginx configs.
  docker-gen:
    image: nginxproxy/docker-gen
    container_name: nginx-proxy-gen
    command: -notify-sighup nginx-proxy -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    environment:
      - DHPARAM_GENERATION=false
    volumes_from:
      - nginx-proxy
    volumes:
      - /serverfiles/data/nginx-frontweb/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
      - /run/docker.sock:/tmp/docker.sock:ro # We map it read-only so that if our container gets hacked, we cannot write to the docker socket.
      - /serverfiles/data/nginx-frontweb/conf:/etc/nginx/conf.d
      - /serverfiles/data/nginx-frontweb/vhost:/etc/nginx/vhost.d
      - /serverfiles/data/nginx-frontweb/html:/usr/share/nginx/html
      - /serverfiles/data/nginx-frontweb/certs:/etc/nginx/certs:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"
    privileged: true


# And finally acme-companion requests certificates from Let's Encrypt for every container that has the `LETSENCRYPT_HOST` variable defined
  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    environment:
      - DHPARAM_GENERATION=false
    volumes_from:
      - nginx-proxy
    volumes:
      - /serverfiles/data/nginx-frontweb/certs:/etc/nginx/certs:rw
      - /serverfiles/data/nginx-frontweb/acme:/etc/acme.sh
      - /run/docker.sock:/var/run/docker.sock:ro
      - /serverfiles/data/nginx-frontweb/conf:/etc/nginx/conf.d
      - /serverfiles/data/nginx-frontweb/vhost:/etc/nginx/vhost.d
      - /serverfiles/data/nginx-frontweb/html:/usr/share/nginx/html
    privileged: true

# Here we're defining an external network for other containers to join, which is required for nginx to reach those containers internally.
networks:
  default:
    external:
      name: frontendweb
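
One gotcha worth calling out: because the network is declared as external, Docker will not create it for you, and the stack will refuse to start until it exists. Create it once by hand before the first docker-compose up:

docker network create frontendweb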

This will serve as the base for all your webservices, as it will take care of all the subdomain routing, certificate requesting, et cetera, for you. All you have to do after this is set up extra containers with whatever services you like, and they'll be reachable and have HTTPS certificates as soon as you start them!

  6. Now that we have that, we can set up Jekyll and have it forwarded to the internet.
    • Go back up one folder, and create a new folder named something like mywebsite-jekyll, open it, and create another docker-compose.yml file.
    • Here's the docker-compose.yml file that I'm using for this very website (paths changed):

version: '3'
services:
  jekyll:
    image: jekyll/jekyll
    environment:
      - VIRTUAL_HOST=derg.nz,dragonhive.net
      - VIRTUAL_PROTO=http
      - VIRTUAL_PORT=4000
      - LETSENCRYPT_HOST=derg.nz,dragonhive.net
    command: jekyll serve --watch --trace --incremental
    expose:
      - 4000
    volumes:
      - /serverfiles/data/mywebsite-jekyll:/srv/jekyll
      - /serverfiles/data/mywebsite-jekyll/vendor/bundle:/usr/local/bundle:cached

networks:
  default:
    external:
      name: frontendweb

Notice the VIRTUAL_HOST, VIRTUAL_PROTO, and VIRTUAL_PORT environment variables? That's the magic. They instruct docker-gen to create an nginx config appropriate for those (sub)domain names, and LETSENCRYPT_HOST makes acme-companion request an HTTPS certificate while it's at it.
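
Since the generated configuration lands in the conf folder we bind-mounted earlier, it's easy to check what docker-gen produced once the containers are running:

# Inspect the nginx config that docker-gen wrote for our containers
cat /serverfiles/data/nginx-frontweb/conf/default.conf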

  7. Jekyll requires some manual setup before it can be used. To prepare Jekyll for first use, execute the following:
    • docker-compose run jekyll jekyll new . # docker-compose run (service name) (the jekyll command) (new subcommand) (current folder)
    • you should now have the Jekyll files in your /serverfiles/data/mywebsite-jekyll folder
    • docker-compose run jekyll jekyll build # builds the website and makes it ready for serving (this seems optional, as it also happens automatically on container startup)
  8. Start your engines!
    • Please note, I did not explain any DNS aspects in this tutorial. I will assume you have a wildcard subdomain pointing to your server before you configure any subdomains or domains in the config files. If a DNS tutorial is desired, let me know! I'll be happy to write one or add it here if needed.
    • Neither did I explain port forwarding here. I'll assume you have both 80 and 443 forwarded to your webserver, as these are required for HTTP and HTTPS traffic. You'll still need port 80 even if you only intend to use 443, as it's a hard requirement for renewing the Let's Encrypt certificates.
    • cd into your nginx-frontweb folder and run docker-compose up -d. This will bring up the frontend webserver stack, and if all goes well it should stay running (you can check with docker-compose ps).
    • cd into your ../mywebsite-jekyll folder and execute docker-compose up -d again.
    • If all went well, you should see the Jekyll default page now, and you can start customizing it by editing the markdown files!
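
If it doesn't show up, here are a few quick sanity checks to run from a shell (the hostnames are from this example; substitute your own):

docker-compose ps                # are all the containers actually running?
curl -I http://derg.nz           # should answer, or redirect to HTTPS
curl -I https://derg.nz          # should serve the Jekyll page once the certificate is issued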

Keeping it up-to-date

Once you have everything running, one of the first things you'll probably want is a way to update everything to the latest versions every once in a while.

Because of how Docker works, and how we've laid out our folder of compose files, automating this process is actually trivial:

#!/bin/bash
start=`date +%s`
dockercompose="docker compose" # change this if you use e.g docker-compose or podman-compose instead
cd /serverfiles/compose-files || exit 1

for d in */ ; do
    cd "$d"
    if [[ $d == *"mastodon"* ]]; then
        echo "running git pull before docker for $d"
        git pull
    fi
    echo "updating $d"
    $dockercompose down
    $dockercompose pull
    $dockercompose up --force-recreate --build -d
    cd ..
    echo "Done with: $d"
    echo "----------------------------------------------"
done

end=`date +%s`
runtime=$((end-start))
echo "Finished! Time took: $runtime seconds."

Let's break it down a little: for d in */ ; do <- for every (*) entry in the current directory that ends with /, i.e. all directories, run these commands

    if [[ $d == *"mastodon"* ]]; then
        echo "running git pull before docker for $d"
        git pull
    fi

^^^^^^^^ Run git pull first if it's Mastodon, because Mastodon requires a git pull to update. It may also require you to run 'migrations' afterwards for proper operation; for more information on that, and on how to fix Mastodon if it breaks, see this post. You can also choose to skip it here and run a different script manually by using continue, like so:

    if [[ $d == *"mastodon"* ]]; then
        echo "Skipping $d"
        cd ..
        continue

    fi

Anyway,

    echo "updating $d"
    $dockercompose down
    $dockercompose pull
    $dockercompose up --force-recreate --build -d

^^^^^^ This echoes the name of the directory, takes down the containers belonging to the current compose file, pulls new images, and brings the containers back up. --force-recreate avoids any inconsistencies, --build makes sure any build instructions in the individual compose files are executed (so that e.g. Mastodon updates properly), and -d detaches, letting the containers run in peace in the background.

After the script completes, it outputs the time the run took. This is achieved by storing the output of the date command at the start and at the end, and then subtracting the start from the end amount of seconds:

start=`date +%s`
 # Stuff happens
end=`date +%s`
runtime=$((end-start))
echo "Finished! Time took: $runtime seconds."

Backing it up

Once you've nailed keeping things up-to-date, the next thing you'll probably want to have down is backups. Again, this is beautifully simple because of how we've set things up so far:

Because our containers store everything in /serverfiles/data, and all the definition files are in /serverfiles/compose-files, all you really have to do is back up /serverfiles somewhere*

*: The only caveat is that Docker may store some container temp files in /var/lib/docker. If you want to be entirely sure, you can either symlink that folder into /serverfiles/docker, or actually tell Docker to put it there in the first place. The symlink is made with ln -s /serverfiles/docker /var/lib/docker (with Docker stopped and the original folder moved out of the way first). If you'd rather tell Docker to use a specific directory, create /etc/docker/daemon.json and put this in it:

{
  "data-root": "/serverfiles/docker"
}
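
Keep in mind that changing data-root doesn't move anything by itself; a minimal migration sketch, assuming you can afford a short downtime:

# Stop Docker, copy the existing data over, then restart on the new data-root
systemctl stop docker
rsync -a /var/lib/docker/ /serverfiles/docker/
systemctl start docker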

There are a ton of ways to back up a specific folder, such as rsync, but I really like BorgBackup, because it deduplicates and compresses everything, and also makes sure you have some previous versions to actually restore from, rather than just a single copy somewhere else (imagine deleting something and then syncing it! Or worse: ransomware, and then syncing it...).
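
One prerequisite the script below assumes: a Borg repository has to be initialized once before the first backup, for example with:

# One-time repository setup; pick an encryption mode you're comfortable with
borg init --encryption=repokey /path/to/your/borg/repository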

Here's a script you can run, which uses BorgBackup to create a backup and then runs a prune, so that only the defined number of daily/weekly/monthly backups is kept over time:

#!/bin/bash

# Set the repository location
REPOSITORY="/path/to/your/borg/repository"

# Set the data location
DATADIR="/serverfiles"

# Backup name format based on current date and time
BACKUP_NAME="::{now:%Y-%m-%d_%H:%M:%S}"

# Pruning variables
DAYS_TO_KEEP=7      # Number of days to keep daily backups
WEEKS_TO_KEEP=4     # Number of weeks to keep weekly backups
MONTHS_TO_KEEP=6    # Number of months to keep monthly backups

# Creating a backup
borg create $REPOSITORY$BACKUP_NAME \
    $DATADIR \
    --stats

# Pruning the backups
borg prune $REPOSITORY \
    --stats --list --show-rc \
    --keep-daily $DAYS_TO_KEEP \
    --keep-weekly $WEEKS_TO_KEEP \
    --keep-monthly $MONTHS_TO_KEEP

Don't forget to mount the backup repository location where you specify it. You can also add the mount command at the top of the script to make your life easier, and unmount afterwards for extra safety, if you prefer.
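
For example, a hypothetical top and bottom for the script above, assuming the backup disk has an /etc/fstab entry at /mnt/backups:

# At the top of the backup script: bring the backup disk online
mount /mnt/backups
REPOSITORY="/mnt/backups/borg-repository"

# ... borg create / borg prune as above ...

# At the bottom: take the disk offline again when done
umount /mnt/backups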

F.A.Q, common issues, and fixes

  • If a container doesn't start, try running it without -d (detached mode); docker-compose will then print all the logs and stay attached to the console output of the containers while they run. You can also use docker-compose logs to get the most recent logs.
  • As of 22-8-2022 this is still an early version of this large writeup. I hope I got everything right; if not, please feel free to poke me.
