Docker setup

Docker

Docker is the de facto standard containerisation system for running applications.

From https://docs.docker.com/engine/install/debian/

sudo curl -sSL https://get.docker.com/ | sh

Enable non-root access to the Docker daemon

sudo usermod -aG docker <username>
(you need to log out and back in for this to take effect)
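The new group is not active in existing shells until you re-login. A quick sketch for checking this, assuming a POSIX shell; `in_docker_group` is a hypothetical helper that takes the output of `id -nG`:

```shell
# Check whether a groups list already contains "docker" before bothering to
# log out. In real use, pass "$(id -nG)".
in_docker_group() {
  printf '%s\n' $1 | grep -qx docker   # unquoted $1 splits the groups onto separate lines
}

in_docker_group "alan adm sudo docker" && echo "docker group active"
in_docker_group "alan adm sudo" || echo "log out and back in (or run: newgrp docker)"
```

As a stopgap, `newgrp docker` starts a subshell with the group active without a full re-login.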

Portainer

Portainer is a powerful GUI for managing Docker containers.

Create the Portainer volume and then start the Docker container. For security, bind the ports only to localhost so that Portainer cannot be accessed remotely except via an SSH tunnel.

Create portainer container

docker run -d -p 127.0.0.1:8000:8000 -p 127.0.0.1:9000:9000 \
--name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/alan/containers/portainer:/data portainer/portainer-ce:latest

Setup SSH tunnel - example SSH connection string

SSH tunnel

ssh -L 9000:127.0.0.1:9000 <user>@<server.FQDN> -i <PATH-TO-PRIVATE-KEY>
Then connect using http://localhost:9000
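To avoid retyping the tunnel options, the connection can be captured in ~/.ssh/config; the host alias is hypothetical and the placeholders match those above:

```
Host portainer-tunnel
    HostName <server.FQDN>
    User <user>
    IdentityFile <PATH-TO-PRIVATE-KEY>
    LocalForward 9000 127.0.0.1:9000
```

Then `ssh -N portainer-tunnel` establishes the tunnel without opening a remote shell.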

Go to Environments > local and add the server's public IP so that all the port links become clickable.

Aim to put volumes in ~/containers/[containername] for consistency.

Watchtower

Watchtower is a container-based solution for automating Docker container base image updates.

Initial docker config setup

Watchtower can pull from public repositories, but to link to a private Docker Hub you need to supply login credentials. This is best achieved by running a docker login command in the terminal, which will create the file $HOME/.docker/config.json that we can then mount as a volume into the Watchtower container. If this is not done prior to running the container, Docker will instead create config.json as a directory!
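A quick way to check for the directory gotcha before starting Watchtower - sketched here against a temporary directory so the simulation is self-contained; `check_cfg` is a hypothetical helper:

```shell
# Classify a config.json path: Docker creates the bind-mount target as a
# directory if the file did not exist when the container started.
check_cfg() {
  if [ -d "$1" ]; then echo "directory - remove it and run 'docker login' first"
  elif [ -f "$1" ]; then echo "file - ok"
  else echo "missing - run 'docker login' to create it"
  fi
}

tmp=$(mktemp -d)
mkdir "$tmp/config.json"      # simulate what Docker does when the file was missing
check_cfg "$tmp/config.json"
rm -r "$tmp"
```

In real use, run `check_cfg "$HOME/.docker/config.json"` before deploying the stack.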

If 2FA is enabled on the Docker account, go to https://hub.docker.com/settings/security?generateToken=true to set up an access token.

The configuration below mounts this config file and the local time, includes stopped containers, and enables verbose logging.

Warning

Remember to change the email settings below

bash
docker run --detach \
    --name watchtower \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume $HOME/.docker/config.json:/config.json \
    -v /etc/localtime:/etc/localtime:ro \
    -e WATCHTOWER_NOTIFICATIONS=email \
    -e WATCHTOWER_NOTIFICATIONS_HOSTNAME=<hostname> \
    -e WATCHTOWER_NOTIFICATION_EMAIL_TO=<target email> \
    -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=<password> \
    -e WATCHTOWER_NOTIFICATION_EMAIL_DELAY=2 \
    -e WATCHTOWER_NOTIFICATION_EMAIL_FROM=<sending email> \
    -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER=<mailserver> \
    -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587 \
    -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=<maillogin> \
    containrrr/watchtower --include-stopped --debug
docker-compose/watchtower.yml
version: "3"
services:
  watchtower:
    command:
      - --include-stopped
      - --debug
    container_name: watchtower
    entrypoint:
      - /watchtower
    environment:
      WATCHTOWER_NOTIFICATIONS: email
      WATCHTOWER_NOTIFICATIONS_HOSTNAME: "<hostname>"
      WATCHTOWER_NOTIFICATION_EMAIL_TO: "<target email>"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD: "<password>"
      WATCHTOWER_NOTIFICATION_EMAIL_DELAY: 2
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: "<sending email>"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "<mailserver>"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: 587
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER: "<maillogin>"
      WATCHTOWER_CLEANUP: true
      WATCHTOWER_SCHEDULE: 0 0 4 * * * # this will run at 4am daily - uses Spring cron format
      PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      TZ: Europe/London
    expose:
      - 8080/tcp
    hostname: ffbba889a746
    image: containrrr/watchtower
    ipc: private
    labels:
      com.centurylinklabs.watchtower: true
    logging:
      driver: json-file
      options: {}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/alan/.docker/config.json:/config.json
      - /etc/localtime:/etc/localtime:ro
    working_dir: /
    restart: unless-stopped    
#This section only required if wanting to be able to ping it from Uptime Kuma
#    networks:
#      - nginx-proxy-manager_default    
#networks:
#  nginx-proxy-manager_default:
#    external: true
#    name: nginx-proxy-manager_default

Run frequency

By default Watchtower runs once per day, with the first run 24h after container activation. This can be adjusted by passing the --interval flag and specifying the number of seconds. There is also the option of using the --run-once flag to immediately check all containers and then stop Watchtower.
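For example, to check every 6 hours (an assumed frequency; --interval takes seconds):

```shell
# Compute the seconds value for Watchtower's --interval flag.
hours=6                        # assumed example frequency
interval=$((hours * 3600))
printf -- '--interval %d\n' "$interval"
```

The printed flag (`--interval 21600`) goes after the image name in the docker run command, alongside --include-stopped and --debug.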

Private Docker Hub images

Ensure any private Docker Hub images are referenced as index.docker.io/<user>/main:tag rather than <user>/main:tag

Exclude containers

To exclude a container from being checked, set a label in its docker-compose file telling Watchtower to ignore it:

labels:
  - "com.centurylinklabs.watchtower.enable=false"

To compare images with those on Docker Hub, use docker images --digests to show the sha256 hash.
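To pull just the digest column out for scripting, awk will do; the image line below is a made-up example of docker images --digests output:

```shell
# Extract the DIGEST column (third field) from a `docker images --digests` row.
# The sample line is fabricated for illustration.
line='nginx  latest  sha256:abc123def456  1a2b3c4d5e6f  2 weeks ago  187MB'
digest=$(echo "$line" | awk '{print $3}')
echo "$digest"    # prints: sha256:abc123def456
```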

Nginx Proxy Manager

Nginx Proxy Manager exposes private containerised applications via secure HTTPS reverse proxies (including free Let's Encrypt SSL certificates).

Apply this docker-compose (based on https://nginxproxymanager.com/setup/#running-the-app) as a stack in Portainer to deploy:

docker-compose/nginx-proxy-manager.yml
version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port [can comment this out later once reverse proxy host setup for npm itself]
      # Add any other Stream port you want to expose
      # - '21:21' # FTP

    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of 
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"

      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    extra_hosts:  # doesn't currently work but preparation in case fixed in the future
      - "host.docker.internal:host-gateway"
    volumes:
      - /home/alan/containers/nginx-proxy-manager/data:/data
      - ./letsencrypt:/etc/letsencrypt


# The below is something to consider if issues with needing a fixed IP, however in this case need to setup the NPM network and then specify it as external here.
#     networks:
#       nginx-proxy-manager_default:
#         ipv4_address: 172.19.0.100 # set fixed IP for NPM - this is especially important for MeshCentral and SSL cert passthrough
# networks:
#   nginx-proxy-manager_default:
#     external: true
#     name: nginx-proxy-manager_default

Login to the admin console at <serverIP>:81 with email admin@example.com and password changeme. Then change user email/password combo.

Set up a new proxy host for NPM itself with scheme http, forward hostname localhost and forward port 81.

Force SSL access to admin interface

Once initial setup is complete, edit and reload the NPM stack in Portainer to comment out port 81 so that the admin interface is only accessible via SSL.

Remember to change the Default Site in NPM settings for appropriate redirections for invalid subdomain requests.

Certbot errors on certificate renewal

In general using the custom image I created at image: 'jc21/nginx-proxy-manager:github-pr-3121' should resolve this issue.

If there is an error about a duplicate instance, check whether there are .certbot.lock files on your system:

find / -type f -name ".certbot.lock"
If there are, you can remove them:
find / -type f -name ".certbot.lock" -exec rm {} \;
(from https://community.letsencrypt.org/t/solved-another-instance-of-certbot-is-already-running/44690/2)

After clearing the certbot lock, go through site by site: 1) disable SSL, 2) renew the cert, then 3) re-enable SSL (and all sub-options).

If you mistakenly delete an old certificate first and get stuck with file-not-found error messages, copy an existing known-good folder across (e.g. cp -r /etc/letsencrypt/live/npm-1 /etc/letsencrypt/live/npm-7).

(If looking at Traefik instead, there is a reasonably helpful config guide worth consulting.)

Dozzle

A nice log viewer application that lets you monitor all container logs - https://dozzle.dev/

Apply this docker-compose as a stack in Portainer to deploy:

docker-compose/dozzle.yml
version: "3"
services:
  dozzle:
    container_name: dozzle
    entrypoint:
      - /dozzle
    environment:
      - PATH=/bin
      - TZ=Europe/London
      - DOZZLE_AUTH_PROVIDER=forward-proxy #this is to enable settings sync
    expose:
      - 8080/tcp
    image: docker.io/amir20/dozzle:latest
    networks:
      - nginx-proxy-manager_default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    working_dir: /
    restart: unless-stopped    
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default

Add to nginx proxy manager as usual (forward hostname dozzle and port 8080), but with the addition of proxy_read_timeout 30m; in the advanced settings tab to minimise the issue of the default 60s proxy timeout causing repeat log entries.

To view log files that are written to disk, create an alpine container that tails the log file to a shared volume (see the Dozzle documentation). Also note that Docker maintains its own logs, so after the weekly reset the container has to be recreated.

diskmonitor_stream.yml
version: "3"
services:
  dozzle-from-file:
    container_name: dozzle-from-file-diskmonitor
    image: alpine
    volumes:
      - /home/alan/scripts/diskmonitor.log:/var/log/stream.log
    command:
      - tail
      - -f
      - /var/log/stream.log     
    network_mode: none
    restart: unless-stopped
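Since every on-disk log needs an identical tail container, the compose file can be generated; gen_tail_compose is a hypothetical helper and the service naming follows the example above:

```shell
# Emit a tail-container compose definition for any on-disk log file.
# $1 = short name, $2 = absolute path to the log on the host.
gen_tail_compose() {
  cat <<EOF
services:
  dozzle-from-file-$1:
    container_name: dozzle-from-file-$1
    image: alpine
    volumes:
      - $2:/var/log/stream.log
    command: ["tail", "-f", "/var/log/stream.log"]
    network_mode: none
    restart: unless-stopped
EOF
}

gen_tail_compose diskmonitor /home/alan/scripts/diskmonitor.log
```

Redirect the output to a .yml file and deploy it as a stack in Portainer as usual.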

Filebrowser

A nice GUI file browser - https://github.com/filebrowser/filebrowser

Create the empty db first

mkdir -p $HOME/containers/filebrowser/branding && touch $HOME/containers/filebrowser/filebrowser.db

Then install via docker-compose:

docker-compose/filebrowser.yml
---
version: '3'
services:
  filebrowser:
    image: filebrowser/filebrowser
    container_name: filebrowser
    user: 1000:1000
    expose:
      - 80/tcp
    volumes:
      - /home/alan/:/srv
      - /home/alan/containers/filebrowser/filebrowser.db:/database.db:rw
      - /home/alan/containers/filebrowser/branding:/branding
    environment:
      - TZ=Europe/London
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default    

Then set up the NPM SSL reverse proxy (remember to include websocket support, with forward hostname filebrowser and port 80) and then log in:

Default credentials

Username: admin
Password: admin

To customise the appearance, change the instance name (e.g., Deployment server) and set the branding directory path (e.g., /branding) in Settings > Global Settings. Then create img and img/icons directories in the previously created containers/filebrowser/branding directory and add logo.svg, favicon.ico, and the 16x16 and 32x32 PNGs. (If you only supply the .ico, the browser will pick the internal higher-resolution PNGs.)
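The directory layout can be created up front (the path mirrors the volume mount in the compose file above; override BRANDING_DIR if yours differs):

```shell
# Create the branding layout Filebrowser expects, as described above.
branding="${BRANDING_DIR:-$HOME/containers/filebrowser/branding}"
mkdir -p "$branding/img/icons"
# then drop logo.svg and the favicon files under img/ and img/icons/
ls -d "$branding/img/icons"
```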

Generating favicons

The favicon generator is a very useful website to generate all the required favicons for different platforms.

Optional containers

Bytestash

A handy site for storing code snippets - https://github.com/jordan-dalby/ByteStash

Install via docker-compose:

docker-compose/bytestash.yml
services:
  bytestash:
    image: "ghcr.io/jordan-dalby/bytestash:latest"
    container_name: bytestash
    volumes:
      - /home/alan/containers/bytestash:/data/snippets
    expose:
      - 5000
    environment:
      - BASE_PATH=
      # if auth username or password are left blank then authorisation is disabled
      # the username used for logging in
      - AUTH_USERNAME=
      # the password used for logging in
      - AUTH_PASSWORD=
      # the jwt secret used by the server, make sure to generate your own secret token to replace this one
      - JWT_SECRET=[generate JWT token]
      # how long the token lasts, examples: "2 days", "10h", "7d", "1m", "60s"
      - TOKEN_EXPIRY=24h
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default

networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default    
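A random value for the JWT_SECRET variable above can be generated with openssl (assumed to be installed):

```shell
# 48 random bytes, base64-encoded -> a 64-character secret on a single line
secret=$(openssl rand -base64 48)
echo "$secret"
```

Paste the output into the stack in place of the [generate JWT token] placeholder.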

Stirling PDF

A Swiss army knife for interacting with PDFs - https://www.stirlingpdf.com/

Install via docker-compose:

docker-compose/stirling-pdf.yml
services:
  stirling-pdf:
    container_name: stirling-pdf
    image: frooodle/s-pdf:latest
    expose:
      - 8080
    volumes:
      - /home/alan/containers/StirlingPDF/trainingData:/usr/share/tessdata # Required for extra OCR languages
      - /home/alan/containers/StirlingPDF/extraConfigs:/configs
      - /home/alan/containers/StirlingPDF/customFiles:/customFiles/
      - /home/alan/containers/StirlingPDF/logs:/logs/
      - /home/alan/containers/StirlingPDF/pipeline:/pipeline/
    environment:
      - DOCKER_ENABLE_SECURITY=false
      - INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false
      - LANGS=en_GB
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default

Uptime Kuma monitoring

A nice status monitoring app - https://github.com/louislam/uptime-kuma

Install via docker-compose:

docker-compose/uptime-kuma.yml
version: "3"
services:
  uptime-kuma:
    command:
      - node
      - server/server.js
    container_name: uptime-kuma
    entrypoint:
      - /usr/bin/dumb-init
      - --
      - extra/entrypoint.sh
    environment:
      - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - NODE_VERSION=16.15.0
      - YARN_VERSION=1.22.18
      - TZ=Europe/London
    hostname: 2331f8c6db9c
    image: louislam/uptime-kuma:1
    ipc: private
    logging:
      driver: json-file
      options: {}
    mac_address: 02:42:ac:11:00:04
    expose:
      - 3001/tcp
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default    
    volumes:
      - /home/alan/containers/uptime-kuma/data:/app/data
    working_dir: /app
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default
Join to bridge network post setup if required

Remember docker-compose can only join the new container to one network, so you need to manually add it to the bridge network afterwards if you also want to monitor containers that aren't on the nginx-proxy-manager_default network. Use the following command (or add the network via Portainer):

docker network connect bridge uptime-kuma

NextCloud

Cloud-hosted sharing & collaboration server - https://hub.docker.com/r/linuxserver/nextcloud and https://nextcloud.com/

Install via docker-compose:

docker-compose/nextcloud.yml - remember to change host directories if required
---
version: "2.1"
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/alan/containers/nextcloud/appdata:/config
      - /home/alan/containers/nextcloud/data:/data
    expose:
      - 443/tcp
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default    

Then set up the NPM SSL reverse proxy to https port 443 and navigate to the new site to set up the login.

Setup 2FA

After login go to User > Settings > Security (Administration section) > Enforce two-factor authentication.
Then go to User > Apps > Two-Factor TOTP Provider (https://apps.nextcloud.com/apps/twofactor_totp), or just click the search icon at the top right and type TOTP.
Then go back to User > Settings > Security (Personal section), tick 'Enable TOTP' and verify the code.

Glances

System monitoring tool - https://nicolargo.github.io/glances/

Install via docker-compose:

docker-compose/glances.yml
version: "3"
services:
  glances:
    container_name: glances
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - "GLANCES_OPT=-w"
    expose:
      - 61208/tcp
    image: nicolargo/glances:latest-full # alpine-latest-full not showing Docker containers as of 20220723
    networks:
      - nginx-proxy-manager_default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /home/alan/containers/glances:/glances/conf
    restart:
      unless-stopped
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default

Then set up the NPM SSL reverse proxy (forward hostname glances, port 61208) and navigate to the new site to log in.

Webtop

'Linux in a web browser' https://github.com/linuxserver/docker-webtop

Install via docker-compose:

docker-compose/webtop.yml
---
version: "2.1"
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SUBFOLDER=/ #optional
      - KEYBOARD=en-gb-qwerty #optional
    volumes:
      - /home/alan/webtop:/config
      #- /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
    devices:
      - /dev/dri:/dev/dri #optional
    shm_size: "1gb" #required otherwise web browsers will crash
    restart: "no"
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default    

Then set up the NPM SSL reverse proxy to port 3000 and navigate to the new site.

MeshCentral

Self-hosted remote access client - https://github.com/Ylianst/MeshCentral & https://meshcentral.com/info/

Install via docker-compose:

docker-compose/meshcentral.yml
version: '3'
services:
  meshcentral:
    restart: always
    container_name: meshcentral
    image: typhonragewind/meshcentral
    environment:
      - IFRAME=false    #set to true if you wish to enable iframe support
      - ALLOW_NEW_ACCOUNTS=false    #set to false to disable self-service creation of new accounts besides the first (admin)
      - WEBRTC=false  #set to true to enable WebRTC - per documentation it is not officially released with meshcentral, but is solid enough to work with. Use with caution
    volumes:
      - /home/alan/containers/meshcentral/data:/opt/meshcentral/meshcentral-data    #config.json and other important files live here. A must for data persistence
      - /home/alan/containers/meshcentral/web:/opt/meshcentral/meshcentral-web    #to replace image files 
      - /home/alan/containers/meshcentral/user_files:/opt/meshcentral/meshcentral-files    #where file uploads for users live
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default

See the NGINX section of the user guide (p34 onwards) for more information about configuring MeshCentral to run alongside NPM. The key thing to note is that a fixed IP address needs to be specified for NPM in its docker-compose file - in the sample docker-compose file on this site it is set to 172.19.0.100 - and the MeshCentral configuration file needs to reflect this. If this is not done, the NPM container will be auto-allocated a new IP when it is re-created (e.g., when there is an update) and MeshCentral will then run into SSL errors as it cannot validate the certificate that has been passed through.

Example log errors when the container IP does not match the configuration:
02/25/2023 2:52:11 PM
Installing otplib@10.2.3...
02/25/2023 2:52:25 PM
MeshCentral HTTP redirection server running on port 80.
02/25/2023 2:52:25 PM
MeshCentral v1.1.4, WAN mode, Production mode.
02/25/2023 2:52:27 PM
MeshCentral Intel(R) AMT server running on remote.alanjrobertson.co.uk:4433.
02/25/2023 2:52:27 PM
Failed to load web certificate at: "https://172.19.0.14:443/", host: "remote.alanjrobertson.co.uk"
02/25/2023 2:52:27 PM
MeshCentral HTTP server running on port 4430, alias port 443.
02/25/2023 2:52:48 PM
Agent bad web cert hash (Agent:68db80180d != Server:c68725feb5 or 9259b83292), holding connection (172.19.0.11:44332).
02/25/2023 2:52:48 PM
Agent reported web cert hash:68db80180d05fce0032a326259b825c76f036593c62a8be0346365eb5540a395dbfae31d8cade3f2a4370c29c2563c27.
02/25/2023 2:52:48 PM
Failed to load web certificate at: "https://172.19.0.14:443/", host: "remote.alanjrobertson.co.uk"
02/25/2023 2:52:48 PM
Agent bad web cert hash (Agent:68db80180d != Server:c68725feb5 or 9259b83292), holding connection (172.19.0.11:44344).
02/25/2023 2:52:48 PM
Agent reported web cert hash:68db80180d05fce0032a326259b825c76f036593c62a8be0346365eb5540a395dbfae31d8cade3f2a4370c29c2563c27.
02/25/2023 2:53:18 PM
Agent bad web cert hash (Agent:68db80180d != Server:c68725feb5 or 9259b83292), holding connection (172.19.0.11:52098).
02/25/2023 2:53:18 PM
Agent reported web cert hash:68db80180d05fce0032a326259b825c76f036593c62a8be0346365eb5540a395dbfae31d8cade3f2a4370c29c2563c27.
02/25/2023 2:54:03 PM
Agent bad web cert hash (Agent:68db80180d != Server:c68725feb5 or 9259b83292), holding connection (172.19.0.11:53218).
02/25/2023 2:54:03 PM
Agent reported web cert hash:68db80180d05fce0032a326259b825c76f036593c62a8be0346365eb5540a395dbfae31d8cade3f2a4370c29c2563c27.

Edit ~/containers/meshcentral/data/config.json to replace with the following. Remember that items beginning with an underscore are ignored.

config.json - remember to edit highlighted lines to ensure the correct FQDN and NPM host are specified
{
  "$schema": "http://info.meshcentral.com/downloads/meshcentral-config-schema.json",
  "settings": {
    "cert": "remote.alanjrobertson.co.uk",
    "WANonly": true,
    "_LANonly": true,
    "port": 4430,
    "aliasPort": 443,
    "_redirPort": 800,
    "_redirAliasPort": 80,
    "AgentPong": 200,
    "TLSOffload": "172.19.0.100",
    "SelfUpdate": false,
    "AllowFraming": "false",
    "WebRTC": "false"
  },
  "domains": {
    "": {
      "_title": "MyServer",
      "NewAccounts": "false",
      "certUrl": "https://172.19.0.100:443/"
    }
  }
}

Then set up the NPM SSL reverse proxy to port 4430 (remember to switch on websocket support) and navigate to the new site.

If running with Authelia then add new entries into the configuration file there too so that the agent and (for remote control) the meshrelay and (for setup of the agents and settings) the invite download page can all bypass the authentication but that the main web UI is under two factor:

    - domain: remote.alanjrobertson.co.uk
      resources:
        # allow agent & agent invites to bypass
        - "^/agent.ashx([?].*)?$"
        - "^/agentinvite([?].*)?$"
        # allow mesh relay to bypass (for remote control, console, files) and agents to connect/obtain settings
        - "^/meshrelay.ashx([?].*)?$"
        - "^/meshagents([?].*)?$"
        - "^/meshsettings([?].*)?$"
        # allow files to be downloaded
        - "^/devicefile.ashx([?].*)?$"
        # allow invite page for agent download to be displayed
        - "^/images([/].*)?$"
        - "^/scripts([/].*)?$"
        - "^/styles([/].*)?$"
      policy: bypass
    - domain: remote.alanjrobertson.co.uk
      policy: two_factor
See Authelia documentation for more on regex string (and use Regex 101 with Golang option)
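The bypass patterns can be sanity-checked locally with grep -E - Authelia uses Go regexes, but these simple patterns behave the same under POSIX ERE (the dot is escaped here for strictness; the URLs are made-up examples):

```shell
# Verify which request paths a bypass pattern matches before deploying it.
pattern='^/agent\.ashx([?].*)?$'
echo "/agent.ashx?id=1&x=2" | grep -Eq "$pattern" && echo "bypassed"
echo "/agent.ashx.evil"     | grep -Eq "$pattern" || echo "not bypassed"
```

The anchored `$` after the optional query group is what stops lookalike paths such as the second example from slipping past authentication.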

Log in to MeshCentral and set up an initial account. Then add a new group, and download and install the agent. Once installed, it will show up in MeshCentral and you will be able to control/access the machine remotely. There is also the option to download an Assistant (which can be branded) that users can run on demand (it doesn't require elevated privileges; the Assistant can also be run with the -debug flag to log any issues).

To setup custom images for hosts, run the following commands:

docker exec -it meshcentral /bin/bash
cp -r public/ /opt/meshcentral/meshcentral-web/
cp -r views/ /opt/meshcentral/meshcentral-web/
Then change the relevant icons - create a 256x256 PNG with a transparent background and replace the existing icons. This will change the large icons but not the small ones (see the GitHub bug).

Add AV exception

It is likely that an exception needs to be added to AV software for C:\Program Files\Mesh Agent on the local machine (this is certainly the case with Avast).

Netdata

System monitoring tool - https://www.netdata.cloud/

Install via docker-compose:

docker-compose/netdata.yml
version: '3'
services:
  netdata:
    image: netdata/netdata
    container_name: netdata
    hostname: linode.alanjrobertson.co.uk # set to fqdn of host
    expose:
      - 19999/tcp
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    environment:
      - NETDATA_CLAIM_TOKEN=<INSERT_TOKEN_HERE_FROM_CLOUD>
      - NETDATA_CLAIM_URL=https://app.netdata.cloud
      - NETDATA_CLAIM_ROOMS=
    volumes:
      - netdataconfig:/etc/netdata
      - netdatalib:/var/lib/netdata
      - netdatacache:/var/cache/netdata
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /etc/os-release:/host/etc/os-release:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  netdataconfig:
  netdatalib:
  netdatacache:

Then set up the NPM SSL reverse proxy (forward port 19999) and navigate to the new site to log in. There is also the option of linking to an online account - get the claim token from the website and update the stack to include it in the environment variables.

YOURLS

Link shortener tool with personal tracking - https://yourls.org

Set up the directory structure prior to deploying the stack/docker-compose to avoid directories ending up with root ownership or files being created as directories:

setup commands

mkdir -p ~/containers/yourls/plugins ~/containers/yourls/html
touch ~/containers/yourls/my.cnf

Copy the my.cnf file contents below to ~/containers/yourls/my.cnf - this reduces RAM usage from ~233MB down to ~44MB.

~/containers/yourls/my.cnf
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 0. "/etc/mysql/my.cnf" symlinks to this file, reason why all the rest is read.
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# If you are new to MariaDB, check out https://mariadb.com/kb/en/basic-mariadb-articles/

#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Port or socket location where to connect
# port = 3306
socket = /run/mysqld/mysqld.sock

# Import all .cnf files from configuration directory

!includedir /etc/mysql/mariadb.conf.d/
!includedir /etc/mysql/conf.d/

[mysqld]
#max_connections         = 100
max_connections         = 10
connect_timeout         = 5
wait_timeout            = 600
max_allowed_packet      = 16M
#thread_cache_size       = 128
thread_cache_size       = 0
#sort_buffer_size        = 4M
sort_buffer_size        = 32K
#bulk_insert_buffer_size = 16M
bulk_insert_buffer_size = 0
#tmp_table_size          = 32M
tmp_table_size          = 1K
#max_heap_table_size     = 32M
max_heap_table_size     = 16K
#
# * MyISAM
#
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched. On error, make copy and try a repair.
myisam_recover_options = BACKUP
#key_buffer_size         = 128M
key_buffer_size         = 1M
#open-files-limit       = 2000
table_open_cache        = 400
myisam_sort_buffer_size = 512M
concurrent_insert       = 2
#read_buffer_size        = 2M
read_buffer_size        = 8K
#read_rnd_buffer_size    = 1M
read_rnd_buffer_size    = 8K
#
# * Query Cache Configuration
#
# Cache only tiny result sets, so we can fit more in the query cache.
query_cache_limit               = 128K
#query_cache_size                = 64M
query_cache_size                = 512K
# for more write intensive setups, set to DEMAND or OFF
#query_cache_type               = DEMAND
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# we do want to know about network errors and such
#log_warnings           = 2
#
# Enable the slow query log to see queries with especially long duration
#slow_query_log[={0|1}]
slow_query_log_file     = /var/log/mysql/mariadb-slow.log
long_query_time = 10
#log_slow_rate_limit    = 1000
#log_slow_verbosity     = query_plan

#log-queries-not-using-indexes
#log_slow_admin_statements
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id              = 1
#report_host            = master1
#auto_increment_increment = 2
#auto_increment_offset  = 1
#log_bin                        = /var/log/mysql/mariadb-bin
#log_bin_index          = /var/log/mysql/mariadb-bin.index
# not fab for performance, but safer
#sync_binlog            = 1
expire_logs_days        = 10
max_binlog_size         = 100M
# slaves
#relay_log              = /var/log/mysql/relay-bin
#relay_log_index        = /var/log/mysql/relay-bin.index
#relay_log_info_file    = /var/log/mysql/relay-bin.info
#log_slave_updates
#read_only
#
# If applications support it, this stricter sql_mode prevents some
# mistakes like inserting invalid dates etc.
#sql_mode               = NO_ENGINE_SUBSTITUTION,TRADITIONAL
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
default_storage_engine  = InnoDB
# you can't just change log file size, requires special procedure
#innodb_log_file_size   = 50M
#innodb_buffer_pool_size = 256M
innodb_buffer_pool_size = 10M
#innodb_log_buffer_size  = 8M
innodb_log_buffer_size  = 512K
innodb_file_per_table   = 1
innodb_open_files       = 400
innodb_io_capacity      = 400
innodb_flush_method     = O_DIRECT
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer              = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
# As this lives in the container's conf.d directory, the includes would start a recursive loop, so comment them out
#!include /etc/mysql/mariadb.cnf
#!includedir /etc/mysql/conf.d/
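The bind mounts in the stack below expect the host paths to already exist; in particular, if the my.cnf file is missing when the container starts, Docker creates it as a directory (the same pitfall noted for config.json above). A small preparation sketch, assuming the same ~/containers/yourls layout used throughout this guide:

```shell
# Create the YOURLS plugin directory and an (initially empty) my.cnf
# before the first "docker compose up"; paste the config above into
# my.cnf afterwards. Paths assume the layout used in this guide.
mkdir -p "$HOME/containers/yourls/plugins"
touch "$HOME/containers/yourls/my.cnf"
```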

Install via docker-compose:

docker-compose/yourls.yml
version: '3'
services:
  yourls_db:
    container_name: yourls_db
    image: mariadb
    restart: always
    volumes:
      - /home/alan/containers/yourls:/var/lib/mysql
      - /home/alan/containers/yourls/my.cnf:/etc/mysql/conf.d/my.cnf
    environment:
      MYSQL_ROOT_PASSWORD: setarandomrootpasswordhere
      MYSQL_DATABASE: yourls    # don't change these
      MYSQL_USER: yourls        # don't change these
      MYSQL_PASSWORD: yourls    # don't change these
    networks:
      - nginx-proxy-manager_default  

  yourls:
    container_name: yourls
    links:
      - yourls_db
    depends_on:
      - yourls_db
    expose:
      - 80
    volumes:
      - /home/alan/containers/yourls/plugins:/var/www/html/user/plugins
      - /home/alan/containers/yourls/index.html:/var/www/html/index.html
      - /home/alan/containers/yourls/bg.jpg:/var/www/html/bg.jpg
      - /home/alan/containers/yourls/favicon:/var/www/html
    environment:
      - YOURLS_SITE=https://ajr.mobi
      - YOURLS_USER=setadminusernamehere
      - YOURLS_PASS=setadminpasswordhere
      - YOURLS_DB_HOST=yourls_db
      - YOURLS_DB_USER=yourls
      - YOURLS_DB_PASS=yourls    
    image: yourls
    restart: always
    networks:
      - nginx-proxy-manager_default    
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default 

Note that after installation the root directory will just show an error - this is by design!

Instead you need to go to domain.tld/admin to access the admin interface. On first run, click to set up the database, then log in using the credentials pre-specified in the docker-compose file.

Invalid username/password issues

Note that when the stack parses the password into an environment variable there can be issues with special characters (mainly $). You can check what was actually passed by looking at the container details in Portainer.
It is also possible to check in ~/containers/yourls/user/config.php - before the first login to the admin console the password is shown in cleartext at line 75 (press Alt-N in nano to show line numbers); after login it is hashed.
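Docker Compose treats $ as the start of a variable interpolation, so a literal $ in a password set via the stack must be doubled. A sketch of the relevant lines (the password is a placeholder):

```yaml
environment:
  # To pass the literal password "pa$sword", double the $ so that
  # Compose does not try to interpolate "$sword" as a variable:
  MYSQL_ROOT_PASSWORD: pa$$sword
```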

YOURLS has an extensible architecture - any plugins should be downloaded and added to subdirectories within ~/containers/yourls/plugins - see preview and qrcode as examples with setup instructions (although Preview URL with QR code is actually a nicer combined option to install than those two separate ones - once installed, just append a ~ to the shortcode to see the preview). Once plugins have been copied into place, go to the admin interface to activate them.

As mentioned, by default accessing the root directory (domain.tld) or an incorrect shortcode will display a 403 error page (as the latter just redirects to the root). Place an index.(html|php) file in the ~/containers/yourls/html directory of the host (volume is already mapped in the stack/docker-compose file) to replace this.

example index.html with background image and centred text

<html lang="en" translate="no">
<head>
<meta name="google" content="notranslate">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Courier+Prime&display=swap" rel="stylesheet">
<title>ajr.mobi</title>
<style>
        * {
            margin: 0;
            padding: 0;
        }

        html {
            background: url(bg.jpg) no-repeat center center fixed;
            -webkit-background-size: cover;
            -moz-background-size: cover;
            -o-background-size: cover;
            background-size: cover;
        }

        .align {
            display: flex;
            height: 100%;
            align-items: center;
            justify-content: center;
            font-family: 'Courier Prime', monospace;
            color: linen;
            font-size: 350%;
        }
</style>
</head>
<body><div class="align">ajr.mobi</div></body>
</html>
(image-centering CSS from https://css-tricks.com/perfect-full-page-background-image/)

Don't map the whole /html directory as a Docker volume

If the whole /html directory is mapped then when a new YOURLS Docker image is released it will not update correctly - any file in the mapped volume /var/www/html takes precedence over the new application files (to avoid unexpected overrides), so the previous version's files are still used. The solution is to map only the plugins directory (which will be empty anyway) plus the index.html ± background image.

If a simple redirect to another page is required then instead just create an index.php with the following code:

example ~/containers/yourls/html/index.php redirect
<?php
    header("Location: http://www.example.com/another-page.php");
    exit();
?>

You can change the favicon (the icon shown in the browser tab) for the index - there are nice generators for these from text/emoji or from Font Awesome icons. Place the newly generated favicon files in ~/containers/yourls/favicon - the Docker compose file above maps the contents of this directory to /var/www/html within the container.

You can also insert PHP pages into the /pages directory to create pages accessible via shortcode - see the YOURLS documentation for more information.

Homepage options

https://github.com/bastienwirtz/homer

Create the empty assets folder first

mkdir -p $HOME/containers/homer/assets

Install via docker-compose (stack on Portainer):

docker-compose/homer.yml
---
version: "2"
services:
  homer:
    image: b4bz/homer
    #To build from source, comment previous line and uncomment below
    #build: .
    container_name: homer
    volumes:
      - /home/alan/containers/homer/assets:/www/assets:rw
    expose:
      - 8080/tcp
    user: 1000:1000 # default
    environment:
      - INIT_ASSETS=1 # default - installs example configuration file & assets (favicons, ...) to help you get started.
      - TZ=Europe/London
    networks:
      - nginx-proxy-manager_default
    restart: unless-stopped      
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default      
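Once running, Homer is configured entirely through assets/config.yml (INIT_ASSETS=1 drops an example there on first start). A minimal sketch - the titles and URL below are placeholders:

```yaml
title: "Dashboard"
subtitle: "Homer"
services:
  - name: "Containers"
    icon: "fas fa-cloud"
    items:
      - name: "Portainer"
        subtitle: "Docker management"
        url: "https://portainer.example.com"
```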

https://github.com/linuxserver/Heimdall

Install via docker-compose (stack on Portainer):

docker-compose/heimdall.yml
---
version: "2.1"
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall:latest
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/alan/containers/heimdall:/config
    expose:
      - 80/tcp
      - 443/tcp
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default          

Fix the maximum image size issue by raising PHP's default 2MB upload limit

echo "upload_max_filesize = 30M" >> /home/alan/containers/heimdall/php/php-local.ini
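post_max_size caps the total POST body and can still block uploads after upload_max_filesize has been raised, so it is worth setting both; the resulting php-local.ini would look like this (values are examples):

```ini
upload_max_filesize = 30M
post_max_size = 30M
```

Restart the container afterwards so PHP picks up the new limits.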

https://github.com/Lissy93/dashy

DO NOT USE IF RAM <1GB

The build fails on machines with less RAM, leading to high CPU and swap usage.
See discussion at https://github.com/Lissy93/dashy/issues/136

Create the empty db file first

mkdir -p $HOME/containers/dashy && touch $HOME/containers/dashy/my-conf.yml

Install via docker-compose (stack on Portainer):

docker-compose/dashy.yml
---
version: "3"
services:
  dashy:
    image: lissy93/dashy:latest
    container_name: dashy
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/alan/containers/dashy/my-conf.yml:/app/public/conf.yml
    expose:
      - 80/tcp
    restart: unless-stopped
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default          
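Dashy then reads its layout from the mapped conf.yml; a minimal sketch (titles and URL are placeholders):

```yaml
pageInfo:
  title: "Dashboard"
sections:
  - name: "Containers"
    items:
      - title: "Portainer"
        description: "Docker management"
        url: "https://portainer.example.com"
```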

Matomo

Self-hosted analytics platform - https://matomo.org/
Install via docker-compose (stack on Portainer):

docker-compose/matomo.yml
version: '3'
services:
  app:
    image: matomo:latest
    restart: unless-stopped
    environment:
      - MATOMO_DATABASE_HOST=db
      - MATOMO_DATABASE_TABLES_PREFIX=mat_
      - MATOMO_DATABASE_USERNAME=matomo-CHANGEME
      - MATOMO_DATABASE_PASSWORD=matomo-CHANGEME
      - MATOMO_DATABASE_DBNAME=matomo
      - TZ=Europe/London      
    volumes:
      - /home/alan/containers/matomo/app:/var/www/html
    links:
      - db:db
    expose:
      - 80/tcp
    networks:
      - nginx-proxy-manager_default         
  db:
    image: yobasystems/alpine-mariadb:latest
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: matomo
      MYSQL_USER: matomo-CHANGEME
      MYSQL_PASSWORD: matomo-CHANGEME
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - /home/alan/containers/matomo/db:/var/lib/mysql
    networks:
      - nginx-proxy-manager_default
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default

Then set it up in NPM as usual with SSL and add the usual Authelia container advanced config.
Once this is done, access Matomo via the new proxy address and follow the click-through setup steps. The database parameters should already be pre-filled (from the environment variables above); the main step is just to set up a superadmin user. After this the setup process generates the tracking code, which has to be placed just before the closing </head> tag (or in the relevant WordPress configuration). This tracking code needs to be able to access the matomo.php and matomo.js files without authentication, so the following has to be added to the Authelia configuration:
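The generated tracking code follows the standard Matomo loader shape; a sketch with a placeholder URL and site ID (use the exact values the setup wizard produces):

```html
<!-- Matomo - placeholder URL and site ID; paste the wizard's version -->
<script>
  var _paq = window._paq = window._paq || [];
  _paq.push(['trackPageView']);
  _paq.push(['enableLinkTracking']);
  (function() {
    var u = "https://analytics.example.com/";
    _paq.push(['setTrackerUrl', u + 'matomo.php']);
    _paq.push(['setSiteId', '1']);
    var d = document, g = d.createElement('script'),
        s = d.getElementsByTagName('script')[0];
    g.async = true; g.src = u + 'matomo.js';
    s.parentNode.insertBefore(g, s);
  })();
</script>
```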

add to access_control section of ~/containers/authelia/config/configuration.yml
    - domain: analytics.alanjrobertson.co.uk
      resources:
        - "^matomo.php*$"
        - "^matomo.js*$"
      policy: bypass

PrivateBin

A minimalist, open source online pastebin where the server has zero knowledge of pasted data - https://privatebin.info/

Install via docker-compose (stack on Portainer):

docker-compose/privatebin.yml
services:
  privatebin:
    container_name: privatebin
    image: privatebin/nginx-fpm-alpine
    restart: always
    read_only: true
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    expose:
      - "8080"    
    volumes:
      - /home/alan/containers/privatebin:/srv/data
    networks:
      - nginx-proxy-manager_default      
networks:
  nginx-proxy-manager_default:
    external: true
    name: nginx-proxy-manager_default      

Then create an npm certificate/reverse proxy redirect.

Fix directory permission issues

By default the new folder is owned by root; even if it is created by the logged-in user before creating the container, the web application is still unable to write to the directory. Change this as follows:

sudo chown -R nobody:82 privatebin/
sudo chmod 700 privatebin/

If you still have permission issues, here is how to work out the correct ownership

From https://ppfeufer.de/privatebin-your-self-hosted-pastebin-instance/

  1. First make the privatebin directory globally writeable with sudo chmod 777 privatebin/
  2. Now open PrivateBin and create a paste. If you then do ls -lh it will show the owning user - normally user nobody and group 82
  3. Now change directory ownership with sudo chown -R nobody:82 privatebin/
  4. Finally revert the directory permissions with sudo chmod 700 privatebin/
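As an alternative to ls -lh, stat prints the owner, group and mode directly; a quick sketch on a scratch directory (run it against privatebin/ instead):

```shell
# Show mode and ownership of a directory; demonstrated on a temporary
# directory so it is safe to run anywhere. On the privatebin data
# directory this would typically print something like "nobody:82 700".
d=$(mktemp -d)
chmod 700 "$d"
stat -c '%U:%G %a' "$d"
```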

Administration commands

There is an administration capability built into PrivateBin.

It can be accessed by opening a terminal in the container, then cd /bin followed by one of:

  • administration --help display help info
  • administration --statistics show stats
  • administration --purge purge expired pastes
  • administration --delete <pasteID> delete specified paste

Docker Compose files for existing containers

It is possible to easily generate a Docker Compose file for a container that has been started via the command line - see https://github.com/Red5d/docker-autocompose

For one or more named containers:

docker run --rm --pull always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ghcr.io/red5d/docker-autocompose \
    <container-name-or-id> <additional-names-or-ids>

Or for every container on the host:

docker run --rm --pull always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ghcr.io/red5d/docker-autocompose \
    $(docker ps -aq)