When docker + systemd is the right choice
Supervising a single Docker container with systemd gives you the image-distribution story of the container ecosystem with the lifecycle management of a proper init system. You get the reproducibility of a pinned image and the operational maturity of systemd — journald logging, restart policies, dependency ordering, and resource limits — without running Kubernetes, Compose, or Swarm.
Pick this path when:
- Your build pipeline already produces container images and pushes them to a registry.
- You need multiple runtimes (Python + Node + a sidecar) on a small fleet without multi-language binary builds.
- You want the exact same artifact to run on a developer laptop and on the server.
- You run a handful of VPS instances, not a cluster — orchestrators are overkill here.
Pick the binary + systemd path instead if your application compiles to a single static binary. You will avoid the container daemon's runtime overhead and have one less large package to patch.
Prerequisites
You need three things to follow this guide:
- A Hetzner account and a Debian 13 (Trixie) server you can reach over SSH.
- An SSH keypair on your workstation. If you do not have one, generate it with `ssh-keygen -t ed25519`.
- A container image pushed to a registry you can pull from — GHCR, Docker Hub, a private registry, or a cloud-provider registry.
Hetzner Debian 13 images ship with cloud-init. SSH keys selected during server creation land in /root/.ssh/authorized_keys on first boot.
Part 1. First SSH login
When you create a Hetzner server with an SSH key attached, your public key is placed into the default user's authorized_keys file. On Hetzner that user is root. Log in and patch the base system before doing anything else:
```shell
ssh root@YOUR_SERVER_IP
apt-get update
apt-get -y dist-upgrade
apt-get -y install ufw sudo curl ca-certificates gnupg
reboot
```

Reconnect after the reboot. Keeping the base system patched before you open any ports is the single biggest security win you can take for free.
Part 2. Create a sudo admin user
Logging in as root is fine for provisioning, but day-to-day operations should happen as an unprivileged user with sudo. Create one, copy your SSH key, and allow sudo without a password prompt (sudo over SSH without a TTY is painful otherwise):
```shell
adduser --disabled-password --gecos "" admin
usermod -aG sudo admin
install -d -m 0700 -o admin -g admin /home/admin/.ssh
cp /root/.ssh/authorized_keys /home/admin/.ssh/authorized_keys
chown admin:admin /home/admin/.ssh/authorized_keys
chmod 0600 /home/admin/.ssh/authorized_keys
echo "admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-admin
chmod 0440 /etc/sudoers.d/90-admin
```

Open a second SSH session as the new admin user in a different terminal before continuing. Never lock yourself out of a server by closing the only working session. Once the new session works, proceed with SSH hardening.
Part 3. Harden SSH
The defaults in Debian's OpenSSH are reasonable, but we can do much better. Disable password login entirely, restrict which users can log in, and move to a non-standard port to reduce opportunistic scanning noise in your logs. Write the following to /etc/ssh/sshd_config:
```
Port 2222
AddressFamily inet
Protocol 2
LogLevel VERBOSE
LoginGraceTime 30
StrictModes yes
PubkeyAuthentication yes
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitEmptyPasswords no
X11Forwarding no
PrintMotd no
UsePAM yes
AllowUsers admin
MaxAuthTries 3
MaxSessions 2
ClientAliveInterval 300
ClientAliveCountMax 2
Subsystem sftp /usr/lib/openssh/sftp-server
```

Validate the config before restarting:

```shell
sudo sshd -t
sudo systemctl restart ssh
```

Moving off port 22 is not real security, but it dramatically cuts log volume from internet-wide scanners. AllowUsers is a strict allowlist — if a future account is added, SSH will refuse it until the config is updated, which is exactly the behaviour we want. MaxAuthTries 3 plus PasswordAuthentication no means an attacker gets three key-based attempts before being dropped.
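After the restart, the session you already have stays connected, but every new login must use the new port and the admin user. A quick check from your workstation (substitute your server's IP):

```shell
ssh -p 2222 admin@YOUR_SERVER_IP
```

If this hangs or is refused, fix the config from your still-open session before closing anything.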
Part 4. Host firewall with ufw
ufw is a thin, sensible wrapper around nftables. The goal is a default-deny firewall that allows only SSH on the new port and your application port:
```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'ssh'
sudo ufw allow 8080/tcp comment 'app'
sudo ufw --force enable
sudo ufw status verbose
```

One Docker-specific caveat: the Docker daemon manipulates iptables directly and can punch holes through ufw for published ports. The unit we write later binds the container port to 127.0.0.1 and relies on a reverse proxy or the cloud firewall for public exposure, which sidesteps the issue entirely. If you intend to publish container ports on 0.0.0.0, read the ufw-docker project or rely solely on the cloud firewall.
Part 5. Cloud firewall at the Hetzner edge
Configure Hetzner Cloud Firewall rules in the Hetzner Console or via the hcloud CLI. They apply outside the VM and should be used in addition to ufw on the host.
Host-level ufw and the Hetzner Cloud Firewall are complementary, not redundant. The cloud firewall blocks traffic before it reaches your VM's network stack, which is both cheaper (no CPU cycles spent) and safer (it protects you if ufw is ever misconfigured). Configure both with the same allowlist: SSH on 2222 from your office or VPN, and your app port from 0.0.0.0/0 once you are ready to go public.
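If you prefer to script the edge rules, the hcloud CLI can create and attach a firewall. A sketch — the firewall name, server name, and source CIDR below are illustrative placeholders; check `hcloud firewall --help` on your version for exact flags:

```shell
# Create a firewall, allow SSH from a trusted range and the app port publicly,
# then attach it to the server.
hcloud firewall create --name myapp-fw
hcloud firewall add-rule myapp-fw --direction in --protocol tcp --port 2222 --source-ips 203.0.113.0/24
hcloud firewall add-rule myapp-fw --direction in --protocol tcp --port 8080 --source-ips 0.0.0.0/0
hcloud firewall apply-to-resource myapp-fw --type server --server myserver
```

Keeping these commands in your provisioning script means the edge rules are reproducible rather than click-configured.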
Part 6. Install Docker Engine
Debian 13 ships a docker.io package in its own repositories, but it trails upstream by several releases and can lag on fixes. Install Docker Engine from Docker's official apt repository instead:
```shell
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
```

Verify the daemon is up with `docker info`. Do not add your admin user to the docker group — membership in that group is equivalent to passwordless root, because a user with docker access can mount the host's root filesystem into a container. Always invoke Docker with sudo from interactive shells, and let systemd run containers as root through a locked-down unit.
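A quick sanity check that the daemon is running and which version you got (the `--format` template prints just the server version):

```shell
sudo systemctl is-active docker
sudo docker info --format 'server version: {{.ServerVersion}}'
```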
Part 7. Pull a pinned image
Images must be pinned to an immutable tag or digest. latest is not a version — it is whatever the registry happened to point there last, and it will bite you on the first restart after the tag moves. Pull the exact version you intend to run:
```shell
sudo docker pull ghcr.io/your-org/myapp:v1.2.3
sudo docker image inspect ghcr.io/your-org/myapp:v1.2.3 | grep -E '"Id"|"RepoDigests"'
```

Record the image digest somewhere your deploy tooling can read it. Pinning to a digest (myapp@sha256:...) rather than a tag is even stronger: it guarantees the registry cannot change what you run without you noticing.
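One way to capture the digest programmatically — a sketch, assuming the image has already been pulled as above:

```shell
# Resolve the tag to its immutable digest and record it for deploy tooling.
DIGEST=$(sudo docker image inspect --format '{{index .RepoDigests 0}}' ghcr.io/your-org/myapp:v1.2.3)
echo "$DIGEST"   # ghcr.io/your-org/myapp@sha256:...
```

The unit's ExecStart can then reference the digest string instead of the tag.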
Part 8. Environment file for secrets
Never bake secrets into the image or put them into the systemd unit directly. Keep them in a root-owned file with mode 0640 so only root and Docker can read it:
```shell
sudo install -d -m 0750 -o root -g root /etc/myapp
sudo tee /etc/myapp/env >/dev/null <<'EOF'
DATABASE_URL=postgres://user:pass@db.example.com:5432/myapp
SESSION_KEY=replace-me-with-32-random-bytes
LOG_LEVEL=info
EOF
sudo chmod 0640 /etc/myapp/env
```

Docker will read this file via --env-file at container start time. Secrets never appear in the process list or in the unit file on disk, and a rotation is just a file rewrite plus a restart.
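Rotating a secret is then a two-step operation (assuming the myapp unit from Part 9):

```shell
sudoedit /etc/myapp/env          # rewrite the secret in place
sudo systemctl restart myapp     # the container restarts with the new value
```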
Part 9. Write the systemd unit
This is the heart of the setup. Copy the unit below to /etc/systemd/system/myapp.service. Every docker run flag is doing security work — the notes after the block explain why.
```ini
[Unit]
Description=myapp (docker)
Documentation=https://example.com/docs
After=docker.service network-online.target
Requires=docker.service
Wants=network-online.target

[Service]
Type=simple
Restart=on-failure
RestartSec=5
TimeoutStartSec=0
TimeoutStopSec=30
ExecStartPre=-/usr/bin/docker stop myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStart=/usr/bin/docker run --rm --name myapp \
    --read-only \
    --tmpfs /tmp:rw,size=64m \
    --cap-drop=ALL \
    --cap-add=NET_BIND_SERVICE \
    --security-opt=no-new-privileges \
    --pids-limit=256 \
    --memory=512m \
    --cpus=1.0 \
    --log-driver=journald \
    --log-opt tag=myapp \
    --env-file /etc/myapp/env \
    -p 127.0.0.1:8080:8080 \
    ghcr.io/your-org/myapp:v1.2.3
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

A quick tour of the hardening flags. --read-only mounts the container's root filesystem read-only, then --tmpfs /tmp carves out exactly the scratch space a well-behaved app needs. --cap-drop=ALL strips every Linux capability from the container, and --cap-add=NET_BIND_SERVICE adds back only the one needed to bind low ports — drop it entirely if your app listens on 8080. --security-opt=no-new-privileges blocks setuid escalation from inside the container. --pids-limit, --memory, and --cpus cap runaway resource use so a misbehaving container cannot take the whole VM down.
--log-driver=journald is the critical operational choice: container stdout and stderr flow into the systemd journal, which means journalctl -u myapp shows you the same stream you would see from a plain systemd service. You get one unified log pipeline for the entire box instead of hunting through /var/lib/docker/containers/.../<id>-json.log.
-p 127.0.0.1:8080:8080 binds the published port to localhost only. Public traffic arrives through the cloud firewall to a reverse proxy (Caddy or nginx) on the host, which forwards to 127.0.0.1:8080. That pattern keeps the Docker iptables manipulation from blowing a hole in ufw, and it gives you TLS termination and HTTP logging in one place.
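As a sketch of the reverse-proxy side — assuming Caddy is installed on the host and `app.example.com` is a placeholder for your real domain:

```shell
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
app.example.com {
    reverse_proxy 127.0.0.1:8080
}
EOF
sudo systemctl reload caddy
```

Caddy provisions TLS certificates for the domain automatically, so the ports you open at ufw and the cloud firewall are 80 and 443, not 8080.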
Part 10. Enable, start, verify
```shell
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
journalctl -u myapp -n 100 --no-pager
curl -sSf http://127.0.0.1:8080/healthz
```

systemctl status should show active (running) with a recent start timestamp. The journal should show your application's own startup logs flowing through the journald driver. The curl to localhost confirms the container is reachable on the published port. Only after all three are green should you open the port publicly at the cloud firewall.
Part 11. Log retention
Because we route container output through journald, journal retention governs container logs automatically. Without a policy the journal will grow until it fills /var/log and takes the box down. Write a journald override:
```shell
sudo install -d /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/retention.conf >/dev/null <<'EOF'
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=14day
Compress=yes
EOF
sudo systemctl restart systemd-journald
```

That configuration caps the journal at 500 MB with 14 days of retention, which is a sane default for a small service. Increase the retention window if you need longer incident forensics.
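To see how much disk the journal currently occupies, and to trim it immediately rather than waiting for rotation to catch up:

```shell
sudo journalctl --disk-usage
sudo journalctl --vacuum-size=500M   # one-off trim down to the new cap
```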
If you ever revert to the default json-file driver instead, configure max-size and max-file in /etc/docker/daemon.json — otherwise container logs will grow forever inside /var/lib/docker and journald retention will not help you.
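For reference, a minimal /etc/docker/daemon.json for that fallback might look like this (values are illustrative; log-driver settings only apply to containers created after the change):

```shell
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "5" }
}
EOF
sudo systemctl restart docker
```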
Part 12. Rolling a new image
A container restart under this unit takes a couple of seconds — long enough that a client retry usually paves over it. For most apps that qualifies as "zero downtime" in practice. An update looks like this:
```shell
sudo docker pull ghcr.io/your-org/myapp:v1.2.4
sudo sed -i 's|myapp:v1.2.3|myapp:v1.2.4|' /etc/systemd/system/myapp.service
sudo systemctl daemon-reload
sudo systemctl restart myapp
journalctl -u myapp -n 50 --no-pager
```

For apps that truly cannot tolerate a restart gap, run two units — myapp-blue.service and myapp-green.service — on different localhost ports, and flip the reverse proxy upstream between them. The old container drains inflight connections while the new one picks up new traffic. That pattern is where Docker's image-per-version model actually pays its keep over a single-binary setup.
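If you want to sanity-check the sed pattern before touching the real unit, rehearse it on a throwaway copy first (the path and versions are illustrative):

```shell
# Rehearse the tag bump on a scratch file and verify the substitution took.
printf 'ExecStart=/usr/bin/docker run ghcr.io/your-org/myapp:v1.2.3\n' > /tmp/unit.rehearsal
sed -i 's|myapp:v1.2.3|myapp:v1.2.4|' /tmp/unit.rehearsal
grep 'myapp:' /tmp/unit.rehearsal   # should now show v1.2.4
```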
Part 13. Automatic image cleanup
Docker never deletes pulled images or stopped containers on its own. On a long-lived server that leaks disk fast — especially with frequent deploys. Add a weekly prune timer:
```shell
sudo tee /etc/systemd/system/docker-prune.service >/dev/null <<'EOF'
[Unit]
Description=Prune dangling docker resources

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -af --filter "until=168h"
EOF

sudo tee /etc/systemd/system/docker-prune.timer >/dev/null <<'EOF'
[Unit]
Description=Weekly docker prune

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl enable --now docker-prune.timer
```

The until=168h filter keeps anything created in the last week, so a quick rollback to last Tuesday's image still finds it cached locally.
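Confirm the timer is scheduled, and optionally trigger one prune by hand to verify the service unit works before the week is out:

```shell
systemctl list-timers docker-prune.timer --no-pager
sudo systemctl start docker-prune.service   # run one prune immediately
```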
Troubleshooting
The container starts and immediately exits

Check the container's own output with journalctl -u myapp -n 100. With --read-only in place, a common cause is the app trying to write somewhere other than /tmp; add another --tmpfs mount for that path, or relax --read-only while you diagnose.
Cannot bind to port 80 or 443
The unit drops every capability, so binding ports under 1024 needs --cap-add=NET_BIND_SERVICE, or you removed it because 8080 did not need it. Add it back for ports under 1024 and restart.

docker: Error response from daemon: pull access denied
Run sudo docker login ghcr.io once, and systemd-managed pulls will use /root/.docker/config.json thereafter. For ephemeral CI runners, prefer short-lived registry tokens injected via the env file.

systemctl restart takes forever and eventually times out
docker stop waits ten seconds before SIGKILL, and systemd waits TimeoutStopSec on top. Handle SIGTERM in your app: drain inflight requests, flush logs, close database connections, exit 0.

The app works from the host but not from the internet
Check each layer in turn: the published port (-p) is bound to 127.0.0.1 in this setup, so a reverse proxy must forward public traffic to it, ufw must allow the proxy's public port, and the Hetzner Cloud Firewall must allow it too. Traffic has to pass all three.

FAQ
Why not Docker Compose?

For a single container there is nothing to compose. systemd already covers what Compose would add here — restarts, ordering, environment files — and it is one less tool to install, version, and patch on the server.
Why not Podman's quadlet files?

Quadlets solve the same problem natively on Podman and are a fine choice if your fleet standardizes on Podman. This guide assumes the Docker Engine toolchain that most image-build pipelines target.
Do I need a reverse proxy?
Not strictly, but it is the pattern this guide assumes: keep the container bound to 127.0.0.1 and let Caddy handle the public side on 80 and 443.

How do I run multiple applications on one server?

One unit, one env file, and one localhost port per application — myapp.service on 8080, the next app on 8081, and so on — with the reverse proxy routing to each by hostname.
What about automatic security updates?
Install unattended-upgrades and enable the security source list. Docker Engine can be covered too once you add Docker's apt repo to the allowed origins. Containers still need their images rebuilt and redeployed to pick up base-image CVEs — that belongs in your CI pipeline, not on the server.