
Deploy a Docker container on Hetzner Debian 13 (Trixie) with systemd

A complete walkthrough for teams running their own infrastructure without a dedicated sysadmin: starting from a freshly created Hetzner server running Debian 13 (Trixie), you will lock it down, install Docker Engine, and run a pinned container under systemd with hardening, journal-based logging, and safe rolling updates. No Compose, no Swarm — just one container, one unit file, one supervisor, and a runbook your team can re-run with confidence.

When docker + systemd is the right choice

Supervising a single Docker container with systemd gives you the image-distribution story of the container ecosystem with the lifecycle management of a proper init system. You get the reproducibility of a pinned image and the operational maturity of systemd — journald logging, restart policies, dependency ordering, and resource limits — without running Kubernetes, Compose, or Swarm.

Pick this path when:

  • Your build pipeline already produces container images and pushes them to a registry.
  • You need multiple runtimes (Python + Node + a sidecar) on a small fleet without multi-language binary builds.
  • You want the exact same artifact to run on a developer laptop and on the server.
  • You run a handful of VPS instances, not a cluster — orchestrators are overkill here.

Pick the binary + systemd path instead if your application compiles to a single static binary: you avoid the container daemon's runtime overhead and have one fewer package to keep patched.

Prerequisites

You need three things to follow this guide:

  • A Hetzner account and a Debian 13 (Trixie) server you can reach over SSH.
  • An SSH keypair on your workstation. If you do not have one, generate it with ssh-keygen -t ed25519.
  • A container image pushed to a registry you can pull from — GHCR, Docker Hub, a private registry, or a cloud-provider registry.

Hetzner Debian 13 images ship with cloud-init. SSH keys selected during server creation land in /root/.ssh/authorized_keys on first boot.

Part 1. First SSH login

When you create a Hetzner server with an SSH key attached, your public key is placed into the default user's authorized_keys file. On Hetzner that user is root. Log in and patch the base system before doing anything else:

ssh root@YOUR_SERVER_IP
apt-get update
apt-get -y dist-upgrade
apt-get -y install ufw sudo curl ca-certificates gnupg
reboot

Reconnect after the reboot. Keeping the base system patched before you open any ports is the single biggest security win you can take for free.

Part 2. Create a sudo admin user

Logging in as root is fine for provisioning, but day-to-day operations should happen as an unprivileged user with sudo. Create one, copy your SSH key, and allow sudo without a password prompt (sudo over SSH without a TTY is painful otherwise):

adduser --disabled-password --gecos "" admin
usermod -aG sudo admin
install -d -m 0700 -o admin -g admin /home/admin/.ssh
cp /root/.ssh/authorized_keys /home/admin/.ssh/authorized_keys
chown admin:admin /home/admin/.ssh/authorized_keys
chmod 0600 /home/admin/.ssh/authorized_keys
echo "admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-admin
chmod 0440 /etc/sudoers.d/90-admin

Open a second SSH session as the new admin user in a different terminal before continuing. Never lock yourself out of a server by closing the only working session. Once the new session works, proceed with SSH hardening.

Part 3. Harden SSH

The defaults in Debian's OpenSSH are reasonable, but we can do much better. Disable password login entirely, restrict which users can log in, and move to a non-standard port to reduce opportunistic scanning noise in your logs. Write the following to /etc/ssh/sshd_config:

Port 2222
AddressFamily inet
LogLevel VERBOSE
LoginGraceTime 30
StrictModes yes
PubkeyAuthentication yes
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitEmptyPasswords no
X11Forwarding no
PrintMotd no
UsePAM yes
AllowUsers admin
MaxAuthTries 3
MaxSessions 2
ClientAliveInterval 300
ClientAliveCountMax 2
Subsystem sftp /usr/lib/openssh/sftp-server

Validate the config before restarting:

sudo sshd -t
sudo systemctl restart ssh

Moving off port 22 is not real security, but it cuts log volume from internet-wide scanners by 90% or more. AllowUsers is a strict allowlist — if a future account is added, SSH will refuse it until the config is updated, which is exactly the behaviour we want. MaxAuthTries 3 plus PasswordAuthentication no means an attacker gets three key-based attempts before being dropped.
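A client-side alias keeps the non-standard port and username out of every future command. A minimal entry for ~/.ssh/config on your workstation — the host alias and key path are placeholders:

```
Host myserver
    HostName YOUR_SERVER_IP
    Port 2222
    User admin
    IdentityFile ~/.ssh/id_ed25519
```

After that, plain ssh myserver works, and scp and rsync pick up the same alias.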

Part 4. Host firewall with ufw

ufw is a thin, sensible wrapper around iptables (nftables-backed on Debian 13). The goal is a default-deny firewall that allows only SSH on the new port and your application port:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'ssh'
sudo ufw allow 8080/tcp comment 'app'
sudo ufw --force enable
sudo ufw status verbose

One Docker-specific caveat: the Docker daemon manipulates iptables directly and can punch holes through ufw for published ports. The unit we write later binds the container port to 127.0.0.1 and relies on a reverse proxy or the cloud firewall for public exposure, which sidesteps the issue entirely. If you intend to publish container ports on 0.0.0.0, read the ufw-docker project or rely solely on the cloud firewall.

Part 5. Cloud firewall at the Hetzner edge

Configure Hetzner Cloud Firewall rules in the Hetzner Console or via the hcloud CLI. They apply outside the VM and should be used in addition to ufw on the host.

Host-level ufw and the Hetzner Cloud Firewall are complementary, not redundant. The cloud firewall blocks traffic before it reaches your VM's network stack, which is both cheaper (no CPU cycles spent) and safer (it protects you if ufw is ever misconfigured). Configure both with the same allowlist: SSH on 2222 from your office or VPN, and your app port from 0.0.0.0/0 once you are ready to go public.
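The same allowlist can be scripted with the hcloud CLI instead of clicking through the Console. A sketch, assuming a firewall named app-fw, a server named myserver, and an office range of 203.0.113.0/24 — verify the flags against hcloud firewall --help for your CLI version:

```
hcloud firewall create --name app-fw
hcloud firewall add-rule app-fw --direction in --protocol tcp --port 2222 --source-ips 203.0.113.0/24
hcloud firewall add-rule app-fw --direction in --protocol tcp --port 80 --source-ips 0.0.0.0/0
hcloud firewall add-rule app-fw --direction in --protocol tcp --port 443 --source-ips 0.0.0.0/0
hcloud firewall apply-to-resource app-fw --type server --server myserver
```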

Part 6. Install Docker Engine

Debian 13 ships a docker.io package in its own repositories, but it trails upstream by several releases and misses security fixes for months. Install Docker Engine from Docker's official apt repository instead:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker

Verify the daemon is up with docker info. Do not add your admin user to the docker group — membership in that group is equivalent to passwordless root, because a user with docker access can mount the host's root filesystem into a container. Always invoke Docker with sudo from interactive shells, and let systemd run containers as root through a locked-down unit.

Part 7. Pull a pinned image

Images must be pinned to an immutable tag or digest. latest is not a version — it is whatever the registry happened to point there last, and it will bite you on the first restart after the tag moves. Pull the exact version you intend to run:

sudo docker pull ghcr.io/your-org/myapp:v1.2.3
sudo docker image inspect ghcr.io/your-org/myapp:v1.2.3 | grep -E '"Id"|"RepoDigests"'

Record the image digest somewhere your deploy tooling can read it. Pinning to a digest (myapp@sha256:...) rather than a tag is even stronger: it guarantees the registry cannot change what you run without you noticing.

Part 8. Environment file for secrets

Never bake secrets into the image or put them into the systemd unit directly. Keep them in a root-owned file with mode 0640, so only root — and therefore the Docker daemon — can read it:

sudo install -d -m 0750 -o root -g root /etc/myapp
sudo tee /etc/myapp/env >/dev/null <<'EOF'
DATABASE_URL=postgres://user:PASSWORD@DB_HOST/myapp
SESSION_KEY=replace-me-with-32-random-bytes
LOG_LEVEL=info
EOF
sudo chmod 0640 /etc/myapp/env

Docker will read this file via --env-file at container start time. Secrets never appear in the process list or in the unit file on disk, and a rotation is just a file rewrite plus a restart.
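Because --env-file takes each line verbatim — no shell quoting, no spaces around the equals sign — a malformed entry silently becomes a wrong value. A small pre-restart check helps; this is a sketch, and the required key names are assumptions taken from the example file above:

```shell
# check_env FILE: fail if a required key is missing or a line has
# whitespace before '=' (docker --env-file would mis-parse it).
check_env() {
  f="$1"
  rc=0
  for key in DATABASE_URL SESSION_KEY LOG_LEVEL; do
    grep -q "^${key}=" "$f" || { echo "missing: $key"; rc=1; }
  done
  if grep -qE '^[A-Za-z_][A-Za-z0-9_]*[[:space:]]+=' "$f"; then
    echo "whitespace before '=' will be parsed literally"
    rc=1
  fi
  return $rc
}
```

Run it right before the restart; a non-zero exit aborts the deploy before the container ever starts with a broken environment.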

Part 9. Write the systemd unit

This is the heart of the setup. Copy the unit below to /etc/systemd/system/myapp.service. Every docker run flag is doing security work — the notes after the block explain why.

[Unit]
Description=myapp (docker)
Documentation=https://example.com/docs
After=docker.service network-online.target
Requires=docker.service
Wants=network-online.target

[Service]
Type=simple
Restart=on-failure
RestartSec=5
TimeoutStartSec=0
TimeoutStopSec=30
ExecStartPre=-/usr/bin/docker stop myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStart=/usr/bin/docker run --rm --name myapp \
  --read-only \
  --tmpfs /tmp:rw,size=64m \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt=no-new-privileges \
  --pids-limit=256 \
  --memory=512m \
  --cpus=1.0 \
  --log-driver=journald \
  --log-opt tag=myapp \
  --env-file /etc/myapp/env \
  -p 127.0.0.1:8080:8080 \
  ghcr.io/your-org/myapp:v1.2.3
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target

A quick tour of the hardening flags. --read-only mounts the container's root filesystem read-only, then --tmpfs /tmp carves out exactly the scratch space a well-behaved app needs. --cap-drop=ALL strips every Linux capability from the container, and --cap-add=NET_BIND_SERVICE adds back only the one needed to bind low ports — drop it entirely if your app listens on 8080. --security-opt=no-new-privileges blocks setuid escalation from inside the container. --pids-limit, --memory, and --cpus cap runaway resource use so a misbehaving container cannot take the whole VM down.

--log-driver=journald is the critical operational choice: container stdout and stderr flow into the systemd journal, which means journalctl -u myapp shows you the same stream you would see from a plain systemd service. You get one unified log pipeline for the entire box instead of hunting through /var/lib/docker/containers/.../<id>-json.log.

-p 127.0.0.1:8080:8080 binds the published port to localhost only. Public traffic arrives through the cloud firewall to a reverse proxy (Caddy or nginx) on the host, which forwards to 127.0.0.1:8080. That pattern keeps the Docker iptables manipulation from blowing a hole in ufw, and it gives you TLS termination and HTTP logging in one place.
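With Caddy as that host-side proxy, the entire public-facing configuration can be a three-line Caddyfile — the domain is a placeholder, and since Caddy fetches and renews the TLS certificate itself, ports 80 and 443 must be allowed at both firewalls:

```
example.com {
    reverse_proxy 127.0.0.1:8080
}
```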

Part 10. Enable, start, verify

sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
sudo journalctl -u myapp -n 100 --no-pager
curl -sSf http://127.0.0.1:8080/healthz

systemctl status should show active (running) with a recent start timestamp. The journal should show your application's own startup logs flowing through the journald driver. The curl to localhost confirms the container is reachable on the published port. Only after all three are green should you open the port publicly at the cloud firewall.

Part 11. Log retention

Because we route container output through journald, journal retention governs container logs automatically. Without a policy the journal will grow until it fills /var/log and takes the box down. Write a journald override:

sudo install -d /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/retention.conf >/dev/null <<'EOF'
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=14day
Compress=yes
EOF
sudo systemctl restart systemd-journald
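Two journalctl built-ins are handy here: one to confirm the cap is being respected, one to reclaim space immediately instead of waiting for rotation:

```
journalctl --disk-usage
sudo journalctl --vacuum-size=500M
```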

That configuration keeps 500 MB of logs with 14 days of retention, which is a sane default for a small service. Increase the retention window if you need longer incident forensics.

If you ever revert to the default json-file driver instead, configure max-size and max-file in /etc/docker/daemon.json — otherwise container logs will grow forever inside /var/lib/docker and journald retention will not help you.
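For reference, those json-file limits live in /etc/docker/daemon.json — the values here are illustrative, and the Docker daemon must be restarted for them to apply to newly created containers:

```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```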

Part 12. Rolling a new image

A container restart under this unit takes a couple of seconds — long enough that a client retry usually paves over it. For most apps that qualifies as "zero downtime" in practice. An update looks like this:

sudo docker pull ghcr.io/your-org/myapp:v1.2.4
sudo sed -i 's|myapp:v1.2.3|myapp:v1.2.4|' /etc/systemd/system/myapp.service
sudo systemctl daemon-reload
sudo systemctl restart myapp
sudo journalctl -u myapp -n 50 --no-pager

For apps that truly cannot tolerate a restart gap, run two units — myapp-blue.service and myapp-green.service — on different localhost ports, and flip the reverse proxy upstream between them. The old container drains inflight connections while the new one picks up new traffic. That pattern is where Docker's image-per-version model actually pays its keep over a single-binary setup.
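With Caddy in front, the cutover is a one-line config change — a sketch assuming the blue unit publishes 127.0.0.1:8081 and the green unit publishes 127.0.0.1:8082:

```
example.com {
    # serving blue; change to 127.0.0.1:8082 and run
    # `systemctl reload caddy` to cut over to green
    reverse_proxy 127.0.0.1:8081
}
```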

Part 13. Automatic image cleanup

Docker never deletes pulled images or stopped containers on its own. On a long-lived server that leaks disk fast — especially with frequent deploys. Add a weekly prune timer:

sudo tee /etc/systemd/system/docker-prune.service >/dev/null <<'EOF'
[Unit]
Description=Prune dangling docker resources

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -af --filter "until=168h"
EOF
sudo tee /etc/systemd/system/docker-prune.timer >/dev/null <<'EOF'
[Unit]
Description=Weekly docker prune

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF
sudo systemctl enable --now docker-prune.timer

The until=168h filter keeps anything created in the last week, so a quick rollback to last Tuesday's image still finds it cached locally.

Troubleshooting

The container starts and immediately exits
Inspect the journal for the container's own output with journalctl scoped to the unit. The most common causes are a missing or misspelled variable in the env file, an app that needs more scratch space than the 64 MB tmpfs mounted at /tmp, or the app trying to write elsewhere on its own filesystem — which the read-only root blocks.
Cannot bind to port 80 or 443
Your unit is missing --cap-add=NET_BIND_SERVICE, or you removed it because 8080 did not need it. Add it back for ports under 1024 and restart.
docker: Error response from daemon: pull access denied
The registry requires authentication. Run sudo docker login ghcr.io once, and systemd-managed pulls will use /root/.docker/config.json thereafter. For ephemeral CI runners, prefer short-lived registry tokens injected via the env file.
systemctl restart takes forever and eventually times out
The container is not responding to SIGTERM. docker stop waits ten seconds before SIGKILL, and systemd waits TimeoutStopSec on top. Handle SIGTERM in your app: drain inflight requests, flush logs, close database connections, exit 0.
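The shape of that handler, sketched in shell — your app's real shutdown logic replaces the echo:

```shell
# Trap SIGTERM, run cleanup, and exit 0 so docker stop (and
# therefore systemd) records a clean shutdown instead of a SIGKILL.
handle_term() {
  echo "SIGTERM: draining in-flight work"
  exit 0
}
run_until_term() {
  trap handle_term TERM
  # stand-in for the app's main loop
  while :; do sleep 1; done
}
```

Relatedly, if your container's entrypoint is a shell script, end it with exec your-app so the app replaces the shell as PID 1 and receives the SIGTERM directly — a wrapper shell that never execs swallows the signal and forces the ten-second SIGKILL on every stop.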
The app works from the host but not from the internet
Three places to check in order: the published port (-p) must bind to an interface reachable from outside, a reverse proxy must forward public traffic to 127.0.0.1, and the Hetzner Cloud Firewall must allow the public port. Traffic has to pass all three.

FAQ

Why not Docker Compose?
Compose is a good local-development tool and a reasonable single-host orchestrator, but it duplicates work systemd already does well — lifecycle, restarts, dependency ordering, logging. One unit per service with the pattern above gives you the same behaviour with one fewer daemon to patch and a log pipeline that is the same as the rest of the box.
Why not Podman's quadlet files?
Podman quadlets are excellent — if you have already migrated to Podman. If you are on Docker because that is what CI builds for, staying on Docker and using a thin systemd unit is less churn than a full runtime swap. The hardening posture is nearly identical.
Do I need a reverse proxy?
Yes, if you are terminating TLS yourself or serving multiple apps from the same server. Caddy is the simplest option and ships with automatic Let's Encrypt. Bind the container to 127.0.0.1 and let Caddy handle the public side on 80 and 443.
How do I run multiple applications on one server?
Repeat Parts 7 through 10 with a different service name, port, env file, and image. Each application gets its own unit and its own localhost port. ufw picks up the additional allow rules the same way. systemd happily supervises dozens of small containers on a single 2-vCPU box.
What about automatic security updates?
Install unattended-upgrades and enable the security source list. Docker Engine is covered because it is installed from an apt repo. Containers still need their images rebuilt and redeployed to pick up base-image CVEs — that belongs in your CI pipeline, not on the server.
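The two steps, sketched — the package install plus the standard apt periodic switches that turn the daily run on:

```
sudo apt-get -y install unattended-upgrades
sudo tee /etc/apt/apt.conf.d/20auto-upgrades >/dev/null <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```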
DeployCrate

Skip the sysadmin hire

The walkthrough above is what shipping a container correctly actually looks like — SSH hardening, ufw that cooperates with Docker's iptables, Docker Engine from the upstream repo, a systemd unit with the full hardening flag set, journald-based logging, pinned tags, and a prune timer so the disk does not fill up. Roughly two hours of careful work the first time and twenty minutes every subsequent time, assuming nothing goes wrong and nobody on your team forgets a step. That is real engineering time your team is not spending on product.

DeployCrate is how small teams get this done without hiring someone to own it. You connect your Hetzner credentials, click Provision, and the platform runs a vetted set of scripts that produce exactly the configuration you just read about: an admin user per operator, hardened SSH, ufw with sane defaults, Docker Engine from the upstream apt repo, a systemd unit per container with the full hardening flag set, journald retention, and rolling deploys from your Git pushes. Applied consistently across every server your team owns — no drift, no "wait, did we do that one yet?"

Every script is open for inspection. The SSH hardening above mirrors our ssh_hardening.sh, the firewall setup mirrors host_safety.sh, the Docker install mirrors docker.sh, and the container supervision mirrors install_operator_services.sh. You are not handing over control — you are skipping the step where a small team pretends it has a sysadmin it has not hired yet.
