Deploy a compiled binary on Hetzner Debian 13 (Trixie) with systemd
A complete walkthrough for teams running their own infrastructure without a dedicated sysadmin: starting from a freshly created Hetzner server running Debian 13 (Trixie), you'll lock it down and run a compiled binary under systemd with hardening, log rotation, and zero-downtime updates. No Docker, no buildpacks — just a single binary, a systemd unit, and a runbook your team can re-run with confidence.
When binary + systemd is the right choice
A statically linked binary supervised by systemd is the leanest way to run a long-lived network service on Linux. There is no container runtime to install, no image registry to manage, and no layered filesystem overhead. The binary is the artifact; systemd is the supervisor; the journal is the log pipeline.
Pick this path when:
- Your application compiles to a single static binary.
- You want the minimum possible runtime footprint on small VPS instances.
- You value fast, predictable startup times — systemd can launch your service in milliseconds.
- You do not need the fleet-wide image distribution story that Docker registries provide.
Pick Docker + systemd instead when you ship multiple languages, need sidecar processes, or already publish to a container registry as part of CI. We have a separate guide for that path.
Prerequisites
You need three things to follow this guide:
- A Hetzner account and a Debian 13 (Trixie) server you can reach over SSH.
- An SSH keypair on your workstation. If you do not have one, generate it with ssh-keygen -t ed25519.
- A Linux binary built for your server's architecture — most commonly linux/amd64. Build it on your workstation or in CI, not on the server itself.
Hetzner Debian 13 images ship with cloud-init. SSH keys selected during server creation land in /root/.ssh/authorized_keys on first boot.
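If you would rather provision the admin user from Part 2 at creation time, Hetzner's user-data field accepts cloud-config. A minimal sketch, with the ssh-ed25519 line as a placeholder for your own public key:

```yaml
#cloud-config
# Creates the "admin" user from Part 2 at first boot instead of by hand.
users:
  - name: admin
    groups: [sudo]
    shell: /bin/bash
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-ed25519 AAAA_REPLACE_WITH_YOUR_KEY you@workstation
```

Paste this into the "Cloud config" box when creating the server; everything else in this guide stays the same.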
Part 1. First SSH login
When you create a Hetzner server with an SSH key attached, your public key is placed into the default user's authorized_keys file. On Hetzner that user is root. Log in and make sure the system is up to date before doing anything else:
```shell
ssh root@YOUR_SERVER_IP
apt-get update
apt-get -y dist-upgrade
apt-get -y install ufw sudo curl ca-certificates
reboot
```

Reconnect after the reboot. Keeping the base system patched before you open any ports is the single biggest security win you can take for free.
Part 2. Create a sudo admin user
Logging in as root is fine for provisioning, but day-to-day operations should happen as an unprivileged user with sudo. Create one, copy your SSH key, and make sure it can reach sudo without a password prompt (sudo over SSH without a TTY is painful otherwise):
```shell
adduser --disabled-password --gecos "" admin
usermod -aG sudo admin
install -d -m 0700 -o admin -g admin /home/admin/.ssh
cp /root/.ssh/authorized_keys /home/admin/.ssh/authorized_keys
chown admin:admin /home/admin/.ssh/authorized_keys
chmod 0600 /home/admin/.ssh/authorized_keys
echo "admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-admin
chmod 0440 /etc/sudoers.d/90-admin
```

Open a second SSH session as the new admin user in a different terminal before continuing. Never lock yourself out of a server by closing the only working session. Once the new session works, proceed with SSH hardening.
Part 3. Harden SSH
The defaults in Debian's OpenSSH are reasonable, but we can do much better. Disable password login entirely, restrict which users can log in, and move to a non-standard port to reduce opportunistic scanning noise in your logs. Write the following to /etc/ssh/sshd_config:
```
Port 2222
AddressFamily inet
LogLevel VERBOSE
LoginGraceTime 30
StrictModes yes
PubkeyAuthentication yes
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitEmptyPasswords no
X11Forwarding no
PrintMotd no
UsePAM yes
AllowUsers admin
MaxAuthTries 3
MaxSessions 2
ClientAliveInterval 300
ClientAliveCountMax 2
Subsystem sftp /usr/lib/openssh/sftp-server
```

Validate the config before restarting:
```shell
sudo sshd -t
sudo systemctl restart ssh
```

A few notes on the choices above. Moving off port 22 is not real security, but it dramatically cuts log volume from internet-wide scanners. AllowUsers is a strict allowlist — if a future deploy adds another account, SSH will refuse it until the config is updated, which is exactly the behaviour we want. MaxAuthTries 3 plus PasswordAuthentication no means an attacker gets three key-based attempts before being dropped.
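On your workstation, a matching ~/.ssh/config entry saves typing the port and user on every connection. The Host alias myserver is whatever name you like:

```
Host myserver
    HostName YOUR_SERVER_IP
    Port 2222
    User admin
    IdentityFile ~/.ssh/id_ed25519
```

After this, plain ssh myserver logs in, and the scp commands later in this guide shorten to scp ./dist/myapp myserver:/tmp/myapp.new.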
Part 4. Host firewall with ufw
ufw is a thin, sensible wrapper around nftables. The goal is a default-deny firewall that allows only SSH (on the new port) and your application port:
```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'ssh'
sudo ufw allow 8080/tcp comment 'app'
sudo ufw --force enable
sudo ufw status verbose
```

The order matters. Enable ufw only after you have added the SSH rule, or you will lock yourself out of the only session you have. Verify with the status output before closing your spare terminal.
Part 5. Cloud firewall at the Hetzner edge
Configure Hetzner Cloud Firewall rules in the Hetzner Console or via the hcloud CLI. They apply outside the VM and should be used in addition to ufw on the host.
Host-level ufw and the Hetzner Cloud Firewall are complementary, not redundant. The cloud firewall blocks traffic before it reaches your VM's network stack, which is both cheaper (no CPU cycles spent) and safer (it protects you if ufw is ever misconfigured). Configure both with the same allowlist: SSH on 2222 from your office or VPN, and your app port from 0.0.0.0/0 once you are ready to go public.
Part 6. Create the service user and directory layout
Each application gets a dedicated unprivileged user, its own home under /opt, a writable state directory under /var/lib, and a log directory under /var/log. Keeping these separated means you can lock the binary directory read-only in systemd while still letting the process write to state and log paths.
```shell
sudo useradd --system --home /opt/myapp --shell /usr/sbin/nologin myapp
sudo install -d -o myapp -g myapp -m 0755 /opt/myapp /opt/myapp/bin
sudo install -d -o myapp -g myapp -m 0750 /var/lib/myapp /var/log/myapp
```

The service user has no shell and no password — it only exists for systemd to drop privileges to. That alone blocks an entire class of attacks where a compromise in the app leads to an interactive shell.
Part 7. Upload the binary
Copy the binary from your workstation. Always upload to a staging path first, then install it into place in a single step — that avoids a partially-written file ever being executed:

```shell
scp -P 2222 ./dist/myapp admin@YOUR_SERVER_IP:/tmp/myapp.new
ssh -p 2222 admin@YOUR_SERVER_IP \
  'sudo install -o myapp -g myapp -m 0755 /tmp/myapp.new /opt/myapp/bin/myapp && rm /tmp/myapp.new'
```

install(1) sets ownership and mode in one call and unlinks the destination before writing, so a running process keeps executing its old inode — much safer than separate cp plus chown plus chmod, and it avoids the "Text file busy" error a plain cp over a running binary produces.
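To see locally what install(1) does, here is a self-contained sketch in a throwaway directory (the -o/-g ownership flags are dropped because they require root):

```shell
# Local demo of install(1): copy the file and set its mode in one step.
tmp=$(mktemp -d)
printf 'fake-binary' > "$tmp/myapp.new"
install -m 0755 "$tmp/myapp.new" "$tmp/myapp"
stat -c '%a' "$tmp/myapp"    # prints 755
cat "$tmp/myapp"             # prints fake-binary
rm -rf "$tmp"
```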
Part 8. Environment file for secrets
Never bake secrets into the binary or put them into the systemd unit file directly. Keep them in a root-owned file with mode 0640 and read permission for the service group. systemd's EnvironmentFile directive will load it at startup:
```shell
sudo install -d -m 0750 -o root -g myapp /etc/myapp
sudo tee /etc/myapp/env >/dev/null <<'EOF'
DATABASE_URL=postgres://user:pass@DB_HOST:5432/myapp
SESSION_KEY=replace-me-with-32-random-bytes
LOG_LEVEL=info
EOF
sudo chown root:myapp /etc/myapp/env
sudo chmod 0640 /etc/myapp/env
```

The service user can read /etc/myapp/env because it is in the myapp group. Root still owns the file, so only an administrator can modify it. This is the same approach DeployCrate uses for application secrets.
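The SESSION_KEY placeholder asks for 32 random bytes. One way to generate them, assuming the openssl CLI is present (it is on a default Debian install):

```shell
# 32 random bytes, hex-encoded: 64 characters, safe to paste into the env file.
SESSION_KEY=$(openssl rand -hex 32)
echo "${#SESSION_KEY}"    # prints 64
```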
Part 9. Write the systemd unit
This is the heart of the setup. Copy the unit below to /etc/systemd/system/myapp.service. Every directive has a purpose — the notes after the block explain why.
```ini
[Unit]
Description=myapp
Documentation=https://example.com/docs
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/env
ExecStart=/opt/myapp/bin/myapp
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5
TimeoutStopSec=30

# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
RestrictNamespaces=true
RestrictRealtime=true
LockPersonality=true
MemoryDenyWriteExecute=true
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
ReadWritePaths=/var/lib/myapp /var/log/myapp

# Resource limits
LimitNOFILE=65536
LimitNPROC=512

# Watchdog
WatchdogSec=30s

[Install]
WantedBy=multi-user.target
```

A quick tour of the hardening block. NoNewPrivileges prevents the process from ever gaining more privileges, even via setuid binaries. ProtectSystem=strict mounts the entire filesystem read-only except for the paths you name in ReadWritePaths. ProtectHome hides /home, /root, and /run/user. PrivateTmp gives the service its own /tmp so it cannot see or tamper with tempfiles from other services. The ProtectKernel* and Restrict* directives block whole categories of kernel-level abuse that a compromised process might try.
CapabilityBoundingSet + AmbientCapabilities is how the service can still bind to low ports (under 1024) while running as an unprivileged user. If your app only listens on 8080, drop both lines. SystemCallFilter=@system-service whitelists the syscall groups a typical network service needs and blocks everything else — a strong mitigation against exploit payloads.
Part 10. Enable, start, verify
```shell
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
journalctl -u myapp -n 100 --no-pager
curl -sSf http://127.0.0.1:8080/healthz
```

systemctl status should show active (running) with a recent start timestamp. The journal should show your application's own startup logs. The curl to localhost confirms the process is listening on the right port. Only after all three are green should you open the port publicly at the cloud firewall.
Part 11. Log rotation and journal retention
systemd sends your service's stdout and stderr to the journal by default. Without a retention policy the journal will grow until it fills /var/log and takes the box down. Write a journald override:
```shell
sudo install -d /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/retention.conf >/dev/null <<'EOF'
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=14day
Compress=yes
EOF
sudo systemctl restart systemd-journald
```

That configuration caps the journal at 500 MB with 14 days of retention, which is a sane default for a small service. Increase the retention window if you need longer incident forensics.
For fleet-wide observability, ship the journal to a central log store such as Loki or Elastic, using a shipper like Vector or Promtail. journalctl also supports structured JSON output (-o json) that plugs into custom pipelines.
Part 12. Zero-downtime updates
A restart on a small binary service takes milliseconds. For most apps that qualifies as "zero downtime" in practice — a client retry is enough. For apps that cannot tolerate a restart window at all, use socket activation: systemd holds the listening socket and hands it to every new version of the process, so inflight connections drain through the old binary while new ones hit the new one.
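A sketch of the socket-activation variant, assuming your application can accept an inherited listening socket (systemd passes it as file descriptor 3 and sets LISTEN_FDS=1; most ecosystems have a small helper library for this):

```ini
# /etc/systemd/system/myapp.socket -- systemd owns the listening socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# Then add to the [Unit] section of myapp.service:
# Requires=myapp.socket
# After=myapp.socket
```

With this in place you enable and start myapp.socket; systemd holds port 8080 open in PID 1, so a systemctl restart myapp never drops the listener.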
The simple path is what most teams pick. An update looks like this:
```shell
scp -P 2222 ./dist/myapp admin@YOUR_SERVER_IP:/tmp/myapp.new
ssh -p 2222 admin@YOUR_SERVER_IP '
  sudo install -o myapp -g myapp -m 0755 /tmp/myapp.new /opt/myapp/bin/myapp &&
  sudo systemctl restart myapp &&
  rm /tmp/myapp.new
'
```

If your deploys start rolling out more than a few times a day, upload to a versioned path such as /opt/myapp/releases/v1.2.3/myapp and symlink /opt/myapp/bin/myapp to it. Rollback then becomes re-pointing the symlink and restarting — no re-upload required.
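The release-and-rollback flow with symlinks can be sketched locally; a mktemp directory stands in for /opt/myapp, and ln -sfn re-points the link in place:

```shell
# Toy versioned layout: deploy v1.2.4, then roll back to v1.2.3.
base=$(mktemp -d)
mkdir -p "$base/releases/v1.2.3" "$base/releases/v1.2.4" "$base/bin"
printf 'old' > "$base/releases/v1.2.3/myapp"
printf 'new' > "$base/releases/v1.2.4/myapp"
ln -sfn "$base/releases/v1.2.4/myapp" "$base/bin/myapp"   # deploy
cat "$base/bin/myapp"                                     # prints new
ln -sfn "$base/releases/v1.2.3/myapp" "$base/bin/myapp"   # rollback
cat "$base/bin/myapp"                                     # prints old
rm -rf "$base"
```

For a strictly atomic swap, create the new symlink under a temporary name and mv -T it over the old one.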
Part 13. Watchdog and healthchecks
WatchdogSec=30s in the unit tells systemd to restart the process if it goes silent for half a minute. For this to do anything useful your application has to actually ping systemd on a timer, sending WATCHDOG=1 over the sd_notify socket at an interval comfortably under 30 seconds. If you skip the watchdog ping, remove the WatchdogSec line — otherwise systemd will kill a healthy process every 30 seconds.
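As a minimal sketch of the contract, here is a toy unit (the name watchdog-demo.service is illustrative) that stays alive only because it keeps pinging. NotifyAccess=all is needed here because systemd-notify runs as a short-lived child rather than the main PID:

```ini
# /etc/systemd/system/watchdog-demo.service -- toy watchdog-pinging service
[Service]
Type=notify
NotifyAccess=all
WatchdogSec=30s
# Ping at well under half the watchdog interval.
ExecStart=/bin/sh -c 'systemd-notify --ready; while true; do systemd-notify WATCHDOG=1; sleep 10; done'
```

In a real application you would call the equivalent of sd_notify from inside the main event loop, so a hung loop stops pinging and gets restarted.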
Pair the watchdog with an external blackbox check — a curl from a monitoring server, or a Prometheus blackbox_exporter probe — and you have two independent signals that the service is alive. One catches hangs; the other catches network-layer breakage.
Troubleshooting
The service fails to start and journalctl shows "Permission denied"

Almost always ProtectSystem=strict blocking a write path. Check where your app tries to write and add it to ReadWritePaths. Use journalctl -u myapp -b plus systemd-analyze security myapp to see which directive is biting.

Cannot bind to port 80 or 443

The unit is missing CAP_NET_BIND_SERVICE in both CapabilityBoundingSet and AmbientCapabilities. Add them, then systemctl daemon-reload && systemctl restart myapp.

The app runs locally but systemd reports status=203/EXEC

The binary is missing, not executable, or built for the wrong architecture. file /opt/myapp/bin/myapp should show ELF 64-bit LSB executable for the server's arch.

systemctl restart takes forever and eventually times out

Your process is ignoring SIGTERM, so systemd waits TimeoutStopSec seconds before sending SIGKILL. Handle SIGTERM in your process: drain inflight requests, flush logs, close database connections, then exit 0. Every well-behaved service does this.

The app works over localhost but not from the internet

Check three layers: the bind address (the app must listen on 0.0.0.0 or the public interface, not 127.0.0.1), ufw, and the Hetzner Cloud Firewall. Traffic has to pass all three.

FAQ
Why not just use Docker?

For a single static binary it buys you little: a container runtime, an image registry, and layered-filesystem overhead without improving how the service runs. See the decision criteria in "When binary + systemd is the right choice" above — pick Docker when you ship multiple languages, need sidecar processes, or already publish images from CI.
Do I need a reverse proxy?

Not for the setup in this guide — the app serves its port directly, and CAP_NET_BIND_SERVICE covers binding to 80 or 443. Add one (nginx or Caddy) when you need TLS termination, virtual hosts, or several apps behind the same port.
How do I run multiple applications on one server?

Repeat Parts 6 through 10 per application: a dedicated service user, its own directories under /opt, /var/lib, and /var/log, its own env file and unit, and a distinct port with its own ufw rule.
What about automatic security updates?
Install unattended-upgrades and enable the security-only source list. The package only applies security patches by default, reboots are gated behind a flag, and the default timer runs nightly.

```shell
sudo apt-get install unattended-upgrades && sudo dpkg-reconfigure -plow unattended-upgrades
```
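The dpkg-reconfigure step writes a small apt config for you; for reference, the equivalent file looks like this (the reboot behaviour lives separately in /etc/apt/apt.conf.d/50unattended-upgrades):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```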