
Deploy a compiled binary on DigitalOcean Debian 13 (Trixie) with systemd

A complete walkthrough for teams running their own infrastructure without a dedicated sysadmin: starting from a freshly created DigitalOcean server running Debian 13 (Trixie), you'll lock it down and run a compiled binary under systemd with hardening, log rotation, and zero-downtime updates. No Docker, no buildpacks — just a single binary, a systemd unit, and a runbook your team can re-run with confidence.

When binary + systemd is the right choice

A statically linked binary supervised by systemd is the leanest way to run a long-lived network service on Linux. There is no container runtime to install, no image registry to manage, and no layered filesystem overhead. The binary is the artifact; systemd is the supervisor; the journal is the log pipeline.

Pick this path when:

  • Your application compiles to a single static binary.
  • You want the minimum possible runtime footprint on small VPS instances.
  • You value fast, predictable startup times — systemd can launch your service in milliseconds.
  • You do not need the fleet-wide image distribution story that Docker registries provide.

Pick Docker + systemd instead when you ship multiple languages, need sidecar processes, or already publish to a container registry as part of CI. We have a separate guide for that path.

Prerequisites

You need three things to follow this guide:

  • A DigitalOcean account and a Debian 13 (Trixie) server you can reach over SSH.
  • An SSH keypair on your workstation. If you do not have one, generate it with ssh-keygen -t ed25519.
  • A Linux binary built for your server's architecture — most commonly linux/amd64. Build it on your workstation or in CI, not on the server itself.

DigitalOcean droplets include the do-agent and DO monitoring hooks by default. Keep them if you plan to use DigitalOcean monitoring; otherwise they can be removed after provisioning.

Part 1. First SSH login

When you create a DigitalOcean server with an SSH key attached, your public key is placed into the default user's authorized_keys file. On DigitalOcean that user is root. Log in and make sure the system is up to date before doing anything else:

ssh root@YOUR_SERVER_IP
apt-get update
apt-get -y dist-upgrade
apt-get -y install ufw sudo curl ca-certificates
reboot

Reconnect after the reboot. Keeping the base system patched before you open any ports is the single biggest security win you can take for free.

Part 2. Create a sudo admin user

Logging in as root is fine for provisioning, but day-to-day operations should happen as an unprivileged user with sudo. Create one, copy your SSH key, and make sure it can reach sudo without a password prompt (sudo over SSH without a TTY is painful otherwise):

adduser --disabled-password --gecos "" admin
usermod -aG sudo admin
install -d -m 0700 -o admin -g admin /home/admin/.ssh
cp /root/.ssh/authorized_keys /home/admin/.ssh/authorized_keys
chown admin:admin /home/admin/.ssh/authorized_keys
chmod 0600 /home/admin/.ssh/authorized_keys
echo "admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-admin
chmod 0440 /etc/sudoers.d/90-admin

Open a second SSH session as the new admin user in a different terminal before continuing. Never lock yourself out of a server by closing the only working session. Once the new session works, proceed with SSH hardening.

Part 3. Harden SSH

The defaults in Debian's OpenSSH are reasonable, but we can do much better. Disable password login entirely, restrict which users can log in, and move to a non-standard port to reduce opportunistic scanning noise in your logs. Write the following to /etc/ssh/sshd_config:

Port 2222
AddressFamily inet
LogLevel VERBOSE
LoginGraceTime 30
StrictModes yes
PubkeyAuthentication yes
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitEmptyPasswords no
X11Forwarding no
PrintMotd no
UsePAM yes
AllowUsers admin
MaxAuthTries 3
MaxSessions 2
ClientAliveInterval 300
ClientAliveCountMax 2
Subsystem sftp /usr/lib/openssh/sftp-server

Validate the config before restarting:

sudo sshd -t
sudo systemctl restart ssh

A few notes on the choices above. Moving off port 22 is not real security, but it cuts log volume from internet-wide scanners by 90% or more. AllowUsers is a strict allowlist — if a future deploy adds another account, SSH will refuse it until the config is updated, which is exactly the behaviour we want. MaxAuthTries 3 plus PasswordAuthentication no means an attacker gets three key-based attempts before being dropped.

Part 4. Host firewall with ufw

ufw is a thin, sensible wrapper around nftables. The goal is a default-deny firewall that allows only SSH (on the new port) and your application port:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'ssh'
sudo ufw allow 8080/tcp comment 'app'
sudo ufw --force enable
sudo ufw status verbose

The order matters. Enable ufw only after you have added the SSH rule, or you will lock yourself out of the only session you have. Verify with the status output before closing your spare terminal.

Part 5. Cloud firewall at the DigitalOcean edge

Attach a DigitalOcean Cloud Firewall to the droplet from the control panel or via doctl. Configure it alongside ufw on the host.

Host-level ufw and the DigitalOcean Cloud Firewall are complementary, not redundant. The cloud firewall blocks traffic before it reaches your VM's network stack, which is both cheaper (no CPU cycles spent) and safer (it protects you if ufw is ever misconfigured). Configure both with the same allowlist: SSH on 2222 from your office or VPN, and your app port from 0.0.0.0/0 once you are ready to go public.
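If you manage infrastructure from the command line, the cloud firewall can be created with doctl as well. A sketch, assuming doctl is installed and authenticated — the firewall name, the droplet ID, and the office CIDR 203.0.113.0/24 are placeholders to substitute with your own values:

```shell
# Create a cloud firewall mirroring the ufw rules and attach it to the droplet.
# Rule syntax: comma-separated key:value pairs, multiple rules separated by spaces.
doctl compute firewall create \
  --name myapp-fw \
  --droplet-ids 123456789 \
  --inbound-rules "protocol:tcp,ports:2222,address:203.0.113.0/24 protocol:tcp,ports:8080,address:0.0.0.0/0" \
  --outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0 protocol:udp,ports:all,address:0.0.0.0/0"
```

Keeping this command in your runbook (or CI) means the edge firewall is reproducible instead of hand-clicked.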

Part 6. Create the service user and directory layout

Each application gets a dedicated unprivileged user, its own home under /opt, a writable state directory under /var/lib, and a log directory under /var/log. Keeping these separated means you can lock the binary directory read-only in systemd while still letting the process write to state and log paths.

sudo useradd --system --home /opt/myapp --shell /usr/sbin/nologin myapp
sudo install -d -o myapp -g myapp -m 0755 /opt/myapp /opt/myapp/bin
sudo install -d -o myapp -g myapp -m 0750 /var/lib/myapp /var/log/myapp

The service user has no shell and no password — it only exists for systemd to drop privileges to. That alone blocks an entire class of attacks where a compromise in the app leads to an interactive shell.

Part 7. Upload the binary

Copy the binary from your workstation. Always upload to a staging path first, then atomically install it into place — that avoids a partially-written file ever being executed:

scp -P 2222 ./dist/myapp admin@YOUR_SERVER_IP:/tmp/myapp.new
ssh -p 2222 admin@YOUR_SERVER_IP \
  'sudo install -o myapp -g myapp -m 0755 /tmp/myapp.new /opt/myapp/bin/myapp && rm /tmp/myapp.new'

install(1) takes care of ownership, mode, and replacing the destination in one call — and because it writes a fresh file rather than overwriting the running binary in place, you never hit "Text file busy" or execute a half-written file. Much safer than separate cp plus chown plus chmod.
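To see what install(1) buys you, here is a self-contained demonstration in a scratch directory — safe to run on any Linux machine, no sudo required (add -o/-g for ownership when running as root):

```shell
# Create a scratch dir and a fake "uploaded" binary.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmp/myapp.new"

# One call: copy into place and set the mode.
install -m 0755 "$tmp/myapp.new" "$tmp/myapp"

stat -c '%a' "$tmp/myapp"   # prints 755
rm -r "$tmp"
```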

Part 8. Environment file for secrets

Never bake secrets into the binary or put them into the systemd unit file directly. Keep them in a root-owned file with mode 0640 and read permission for the service group. systemd's EnvironmentFile directive will load it at startup:

sudo install -d -m 0750 -o root -g myapp /etc/myapp
sudo tee /etc/myapp/env >/dev/null <<'EOF'
DATABASE_URL=postgres://user:password@localhost/myapp
SESSION_KEY=replace-me-with-32-random-bytes
LOG_LEVEL=info
EOF
sudo chown root:myapp /etc/myapp/env
sudo chmod 0640 /etc/myapp/env

The service user can read /etc/myapp/env because it is in the myapp group. Root still owns the file, so only an administrator can modify it. This is the same approach DeployCrate uses for application secrets.
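One way to replace the SESSION_KEY placeholder with real entropy, assuming openssl is installed (it is on a stock Debian droplet):

```shell
# 32 random bytes, hex-encoded to 64 characters.
openssl rand -hex 32
```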

Part 9. Write the systemd unit

This is the heart of the setup. Copy the unit below to /etc/systemd/system/myapp.service. Every directive has a purpose — the notes after the block explain why.

[Unit]
Description=myapp
Documentation=https://example.com/docs
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/env
ExecStart=/opt/myapp/bin/myapp
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5
TimeoutStopSec=30

# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
RestrictNamespaces=true
RestrictRealtime=true
LockPersonality=true
MemoryDenyWriteExecute=true
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
ReadWritePaths=/var/lib/myapp /var/log/myapp

# Resource limits
LimitNOFILE=65536
LimitNPROC=512

# Watchdog
WatchdogSec=30s

[Install]
WantedBy=multi-user.target

A quick tour of the hardening block. NoNewPrivileges prevents the process from ever gaining more privileges, even via setuid binaries. ProtectSystem=strict mounts the entire filesystem read-only except for the paths you name in ReadWritePaths. ProtectHome hides /home, /root, and /run/user. PrivateTmp gives the service its own /tmp so it cannot see or tamper with tempfiles from other services. The Protect*Kernel and Restrict* directives block whole categories of kernel-level abuse that a compromised process might try.

CapabilityBoundingSet + AmbientCapabilities is how the service can still bind to low ports (under 1024) while running as an unprivileged user. If your app only listens on 8080, drop both lines. SystemCallFilter=@system-service whitelists the syscall groups a typical network service needs and blocks everything else — a strong mitigation against exploit payloads.

Part 10. Enable, start, verify

sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
journalctl -u myapp -n 100 --no-pager
curl -sSf http://127.0.0.1:8080/healthz

systemctl status should show active (running) with a recent start timestamp. The journal should show your application's own startup logs. The curl to localhost confirms the process is listening on the right port. Only after all three are green should you open the port publicly at the cloud firewall.

Part 11. Log rotation and journal retention

systemd sends your service's stdout and stderr to the journal by default. Without a retention policy the journal will grow until it fills /var/log and takes the box down. Write a journald override:

sudo install -d /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/retention.conf >/dev/null <<'EOF'
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=14day
Compress=yes
EOF
sudo systemctl restart systemd-journald

That configuration keeps 500 MB of logs with 14 days of retention, which is a sane default for a small service. Increase the retention window if you need longer incident forensics.

For fleet-wide observability, ship the journal to a central log store like Loki, Vector, or Elastic. The journalctl command supports structured JSON output that plugs into any of them.
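For example, to eyeball what a shipper would receive — one JSON object per journal entry, structured fields included:

```shell
# Last 10 entries for the service as line-delimited JSON.
journalctl -u myapp -o json -n 10 --no-pager
```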

Part 12. Zero-downtime updates

A restart on a small binary service takes milliseconds. For most apps that qualifies as "zero downtime" in practice — a client retry is enough. For apps that cannot tolerate a restart window at all, use socket activation: systemd holds the listening socket and hands it to every new version of the process, so inflight connections drain through the old binary while new ones hit the new one.
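A minimal sketch of the socket-activation variant, for apps that support receiving a listening socket from systemd (e.g. via sd_listen_fds(3) or an equivalent in your language's systemd bindings — an assumption about your app). The socket unit owns the port; the service stops binding it itself:

```ini
# /etc/systemd/system/myapp.socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target
```

Enable with systemctl enable --now myapp.socket; systemd then passes the already-bound socket to the service as file descriptor 3 whenever it starts, so the port never goes dark across restarts.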

The simple path is what most teams pick. An update looks like this:

scp -P 2222 ./dist/myapp admin@YOUR_SERVER_IP:/tmp/myapp.new
ssh -p 2222 admin@YOUR_SERVER_IP '
  sudo install -o myapp -g myapp -m 0755 /tmp/myapp.new /opt/myapp/bin/myapp &&
  sudo systemctl restart myapp &&
  rm /tmp/myapp.new
'

If your deploys start rolling out more than a few times a day, upload to a versioned path such as /opt/myapp/releases/v1.2.3/myapp and symlink /opt/myapp/bin/myapp to it. Rollback then becomes re-pointing the symlink and restarting — no re-upload required.
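The mechanics, demonstrated in a scratch directory so it is safe to run locally — in production the root would be /opt/myapp and each re-point is followed by systemctl restart myapp:

```shell
root=$(mktemp -d)
mkdir -p "$root/releases/v1.2.3" "$root/releases/v1.2.4" "$root/bin"
printf 'v1.2.3' > "$root/releases/v1.2.3/myapp"
printf 'v1.2.4' > "$root/releases/v1.2.4/myapp"

# Deploy v1.2.4: re-point the symlink (-n replaces the link itself, not its target).
ln -sfn "$root/releases/v1.2.4/myapp" "$root/bin/myapp"

# Rollback to v1.2.3: same operation, older target. No re-upload.
ln -sfn "$root/releases/v1.2.3/myapp" "$root/bin/myapp"

cat "$root/bin/myapp"   # prints: v1.2.3
rm -r "$root"
```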

Part 13. Watchdog and healthchecks

WatchdogSec=30s in the unit tells systemd to restart the process if it goes silent for half a minute. For this to do anything useful your application has to actually ping systemd on a timer. If you skip the watchdog ping, remove the WatchdogSec line — otherwise systemd will kill a healthy process every 30 seconds.
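The proper way to ping is sd_notify support inside the application. If your app has none, one crude stand-in is a wrapper script that pings on its behalf — a sketch only, assuming the unit is changed to Type=notify with NotifyAccess=all so that messages from the helper process are accepted. The wrapper path and unconditional ping are illustrative; a real deployment should gate the ping on an actual health check (e.g. a curl to the app's health endpoint):

```shell
#!/bin/sh
# /opt/myapp/bin/with-watchdog.sh -- hypothetical wrapper around the binary.
systemd-notify --ready
# Ping every 10s; replace "true" with a real liveness probe of the app.
( while sleep 10; do systemd-notify WATCHDOG=1; done ) &
exec /opt/myapp/bin/myapp
```

Note the trade-off: a wrapper that pings unconditionally will keep a hung application "alive" in systemd's eyes, which is why in-process sd_notify tied to real work is strictly better.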

Pair the watchdog with an external blackbox check — a curl from a monitoring server, or a Prometheus blackbox_exporter probe — and you have two independent signals that the service is alive. One catches hangs; the other catches network-layer breakage.

Troubleshooting

The service fails to start and journalctl shows "Permission denied"
Nine times out of ten this is ProtectSystem=strict blocking a write path. Check where your app tries to write and add it to ReadWritePaths. Use journalctl -u myapp -b plus systemd-analyze security myapp to see which directive is biting.
Cannot bind to port 80 or 443
Your unit is missing CAP_NET_BIND_SERVICE in both CapabilityBoundingSet and AmbientCapabilities. Add them, then systemctl daemon-reload && systemctl restart myapp.
The app runs locally but systemd reports status=203/EXEC
Exit 203 means systemd could not exec the binary. Check that the file is executable, is not a shell script missing a shebang, and was built for the right architecture. file /opt/myapp/bin/myapp should show ELF 64-bit LSB executable for the server's arch.
systemctl restart takes forever and eventually times out
Your app is not responding to SIGTERM. systemd waits TimeoutStopSec seconds before sending SIGKILL. Handle SIGTERM in your process: drain inflight requests, flush logs, close database connections, then exit 0. Every well-behaved service does this.
The app works over localhost but not from the internet
Three places to check in order: the app's listen address (must be 0.0.0.0 or the public interface, not 127.0.0.1), ufw, and the DigitalOcean Cloud Firewall. Traffic has to pass all three.

FAQ

Why not just use Docker?
Docker is a great choice if you already have an image pipeline or need multi-language sidecars. For a single compiled binary on a small VPS, systemd gives you faster startup, lower memory overhead, simpler logging, and one less daemon to patch. Both paths are valid — pick the one that matches the rest of your infrastructure.
Do I need a reverse proxy?
If you are terminating TLS yourself or serving multiple apps from the same server, yes — Caddy is the simplest option and ships with automatic Let's Encrypt. If you front the server with a managed load balancer (which most production setups do), you can skip it and let the binary serve plain HTTP on the internal port.
How do I run multiple applications on one server?
Repeat Parts 6 through 10 with a different app name. Each application gets its own user, directory, environment file, unit, and port. ufw picks up the additional allow rules the same way. systemd happily supervises dozens of small services on a single 2-vCPU box.
What about automatic security updates?
Install unattended-upgrades and enable the security-only source list. The package only applies security patches by default, reboots are gated behind a flag, and the default timer runs nightly.

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
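The two knobs teams most often end up setting, as an apt drop-in (the filename is an example; any file under /etc/apt/apt.conf.d/ sorted after the package defaults works):

```
// /etc/apt/apt.conf.d/52unattended-upgrades-local
// Refresh package lists and apply upgrades daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
// Opt in to automatic reboots (e.g. for kernel updates) at a quiet hour.
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```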
DeployCrate

Skip the sysadmin hire

The walkthrough above is what deploying correctly actually looks like — SSH hardening, least-privilege service users, systemd with a full hardening block, log retention, a watchdog, and a dual-firewall setup. Roughly two hours of careful work the first time and twenty minutes every subsequent time, assuming nothing goes wrong and nobody on your team forgets a step. That is real engineering time your team is not spending on product.

DeployCrate is how small teams get this done without hiring someone to own it. You connect your DigitalOcean credentials, click Provision, and the platform runs a vetted set of scripts that produce exactly the configuration you just read about: an admin user per operator, hardened SSH, ufw with sane defaults, a service user per application, a systemd unit with the full hardening block, journald retention, and zero-downtime updates from your Git pushes. Configuration is applied consistently across every server your team owns — no drift, no "wait, did we do that one yet?"

Every script is open for inspection. The SSH hardening above mirrors our ssh_hardening.sh, the firewall setup mirrors host_safety.sh, and the service supervision mirrors install_operator_services.sh. You are not handing over control — you are skipping the step where a small team pretends it has a sysadmin it has not hired yet.
