Install Docker on DigitalOcean Debian 13 (Trixie)
A small-team runbook for installing Docker Engine on a fresh DigitalOcean server running Debian 13 (Trixie). Upstream repo, pinned packages, a daemon.json that will not surprise you under load, and the firewall and operator-account tradeoffs you need to decide up front. Not a quickstart — the version you will be glad you followed six months from now.
Why upstream repo, not apt
The Docker package in Debian's default archive lags several versions behind upstream and is missing the plugins most teams actually want — buildx, compose v2, containerd pinned to a known Docker version. The upstream docker.com repository is maintained by Docker themselves, signed with a key you pin in /etc/apt/keyrings, and rolls forward in lockstep with containerd. For anything beyond a throwaway test box, use upstream.
Skip the convenience get.docker.com one-liner. It works, but it hides version pinning, offers no rollback path, and is the kind of script that mysteriously changes behaviour between audits. A team should be able to read its own installation steps and know exactly what got installed.
Prerequisites
- A DigitalOcean Debian 13 (Trixie) server you can reach over SSH as a sudo-capable admin user. If you have not set that up, follow the harden-SSH guide linked below first.
- ufw installed and enabled with SSH already allowed. Docker rewrites parts of the iptables rule set; we cover the interaction below.
- An outbound network path to download.docker.com on port 443 — worth verifying up front if your VPC restricts egress.
DigitalOcean droplets include the do-agent and DO monitoring hooks by default. Keep them if you plan to use DigitalOcean monitoring; otherwise they can be removed after provisioning.
Part 1. Remove any distro Docker packages
Clean-slate first. If any of the distro packages are installed, the upstream install will either refuse or produce a split install where containerd and docker.io come from different sources. Remove them before adding the repo.
```shell
sudo apt-get remove -y docker.io docker-doc docker-compose podman-docker containerd runc || true
sudo apt-get autoremove -y
```

Part 2. Add the upstream apt repository
The Docker repo is signed by a GPG key that you store under /etc/apt/keyrings rather than through the deprecated apt-key tool. Docker currently ships repositories for the previous stable Debian release, so on a Debian 13 (Trixie) host you will point the sources line at bookworm for the time being — that is the pattern our own provisioning script follows.
```shell
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update
```

If apt-get update errors on a missing release file, you have a codename mismatch. Check /etc/os-release for the host's VERSION_CODENAME and adjust the sources line to a codename Docker actually publishes. Docker's repo support for a new Debian codename typically trails the distro release by a few months.
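A quick way to check whether Docker has started publishing a repo for this host's codename. The Release-file probe assumes the standard apt repository layout under dists/, which is what download.docker.com uses:

```shell
# Confirm the host codename and whether Docker publishes a matching repo.
. /etc/os-release && echo "host codename: $VERSION_CODENAME"
# HTTP 200 means Docker has a repo for that codename; 404 means fall back to bookworm.
curl -s -o /dev/null -w '%{http_code}\n' \
  "https://download.docker.com/linux/debian/dists/$VERSION_CODENAME/Release"
```

Once this prints 200 for trixie, update the sources line and drop the bookworm workaround.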
Part 3. Install Docker Engine and plugins
```shell
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
docker --version
docker compose version
```

Compose v2 is a Docker CLI plugin now, not a separate docker-compose binary. Scripts that still call docker-compose with a hyphen will not work; update them to docker compose with a space.
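The intro promised pinned packages. One way to do that is to install an explicit version and hold it so a routine apt-get upgrade does not move Docker underneath you. The VERSION_STRING below is a placeholder; substitute a real value from the apt-cache madison output on your host:

```shell
# See which versions the upstream repo offers.
apt-cache madison docker-ce

# Placeholder version string; pick one from the madison output.
# Engine and CLI should be pinned to the same version.
VERSION_STRING="5:27.3.1-1~debian.12~bookworm"
sudo apt-get install -y docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING

# Hold the packages; a deliberate upgrade later is apt-mark unhold + install.
sudo apt-mark hold docker-ce docker-ce-cli containerd.io
```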
Part 4. Write a production daemon.json
The default Docker log driver writes unbounded JSON files under /var/lib/docker/containers. One chatty app can fill a small VPS in days. The config below caps log size, enables live-restore so containers survive a daemon restart, and pins an internal address pool that will not collide with common VPC ranges.
```shell
sudo install -m 0755 -d /etc/docker
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5",
    "compress": "true"
  },
  "live-restore": true,
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl enable --now docker
sudo systemctl restart docker
docker info | grep -E 'Live Restore|Logging Driver'
```

local is the current recommended log driver — it stores logs in a compressed binary format that is significantly faster than json-file and still works with docker logs. live-restore keeps running containers up when the Docker daemon restarts, which matters because a Docker upgrade otherwise takes down every container on the host at once.
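A quick smoke test that new containers actually inherit the daemon defaults. The busybox image and the logtest name are just examples:

```shell
# Start a chatty throwaway container.
docker run -d --name logtest busybox sh -c 'while true; do echo tick; sleep 1; done'
# New containers should pick up the daemon's default log driver: prints "local".
docker inspect --format '{{.HostConfig.LogConfig.Type}}' logtest
# docker logs still works with the local driver.
docker logs --tail 3 logtest
docker rm -f logtest
```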
Part 5. ufw and Docker — the one gotcha that always bites
Docker programs its own iptables chains (DOCKER and DOCKER-USER) for published ports, and by default traffic to those ports never passes through ufw's rules. A container published with -p 80:80 will be reachable from the internet even if ufw's public policy is deny, because the traffic is accepted in the FORWARD path before ufw's INPUT rules ever see it. This is the single most common small-team incident in self-hosted Docker setups.
Two fixes depending on intent:
- Bind to localhost and front with a reverse proxy. Publish with -p 127.0.0.1:8080:8080 (or the host-only IP); Caddy, nginx, or the deploy guide's pattern terminates TLS on ufw-allowed ports. Containers never get public exposure. Simplest answer, and the one we recommend.
- Route Docker's DOCKER-USER chain through ufw. Append DROP rules to /etc/ufw/after.rules under the ufw-user-forward chain so that forwarded traffic to containers respects ufw's default-deny. Documented in the ufw-docker community script; use it only if you really need to publish container ports directly.
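A minimal sketch of the first option, written in the same tee-heredoc style as the rest of this guide. The directory, service name, and image are placeholders:

```shell
mkdir -p ~/app
tee ~/app/docker-compose.yml >/dev/null <<'EOF'
services:
  app:
    image: nginx:alpine            # placeholder image
    ports:
      - "127.0.0.1:8080:80"        # loopback only; not reachable from the internet
EOF
docker compose -f ~/app/docker-compose.yml up -d
sleep 2
# Reachable from the host itself, but a remote scan of port 8080 finds nothing.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080
docker compose -f ~/app/docker-compose.yml down
```

The reverse proxy then talks to 127.0.0.1:8080 and is the only thing listening on a public port.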
Part 6. Operator accounts and the docker group
Adding a user to the docker group lets them run docker commands without sudo. It is also effectively root equivalence — the Docker socket accepts commands like docker run -v /:/host, which hand over the host filesystem. Weigh the tradeoff explicitly.
```shell
sudo usermod -aG docker admin
```

Group membership takes effect at the next login; re-SSH (or run newgrp docker) before testing. For a small team this is usually the right call — every operator that has sudo already has a path to root, and the ergonomics of docker without sudo are meaningfully better for day-to-day operations. Do not grant group membership to service accounts that only need to run a specific container; use a dedicated systemd unit for that instead, as shown in the Docker deploy guide.
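To make the root-equivalence claim concrete, here is the classic demonstration: any docker-group member can write anywhere on the host filesystem through a bind mount. The alpine image and /tmp path are just an example:

```shell
# Any docker-group member can mount the host root into a container...
docker run --rm -v /:/host alpine sh -c 'echo owned > /host/tmp/docker-group-proof'
# ...and the file appears on the host, written as root. That is root access.
cat /tmp/docker-group-proof
sudo rm /tmp/docker-group-proof
```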
Part 7. Schedule a prune timer
Images and volumes accumulate. A weekly docker system prune keeps the disk bounded without manual intervention. Run it from a systemd timer so the schedule survives reboots.
```shell
sudo tee /etc/systemd/system/docker-prune.service >/dev/null <<'EOF'
[Unit]
Description=Prune unused Docker images, containers, and networks
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -af --filter "until=168h"
EOF

sudo tee /etc/systemd/system/docker-prune.timer >/dev/null <<'EOF'
[Unit]
Description=Weekly docker prune

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now docker-prune.timer
systemctl list-timers docker-prune.timer
```

The until=168h filter keeps the last seven days of images in case you need to roll back; everything older goes. Raise or lower the window to match how often your team redeploys. Note that docker system prune leaves named volumes alone unless you add --volumes; deleting data volumes should stay a deliberate, manual step.
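Rather than waiting a week to discover the unit is broken, trigger it once by hand and read the journal (assumes the units above are installed):

```shell
# Run the prune immediately, outside the timer schedule.
sudo systemctl start docker-prune.service
# The last lines show what was deleted and how much space was reclaimed.
journalctl -u docker-prune.service --no-pager -n 20
```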
Troubleshooting
apt-get update fails with NO_PUBKEY or a missing release file
The sources line in docker.list does not match a repo Docker publishes, or the key was not installed correctly. Re-run the key fetch, confirm ls -l /etc/apt/keyrings/docker.gpg shows a world-readable file, and check the codename in the sources line.

docker info shows Live Restore off after editing daemon.json
Almost always a JSON syntax error. Run sudo dockerd --validate --config-file /etc/docker/daemon.json to print the exact parse error, fix it, then restart the daemon.

Container is reachable from the internet even though ufw says deny
Working as Docker intends, unfortunately. Bind published ports to 127.0.0.1 and front with a reverse proxy, or patch /etc/ufw/after.rules to filter the DOCKER-USER chain. See Part 5.

docker-prune timer never runs
systemctl list-timers shows the next scheduled run. If the timer is enabled but not listed, check that the service unit path is correct and that daemon-reload ran after the unit files were written.

FAQ
Should we use rootless Docker?
Probably not for this setup. Rootless removes the root-equivalent daemon, but it complicates published ports, networking, and systemd integration, and the threat model here (a small team of sudo-capable operators, Part 6) gains little from it. Revisit the question if untrusted users ever share the host.
Why the default-address-pools line?
Docker carves new container networks out of 172.17.0.0/16 and similar ranges by default, which overlap common VPC, VPN, and corp network ranges. Pinning an internal pool avoids the surprise where a new container network routes over your WireGuard tunnel instead of through the host.
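To confirm the pool is in effect, create a throwaway network and inspect the subnet it was assigned (demo-net is an arbitrary name; note that the default docker0 bridge may keep 172.17.0.0/16 unless you also set "bip" in daemon.json, depending on Docker version):

```shell
docker network create demo-net
# With the pool above, this prints a subnet inside 10.201.0.0/16, e.g. 10.201.0.0/24.
docker network inspect demo-net --format '{{(index .IPAM.Config 0).Subnet}}'
docker network rm demo-net
```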