Docker on Raspberry Pi 5: Your Foundation for Everything Else
Every other project in this series runs on Docker. The reverse proxy, the monitoring, the DNS, the automation platform, the AI models — all of it. Docker is not one of the ten projects. It is project zero. The foundation that everything else sits on.
I have Docker running on seven machines right now. Two Raspberry Pis, four mini PCs, and a cloud VPS. One of those Pis runs our newsletter platform and website analytics in production. Not a test environment. Not “I’ll get around to configuring it properly one day.” Production. Listmonk sending real emails to real subscribers, Tianji tracking real page views, both in containers that have been quietly doing their thing for months.
Getting Docker running on a Pi 5 takes about five minutes. Getting it running properly — with a directory structure that does not become a mess within a fortnight, with Compose as the default rather than an afterthought, with an understanding of what ARM means for image compatibility — takes a bit longer. That is what this guide covers.
Career Context: Container orchestration is not a “nice to have” on a CV anymore. It is the baseline expectation for DevOps Engineer, Platform Engineer, and SRE roles paying £55-80k+. Every cloud platform — AWS ECS/EKS, Azure ACI/AKS, Google Cloud Run/GKE — is built on containers. Learning Docker on a Pi teaches you the fundamentals on the same ARM architecture that powers AWS Graviton and Azure Ampere instances. That is not a coincidence you should ignore.
Why Docker? Why Not Just Install Things Directly?
You could install Nginx directly on your Pi. You could install Node.js, Python, PostgreSQL, Redis, all of it, directly onto the operating system. People did this for years. Some still do.
The problem is that it works brilliantly until it does not.
Service A needs Python 3.10. Service B needs Python 3.12. They both want port 80. You upgrade a system library for one thing and something else breaks. You decide to start fresh and spend a weekend reinstalling everything from memory because you cannot remember half of what you configured six months ago.
Docker solves this by isolating each service in its own container. Each container has its own filesystem, its own dependencies, its own network configuration. They cannot interfere with each other. They cannot interfere with the host operating system. And because everything is defined in a file, you can rebuild the entire stack from scratch in minutes — not hours, not a weekend of “what was that config option again?”
In enterprise environments, containers are the deployment standard. Not the future. The present. If you are applying for any infrastructure role in 2026 and you cannot talk fluently about containers, you are at a serious disadvantage.
Installing Docker on Pi 5
This assumes you have Ubuntu Server or Raspberry Pi OS (64-bit) running on your Raspberry Pi 5. If you need help with that step, see Installing Ubuntu Server on Raspberry Pi with USB Boot.
64-bit OS is mandatory. If you are running a 32-bit operating system, stop here and reinstall with 64-bit. Many Docker images — including some critical ones — no longer publish 32-bit ARM builds. The Pi 5 is a 64-bit processor. Run a 64-bit OS. There is no good reason not to.
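Not sure which you are running? A quick check (the dpkg line is guarded because it only exists on Debian-family systems):

```shell
# Kernel architecture: "aarch64" is 64-bit ARM (what a Pi 5 should show);
# "armv7l" means a 32-bit OS and you should stop and reinstall.
arch=$(uname -m)
echo "Kernel reports: $arch"

# On Raspberry Pi OS / Ubuntu, confirm the userland is 64-bit too
# ("arm64" is what you want to see).
if command -v dpkg >/dev/null 2>&1; then
  echo "Userland reports: $(dpkg --print-architecture)"
fi
```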
The Official Install Script
Docker provides a convenience script that handles everything. It detects your distribution, adds the repository, and installs Docker Engine plus Docker Compose:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
This takes about two minutes on a Pi 5 with a decent internet connection. When it finishes, Docker is installed and the daemon is running.
Verify it worked:
sudo docker run hello-world
You should see a message confirming Docker is working. If you see an error about the daemon not running, check the service status:
sudo systemctl status docker
The Manual Method (If You Prefer)
If piping a script from the internet into sh makes you uncomfortable — and honestly, it probably should — you can install manually from Docker’s apt repository:
# Remove any old versions
sudo apt remove docker docker-engine docker.io containerd runc
# Install prerequisites
sudo apt update
sudo apt install ca-certificates curl gnupg
# Add Docker's GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository (Ubuntu - adjust for Debian/Raspberry Pi OS)
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Either method gets you to the same place. The convenience script is fine for homelabs. The manual method teaches you what is actually happening, which is more valuable if you are learning.
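Whichever route you took, a quick sanity check confirms both the engine and the Compose v2 plugin are present. The guard is mine, so the snippet reports rather than errors on a machine without Docker:

```shell
# Confirm Docker Engine and the Compose v2 plugin are both installed.
if command -v docker >/dev/null 2>&1; then
  engine=$(docker --version)
  compose=$(docker compose version 2>/dev/null || echo "compose plugin missing")
else
  engine="docker not found on PATH"
  compose="docker not found on PATH"
fi
echo "$engine"
echo "$compose"
```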
The Usermod Logout Trap
Right after installing Docker, every guide on the internet tells you to run this:
sudo usermod -aG docker $USER
This adds your user to the docker group so you can run Docker commands without sudo. Good advice. The problem is what comes next.
The trap: Group changes do not take effect until you log out and log back in. Not “open a new terminal.” Not “run source ~/.bashrc.” You must completely end your session and start a new one. If you are SSH’d in, disconnect and reconnect. If you are on the console, log out and log in.
Every single week on Reddit and Stack Overflow, someone posts “I ran usermod but I still get permission denied.” The answer is always the same: log out, log back in.
You can verify the change took effect:
# Before logging out - your current groups
groups
# You will NOT see 'docker' in the list yet
# After logging back in
groups
# Now 'docker' should appear
# Test it works without sudo
docker ps
There is a workaround using newgrp docker which activates the group in the current shell, but it is fiddly and only applies to that one session. Just log out and back in. It takes ten seconds and it always works.
Security note: Adding a user to the docker group effectively gives them root access to the host, because Docker containers can mount any filesystem. On a personal Pi, this is fine. In a shared environment, be aware of the implications. In enterprise settings, rootless Docker or Podman address this.
Directory Structure That Scales
This is the bit most guides skip entirely, and it is the bit that matters most six months from now.
When you start with Docker, you tend to run commands from wherever you happen to be. A Compose file in your home directory. Another one in /tmp because you were testing something. Volumes scattered across the filesystem. Config files you cannot find.
Within a month, it is a mess. Within three months, you cannot remember where anything is or what depends on what.
Here is the structure I use across all my Docker hosts. It has survived two years and over thirty services without becoming unmanageable:
/home/your-user/docker/
├── portainer/
│   ├── docker-compose.yml
│   ├── .env
│   └── data/
├── pihole/
│   ├── docker-compose.yml
│   ├── .env
│   └── data/
│       ├── etc-pihole/
│       └── etc-dnsmasq/
├── uptime-kuma/
│   ├── docker-compose.yml
│   ├── .env
│   └── data/
├── nginx-proxy-manager/
│   ├── docker-compose.yml
│   ├── .env
│   └── data/
│       ├── mysql/
│       └── letsencrypt/
└── shared/
    └── networks.yml
The rules are simple:
- One directory per service. Always. Even if the service is just one container with no volumes.
- Every directory has a docker-compose.yml. Always. Even if you could get away with docker run.
- Environment variables live in .env. Never hardcode secrets in Compose files. The .env file in the same directory as the Compose file is loaded automatically.
- Persistent data goes in data/ subdirectories. Bind mounts, not Docker volumes (more on this below). When you need to back up a service, you back up its directory. When you need to migrate, you copy the directory.
Why bind mounts over Docker volumes? Docker volumes store data in /var/lib/docker/volumes/, which is owned by root and harder to browse, back up, and migrate. Bind mounts put the data exactly where you expect it, in the service directory. For homelabs, this is overwhelmingly the better choice. Enterprise environments use Docker volumes and networked storage drivers for different reasons, but on a Pi, keep it simple.
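That backup story can be sketched as a small helper. backup_service is my own function, not a Docker command: it stops the stack so files are not mid-write, archives the whole service directory (Compose file, .env, and data/ together), and starts it again. The docker calls are guarded so the tar step still works on a machine without Docker.

```shell
#!/bin/sh
# backup_service DIR: archive one service directory into ~/backups.
# A sketch, not a hardened backup tool.
backup_service() {
  dir=$1
  out="$HOME/backups/$(basename "$dir")-$(date +%Y%m%d).tar.gz"
  mkdir -p "$HOME/backups"

  # Stop the stack first so the archived data is consistent.
  if command -v docker >/dev/null 2>&1; then
    (cd "$dir" && docker compose down)
  fi

  tar -czf "$out" -C "$(dirname "$dir")" "$(basename "$dir")"

  # Bring it straight back up.
  if command -v docker >/dev/null 2>&1; then
    (cd "$dir" && docker compose up -d)
  fi
  echo "Backed up to $out"
}
```

Usage is `backup_service ~/docker/pihole`, and restoring is just untarring the archive back into ~/docker/.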
Create the base structure now, even before you deploy anything:
mkdir -p ~/docker
That is it. Each project in this series will create its own subdirectory as you go. The point is to have a convention from day one, not to build a structure you do not need yet.
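If you would like the convention enforced for you, a small helper can stamp out the layout each time. new_service is a name I made up for this sketch, and the stub image value is deliberately a placeholder you must edit:

```shell
# new_service NAME: scaffold ~/docker/NAME following the convention above:
# one directory per service, a stub docker-compose.yml, an empty .env, and data/.
new_service() {
  name=$1
  dir="$HOME/docker/$name"
  mkdir -p "$dir/data"
  touch "$dir/.env"
  # Write a stub Compose file, but never clobber an existing one.
  if [ ! -f "$dir/docker-compose.yml" ]; then
    cat > "$dir/docker-compose.yml" <<EOF
services:
  $name:
    image: CHANGE-ME
    restart: unless-stopped
    volumes:
      - ./data:/data
EOF
  fi
  echo "Scaffolded $dir"
}
```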
Docker Compose: Your Default, Not Your Afterthought
You will find guides everywhere that start with docker run commands like this:
# Don't do this as your default workflow
docker run -d \
  --name portainer \
  --restart=always \
  -p 8000:8000 \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
This works. And then three months later you need to recreate the container and you cannot remember the exact flags you used. Was it port 9000 or 9443? Did you mount the Docker socket? What restart policy did you set?
The same deployment as a Compose file:
# docker-compose.yml
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: always
    ports:
      - "8000:8000"
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data
Same result. But now it is documented, version-controllable, shareable, and reproducible. You can commit it to Git. You can copy it to another machine. You can look at it in six months and know exactly what is running and how.
The commands are simple:
# Start everything defined in docker-compose.yml
docker compose up -d
# Stop and remove containers (data is preserved in bind mounts)
docker compose down
# View logs
docker compose logs -f
# Rebuild and restart after changing the Compose file
docker compose up -d --force-recreate
# Pull latest images and restart
docker compose pull && docker compose up -d
Note on the command: Modern Docker uses docker compose (with a space) as a built-in plugin. The older docker-compose (with a hyphen) was a separate Python tool that is now deprecated. If you installed Docker using the method above, you already have the plugin version. Use docker compose everywhere.
The rule I follow: if I am running a container for more than five minutes, it gets a Compose file. No exceptions. docker run is for quick tests and throwaway containers. Everything else is Compose.
In enterprise environments, nobody deploys with docker run. Everything is declarative — Compose files, Kubernetes manifests, Helm charts, Terraform configs. Building the Compose habit now means you are already working the way production environments work.
ARM Image Compatibility: What Works, What Does Not
The Pi 5 uses an ARM64 (also called aarch64) processor. Images built only for x86/amd64 will not run on it natively. This is the single biggest source of confusion for people moving from Docker on a laptop or desktop to Docker on a Pi.
How to Check Before You Deploy
Before pulling an image, check if it supports ARM64. There are a few ways:
Docker Hub: Look at the “OS/ARCH” tab on the image page. If you see linux/arm64 or linux/arm64/v8, it will work. If you only see linux/amd64, it will not.
From the command line:
# Check available architectures for an image
docker manifest inspect portainer/portainer-ce:latest | grep architecture
If you see "architecture": "arm64" in the output, you are good.
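To know which of those platform strings describes your own machine, you can map the kernel's uname output onto Docker's naming. The case table below is my shorthand for the common values, not something Docker prints:

```shell
# Translate `uname -m` into the platform string Docker Hub uses.
arch=$(uname -m)
case "$arch" in
  aarch64 | arm64) platform="linux/arm64" ;;   # Pi 5, AWS Graviton, Apple Silicon
  x86_64)          platform="linux/amd64" ;;   # typical desktop/laptop/server
  armv7l)          platform="linux/arm/v7" ;;  # 32-bit ARM (older Pis)
  *)               platform="unknown ($arch)" ;;
esac
echo "Host platform: $platform"
```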
The Good News
Most popular self-hosting images now support ARM64. Multi-architecture builds have become standard practice. Everything in this Pi 5 series uses images that work on ARM:
- Portainer — full ARM64 support
- Pi-hole — full ARM64 support (originally built for Pi, naturally)
- Nginx Proxy Manager — full ARM64 support
- Uptime Kuma — full ARM64 support
- Home Assistant — full ARM64 support
- Ollama — full ARM64 support
- n8n — full ARM64 support
- Gitea — full ARM64 support
The Bad News
Some images do not support ARM64, and the failure mode can be confusing:
WARNING: The requested image's platform (linux/amd64) does not match
the detected host platform (linux/arm64/v8) and no specific platform was requested
exec format error
That “exec format error” is what you get when you try to run an x86 binary on ARM. There is no workaround short of emulation (which is painfully slow) or finding an alternative image.
Common images that do NOT have ARM builds (as of early 2026):
- Some older database tools and admin panels
- Niche enterprise software containers
- Images that have not been updated in over a year
When you hit this:
- Check if there is a community fork with ARM support (search Docker Hub for the image name plus “arm64”)
- Check if the software provides an official ARM image under a different tag
- Check LinuxServer.io — they maintain multi-arch builds of hundreds of popular images
- As a last resort, build it yourself from source if a Dockerfile is available
Career relevance: ARM compatibility is not just a Pi problem. AWS advertises up to 40% better price-performance for its ARM-based Graviton instances compared with equivalent x86 instances. Companies are actively migrating workloads to ARM for cost savings. Understanding multi-architecture container builds is a genuinely valuable skill.
Resource Management on 8GB
The Pi 5 has 8GB of RAM. That sounds like a lot until you have six containers running and the system starts swapping. You need to keep an eye on resource usage.
docker stats: Your First Monitoring Tool
# Live resource usage for all running containers
docker stats
# Snapshot (non-streaming) view
docker stats --no-stream
# Specific container
docker stats portainer
The output shows CPU percentage, memory usage, memory limit, network I/O, and block I/O for each container. Get into the habit of checking this regularly. You will learn what “normal” looks like for your services, which means you will notice when something is wrong.
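The default columns are wide; --format lets you trim the snapshot to what matters. .Name, .MemUsage, and .CPUPerc are real docker stats format fields; the guard is mine so the snippet degrades gracefully where Docker is missing or the daemon is down.

```shell
# Snapshot of name, memory, and CPU per container, nothing else.
if command -v docker >/dev/null 2>&1; then
  snapshot=$(docker stats --no-stream \
    --format 'table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}' 2>/dev/null \
    || echo "docker daemon not running")
else
  snapshot="docker not available"
fi
printf '%s\n' "$snapshot"
```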
Setting Memory Limits
By default, a container can use all available memory. On 8GB, one misbehaving container can take down everything. Set limits in your Compose files:
services:
  my-service:
    image: some-image:latest
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
If a container exceeds its memory limit, Docker kills it and restarts it (if you have restart: always set). This is better than the container eating all your RAM and causing the entire Pi to become unresponsive.
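When a container keeps restarting and you suspect the limit, Docker records whether the OOM killer was responsible. .State.OOMKilled is a real field in docker inspect; my-service here is a placeholder name, and the guards keep the snippet harmless on machines without Docker.

```shell
# Did the kernel's OOM killer terminate this container?
# Prints "true" or "false" for a real container name.
if command -v docker >/dev/null 2>&1; then
  oom=$(docker inspect --format '{{.State.OOMKilled}}' my-service 2>/dev/null \
    || echo "no such container")
else
  oom="docker not available"
fi
echo "OOMKilled: $oom"
```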
Pruning: Reclaiming Space
Docker accumulates cruft. Old images, stopped containers, unused networks, build cache. On a Pi with limited storage, this matters:
# See how much space Docker is using
docker system df
# Remove unused containers, networks, and dangling images
docker system prune
# The nuclear option - also removes unused images and build cache
docker system prune -a --volumes
Be careful with prune -a. It removes all images not associated with a running container, and adding --volumes deletes unused Docker volumes as well (harmless if you follow the bind-mount convention above, destructive if you do not). If you have stopped a service temporarily, its image gets deleted and will need to be downloaded again. Use the basic docker system prune for regular maintenance. Save -a for when you genuinely need to reclaim space.
I run docker system prune roughly once a month. On my Pis, it typically reclaims 1-3GB. Not massive, but on a 128GB NVMe SSD, it adds up.
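If you would rather not remember it, a crontab entry (installed with crontab -e) can run the safe prune on a schedule. The timing and log path below are examples, not a recommendation:

```
# 03:00 on the 1st of every month: safe prune, -f skips the confirmation prompt
0 3 1 * * /usr/bin/docker system prune -f >> "$HOME/docker-prune.log" 2>&1
```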
Common First-Day Mistakes
I have made all of these. You do not have to.
Mistake 1: Using :latest Tags and Forgetting What Version You Are Running
The :latest tag is convenient but means “whatever the most recent build was when I pulled it.” Two Pis pulling :latest a week apart might get different versions. When something breaks, you have no idea what changed.
Better approach: Use :latest for initial testing, then pin to a specific version once you are happy:
# Testing
image: portainer/portainer-ce:latest
# Production
image: portainer/portainer-ce:2.21.5
Mistake 2: Not Setting a Restart Policy
Without a restart policy, containers do not come back after a Pi reboot. Your services silently disappear until you notice and manually restart them.
services:
  my-service:
    image: some-image:latest
    restart: unless-stopped # Restart on crash and on boot, unless you explicitly stopped it
unless-stopped is the sensible default for most services. Use always for critical infrastructure like your reverse proxy.
Mistake 3: Exposing Ports You Do Not Need
Not every container needs a published port. If two containers need to talk to each other, they can use Docker’s internal networking. Only publish ports that need to be accessible from outside Docker.
# BAD: exposing the database port to the network
services:
  app:
    ports:
      - "3000:3000"
  database:
    ports:
      - "5432:5432" # Why? Nothing outside Docker needs this

# BETTER: only expose what needs external access
services:
  app:
    ports:
      - "3000:3000"
  database:
    # No ports section - only accessible within the Docker network
Mistake 4: Editing Files Inside Containers
You docker exec into a container, edit a config file, and it works. Then the container restarts and your changes vanish. Container filesystems are ephemeral. Anything you need to persist must be in a volume or bind mount.
Mistake 5: Not Checking Logs When Things Go Wrong
Before searching the internet for error messages, check the logs:
# Logs for a specific service
docker compose logs my-service
# Follow logs in real time
docker compose logs -f my-service
# Last 50 lines
docker compose logs --tail 50 my-service
Nine times out of ten, the container is telling you exactly what is wrong. A missing environment variable, a permission denied error, a configuration file in the wrong format. Read the logs before doing anything else.
Your First Compose Deployment
Let us put it all together with a simple example. We will deploy Uptime Kuma — a lightweight monitoring tool that is genuinely useful from day one.
# Create the directory structure
mkdir -p ~/docker/uptime-kuma/data
# Create the Compose file
cd ~/docker/uptime-kuma
Create docker-compose.yml:
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
Deploy it:
cd ~/docker/uptime-kuma
docker compose up -d
Open http://your-pi-ip:3001 in a browser. You have a working monitoring dashboard. The data persists in ~/docker/uptime-kuma/data/. The Compose file documents exactly how it is deployed. You can rebuild it in seconds.
That is the pattern. Every project in this series follows it.
Career Value: What You Have Actually Learned
If you have followed this guide, you can now speak to these topics in an interview or on a CV:
| What You Did | Enterprise Translation |
|---|---|
| Installed Docker on ARM hardware | Container runtime deployment and platform-specific considerations |
| Created a directory structure for services | Infrastructure organisation and operational standards |
| Wrote Docker Compose files | Declarative infrastructure, Infrastructure as Code fundamentals |
| Managed ARM image compatibility | Multi-architecture deployment (AWS Graviton, Azure Ampere) |
| Monitored resource usage with docker stats | Container resource management and capacity planning |
| Set memory limits on containers | Resource governance and QoS in containerised environments |
None of this is theoretical. You have a running Docker host with a structured deployment workflow. That is more hands-on experience than most bootcamp graduates have, and it is the foundation for every other project in this series.
“I built and maintain a containerised infrastructure on ARM with declarative deployment, resource governance, and a reproducible configuration workflow.” That sentence is worth money.
Next Steps
- Project 2: Portainer — Get visibility into what your containers are doing, without living in the terminal
- Back to the Pi 5 Series Hub — See all 10 projects and the recommended order
- How to Build Your First Homelab in 2026 — Broader context on hardware options and the 2026 landscape
Running into issues getting Docker set up? Drop a comment below. I have probably hit the same problem and can save you some debugging time.

ReadTheManual is run, written and curated by Eric Lonsdale.
Eric has over 20 years of professional experience in IT infrastructure, cloud architecture, and cybersecurity, but started with PCs long before that.
He built his first machine from parts bought off tables at the local college campus, hoping they worked. He learned on BBC Micros and Atari units in the early 90s, and has built almost every PC he’s used between 1995 and now.
From helpdesk to infrastructure architect, Eric has worked across enterprise datacentres, Azure environments, and security operations. He’s managed teams, trained engineers, and spent two decades solving the problems this site teaches you to solve.
ReadTheManual exists because Eric believes the best way to learn IT is to build things, break things, and actually read the manual. Every guide on this site runs on infrastructure he owns and maintains.