K3s: Real Kubernetes on a Raspberry Pi
Let me be honest upfront: running Kubernetes on a single Raspberry Pi is over-engineered. For running a few containers, Docker Compose is simpler, lighter, and more appropriate. If your goal is purely practical — get services running with minimum fuss — you don’t need Kubernetes.
That’s not the point.
The point is that Kubernetes is the container orchestration standard. Full stop. EKS, AKS, GKE, OpenShift, Rancher — they’re all Kubernetes underneath. When a job listing says “experience with container orchestration,” they mean Kubernetes. When an interviewer asks “how do you deploy applications in production,” the answer they’re looking for involves pods, deployments, and services. And “I did a Katacoda tutorial once” is a fundamentally different answer from “I run a K3s cluster at home.”
K3s is lightweight Kubernetes built by Rancher Labs (now SUSE). It’s a fully conformant Kubernetes distribution that runs on ARM, uses about half the memory of standard K8s, and installs with a single command. It’s real Kubernetes. Not a simulator, not a subset, not “Kubernetes-like.” The same API, the same objects, the same kubectl commands. What you learn here works identically on a 500-node EKS cluster.
Career Context: Kubernetes experience is the single most in-demand skill in DevOps and Platform Engineering. CKA (Certified Kubernetes Administrator) holders command £60-85k+ salaries. Every major cloud provider’s managed Kubernetes service — EKS, AKS, GKE — uses the same APIs, the same manifests, the same kubectl commands you’ll learn here. Building muscle memory on a Pi means you’re not learning these concepts in a pressured production environment. You’re arriving already fluent.
What K3s Actually Is
Standard Kubernetes (k8s) is designed for large-scale production. It’s a collection of components — API server, scheduler, controller manager, etcd, kubelet, kube-proxy — that together orchestrate containers across clusters of machines. It’s powerful. It’s also heavy. A minimal k8s control plane wants 2GB+ of RAM before you’ve deployed a single workload.
K3s is Kubernetes compressed. Rancher Labs took the full Kubernetes codebase and:
- Replaced etcd with SQLite (single-node) or embedded etcd (multi-node) for lighter storage
- Bundled everything into a single binary (~60MB)
- Included sensible defaults: Traefik for ingress, CoreDNS for service discovery, Flannel for networking
- Removed cloud-provider-specific code that’s irrelevant on bare metal
- Optimised for ARM processors
The result: a fully conformant Kubernetes distribution that starts in about 30 seconds and runs comfortably in 512MB of RAM (control plane only). On a Pi 5 with 8GB, that leaves plenty of room for actual workloads.
Critically, K3s passes the full Kubernetes conformance tests. Every kubectl command, every YAML manifest, every Helm chart that works on standard Kubernetes works on K3s. You’re not learning a simplified version — you’re learning the real thing on lighter infrastructure.
Installation
This is the part that makes Kubernetes veterans do a double-take. One command:
curl -sfL https://get.k3s.io | sh -
That’s it. Within 30 seconds, you have a working Kubernetes cluster. The installer:
- Downloads the K3s binary
- Creates a systemd service
- Starts the server (control plane + worker)
- Configures kubectl automatically
- Deploys Traefik (ingress controller), CoreDNS, and Flannel (CNI)
# Verify it's running
sudo kubectl get nodes
# You should see something like:
# NAME STATUS ROLES AGE VERSION
# pi5 Ready control-plane,master 45s v1.31.4+k3s1
If you see “Ready,” you have a working Kubernetes cluster. On a Raspberry Pi. In under a minute.
Pro tip: By default, K3s puts its kubeconfig at /etc/rancher/k3s/k3s.yaml and requires sudo for kubectl commands. To use kubectl without sudo, copy the config to your user:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
# Add to your .bashrc or .zshrc to make it permanent:
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
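One caveat: that echo appends a fresh copy of the line every time you run it. A small guard (a generic shell idiom, nothing K3s-specific) keeps ~/.bashrc clean across repeated runs:

```shell
# Append the KUBECONFIG export to ~/.bashrc only if it isn't already there,
# so re-running the setup doesn't duplicate the line.
LINE='export KUBECONFIG=~/.kube/config'
grep -qxF "$LINE" ~/.bashrc 2>/dev/null || echo "$LINE" >> ~/.bashrc
```

The `-x` flag matches whole lines only, and `-F` treats the pattern literally, so variations of the line elsewhere in the file won't suppress the append.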
Your First kubectl Commands
Before you deploy anything, let’s look at what K3s has already set up. This is where you start building kubectl muscle memory.
# See all nodes in the cluster (just one for now)
kubectl get nodes
# See all pods across all namespaces
kubectl get pods -A
# You'll see system pods already running:
# NAMESPACE NAME READY STATUS
# kube-system coredns-xxxxx 1/1 Running
# kube-system local-path-provisioner-xxxxx 1/1 Running
# kube-system metrics-server-xxxxx 1/1 Running
# kube-system svclb-traefik-xxxxx 2/2 Running
# kube-system traefik-xxxxx 1/1 Running
Look at that. You haven’t deployed anything, and there are already five pods running. That’s Kubernetes infrastructure — DNS, ingress routing, storage provisioning, and metrics collection. Understanding what these do is genuine Kubernetes knowledge:
- CoreDNS: Provides DNS-based service discovery. When one pod needs to talk to another, it resolves the service name through CoreDNS.
- Traefik: The ingress controller. It routes external HTTP/HTTPS traffic to the right services inside the cluster.
- Metrics Server: Collects resource usage data. Powers kubectl top commands.
- Local Path Provisioner: Handles persistent storage. When a pod requests storage, this creates directories on the host.
# See all namespaces
kubectl get namespaces
# See more detail about your node
kubectl describe node pi5
# Check resource usage (once metrics-server is ready, ~60 seconds after install)
kubectl top nodes
kubectl top pods -A
Spend a few minutes just exploring with kubectl get and kubectl describe. Tab completion helps enormously — K3s installs it automatically for bash. Every minute you spend poking around now builds familiarity that pays dividends later.
Deploying Your First Application
Let’s deploy something. We’ll start with the imperative approach (commands) and then move to the declarative approach (YAML manifests), because understanding both is important.
The Imperative Way (Quick, Not Repeatable)
# Create a deployment running nginx
kubectl create deployment hello-web --image=nginx:alpine
# Check it's running
kubectl get deployments
kubectl get pods
# Expose it as a service on port 80
kubectl expose deployment hello-web --type=NodePort --port=80
# Find the assigned port
kubectl get services hello-web
# Look for the NodePort - something like 80:31234/TCP
# Visit http://your-pi-ip:31234 in a browser
You’ve just deployed an application to Kubernetes. It’s running in a pod, managed by a deployment, and exposed via a service. Pods, Deployments, and Services are the three core objects you’ll work with constantly.
But notice the problem: if someone asked you to recreate this, you’d need to remember (or find in your shell history) those exact commands. That’s the imperative approach, and it doesn’t scale. In enterprise environments, everything is declarative — defined in files that can be version-controlled, reviewed, and reproduced.
Clean Up Before We Do It Properly
# Delete the imperative resources
kubectl delete service hello-web
kubectl delete deployment hello-web
Understanding Manifests: Declarative Configuration
This is where Kubernetes clicks. Instead of running commands, you describe the state you want in YAML files, and Kubernetes makes it happen. This is the same principle behind Terraform, Ansible, and every other infrastructure-as-code tool. Declare the desired state; let the system figure out how to get there.
Create a file called hello-web.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-web
labels:
app: hello-web
spec:
replicas: 2
selector:
matchLabels:
app: hello-web
template:
metadata:
labels:
app: hello-web
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
limits:
memory: "64Mi"
cpu: "250m"
requests:
memory: "32Mi"
cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
name: hello-web
spec:
type: NodePort
selector:
app: hello-web
ports:
- port: 80
targetPort: 80
Let’s break this down, because understanding YAML manifests is 80% of working with Kubernetes:
- apiVersion / kind: What type of object you’re creating. Deployments are in apps/v1; Services are in v1.
- metadata: Name and labels. Labels are how Kubernetes objects find each other.
- spec.replicas: 2: Run two copies of this pod. If one crashes, Kubernetes restarts it. If the node has capacity, both run simultaneously.
- spec.template: The pod template. This describes what each replica looks like.
- resources.limits/requests: How much CPU and memory this container can use. Critical on a Pi 5 with limited resources.
- selector.matchLabels: The Service finds the Deployment’s pods by matching labels. The label app: hello-web connects them.
# Apply the manifest
kubectl apply -f hello-web.yaml
# Watch the pods come up
kubectl get pods -w
# You should see two pods:
# NAME READY STATUS RESTARTS AGE
# hello-web-6d9b4f5c7d-abc12 1/1 Running 0 5s
# hello-web-6d9b4f5c7d-def34 1/1 Running 0 5s
# Get the service details
kubectl get svc hello-web
Now here’s the magic. Delete one of those pods:
# Delete a pod (use one of your actual pod names)
kubectl delete pod hello-web-6d9b4f5c7d-abc12
# Immediately check pods again
kubectl get pods
# A new pod is already being created to replace it.
# That's the Deployment controller doing its job -
# you said you want 2 replicas, so it maintains 2 replicas.
This is the core value of Kubernetes: self-healing. You declare the desired state (“I want 2 replicas”), and Kubernetes continuously works to maintain that state. Container crashes? Restarted. Node goes down? Pods rescheduled. You don’t manage individual containers anymore — you manage desired state, and the system handles the rest.
Resource limits matter on a Pi 5. With 8GB of RAM shared between the OS, K3s control plane, and all your workloads, setting resource limits isn’t optional — it’s essential. Without limits, a misbehaving pod can consume all available memory and trigger the Linux OOM killer, which may take down system-critical pods or even K3s itself. Always set resources.requests (minimum guaranteed) and resources.limits (maximum allowed) in your manifests. Start conservative (64Mi memory, 250m CPU) and adjust based on actual usage via kubectl top pods.
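If you’d rather not rely on every manifest remembering its limits, Kubernetes can enforce namespace-wide defaults with a LimitRange object. A minimal sketch — the name and values here are illustrative, so tune them to your workloads:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: pi-defaults        # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    default:               # applied when a container sets no limits
      memory: "64Mi"
      cpu: "250m"
    defaultRequest:        # applied when a container sets no requests
      memory: "32Mi"
      cpu: "100m"
```

With this applied, any container deployed into the namespace without explicit resources gets these values injected automatically — a useful safety net on memory-constrained hardware.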
Essential kubectl Commands
These are the commands you’ll use daily. Burn them into muscle memory.
# === Viewing Resources ===
kubectl get pods # List pods in default namespace
kubectl get pods -A # List pods in ALL namespaces
kubectl get pods -o wide # Extra detail (IP, node)
kubectl get deployments # List deployments
kubectl get services # List services
kubectl get all # Everything in current namespace
# === Inspecting Resources ===
kubectl describe pod POD_NAME # Detailed pod info (events, conditions)
kubectl logs POD_NAME # Container logs
kubectl logs POD_NAME -f # Follow logs (like tail -f)
kubectl logs POD_NAME --previous # Logs from crashed/previous container
# === Modifying Resources ===
kubectl apply -f manifest.yaml # Create or update from file
kubectl delete -f manifest.yaml # Delete resources defined in file
kubectl scale deployment NAME --replicas=3 # Scale up/down
# === Debugging ===
kubectl exec -it POD_NAME -- /bin/sh # Shell into a running pod
kubectl get events --sort-by=.metadata.creationTimestamp # Recent events
kubectl top pods # Resource usage
kubectl top nodes # Node resource usage
# === Namespaces ===
kubectl get namespaces # List namespaces
kubectl -n kube-system get pods # Pods in a specific namespace
The command you’ll use most is kubectl describe. When something isn’t working, describe shows you the events — the sequence of things Kubernetes tried to do and what went wrong. “Failed to pull image,” “Insufficient memory,” “Liveness probe failed” — the answers are almost always in the events section.
Pro tip: Set up an alias. Typing kubectl hundreds of times a day gets old fast. Every Kubernetes engineer aliases it:
# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kgp='kubectl get pods'
alias kga='kubectl get all'
alias kd='kubectl describe'
alias kl='kubectl logs'
# Now: kgp -A instead of kubectl get pods -A
This isn’t laziness — it’s efficiency. In interviews and certifications (CKA), speed matters, and these aliases are standard practice.
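Tab completion pairs well with the alias. kubectl can generate its own bash completion script, and it can be wired up to the k alias as well — these are lines for ~/.bashrc (zsh users should use kubectl completion zsh instead):

```shell
# Load kubectl's built-in bash completion
source <(kubectl completion bash)
# Make completion work for the 'k' alias too
complete -o default -F __start_kubectl k
```

This is a configuration fragment rather than a one-off command; it only takes effect in shells where kubectl is on the PATH.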
Deploying Something Real: A Web Application With Database
Nginx is fine for learning, but let’s deploy something that exercises more Kubernetes concepts. We’ll deploy a simple application with persistent storage.
Create app-stack.yaml:
---
# Namespace - keep things organised
apiVersion: v1
kind: Namespace
metadata:
name: demo
---
# Persistent storage for the database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data
namespace: demo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
# Redis deployment (in-memory store)
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: demo
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
resources:
limits:
memory: "128Mi"
cpu: "250m"
volumeMounts:
- name: redis-storage
mountPath: /data
volumes:
- name: redis-storage
persistentVolumeClaim:
claimName: redis-data
---
# Redis service (internal only)
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: demo
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
---
# Web frontend
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
namespace: demo
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: nginx:alpine
ports:
- containerPort: 80
resources:
limits:
memory: "64Mi"
cpu: "200m"
---
# Web service (external access)
apiVersion: v1
kind: Service
metadata:
name: web
namespace: demo
spec:
type: NodePort
selector:
app: web
ports:
- port: 80
targetPort: 80
# Deploy everything
kubectl apply -f app-stack.yaml
# Watch it come up
kubectl -n demo get all
# Check the persistent volume was created
kubectl -n demo get pvc
This manifest introduces several new concepts:
- Namespaces: Logical isolation. Production workloads in one namespace, staging in another. On shared clusters, namespaces separate teams.
- PersistentVolumeClaim: Kubernetes’ way of requesting storage. The local-path-provisioner (installed by K3s) creates a directory on your Pi. In the cloud, this would provision an EBS volume or Azure Disk.
- Service discovery: The web pods can reach Redis using the hostname redis (the service name). CoreDNS handles the resolution. No hardcoded IPs.
- Multiple manifests in one file: The --- separator lets you define everything in a single file. In practice, you might split them into separate files for larger applications.
Single Node vs Multi-Node: Adding Workers
Running K3s on a single Pi teaches you 90% of what you need. But the last 10% — pod scheduling across nodes, node failure recovery, network policies between nodes — requires at least two machines.
If you’ve got a second Pi (or any Linux machine on your network), adding it as a worker node is straightforward.
On the Master Node (Your Existing Pi)
# Get the node token
sudo cat /var/lib/rancher/k3s/server/node-token
On the Worker Node (Second Pi or Machine)
# Install K3s as a worker, pointing to the master
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=YOUR_NODE_TOKEN sh -
Back on the Master
# Verify the worker joined
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# pi5 Ready control-plane,master 2h v1.31.4+k3s1
# pi4 Ready &lt;none&gt; 30s v1.31.4+k3s1
Two nodes. A real cluster. Kubernetes will now schedule pods across both nodes based on available resources. If you scale your deployment to 3 replicas, you’ll see them distributed across the nodes.
What Happens When a Node Goes Down
This is where it gets educational. Try this (on the worker node, not the master):
# On the worker: simulate a node failure
sudo systemctl stop k3s-agent
Now watch from the master:
# Watch the node status change
kubectl get nodes -w
# After ~40 seconds, the worker changes to NotReady
# After ~5 minutes, pods on the failed node are rescheduled to the master
That 5-minute delay (configurable) is important to understand. Kubernetes doesn’t immediately reschedule pods because the node might be briefly unreachable (network blip, reboot). It waits, then acts. In enterprise environments, this tolerance is tuned based on SLA requirements.
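That tolerance surfaces per pod as tolerations: by default, Kubernetes injects node.kubernetes.io/not-ready and node.kubernetes.io/unreachable tolerations with tolerationSeconds: 300 — the roughly five minutes you just watched. A sketch of overriding it in a pod template for workloads that should fail over faster (the 60-second value is illustrative):

```yaml
# Inside a pod template's spec:
tolerations:
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60   # evict after 60s instead of the 300s default
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60
```

Shorter values mean faster failover but more churn from transient network blips — exactly the SLA trade-off mentioned above.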
Bring the worker back:
# On the worker:
sudo systemctl start k3s-agent
# The node returns to Ready and starts accepting pods again
You’ve just experienced pod rescheduling and node failure recovery. This is the fundamental resilience model of Kubernetes, and you’ve seen it with your own eyes on hardware you can literally unplug.
Pro tip: I have a Pi 4 running a newsletter platform, and a Pi 5 being onboarded as a K3s worker. Mixing Pi generations in a cluster works fine — K3s handles the ARM architecture consistently across Pi 4 and Pi 5. The scheduler considers resource availability, so the more powerful Pi 5 will naturally attract more workloads if you set appropriate resource requests. This heterogeneous cluster setup is actually common in production edge computing.
Gotchas and Troubleshooting
Resource Limits on 8GB
The Pi 5’s 8GB is shared between the OS (~300-500MB), K3s control plane (~400-600MB), system pods (~200-300MB), and your workloads. Realistically, you have about 5-6GB for application pods. That’s plenty for learning, but you’ll hit limits if you try to run heavy workloads.
# Check actual resource usage
kubectl top nodes
kubectl top pods -A
# If you see pods in "OOMKilled" status:
kubectl get pods -A | grep OOMKilled
# This means they exceeded their memory limit.
# Either increase the limit or optimise the application.
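Before assigning limits, it helps to know your actual headroom. A quick host-side check — plain Linux, nothing Kubernetes-specific — using the kernel’s MemAvailable estimate:

```shell
# Print the kernel's estimate of memory still allocatable to new workloads.
# MemAvailable in /proc/meminfo is reported in kB.
awk '/^MemAvailable:/ {printf "%.1f GiB available for pods\n", $2/1024/1024}' /proc/meminfo
```

Unlike MemFree, MemAvailable accounts for reclaimable page cache, so it’s a more realistic picture of what your pods can actually claim.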
DNS Issues
DNS resolution inside the cluster occasionally causes confusion. Pods resolve service names through CoreDNS, not your host’s DNS. If you’re running Pi-hole on the same machine, understand that:
- Pod-to-pod DNS goes through CoreDNS (internal to the cluster)
- Pod-to-external DNS goes through CoreDNS, which forwards to the host’s DNS (likely Pi-hole)
- If Pi-hole is down, pods can still find each other (CoreDNS) but can’t resolve external domains
# Debug DNS from inside a pod
kubectl run -it --rm debug --image=busybox -- sh
# Inside the pod:
nslookup kubernetes.default.svc.cluster.local
# Should resolve to 10.43.0.1 (the Kubernetes API server)
nslookup google.com
# Should resolve if external DNS is working
Traefik Default Ingress
K3s installs Traefik as the default ingress controller. If you’re already running Nginx Proxy Manager (Article 4), you’ve got two reverse proxies potentially fighting over ports 80 and 443.
Options:
- Disable Traefik at install time: curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
- Use NodePort services instead of Ingress: Services get random high ports (30000-32767) and bypass the ingress controller entirely. Simpler for learning.
- Keep both: Let Traefik handle Kubernetes ingress on different ports, and NPM handle everything else. This is actually a reasonable architecture — Kubernetes workloads route through Traefik, Docker Compose workloads route through NPM.
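For context, keeping Traefik means Kubernetes workloads get Ingress objects like the sketch below, which the bundled controller picks up automatically. The hostname is hypothetical, and the backend assumes the hello-web Service from earlier exists in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-web
spec:
  ingressClassName: traefik    # K3s registers Traefik under this class
  rules:
  - host: hello.home.lan       # hypothetical internal hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-web    # routes to the hello-web Service
            port:
              number: 80
```

This is the same networking.k8s.io/v1 Ingress schema you’d write for Nginx Ingress or AWS ALB in the cloud — only the ingressClassName changes.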
Storage Considerations
K3s uses local-path-provisioner by default, which creates directories under /var/lib/rancher/k3s/storage/. This is fine for learning but has limitations:
- Storage is tied to the node. If a pod with a PVC moves to a different node, it can’t access the same data.
- No redundancy. If the SD card or NVMe dies, the data goes with it.
- For multi-node clusters with shared storage, you’d need something like Longhorn, NFS, or Rook-Ceph. Longhorn (also by Rancher) works on Pis but is resource-heavy.
ARM Image Compatibility
Not every Docker image has an ARM build. When you deploy and a pod sits in ImagePullBackOff or CrashLoopBackOff, check the image supports linux/arm64:
# Check image architectures (requires docker on the machine)
docker manifest inspect nginx:alpine | grep architecture
# Most popular images support arm64 now.
# If not, look for community ARM builds or build your own.
Don’t run K3s alongside Docker Compose on the same Pi unless you’re careful with resources. Both systems manage containers, both consume memory, and both can claim ports. If you’re following this series and already have services running via Docker Compose, either: (1) migrate them into K3s gradually, (2) run K3s on a separate Pi, or (3) set strict resource limits on everything and monitor closely with Uptime Kuma (Article 7). I’d recommend option 2 or 3 for learning — running both teaches you the real-world coexistence challenges that many organisations face during Kubernetes migrations.
Building kubectl Muscle Memory
The goal isn’t to memorise every flag. It’s to develop intuition. After a few weeks of using K3s, you should be able to:
- Check cluster health in under 10 seconds (kubectl get nodes, kubectl get pods -A)
- Deploy from a manifest without looking up syntax (kubectl apply -f)
- Troubleshoot a failing pod instinctively (kubectl describe pod, check events, check logs)
- Scale a deployment up or down and understand what happens
- Know the difference between a Deployment, Service, Pod, and Namespace without thinking
Daily Practice Exercises
Spend 10 minutes a day on these. Seriously. The muscle memory compounds fast.
# Day 1-7: The basics
kubectl get nodes
kubectl get pods -A
kubectl get services -A
kubectl describe pod PICK_A_POD
kubectl logs PICK_A_POD
# Day 8-14: Deployments
kubectl create deployment test --image=nginx:alpine --dry-run=client -o yaml
# Read the output. Understand every line.
kubectl scale deployment test --replicas=3
kubectl delete deployment test
# Day 15-21: Troubleshooting
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl top pods
# Deliberately break things: wrong image name, impossible resource limits,
# non-existent ConfigMaps. Learn what the error messages look like.
# Day 22-30: Writing manifests from scratch
# Try writing a Deployment + Service from memory.
# Get it wrong. Fix it. Get faster.
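If it helps the week-four drill, here is a small, entirely optional helper (hypothetical, not part of kubectl) that stamps out a bare Deployment skeleton for you to flesh out — the value is in typing and correcting the structure, not copy-pasting finished manifests:

```shell
# make_skeleton NAME IMAGE - print a minimal Deployment manifest to fill in.
make_skeleton() {
  cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $1
  template:
    metadata:
      labels:
        app: $1
    spec:
      containers:
      - name: $1
        image: $2
EOF
}
make_skeleton practice nginx:alpine
```

Pipe the output into a file, add resources, a Service, and anything else from memory, then check your work with kubectl apply --dry-run=client -f.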
The Honest Assessment
Let me be direct about what K3s on a Pi will and won’t do for you.
What It Will Do
- Give you genuine, hands-on Kubernetes experience that translates directly to enterprise environments
- Build kubectl muscle memory that you’ll use in every Kubernetes role
- Teach you manifest syntax, resource management, and declarative configuration
- Demonstrate self-healing, scheduling, and service discovery in ways a tutorial can’t
- Provide concrete answers for interview questions about container orchestration
What It Won’t Do
- Teach you about cloud-specific integrations (EBS volumes, ALB ingress, IAM roles for service accounts) — those require actual cloud environments
- Give you experience with cluster autoscaling (you can’t dynamically add Pis)
- Simulate multi-region or multi-zone deployments
- Run heavy production workloads (8GB is genuinely limiting for anything beyond learning)
- Replace the need for cloud experience — it complements it
The gap between K3s on a Pi and EKS in production is primarily scale and cloud integration, not concepts. The scheduling, the manifests, the networking model, the troubleshooting process — all identical. Learn it here, apply it there.
Cleaning Up and Managing K3s
# Remove test deployments
kubectl delete -f hello-web.yaml
kubectl delete -f app-stack.yaml
# If you ever want to completely uninstall K3s:
# On server (master):
/usr/local/bin/k3s-uninstall.sh
# On agent (worker):
/usr/local/bin/k3s-agent-uninstall.sh
# This removes everything: binary, data, configs.
# Your Docker Compose services are unaffected.
The Career Translation
| K3s Concept | Enterprise Equivalent |
|---|---|
| kubectl commands | Identical in EKS, AKS, GKE, OpenShift |
| YAML manifests | Helm charts, Kustomize, ArgoCD GitOps |
| Deployments and ReplicaSets | Production deployment patterns, rolling updates |
| Services and networking | Service mesh (Istio, Linkerd), network policies |
| Namespaces | Multi-tenant clusters, RBAC, resource quotas |
| Resource limits | Capacity planning, cost optimisation, right-sizing |
| PersistentVolumeClaims | Cloud storage (EBS, Azure Disk, GCE PD) |
| Node failure recovery | High availability, disaster recovery, SLA compliance |
| Traefik ingress | Ingress controllers (Nginx, AWS ALB, Istio Gateway) |
Interview Talking Points
- “Tell me about your Kubernetes experience.” — “I run a K3s cluster at home on Raspberry Pis. I manage deployments, services, persistent storage, and have experimented with multi-node scheduling and failure recovery. The concepts are the same as managed Kubernetes — I’ve built the kubectl fluency and manifest understanding that transfers directly.”
- “How would you troubleshoot a pod that won’t start?” — You can walk through: kubectl describe pod, check events, check image pull status, check resource limits, check liveness probes, check logs. You’ve done all of this on your own cluster.
- “What’s the difference between a Deployment and a Pod?” — You know this from practice, not from a textbook. A pod wraps one or more running containers. A Deployment manages pods — ensuring the right number of replicas, handling rolling updates, maintaining desired state.
- “How do you handle persistent storage in Kubernetes?” — You’ve created PVCs, watched the provisioner create storage, understood the node-affinity implications. In the cloud, the storage backend changes; the Kubernetes interface is the same.
Where to Go From Here
If Kubernetes has clicked for you, the natural progressions are:
- Helm: The package manager for Kubernetes. Instead of writing manifests from scratch, install complex applications with helm install. It’s how most real-world applications are deployed.
- CKA Certification: The Certified Kubernetes Administrator exam. Everything you’ve learned here — kubectl, manifests, troubleshooting, cluster management — is directly tested. Having a home cluster to practise on is an enormous advantage.
- GitOps with ArgoCD: Combine K3s with Gitea (Article 10) and ArgoCD for automated deployments triggered by git commits. This is the cutting-edge deployment pattern enterprise teams are adopting.
- Cloud managed K8s: Apply what you’ve learned to a free-tier EKS, AKS, or GKE cluster. You’ll be amazed how familiar it feels — because it’s the same Kubernetes underneath.
What’s Next in the Series
Next up is Article 9: n8n Automation, where you’ll connect all your services into automated workflows. n8n can interact with Kubernetes via its HTTP Request node — triggering deployments, scaling services, and responding to monitoring alerts automatically. It’s the glue that turns individual services into a cohesive, self-managing system.
Running K3s on a Pi? Whether it’s a single node or a multi-Pi cluster, I’d be keen to hear what you’re deploying. The jump from Docker Compose to Kubernetes is a significant one, and it’s worth the investment — even if it does feel like using a sledgehammer to crack a nut at first.

ReadTheManual is run, written and curated by Eric Lonsdale.
Eric has over 20 years of professional experience in IT infrastructure, cloud architecture, and cybersecurity, but started with PCs long before that.
He built his first machine from parts bought off tables at the local college campus, hoping they worked. He learned on BBC Micros and Atari units in the early 90s, and has built almost every PC he’s used between 1995 and now.
From helpdesk to infrastructure architect, Eric has worked across enterprise datacentres, Azure environments, and security operations. He’s managed teams, trained engineers, and spent two decades solving the problems this site teaches you to solve.
ReadTheManual exists because Eric believes the best way to learn IT is to build things, break things, and actually read the manual. Every guide on this site runs on infrastructure he owns and maintains.