Marcus Craske

hack the planet

Raspberry Pi Kubernetes Cluster

K8 and Pi logo

In this article, I set up:

  • A Kubernetes cluster using the new Raspberry Pi 4.
  • Kubernetes Dashboard.
  • Ingress using NGINX.

Many articles already exist for older Debian distributions and Raspberry Pis, so hopefully this updated set of steps helps others save time.

Kubernetes Cluster Setup

In this section is the setup of a basic cluster, which consists of the following:

  • A single control plane node, used to manage the cluster.
  • One or more nodes, used to later run our application deployments (pods, containers, etc).

A node is just a traditional bare-metal or virtual host, such as a Raspberry Pi.

Relevant official docs: the kubeadm installation and cluster-creation guides on kubernetes.io.

Generic Raspberry Pi Setup

Run these steps for both your single control plane node, and node(s):

  • Download the latest Raspbian image and install it onto the micro SD card.

    At the time of writing, the latest Raspbian release is Buster. I’ve chosen the Lite image, as everything will be managed remotely and no desktop is required.

  • Create an empty file named ssh (or ssh.txt) in the /boot partition of the micro SD card, to enable the SSH daemon on first boot. This avoids needing a physical keyboard, mouse and screen.

  • Plug-in the Raspberry Pi to your network and power it up. Use your router to find the dynamically allocated IP address (assuming your network has DHCP).

  • SSH to the Raspberry Pi using the default credentials: the user is pi, the password is raspberry. E.g. ssh pi@<pi-ip-address> and enter raspberry.

  • Run sudo raspi-config and:
    • Configure hostname (under Network Options).
      • I’ve used the naming convention k8-master1 for the control plane node and k8-slave1 for the nodes.
    • Enable SSH daemon (under Interfacing Options).
    • Update to latest version.
  • Run sudo nano /etc/dhcpcd.conf and configure a static IP address (the addresses below are examples; use values that fit your network):
      interface eth0
      static ip_address=192.168.1.20/24
      static routers=192.168.1.1
      static domain_name_servers=192.168.1.1
  • Disable swap:
      sudo dphys-swapfile swapoff
      sudo dphys-swapfile uninstall
      sudo update-rc.d dphys-swapfile remove
      sudo apt purge dphys-swapfile
  • Add the Kubernetes repository to package sources:
      echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Update available packages:
      sudo apt update
  • Install useful network tools and the kubeadm tool:
      sudo apt install dnsutils kubeadm
  • Install the latest Docker:
      curl -sSL https://get.docker.com | sh && \
      sudo usermod pi -aG docker && \
      newgrp docker
  • Configure iptables to accept all forwarded traffic (currently required for pod DNS to work):
      sudo iptables -P FORWARD ACCEPT

    Without this step, you may find coredns pods fail to start on app nodes and/or DNS does not work.

  • Configure iptables to run in legacy mode, since kube-proxy has issues with iptables version 1.8:
      sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
  • Reboot: sudo reboot.
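The per-node prep above can be condensed into one script. This is my own sketch, not from the article: it dry-runs by default (set APPLY=1 to actually execute on a Pi) and assumes the standard upstream repo and installer URLs.

```shell
#!/bin/bash
# Sketch of the common node-prep steps above. Dry-runs by default;
# set APPLY=1 to actually execute on a Raspberry Pi.
set -u

APPLY=${APPLY:-0}
STEPS=0

run() {
  STEPS=$((STEPS + 1))
  echo "+ $*"
  if [ "$APPLY" = "1" ]; then
    eval "$*"
  fi
}

# Disable swap (kubelet refuses to run with swap enabled)
run "sudo dphys-swapfile swapoff"
run "sudo dphys-swapfile uninstall"
run "sudo update-rc.d dphys-swapfile remove"
run "sudo apt purge -y dphys-swapfile"

# Kubernetes apt repository and signing key
run "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list"
run "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -"

# Network tools, kubeadm and Docker
run "sudo apt update && sudo apt install -y dnsutils kubeadm"
run "curl -sSL https://get.docker.com | sh && sudo usermod pi -aG docker"

# iptables: allow forwarding, and use the legacy backend for kube-proxy
run "sudo iptables -P FORWARD ACCEPT"
run "sudo update-alternatives --set iptables /usr/sbin/iptables-legacy"

echo "Planned ${STEPS} steps (APPLY=${APPLY})"
```

Hostname and static IP still need setting per node, as they differ on every Pi.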

Control Plane Setup

The master node, responsible for controlling the cluster.

Based on steps from official guide.

I’ve named my control plane node k8-master1.

  • SSH onto the Pi you want to be the master, e.g. ssh pi@<master-ip>.

  • Init the cluster, with a defined CIDR / network range for pods and a non-expiring token for slaves to join the cluster later (the CIDRs below are examples; pick ranges that don’t clash with your LAN):
      sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0
  • At the end of the previous step, copy the provided command for app nodes / slaves to join the cluster. We’ll run this later; it’ll look something like:
      kubeadm join <master-ip>:6443 --token xxxxxx.xxxxxxxxxxxxxxxxx \
        --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  • Run the provided commands displayed at the end of the previous step.

  • Install a pod network; in this case, we’ll install Weave:
      kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
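The “provided commands” from the init output are the standard ones kubeadm prints for copying the admin kubeconfig into your home directory; a sketch of them follows, with the copy itself left commented since it only makes sense on the master node.

```shell
#!/bin/bash
# Standard post-init kubeconfig setup, as printed by kubeadm init.
KUBECONFIG_SRC=/etc/kubernetes/admin.conf
KUBECONFIG_DST="$HOME/.kube/config"

mkdir -p "$(dirname "$KUBECONFIG_DST")"

# On the master node, uncomment these two lines:
# sudo cp -i "$KUBECONFIG_SRC" "$KUBECONFIG_DST"
# sudo chown "$(id -u):$(id -g)" "$KUBECONFIG_DST"

echo "kubectl will read ${KUBECONFIG_DST}"
```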

Node Setup

The node(s), responsible for later running our application deployments.

Run the command from earlier to join the node to the cluster, either as root or prefix the command with sudo, for example:

sudo kubeadm join <master-ip>:6443 --token xxxxxx.xxxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

This may take a few minutes.
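If the join command from kubeadm init has been lost, kubeadm can print a fresh one on the master. A small helper sketch of mine (the guard just makes it safe to run anywhere):

```shell
#!/bin/bash
# Regenerate the node join command on the master. `token create
# --print-join-command` is a standard kubeadm subcommand.
REGEN_CMD="kubeadm token create --print-join-command"

if command -v kubeadm >/dev/null 2>&1; then
  sudo $REGEN_CMD
else
  echo "Run on the master: $REGEN_CMD"
fi
```

Afterwards, `kubectl get nodes` on the master should list the new node.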


Kubernetes Dashboard

Let’s install a dashboard to easily manage our cluster.

Based on the official guide.


  • SSH to the master node, e.g. ssh pi@<master-ip>.

  • Install the dashboard (check the official guide for the current manifest URL; the one below matches the v2 beta releases current at the time of writing):

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
  • Paste the following in the file user.yaml:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
  • Apply the file; it will set up a user for the dashboard:

      kubectl apply -f user.yaml


I’ve written a simple script to help automate accessing the dashboard.

This will:

  • Proxy port 8080 from the cluster to our local machine, in order to access the dashboard on our machine.
  • Print out a token we can use to login to the dashboard.

Save the following as a shell script, e.g. dashboard.sh (the name is my choice):

#!/bin/bash
MASTER="pi@<master-ip>"

# Print token for login
TOKEN_COMMAND="kubectl -n kube-system describe secret \$(kubectl -n kube-system get secret | grep admin-user | awk '{print \$1}')"

echo "Dumping token for dashboard..."
ssh ${MASTER} -C "${TOKEN_COMMAND}"

echo "Login:"
echo "  http://localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login"

# Create SSH tunnel to k8 master and run proxy
echo "Creating proxy tunnel to this machine from master..."
ssh -L 8080:localhost:8080 ${MASTER} -C "kubectl proxy --port=8080 || true"

echo "Terminating proxy..."
ssh ${MASTER} -C "pkill kubectl"

Change the MASTER variable to the IP address or host-name of your control plane node.

Make the script executable (chmod +x) and run it.
Visit the URL displayed by the script to access the dashboard, and copy the token on the logon screen.

Installing Helm

Helm is a package manager for Kubernetes, which we’ll use to set up ingress in the next section.

It consists of two parts: helm, the client, and tiller, the Helm server, which runs in the cluster.

The server will be deployed as a pod, which needs a service account. Thus create a file rbac-config.yaml with:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

And apply the config:

kubectl apply -f rbac-config.yaml

Since we’re using Raspberry Pis with an ARM architecture, we’ll install an ARM build of the helm client on the master node and a multi-arch image of tiller on the cluster:

ssh pi@<master-ip>
wget https://get.helm.sh/helm-v2.14.3-linux-arm.tar.gz
tar xvzf helm-v2.14.3-linux-arm.tar.gz
sudo mv linux-arm/helm /usr/local/bin/helm
rm -rf linux-arm helm-v2.14.3-linux-arm.tar.gz
helm init --tiller-image=jessestuart/tiller:v2.14.3 --service-account tiller
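Once helm init finishes, a quick way to confirm the client and tiller are talking. The pod labels are helm v2’s defaults; the guard makes the sketch safe to run on any machine.

```shell
#!/bin/bash
# Verify the helm client can reach tiller running in the cluster.
TILLER_SELECTOR="app=helm,name=tiller"

if command -v helm >/dev/null 2>&1; then
  helm version                                    # should print both Client and Server versions
  kubectl -n kube-system get pods -l "$TILLER_SELECTOR"
else
  echo "helm not installed here; run this on the master node"
fi
```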


Ingress

We’ll set up ingress, so that external traffic can reach our cluster.

Our end-goal is for services to be accessible from the internet.

NGINX Ingress Controller

This section sets up our ingress controller, using NGINX, responsible for managing anything related to ingress to our services.

Install using helm:

ssh pi@<master-ip>
helm install stable/nginx-ingress --name nginx-ingress --namespace kube-ingress --set controller.publishService.enabled=true

This now has official ARM support; we just need to change the deployment images (the tags below matched the chart at the time of writing; check for the current ones):

kubectl --namespace kube-ingress set image deployment/nginx-ingress-controller \
    nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.25.1

kubectl --namespace kube-ingress set image deployment/nginx-ingress-default-backend \
    nginx-ingress-default-backend=k8s.gcr.io/defaultbackend-arm:1.5

MetalLB Load Balancer

Since we aren’t using a cloud provider, such as AWS, we don’t have a cloud load balancer available.

Instead we’ll set up a load balancer on our cluster, using MetalLB, which runs in a pod and attaches itself to a physical network IP.

You can attach to an IP using Layer 2 (the OSI data link layer), whereby only a single node handles traffic. This presents a resilience risk, but it offers the widest support, and you’ll definitely be able to access the IP from a machine on the same network. Alternatively, provided your router supports it, your cluster can use BGP, but you’ll most likely not be able to access the cluster locally.

Let’s install MetalLB using helm:

helm install --namespace kube-ingress --name metallb stable/metallb

Based on the above, choose either Layer 2 or BGP.

Option 1: Layer 2

Create metallb-config.yaml to configure the load balancer (the address range is an example; use free addresses on your network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-ingress
  name: metallb-config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Option 2: BGP

Create metallb-config.yaml to configure the load balancer (the peer address, ASNs and address range are examples; use your router’s values):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-ingress
  name: metallb-config
data:
  config: |
    peers:
    - peer-address: 192.168.1.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.1.240/28

For more details on both options, refer to the official docs.

Just be careful, as we’re using a different namespace from the docs.

Once the config has been set up, apply it:

kubectl apply -f metallb-config.yaml

Ingress is now set up.
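A few sanity checks are worth running at this point. The namespace and release names match the helm installs above; the guard lets the sketch run harmlessly on a machine without kubectl.

```shell
#!/bin/bash
# Sanity checks after applying the MetalLB config.
NS=kube-ingress

if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$NS"                          # metallb + nginx pods should be Running
  kubectl get svc -n "$NS" nginx-ingress-controller  # should show an EXTERNAL-IP from your pool
else
  echo "kubectl not found; run this on the master node"
fi
```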

Ingress Example

You can now define an ingress to a service.

An example which forwards traffic for the path /uptime to the service uptime (the host and annotations are illustrative):

kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  namespace: home-network
  name: uptime-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: uptime.example.com
    http:
      paths:
      - path: /uptime
        backend:
          serviceName: uptime
          servicePort: 80
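To exercise an ingress rule from another machine on the LAN, send a request to the load-balancer IP while presenting the ingress host. The host and IP below are illustrative placeholders, and the sketch only prints the command rather than running it.

```shell
#!/bin/bash
# Build a curl command that tests an ingress rule via the MetalLB IP.
# Both values are placeholders; override via environment variables.
INGRESS_IP=${INGRESS_IP:-192.168.1.240}
INGRESS_HOST=${INGRESS_HOST:-uptime.example.com}

TEST_CMD="curl -H 'Host: ${INGRESS_HOST}' http://${INGRESS_IP}/uptime"
echo "$TEST_CMD"
```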

Using the Kubernetes dashboard, or the kubectl tool, you can find the local network IP address allocated to the ingress.

Then port forward traffic on your router for the IP address, so that it’s exposed to the internet.

If you want multiple services to share the same IP address, you can add config to each service to get a defined IP address allocated.

Take note of the annotations section and the type and loadBalancerIP params in this example:

kind: Service
apiVersion: v1
metadata:
  name: uptime
  namespace: home-network
  annotations:
    metallb.universe.tf/allow-shared-ip: home-network
spec:
  selector:
    app: uptime
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerIP: 192.168.1.240   # example; use an address from your MetalLB pool

You can then front your router’s WAN / public internet IP address with a CDN such as Cloudflare, which is free and offers DDoS protection (useful for a home network). This will also provide valid SSL certificates.


In this article we set up a simple cluster capable of running deployments and taking external traffic.