Home IoT Part 1 — Cluster Setup

Paul Leroy
7 min read · Apr 11, 2021


I visited a friend of mine who was doing some amazing things with IoT, some of which were already on my roadmap, and I had a lot of the pieces on hand. I received a PiCluster as a gift a while back, and after the initial build I got to the point where most people go "great, now what do I do with this?" If that was you, here are a few ideas, from the basic cluster all the way to integrated IoT. If there is a good response I'll go further into the Machine Learning side too.

Configuring the network
First we need to make a few things easier on the networking side, starting with local DNS so that we can find all the devices on the LAN. For this step I used my Mikrotik cAP access points. I know I was putting them to work, but that's what they're for.

I used statically assigned addresses out of DHCP rather than statically configuring the network interfaces on the Pis. It makes sense to have everything on DHCP, as it's easier to change a lease on the server than to edit every Pi if something changes.

I had a DHCP range of 10.100.10.0/24 and decided the static addresses for the cluster would be 10.100.10.96-10.100.10.127. I try to allocate ranges on subnet splits (16, 32, 128, etc.) rather than round numbers (10, 20, 50, 100, etc.) as it gives me the ability to firewall these ranges separately very quickly, even if they're on the same network. Assume the default gateway was 10.100.10.1, as was the DNS for the LAN. As the Pis joined the network, I clicked on them in the DHCP leases list, made each lease static, and changed it to the IP I wanted them to have. Control plane nodes were 10.100.10.96-10.100.10.98 and worker nodes 10.100.10.99-10.100.10.104, giving me 3 control plane servers and 6 workers.
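For reference, the same thing can be done from the RouterOS CLI instead of clicking through WinBox. A minimal sketch for one lease (the MAC address and comment are placeholders for your own Pi):

/ip dhcp-server lease
add address=10.100.10.96 mac-address=DC:A6:32:00:00:01 comment="pimaster-1"

Repeat for each Pi; the lease survives reboots and re-images as long as the MAC address stays the same.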

Then I wanted to configure the LAN DNS so that resolution from devices on the network would get the cluster addresses when resolving the DNS names. This makes it easier to configure services outside the cluster to point to the cluster.

The configuration I added to the static DNS is as follows:

/ip dns static
add address=10.100.10.96 name=pimaster.local ttl=5m
add address=10.100.10.97 name=pimaster.local ttl=5m
add address=10.100.10.98 name=pimaster.local ttl=5m
add address=10.100.10.99 name=picluster.local ttl=5m
add address=10.100.10.100 name=picluster.local ttl=5m
add address=10.100.10.101 name=picluster.local ttl=5m
add address=10.100.10.102 name=picluster.local ttl=5m
add address=10.100.10.103 name=picluster.local ttl=5m
add address=10.100.10.104 name=picluster.local ttl=5m

Now you should be able to ping the control plane and the workers at pimaster.local or picluster.local respectively. Well, once we have the Pis configured, that is.
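You can sanity-check the records right away from any machine on the LAN by querying the router directly; the worker name should come back with all six addresses:

nslookup picluster.local 10.100.10.1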

Building the cluster

Building the cluster is fairly easy. You need a number of Pis: at least one to be the control plane, and probably at least two for the worker pool.

You’ll need to set up each of the Pis using Ubuntu 20.04 LTS. Long Term Support (LTS) is preferred as you don’t want to have to fiddle too much with the underlying infrastructure of the IoT platform too often. Make sure that they are connected to the network (wi-fi is nicer, then you can drop them out of the way where you just need to get power to them).

First we need the image for the Pi. Head on over to the Ubuntu Raspberry Pi downloads page and get the newest LTS server image (20.04.2 LTS at the time of writing).

I use Balena Etcher; it's multi-platform, so I use the same tool whether I am running Linux, Windows or Mac. I use all three depending on the requirements, so I don't judge ;P. Write the image to your SD cards. This step will take some time, so put a pot of coffee on and get to work.
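If you would rather skip the GUI on Linux, a rough command-line equivalent looks like this (a sketch: /dev/sdX is a placeholder for your card reader, and writing to the wrong device will destroy that disk, so check lsblk first):

# find the SD card's device name before writing anything
lsblk

# stream-decompress the image straight onto the card
xz -dc ubuntu-20.04.2-preinstalled-server-arm64+raspi.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync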

Once the cards are ready, power up the Pis and set a sensible password for the ubuntu user (the default is ubuntu/ubuntu at the moment, but wait for the cloud config script to finish running or it will look like the password is wrong). Then to get each Pi connected to wifi you'll need to complete a few steps (which require a monitor and a keyboard). Honestly, I don't have a micro HDMI plug, so I actually configure the SD cards on a Pi 3, as it has a standard HDMI port, then plug the SD cards into the Pi 4s. I followed this post, as Ubuntu now uses systemd, which is different to my old-school ways.
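For reference, wifi on Ubuntu Server 20.04 is configured through netplan. A minimal sketch, assuming the interface is wlan0, with placeholder SSID and passphrase (note that cloud-init may regenerate this file, which is one of the quirks that post covers):

sudo nano /etc/netplan/50-cloud-init.yaml

network:
  version: 2
  wifis:
    wlan0:
      dhcp4: true
      optional: true
      access-points:
        "YourSSID":
          password: "YourPassphrase"

Then run sudo netplan apply and check that the Pi picks up a DHCP lease.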

Next you need to fix the local LAN lookup. On each Pi edit the resolved.conf file and add (or uncomment and edit) the following lines:

sudo nano /etc/systemd/resolved.conf

Add/Change:

DNS=10.100.10.1
Domains=local

This means the Pis will ask the router's DNS to resolve `picluster.local` and get the cluster addresses back. This is important when we set up the cluster services.
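One step that is easy to forget: systemd-resolved only picks up the new settings after a restart, and resolvectl will show you which DNS server each interface is actually using:

sudo systemctl restart systemd-resolved
resolvectl status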

Now you can rack and stack the Pis in your favorite cluster container. I used a Geekstack (it's extensible, which is great, as I added more Pis to the cluster later). I also used a 10-port 96W USB charger to power the cluster. One power point, up to ten Pis.

Cluster Setup

Now to set up the control plane. I used Rancher K3s for this. It's simple: you just run this on the Pis that you want to be the control plane:

curl -sfL https://get.k3s.io | sh -

It will ask you for privilege escalation and then install everything you need. Then, on that Pi, copy down the token from this file (I mean cut and paste, unless you have an eidetic memory), which you can read with this command:

sudo cat /var/lib/rancher/k3s/server/node-token

That is the token you need to get all the other nodes to join the party. Now on each of the worker nodes run the following commands:

MY_TOKEN=<the token from the above file>
curl -sfL https://get.k3s.io | K3S_URL=https://pimaster.local:6443 K3S_TOKEN=$MY_TOKEN sh -
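Back on the control plane you can watch the workers join; K3s bundles kubectl, so no extra install is needed:

sudo k3s kubectl get nodes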

Wouldn’t it be cool to check that everything is working? I mean easily?

Cluster Monitoring

For this part we are going to use Mirantis Lens. What we need is the config file to connect to the cluster. Fetch it from your control plane Pi here:

/etc/rancher/k3s/k3s.yaml

And save it to your local machine. I just printed it to the terminal and pasted it into Notepad. While it's still in Notepad, change the server address from whatever it is (127.0.0.1, probably) to pimaster.local, like so:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: --snip--
    server: https://pimaster.local:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: --snip--
    client-key-data: --snip--

Save the file and remember where it is. Anyone with that file can administer the cluster from any machine, so be careful with it.
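If you would rather script the copy than paste from a terminal, something along these lines works from a Linux machine (picluster-kubeconfig.yaml is just a placeholder name, and this assumes the ubuntu user can sudo on the control plane):

# pull the kubeconfig off the control plane
ssh ubuntu@pimaster.local 'sudo cat /etc/rancher/k3s/k3s.yaml' > picluster-kubeconfig.yaml

# point it at the cluster name instead of the Pi's loopback address
sed -i 's/127.0.0.1/pimaster.local/' picluster-kubeconfig.yaml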

Now, open Lens, click the add cluster button, and select that file as the kubeconfig file. Add a nice raspberry as the logo so you know which one is your Pi cluster.

(Screenshot: adding the cluster in Lens)

Once you’ve added the cluster you should see all the nodes you added show up.

(Screenshot: checking node addition; we'll add the metrics shortly)

Next we need to add the metrics stack so we can see what's going on with the nodes. Right-click on the Raspberry Pi cluster, click Settings, and scroll to the bottom of the settings page. You'll see an option for installing the Metrics Stack (the big blue Install button).

Click this and wait a bit; it takes about 5-10 minutes to stabilize. You'll notice that the kube-state-metrics deployment is stuck in Pending. To fix it, expand Workloads in the sidebar and check the deployments in the lens-metrics namespace.
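You can see the same thing from the command line using the kubeconfig we saved earlier (picluster-kubeconfig.yaml being the placeholder name from above):

kubectl --kubeconfig picluster-kubeconfig.yaml -n lens-metrics get deployments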

Edit the deployment and remove the architecture node selector section, by clicking on the three dots at the end of the deployment's line and clicking Edit:

--snip--
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
      - matchExpressions:
        - key: beta.kubernetes.io/os
          operator: In
          values:
          - linux
        - key: beta.kubernetes.io/arch
          operator: In
          values:
          - amd64
--snip--

And change the container image in the container spec from:

--snip--
spec:
  containers:
  - name: kube-state-metrics
    image: 'quay.io/coreos/kube-state-metrics:v1.9.7'
--snip--

to:

--snip--
spec:
  containers:
  - name: kube-state-metrics
    image: 'k8s.gcr.io/kube-state-metrics/kube-state-metrics'
--snip--

This pulls an image that is published for the ARM64v8 architecture; we're on Pis, not Intel, anymore. The deployment should go from Pending to Running in about a minute.

Saving and closing will update the cluster, and you should be good to go. Now if you click on the cluster icon at the top left, you'll see cluster metrics, though give them some time to show up.
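If you would rather watch the fix land from the command line, the same kubeconfig works here too:

kubectl --kubeconfig picluster-kubeconfig.yaml -n lens-metrics rollout status deployment kube-state-metrics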

And each node gets its own metrics view.

Now we have an execution environment and a way to monitor it.

If you liked this, I'll be posting follow-ups and upgrades over the next few weeks. Next I'll discuss the architecture I wanted and how I got it deployed. It's not perfect, so we'll be improving it as we go along.
