Anatomy of my home Kubernetes cluster

A year ago, I realized that I wanted to build my own Kubernetes cluster. I am a software developer: to test my projects I usually use either a single-node local cluster or a remote multi-node cluster. For a single-node cluster I usually rely on Minikube, although there are other solutions, such as the Kind project, which can emulate several nodes in a cluster. Multi-node support may eventually appear in Minikube as well.

So I wanted the best of both environments: a cluster consisting of several nodes, but without the network latency typical of working with remote environments. Many great tutorials have already been written on building multi-node Kubernetes clusters from single-board computers (SBCs), and many of them use the Raspberry Pi. Seeing this, I decided to follow the path of least resistance and chose that computer as well. The Raspberry Pi platform has established itself as an inexpensive and readily available solution.





It should be noted that choosing this platform involves some compromises. For example, Broadcom and the Raspberry Pi Foundation did not license the ARMv8 cryptographic extensions, which are necessary for hardware-accelerated AES. Another debatable decision is the use of microSD cards as the standard boot medium for the operating system.

Analyzing the existing guides on building clusters from single-board computers, I did not find one describing a solution that satisfies my requirements. So let me begin the story of my cluster with those requirements.

Requirements


  • The cluster should be enclosed in a separate housing that can be easily moved, opened and maintained, allowing work on individual modules of the system.

Behind these requirements lies one non-obvious goal: I wanted my four-year-old daughter to be able to work with the cluster. I hope it will serve her as a kind of study aid that helps her get acquainted with computers, command shells and terminals.

Accessories


Detail | Source
Pico 5 Raspberry PI 5S Starter Kit | picocluster.com
Single-board computers: 2 Raspberry Pi 4B (4GB), 3 Raspberry Pi 3B+ |
5 microSD cards, 32GB, class 10 / A1 | raspberrypi.org
Ethernet cables: 2 x 0.25m cat. 8, 3 x 0.15m cat. 7 S/FTP | 1attack.de
PortaPow 20AWG USB cables: 2 USB-C, 3 micro-USB | portablepowersupplies.co.uk
Official Raspberry Pi 7″ Touchscreen | raspberrypi.org
Dehner Elektronik STD-12090 12V/DC 9A 108W power supply | dehner.net
12V to 5V 15A DC buck converter | droking.com
Heschen 12V 25A SPST 2-pin ON/OFF switch | heschen.com
Noctua NF-A4x20 5V PWM fan | noctua.at
Patch board (15A, 30V) with 4 PSMN011-30YLC MOSFETs | ebay.com
2 M.2 NVMe to USB 3.0 adapters, JMS58 | amazon.com
2 Samsung SSD 970 EVO Plus M.2 PCIe NVMe 500 GB |
2 USB 3.0 extension cables, 6″/152mm | usbfirewire.com
2 Delock USB 3.0 adapters, male-female (angled 270°) | delock.com
Adafruit ribbon cable for Raspberry Pi, 24″/610mm | adafruit.com
Adafruit cable extender (DSI/CSI) for Raspberry Pi | adafruit.com
Wago 221 connectors | wago.com
Lapp Unitronic cable, 300mm/1200mm, 2x0.14mm² | lappgroup.com
Lemo connectors FGG.0B.302.CLAD42 and EGG.0B.302.CLL | lemo.com
DuPont female-female jumper wires |

The PicoCluster case is compact and very easy to work with. From the starter kit I ordered, I used only the chassis, including the latches and standoffs, as well as the 8-port Gigabit Ethernet switch.

Now I understand that it would be better if PicoCluster sold a version of its kit without any electrical components. The alternative to this kit was designing my own enclosure. That, however, would have required more time: I would have had to create vector drawings and use the laser-cutting services of a company offering acrylic sheets, ideally sheets with a coating that dissipates electrostatic discharge.

In the course of the work I ran into under-voltage warnings and found out that the problem was caused by the micro-USB cables that came with the case. The case also came with a 30W 12V-to-5V step-down DC-DC converter, which I replaced with a more powerful one, for reasons I will discuss below in the section on powering the system.

At the time I started work on this project, I did not plan to use the Raspberry Pi 4 boards released in 2019. That is why I still have three Raspberry Pi 3 boards acting as worker nodes. I exchanged two other such boards for Raspberry Pi 4s; those are used for the nodes that need more resources: the main node, and the worker node that, among other things, is used for data backups.

Shortly after the release of the Raspberry Pi 4, PicoCluster launched a new chassis designed specifically for those boards. It includes a powerful power supply, two fans and a power switch. Admittedly, that case is larger than mine, and its fans are bound to be louder than the Noctua NF-A4x20, whose speed can be controlled with pulse-width modulation (PWM) based on temperature measurements taken on the boards.

It should be noted that both the Broadcom BCM2837 SoC (Raspberry Pi 3) and the BCM2711 SoC (Raspberry Pi 4) have hardware timers capable of generating PWM signals. As a result, it is very easy to produce the control signal expected by the Noctua NF-A4x20 PWM without loading the processor. Moreover, the fan is barely audible when running at half its maximum speed (about 2500 rpm), which, as it turned out, is enough to keep the system temperature below 45°C / 113°F under normal load.
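As an illustration, here is a minimal sketch of driving that hardware PWM through the Linux sysfs interface. It assumes the pwm device tree overlay is enabled and the fan sits on PWM channel 0; the 25 kHz period follows Noctua's PWM specification, but the paths and the duty value are assumptions, not my exact working configuration.

```shell
#!/bin/sh
# Sketch: drive the fan from the hardware PWM via sysfs.
# Assumes "dtoverlay=pwm" in /boot/config.txt (PWM channel 0 on GPIO 18).
PERIOD_NS=40000                           # 25 kHz period, as Noctua specifies
DUTY_PCT=50                               # half speed, about 2500 rpm
DUTY_NS=$((PERIOD_NS * DUTY_PCT / 100))   # 20000 ns

PWM=/sys/class/pwm/pwmchip0
if [ -d "$PWM" ]; then                    # only touch sysfs on real hardware
    echo 0 > "$PWM/export" 2>/dev/null || true
    echo "$PERIOD_NS" > "$PWM/pwm0/period"
    echo "$DUTY_NS"   > "$PWM/pwm0/duty_cycle"
    echo 1            > "$PWM/pwm0/enable"
fi
echo "duty cycle: $DUTY_NS ns"
```
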

The new enclosure includes an integrated power supply and, what is more, costs more than the one I chose. Even so, if I were buying a case today, I would still choose the Pico 5S kit. It takes more effort to knock it into shape, but the end result seems better to me than what I would have ended up with had I chosen a different enclosure. On the whole, it was worth the time and effort.

Assembly


▍ Front panel



Front panel

The front panel has an opening for access to the microSD cards. This is convenient even though I plan to switch to USB mass storage boot as soon as the Raspberry Pi 4 fully supports this boot mode. The opening also helps air circulate through the housing; the two SSD drives sit directly in front of the fan, which helps keep them at their optimum operating temperature.

The activity and power indicators of the boards are clearly visible, which allows you to assess the state of the cluster at a glance. The activity indicators of the Ethernet switch are also visible.


The top panel of the case without the screen

The screen can easily be moved to bring it closer to the keyboard, or to provide access to the top panel when opening the case.

▍ Left panel



Left panel

The GPIO ports of the boards face the left panel of the case. One cable connects the main node to the fan, for PWM control of its speed. Another connects the main node to the circuit-board module used to switch the worker nodes on and off. Four more cables connect each worker node to the main node; on shutdown, each worker drives a GPIO pin active-high, which allows the main node to safely cut a worker's power once its shutdown sequence has completed.
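On the worker side, this shutdown signal does not even need a custom daemon: the stock gpio-poweroff device tree overlay can raise a pin once the OS has halted. A sketch of the relevant /boot/config.txt line follows; the pin number is an assumption, not the one actually wired in my cluster.

```
# /boot/config.txt on a worker node
# Drive GPIO 26 high once the OS has finished shutting down,
# signalling the main node that it is safe to cut power.
# (The pin number here is an assumption.)
dtoverlay=gpio-poweroff,gpiopin=26,active_low=0
```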


Module used to connect the fan to the main node

In the upper left corner there is a small module that connects the 4-pin Noctua fan connector to the Raspberry Pi main node. Thanks to the hardware support for generating PWM signals, the corresponding GPIO pin is configured to control the fan speed. The speed is chosen based on the board temperatures, which are collected regularly over SSH. The main node also reads the fan's tachometer output for monitoring purposes.
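The control policy itself can be sketched as a simple mapping from the highest reported board temperature to a PWM duty cycle. The thresholds below are illustrative assumptions, not the exact values used in the cluster:

```shell
#!/bin/sh
# Map the highest board temperature (whole degrees C) to a fan duty cycle (%).
# Thresholds are assumptions; tune them against your own readings.
fan_duty() {
    if   [ "$1" -lt 40 ]; then echo 0     # cool enough: fan off
    elif [ "$1" -lt 45 ]; then echo 50    # ~2500 rpm, barely audible
    elif [ "$1" -lt 55 ]; then echo 75
    else                       echo 100   # full speed
    fi
}

# On the cluster the input would come from each node over SSH, e.g.:
#   ssh pi@worker1 vcgencmd measure_temp
fan_duty 43    # prints 50
```
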

▍ Rear panel



Rear Panel

The rear panel of the chassis conceals the cables. That everything looks this way is the result of a deliberately modular organization of my project. Finding quality cables was hard, especially short USB 3.0 cables with angled plugs. Cables are the largest group of moving parts in the system, and they must meet certain mechanical and electrical requirements; they turned out to be the main source of problems.

I had the idea of replacing the DSI ribbon cable that connects the screen to the system with a single coiled cable that would both carry the display signal and supply power. But I could not find a suitable DSI connector, and the prospect of soldering 17 wires of 0.14mm² did not appeal to me, so I abandoned the idea. Instead, I used a DSI cable extender together with a 2-wire coiled micro-USB power cable. This lets me easily move and disconnect the screen without opening the case.


Unsuccessful attempt to arrange cables

Here you can see an unsuccessful attempt to use a cable-entry system with USB 3.0 extension cables. Despite countless attempts to route the cables properly, I systematically ran into I/O errors during data transfers. They were probably caused by the cables being stretched too much during routing, or by the strain degrading the connection between the cables and the connectors.

▍ Right panel



Right panel

On the right side of the case you can see the 8-port Ethernet switch.

System design


▍ Power



Case with open front panel

I had to decide which power supply to use, and in particular how many watts it needed to deliver to meet the cluster's needs. This is a bit of a chicken-and-egg problem, since measurements cannot be made before power is applied to the cluster.

To get an approximate estimate, I turned to the official documentation, which contains some information on Raspberry Pi power consumption, as well as to the specifications of the other components. That gave me the following approximate figures based on average power consumption.


And here is what I got after taking into account the maximum power consumption values from the documentation.


This was well above what any 5V power supply I could find would deliver, since the "low-power" components alone needed a current of 10A. That ruled out using a single 5V supply. 12V supplies with the necessary power, on the other hand, are very common. In the end I chose the Dehner Elektronik STD-12090 12V/DC 9A 108W power supply and paired it with a 12V-to-5V 75W DC/DC converter.
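To make the arithmetic concrete, here is a back-of-the-envelope worst-case budget on the 5V rail. The per-device maximum currents are my assumptions based on vendor recommendations (for example, the recommended PSU capacity for each board), not the exact values from the tables above:

```shell
#!/bin/sh
# Worst-case current budget on the 5V rail, in mA.
# All per-device figures are assumptions from vendor documentation.
PI4=3000        # Raspberry Pi 4: recommended 3A supply
PI3BPLUS=2500   # Raspberry Pi 3B+: recommended 2.5A supply
SCREEN=1000     # official 7" touchscreen, rounded up
SSD=1000        # per M.2 NVMe USB adapter, rounded up
FAN=50          # Noctua NF-A4x20 5V

TOTAL_MA=$((2*PI4 + 3*PI3BPLUS + SCREEN + 2*SSD + FAN))
TOTAL_W=$((TOTAL_MA * 5 / 1000))
echo "worst case: ${TOTAL_MA} mA at 5V, about ${TOTAL_W} W"
```

Well over 10A on the 5V rail is exactly the regime where typical single 5V bricks give up, which is why a 12V supply feeding a DC/DC converter made sense here.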

Another option here was to use five less powerful converters, one per Raspberry Pi board, but that would have greatly complicated the cluster's design.


Patch board

A 4-pin patch board based on PSMN011-30YLC MOSFETs is installed in the bottom of the chassis. It is used to switch the worker nodes on and off. The MOSFETs are rated for 15A at 30V, so the board easily handles the load of four Raspberry Pis.

I measured the average and maximum power consumed by the cluster. The results roughly match my earlier estimates. The difference between expected and actual values can be explained by the system configuration and the nature of the test: in particular, Wi-Fi and Bluetooth are disabled on my Raspberry Pis, and they run without rendering anything to a display, which may explain why the real values turned out lower.

As it turned out, the 108W power supply is much more powerful than the cluster needs. On the other hand, having such a supply means I can expand the system later, for example by replacing the Raspberry Pi 3 boards with Raspberry Pi 4s.

▍ Data storage


One of the nice new features of the Raspberry Pi 4 is its two USB 3.0 ports, connected to the BCM2711 SoC over a fast PCIe link. Thanks to this, one can hope for very high data transfer rates. I decided to use these USB 3.0 ports to attach SSDs via M.2 NVMe to USB 3.0 adapters. As it turned out, however, finding suitable adapters was very difficult. I naively assumed that any adapter would do, and bought the first one I found without checking its compatibility with the Raspberry Pi 4.

Fortunately, I came across this Raspberry Pi 4 boot guide and then bought the recommended Shinestar M.2 NVMe to USB 3.0 adapter. It is almost exactly the size of an M.2 NVMe 2280 drive (22mm wide, 80mm long), which suits the Pico 5S case well. I used the adapter to connect a Samsung SSD 970 EVO Plus M.2 PCIe NVMe 500 GB to the Raspberry Pi.


M.2 NVMe adapter to USB 3.0

After installing and configuring everything, I decided to quickly test the drive and find out the data transfer speed. To do this, I copied a large file from the SSD to my laptop using scp:

$ scp pi@master:<source> <destination>
100% 1181MB 39.0MB/s 00:30


The result of 39 MB/s disappointed me; it is nowhere near enough to saturate the Gigabit Ethernet switch. Looking for the bottleneck, I noticed that one of the processor cores was pegged at 100% throughout the transfer. Once I established that the CPU was the limiting factor, I quickly found out that the Raspberry Pi 4 has no hardware AES support, because Broadcom and the Raspberry Pi Foundation did not license the ARMv8 cryptographic extensions. Interestingly, on the Raspberry Pi 4 the processor is the bottleneck, while on the Raspberry Pi 3 the bottlenecks were the USB 2.0 bus and the network interface.

A new test using netcat, under the same conditions but without encryption, gave a much better result of 104 MB/s:

$ nc -l 6000 |dd bs=1m of=<destination> & ssh pi@master "dd bs=1M if=<source> | nc -q 0 $(hostname -I | awk '{print $1}') 6000"
[1] 71300 71301
1181+1 records in
1181+1 records out
1238558304 bytes (1.2 GB, 1.2 GiB) copied, 11.8632 s, 104 MB/s
0+740624 records in
0+740624 records out
1238558304 bytes transferred in 14.212518 secs (87145593 bytes/sec)
[1]  + 71300 done       nc -l 6000 |
       71301 done       dd bs=1m of=<destination>

Having dealt with the first SSD, I connected a second identical drive to the other Raspberry Pi 4 board. I am going to use it to back up the first drive with something like Restic.

In addition, I plan to boot from the SSDs using USB mass storage boot as soon as the Raspberry Pi 4 fully supports this boot method. Compared with booting from microSD, it promises higher speed and more consistent performance.

Software


I am a programmer, so I assumed the software side of the cluster would be easier for me than everything else. The wonderful work on k3s by the Rancher team definitely helped here. I will not go into much detail about setting up a Kubernetes cluster on a Raspberry Pi with k3s; those interested can refer to this guide. Here are the main configuration points I would like to dwell on.

First, enable cgroups on each node:

$ sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
$ sudo reboot

Then install k3s on the main node:

$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
# check that the service is running
$ sudo systemctl status k3s

Now we get a token authorizing the connection of work nodes to the main node:

$ sudo cat /var/lib/rancher/k3s/server/node-token

Next - install k3s on each working node:

$ curl -sfL https://get.k3s.io | K3S_URL="https://<MASTER_IP>:6443" K3S_TOKEN="<NODE_TOKEN>" sh -
# check that the service is running
$ sudo systemctl status k3s-agent

If you plan to use the internal container image registry, which is installed by default, you may need to configure it specially so that containerd is allowed to pull images from it:

$ sudo sh -c 'REGISTRY=$(kubectl get svc -n kube-system registry -o jsonpath={.spec.clusterIP}); \
cat <<EOT >> /etc/rancher/k3s/registries.yaml
mirrors:
  "$REGISTRY":
    endpoint:
      - "http://$REGISTRY"
EOT'
$ sudo service k3s restart

Future plans


There are some important topics that I have not covered in this article, even though they deserve a fairly deep analysis. For example:

  • Software power management of work nodes and automatic cluster scaling .
  • Installing a PWM fan and adjusting its speed taking into account the temperature indicators of the system.
  • Install Pi-Hole on Kubernetes using MetalLB .

Perhaps I will write more about this.

In addition, I plan to continue working on the cluster.
I am sure I have forgotten a lot in this account of building my home Kubernetes cluster. As a programmer, I am used to iteratively breaking large tasks down into smaller ones, and experience has shown that hardware is far less forgiving of trial and error than software.

In general, the more I learn, the more I admire modern hardware and software technologies. It amazes me how human ingenuity has combined electrical phenomena and programming languages into something that can fairly be called an example of mind controlling matter.

Dear readers! Have you tried to do something similar to what the author of this article described?

