I’ve been quiet here for a while, but I’ve been busy. There have been some changes to the cluster, and I’ve learned quite a bit about containers, Kubernetes, and some of the tools I’m using for this project. I hope to post some content about these soon.
I’ve been taking a break. I’ve found that with the restrictions on activities due to (or rather, rationalized by) COVID-19, I’m having work-life balance issues all over the place. I’m still working on the cluster, and I have some posts in the pipeline, but it’s going to be a while before I can put in the kind of time required to maintain the pace I’ve kept since March.
Maintaining a cluster of single-board computers can be a headache at times. Keeping a bunch of computers that are supposed to be configured identically actually identical is a challenge. Fortunately, there are tools for managing a cluster that make some of this easier. In this series, I’m going to discuss my home lab setup and describe some of the components I’ve built to take the headache out of running it.
I have decided to scrap Jekyll in favor of Hugo for building static sites. I’m in the process of reworking this site, so some things may still be broken. I’ve got a lot to write about: a backlog of details from my NFS client provisioner work, and plenty about Drone CI.
I’ve been quiet, but also busy. After building and configuring my CloudShell 2, I played around with several potential uses for it. Meanwhile, I ran into several issues with the local-path provisioner in k3s. Some of these were due to my own sloppiness; I managed to create two PostgreSQL pods that played in a lot of the same spaces, and all manner of mischief ensued.
I’ve finished my NAS build. It’s built around an ODROID-XU4 that I got from eBay. I replaced the fan/heatsink combination with a taller heatsink from AmeriDroid, which brought the CPU temperature down by 8°C. I picked up a pair of Seagate IronWolf 1 TB drives to use with it. All of this is housed in Hardkernel’s CloudShell 2 case for the XU4.
A closer look at the cluster: hardware and operating system. Currently, the cluster consists of five Raspberry Pi 4B systems with 4 GB of memory each. Each of these has a 16 GB microSD card and a 16 GB USB flash drive. I use the SD cards for boot only; the root filesystem lives on the flash drives.
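As a rough sketch of how a boot-SD/USB-root split like this can be set up (the device names, mount point, and cmdline.txt path here are illustrative assumptions — the exact path varies by distro, e.g. /boot/firmware on Ubuntu for the Pi):

```shell
# Hypothetical sketch: move the root filesystem to a USB flash drive,
# keeping the SD card only for boot. Assumes the USB drive shows up as
# /dev/sda and the boot partition's kernel command line lives at
# /boot/firmware/cmdline.txt — verify both on your own system first.

sudo mkfs.ext4 /dev/sda1            # format the USB drive (destroys its contents)
sudo mount /dev/sda1 /mnt
sudo rsync -axv / /mnt              # copy the current root filesystem over

blkid /dev/sda1                     # note the drive's PARTUUID for the next step

# Point root= at the USB partition (substitute the real PARTUUID):
sudo sed -i 's|root=[^ ]*|root=PARTUUID=<your-partuuid>|' /boot/firmware/cmdline.txt
```

After a reboot, `findmnt /` should show the root filesystem mounted from the USB device rather than the SD card.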
Last night, I tore down the entire stack and rebuilt it. This was to accommodate two things: the release of Ubuntu 20.04, which required an upgrade, and a hardware change, moving the root filesystem on all nodes to USB flash drives rather than keeping everything on a microSD card.
I’ve been sort-of following the series that Lee Carpenter is doing over at carpie.net, but for a while I was hung up on getting cert-manager to work. The specific failure mode was this: my external IP address (the one my cable company assigns to my router), for some reason, isn’t routed correctly from inside my home network.
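One common workaround for this kind of hairpin-NAT problem is split DNS: have the LAN’s resolver answer queries for the public hostname with the service’s internal address. A minimal sketch using dnsmasq (the hostname and internal IP below are placeholders, not values from this setup):

```shell
# Hypothetical split-DNS entry: inside the LAN, resolve the public
# hostname directly to the cluster's internal address, bypassing the
# router's external IP entirely. Substitute your own hostname and IP.
echo 'address=/www.example.com/192.168.1.50' | \
  sudo tee /etc/dnsmasq.d/split-dns.conf

sudo systemctl restart dnsmasq

# Clients using this resolver should now get the internal address:
dig +short www.example.com @127.0.0.1
```

Note that cert-manager’s HTTP-01 challenges are answered by Let’s Encrypt from the outside, so split DNS fixes only the inside-the-LAN view; the port-forward from the router still has to work for the challenge itself.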
A little side-excursion on hardware infrastructure. pi_kube is a product of my fascination with Raspberry Pi and other single-board computers. The Raspberry Pi part is pretty much standard stuff in the IT-oriented parts of the maker universe. Other SBCs, though, are interesting both in terms of their technical capabilities and the ecosystems and communities they spawn.