Recent changes in Docker’s terms of service introduce rate limits on image pulls from Docker Hub for both anonymous and authenticated (but non-paying) users. While the limits themselves are reasonably generous for casual users, they can pose a problem for those running clusters and/or doing heavy development work. One way to get around this (and also be a good citizen and not overuse Docker’s free services) is to run a pull-through registry cache.
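As a rough sketch of the idea (the port and paths below are illustrative, not a prescription), the open-source registry image can run as a pull-through cache with a small config file:

```yaml
# config.yml for a pull-through cache in front of Docker Hub
version: 0.1
proxy:
  remoteurl: https://registry-1.docker.io   # upstream registry to mirror
storage:
  filesystem:
    rootdirectory: /var/lib/registry        # where cached layers are kept
http:
  addr: :5000
```

Run it with something like `docker run -d -p 5000:5000 -v $PWD/config.yml:/etc/docker/registry/config.yml registry:2`, then point each Docker daemon at the cache by listing it under `registry-mirrors` in `/etc/docker/daemon.json`.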
In my current home network, I have two independent Kubernetes clusters running, plus several infrastructure servers. I want to expose some of the services running on the clusters to the internet, so I needed a solution that would let me balance requests across the ingress points for each cluster and give me the flexibility to run services on stand-alone servers if necessary.
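To make the shape of that concrete, here’s a minimal sketch of one way to do it, using HAProxy in TCP mode; the backend names and addresses are placeholders, not my actual topology:

```
# haproxy.cfg sketch: spread HTTPS across both clusters' ingress points,
# with a stand-alone server available as a backup (placeholder addresses)
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https_in
    bind *:443
    default_backend k8s_ingress

backend k8s_ingress
    balance roundrobin
    server cluster1    192.168.1.20:443 check
    server cluster2    192.168.2.20:443 check
    server standalone  192.168.1.50:443 check backup
```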
Although I’ve been quiet here for a while, I’ve been busy. There have been some changes to the cluster, and I’ve learned quite a bit about containers, Kubernetes, and some of the tools I’m using for this project. I hope to post some content about all of this soon.
I’ve been taking a break. I’ve found that with the restrictions on activities due to (or rather, rationalized by) COVID-19, I’m having work-life balance issues all over the place. I’m still working on the cluster, and I have some posts in the pipeline, but it’s going to be a while before I can put in the kind of time required to maintain the pace I’ve kept since March.
Maintaining a cluster of single-board computers can be a headache at times. Keeping a group of machines that should all be configured identically actually identical is a challenge. Fortunately, there are tools for managing your cluster that make some of it easier. In this series, I’m going to discuss my home lab setup, and describe some of the components I’ve built to take some of the headache out of running it.
I have decided to scrap Jekyll in favor of Hugo for building static sites. I’m in the process of reworking this site, so some things may still be broken. I’ve got a lot to write about: a backlog of details from wrapping up my NFS client provisioner work, and plenty about Drone CI.
I’ve been quiet, but also busy. After building and configuring my CloudShell 2, I played around with several potential uses for it. Meanwhile, I ran into several issues with the local-path provisioner in k3s. Some of these were due to my own sloppiness; I managed to create two PostgreSQL pods that played in a lot of the same spaces, and all manner of mischief ensued.
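For context on the failure mode: k3s’s local-path storage class backs each claim with a directory on a single node’s disk and only supports ReadWriteOnce, so exactly one pod should own a claim like the (illustrative) one below. Pointing two PostgreSQL pods at the same claim, or the same underlying directory, is precisely how the mischief starts.

```yaml
# Illustrative claim against the k3s local-path provisioner.
# The volume is a directory on one node's local disk, ReadWriteOnce:
# it belongs to a single pod, not two.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```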
I’ve finished my NAS build. It’s built around an ODroid XU4 that I got from eBay. I replaced the fan/heatsink combination with a taller heatsink from AmeriDroid, which brought the CPU temperature down by 8°C. I picked up a pair of Seagate IronWolf 1TB drives to use with it. All of this is housed in HardKernel’s CloudShell 2 case for the XU4.
A closer look at the cluster’s hardware and operating system: currently, the cluster consists of 5 Raspberry Pi 4B systems with 4GB of memory each. Each of these has a 16GB micro SD card and a 16GB USB flash drive. I use the SD cards for boot only; the root filesystem lives on the flash drives.
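The split looks roughly like this (a sketch for Ubuntu on the Pi 4; the device names are illustrative, and in practice a PARTUUID or LABEL is safer than a raw device path): the SD card’s firmware partition still supplies the kernel and cmdline.txt, but root= points at the flash drive.

```
# /boot/firmware/cmdline.txt on the SD card (single line; sketch only)
root=/dev/sda2 rootfstype=ext4 rootwait fixrtc

# /etc/fstab on the USB root filesystem (sketch)
/dev/sda2       /               ext4  defaults  0  1
/dev/mmcblk0p1  /boot/firmware  vfat  defaults  0  1
```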
Last night, I tore down the entire stack and rebuilt it. This was to accommodate two things: the release of Ubuntu 20.04, which required an upgrade, and a switch to USB flash drives for the root filesystem on all nodes, rather than just a micro SD card.