It’s been a while since I’ve posted, and there have been a lot of changes in the lab. The cluster has grown, and in the process it has been reshaped several times. I’ll try to cover all of what’s happened in this post. This project has always been about learning, and I’ve learned a lot.
Pi-Kube consists of a cluster of 10 Raspberry Pi 4B single-board computers, housed in a Pico 10H cluster case. I went with the Picocluster case after struggling to find a good power supply solution for multiple Raspberry Pi boards. Without doing a full review, I’ll just say that the Pico 10H has been mostly satisfactory: I’ve had no power issues with a full cluster of 4B boards, and the unit is extremely quiet while providing more than adequate cooling.
The only technical downside I’ve encountered is with the included 8-port switches (the 10H includes two of these), which seem to drop enough packets to make PXE booting the cluster problematic; 2-3 boards would consistently fail to boot every time I cycled power. I ultimately switched to a 16-port TP-Link switch to resolve this, and later abandoned network booting entirely when I switched to k3os.
The non-technical downside of the Pico 10H is the price, which puts it out of reach for most single-board computer projects. The best I can say is that you’re paying for the engineering, and the engineering is mostly solid.
The only other component of the cluster, aside from the aforementioned switch, is a NAS consisting of an Odroid XU-4 board in a Cloudshell 2 case, with two 4TB drives installed. This has been a reliable little storage appliance for me, but unfortunately it’s made up of largely discontinued or out-of-stock components. Based on my experience, I’d still recommend Hardkernel components for a NAS project. Having said that, I’d suggest you look at a dedicated NAS from Synology as well; I use a DiskStation DS220j unit for other projects in my homelab, and I’m very happy with it.
In the next post, I’ll discuss the software I’m currently using to run the cluster, and talk about how I got there.