Progress on pvc storage


I’ve been quiet, but also busy. After building and configuring my CloudShell 2, I played around with several potential uses for it. Meanwhile, I ran into several issues with local-path-provisioner in k3s. Some of these were due to my own sloppiness; I managed to create two PostgreSQL pods that wrote to the same storage paths, and all manner of mischief ensued.

Tonight, I got nfs-client-provisioner working. This involved setting up an NFS server on the ODroid XU4 that powers the CloudShell 2, which I’ll describe in another post.

The NFS provisioner for Kubernetes requires the NFS client to be installed on all the nodes in the cluster. This turned out to be pretty easy. Ansible to the rescue:

ansible -b -m package -a 'name=nfs-common' master:workers
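(The -b flag asks Ansible to escalate privileges, since installing packages needs root.) For reference, here’s a minimal sketch of an inventory that would satisfy that master:workers host pattern; the group names come from the command above, but the hostnames are hypothetical stand-ins for your own nodes:

# Hypothetical /etc/ansible/hosts entries; substitute your own node names
[master]
k3s-master

[workers]
k3s-node1
k3s-node2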

The next step was to set up the provisioner’s configuration. Most of what I did here was based on the NFS section of Isaac Johnson’s post at FreshBrewed.Science. I grabbed a copy of the sample config first:

wget https://raw.githubusercontent.com/jromers/k8s-ol-howto/master/nfs-client/values-nfs-client.yaml

These values need to be changed to match your NFS server’s address and export path:

replicaCount: 1

nfs:
  server: 10.0.96.30
  path: /storage1/pvc
  mountOptions:

storageClass:
  archiveOnDelete: false
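Before deploying, it’s worth a quick sanity check from one of the nodes that the export is actually reachable. These commands assume the server address and path from the config above, and a scratch mount point at /mnt:

# List the exports offered by the NFS server
showmount -e 10.0.96.30

# Test-mount the export by hand, then unmount it
sudo mount -t nfs 10.0.96.30:/storage1/pvc /mnt
sudo umount /mnt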

Once that was done, it was time to deploy the client. That can be accomplished via helm:

helm install nfs -f values-nfs-client.yaml \
    stable/nfs-client-provisioner \
    --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm
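While that rolls out, you can watch for the provisioner pod to come up. The label selector below is an assumption based on how the stable chart typically labels its pods; adjust it if your chart version differs:

kubectl get pods -w -l app=nfs-client-provisioner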

Wait a few minutes for the deploy to finish, and there we have it:

$ kubectl get storageclasses
NAME                   PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                      Delete          WaitForFirstConsumer   false                  23d
nfs-client             cluster.local/nfs-nfs-client-provisioner   Delete          Immediate              true                   57m
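To put the new storage class to work, a PersistentVolumeClaim just has to name it. Here’s a minimal sketch; the claim name and size are arbitrary placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim          # hypothetical name
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany             # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi

Once the claim binds, the provisioner should create a matching subdirectory under /storage1/pvc on the server.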

ODroid XU4 file server (CloudShell 2)

I’ve finished my NAS build. It’s built around an ODroid XU4 that I got from eBay. I replaced the fan/heatsink combination with a taller heatsink from AmeriDroid, which brought the CPU temperature down by 8°C. I picked up a pair of Seagate IronWolf 1TB drives to use with it. All of this is housed in HardKernel’s CloudShell 2 for XU4 case. The CloudShell 2 can support RAID 0, RAID 1, spanned, or separate volumes; I went with RAID 1 out of paranoia. It’s running an ODroid-provided Ubuntu Bionic (18.04) minimal system with Samba and Gluster installed.

I didn’t much care for the display software provided by HardKernel, and there was some discussion about how those scripts are CPU-intensive. I tried a few things from scratch, but in the end I settled on Dave Burkhardt’s nmon wrapper scripts for the CloudShell 2 display, and I’m happy with the results.

For now, I’m using it mainly as a shared drive for the family.
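The Samba side of that is nothing exotic. A share stanza along these lines in /etc/samba/smb.conf would do it; the share name, path, and group here are illustrative, not the exact ones I’m using:

[family]
   comment = Family shared drive
   path = /storage1/family
   browseable = yes
   read only = no
   valid users = @family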