Pi Kube
Progress on PVC storage

I’ve been quiet, but also busy. After building and configuring my Cloudshell 2, I played around with several potential uses for it. Along the way, I ran into several issues with the local-path-provisioner in k3s. Some of these were due to my own sloppiness; I managed to create two PostgreSQL pods that shared much of the same storage, and all manner of mischief ensued.

Tonight, I got nfs-client-provisioner working. This involved setting up an NFS server on the Odroid XU4 that powers the Cloudshell2, which I’ll describe in another post.

The nfs provisioner for kubernetes requires that the nfs client be installed on all the nodes in the cluster. This turned out to be pretty easy. Ansible to the rescue:

ansible -b -m package -a 'name=nfs-common state=present' master:workers
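
If you prefer something repeatable over the ad-hoc command, the same step can be sketched as a small playbook. The group names match the command above; the filename is just a suggestion:

```yaml
# install-nfs-common.yml: install the NFS client on every node
- hosts: master:workers
  become: yes          # package installation needs root
  tasks:
    - name: Install nfs-common
      package:
        name: nfs-common
        state: present
```

Run it with ansible-playbook install-nfs-common.yml.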

The next step was to set up the provisioner’s configuration. Most of what I did here was based on the NFS section of Isaac Johnson’s post at FreshBrewed.Science. I grabbed a copy of the sample config first:

wget https://raw.githubusercontent.com/jromers/k8s-ol-howto/master/nfs-client/values-nfs-client.yaml

These values need to be changed to match your NFS server configuration, in particular the server address and the export path:

replicaCount: 1

nfs:
  server: <your-nfs-server>
  path: /storage1/pvc

storageClass:
  archiveOnDelete: false

Once that was done, it was time to deploy the provisioner. That can be accomplished via helm, using the ARM image since the cluster runs on Raspberry Pi hardware:

helm install nfs -f values-nfs-client.yaml \
    stable/nfs-client-provisioner \
    --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm

Wait a few minutes for the deploy to finish, and there we have it:

$ kubectl get storageclasses
NAME                   PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                      Delete          WaitForFirstConsumer   false                  23d
nfs-client             cluster.local/nfs-nfs-client-provisioner   Delete          Immediate              true                   57m
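
To check that dynamic provisioning actually works end to end, a throwaway claim can be pointed at the new storage class. The claim name and size here are just for illustration:

```yaml
# test-pvc.yaml: a small claim against the nfs-client storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```

Apply it with kubectl apply -f test-pvc.yaml; kubectl get pvc should show it Bound shortly afterward, and a matching directory should appear under /storage1/pvc on the NFS server.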
26 May 2020