I’ve been quiet, but also busy. After building and configuring my Cloudshell 2, I played around with several potential uses for it. Meanwhile, I ran into several issues with the local-path-provisioner in k3s. Some of these were due to my own sloppiness: I managed to create two PostgreSQL pods that played in a lot of the same spaces, and all manner of mischief ensued.
Tonight, I got nfs-client-provisioner working. This involved setting up an NFS server on the Odroid XU4 that powers the Cloudshell 2, which I’ll describe in another post.
The NFS provisioner for Kubernetes requires that the NFS client be installed on all the nodes in the cluster. This turned out to be pretty easy. Ansible to the rescue:
ansible -m package -a 'name=nfs-common' master:workers
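If your Ansible user isn’t root, add -b so the package install can escalate privileges. As a quick sanity check that the client tools landed and the export is visible, showmount (which ships in nfs-common) can query the server from every node; the server address here is the one from the values file below:

# list the NFS server's exports from every node
ansible -m command -a 'showmount -e 10.0.96.30' master:workers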
The next step was to set up the provisioner’s configuration. Most of what I did here was based on the NFS section of Isaac Johnson’s post at FreshBrewed.Science. I grabbed a copy of the sample config first:
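Something along these lines dumps the chart’s default values into the file edited below (Helm 2 spells it helm inspect values):

# save the chart's default values for editing
helm show values stable/nfs-client-provisioner > values-nfs-client.yaml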
These values need to be changed to match your NFS server configuration; note that archiveOnDelete: false tells the provisioner to delete a freed volume’s directory outright instead of archiving it on the server.
replicaCount: 1
nfs:
  server: 10.0.96.30
  path: /storage1/pvc
  mountOptions:
storageClass:
  archiveOnDelete: false
Once that was done, it was time to deploy the client. That can be accomplished via helm:
helm install nfs -f values-nfs-client.yaml \
  stable/nfs-client-provisioner \
  --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm
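If you’d rather watch the rollout than just wait, the stable chart labels its pod with the app name, so something like this (label assumed from the chart defaults) shows when the provisioner is ready:

# watch the provisioner pod come up
kubectl get pods -l app=nfs-client-provisioner -w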
Wait a few minutes for the deploy to finish, and there we have it:
$ kubectl get storageclasses
NAME                   PROVISIONER                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                       Delete          WaitForFirstConsumer   false                  23d
nfs-client             cluster.local/nfs-nfs-client-provisioner    Delete          Immediate               true                   57m
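To take the new storage class for a spin, a minimal PersistentVolumeClaim only has to name it; the claim below is my own illustration, not part of the original setup:

# nfs-test-pvc.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  storageClassName: nfs-client   # the class created by the provisioner above
  accessModes:
    - ReadWriteMany              # NFS volumes can be shared read-write
  resources:
    requests:
      storage: 1Gi

Since nfs-client binds immediately (note the Immediate in the output above), kubectl apply -f nfs-test-pvc.yaml should leave the claim Bound within seconds, with a matching directory under /storage1/pvc on the server.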