Volume overview
At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod.
How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.
To use a volume, the deployment resource needs to:
– specify the volumes to provide in .spec.volumes
– then specify where to mount those volumes into containers in .spec.containers[*].volumeMounts, as shown in the sketch below
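A minimal sketch of this two-step declaration, using an emptyDir volume (the simplest volume type, whose content lives and dies with the pod) and hypothetical names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: volume-demo
  namespace: my-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: volume-demo
  template:
    metadata:
      labels:
        app: volume-demo
    spec:
      containers:
      - name: volume-demo
        image: nginx:1.19.5-alpine
        volumeMounts:
        # step 2: mount the declared volume into the container
        - name: cache-volume
          mountPath: /cache
      volumes:
      # step 1: declare the volume at the pod level
      - name: cache-volume
        emptyDir: {}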
HostPath
Overview
A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
This is not something that most Pods will need, but it may be helpful.
Use cases
– running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
– running cAdvisor in a container; use a hostPath of /sys
– as a workaround to avoid a PersistentVolumeClaim, which requires more configuration
Example: a hostPath volume used as a workaround for a PersistentVolumeClaim
Important things:
– There are several hostPath types, each with an associated check. By default, no check is performed and the real type is guessed. The example below lists all the types.
– The nodeAffinity is, in practice, mandatory.
To keep data consistent across the pod restarts of that deployment, the pod must use the same volume content at each pod creation.
But a hostPath volume is not shared between nodes, so we need to bind the pod deployment to a specific node over time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-redis
  namespace: my-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo-redis
  template: # pod template
    metadata:
      labels:
        app: foo-redis
    spec:
      # mandatory
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: redis-data
                operator: In
                values:
                - "true"
      containers:
      - image: redis:6.0.9-alpine3.12
        name: foo-redis
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        volumeMounts:
        - mountPath: /data
          name: redis-data
      volumes:
      - name: redis-data
        hostPath:
          path: /var/kub-volume-mounts/foo-redis/_data
          # default value: nothing. It means no check is performed
          type: DirectoryOrCreate # if the directory doesn't exist, an empty directory is created with permissions set to 0755 and the same group and ownership as the kubelet
          # type: Directory     # directory must exist at the given path
          # type: FileOrCreate  # if the file doesn't exist, an empty file is created with permissions set to 0644 and the same group and ownership as the kubelet
          # type: File          # file must exist at the given path
          # type: Socket        # UNIX socket must exist at the given path
          # type: CharDevice    # character device must exist at the given path
          # type: BlockDevice   # block device must exist at the given path
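For the nodeAffinity above to match, the chosen node must carry the redis-data=true label. Assuming a node named my-node (hypothetical name), we can label it like this:

kubectl label nodes my-node redis-data=true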
Local
Overview
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume; dynamic provisioning is not supported.
A local volume is actually conceptually very close to a hostPath volume used as a workaround for a PersistentVolumeClaim.
Indeed, in both cases, node affinity is needed.
With hostPath volumes, the node affinity (optional) is defined in the deployment, while with local volumes, the node affinity (mandatory) is defined on the PersistentVolume.
Use cases
Define a static persistent volume with more options than hostPath provides.
Example
We define a PersistentVolume named redis-local-volume in Filesystem mode, with the ReadWriteOnce access mode and a capacity of 200Mi.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-local-volume # PersistentVolume doesn't have a namespace notion
spec:
  capacity:
    #storage: 1Gi
    storage: 200Mi
  # possible values: Filesystem, Block
  volumeMode: Filesystem
  # possible values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
  accessModes:
  - ReadWriteOnce
  # possible values: Retain, Recycle, Delete
  # - Retain: when the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released", but it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume.
  # - Recycle: performs a basic scrub on the volume and makes it available again for a new claim (deprecated)
  # - Delete: deletes the volume when the claim is deleted
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/kub-volume-mounts/redis/_data-localVolume
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: redis-data
          operator: In
          values:
          - "true"
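Note that, unlike hostPath with the DirectoryOrCreate type, a local volume does not create the directory: the path must already exist on the target node, for example (to run on that node):

mkdir -p /var/kub-volume-mounts/redis/_data-localVolume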
Then we deploy it:
kubectl apply -f localVolume.yml
We can see that the volume is created and available:
kubectl get persistentvolume redis-local-volume
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
redis-local-volume   200Mi      RWO            Retain           Available           local-storage            2m42s
To bind a PersistentVolume to a pod/container, we need to define:
– a PersistentVolumeClaim bound to that PersistentVolume
– a volume in .spec.volumes that references the PersistentVolumeClaim
– a volumeMounts entry in .spec.containers[*].volumeMounts that references that volume and defines where to mount it in the container
The PersistentVolumeClaim + the Redis application deployment:
These two resource declarations don’t need to be in the same file.
For a PV with the ReadWriteOnce access mode, it makes sense to gather the PersistentVolumeClaim and the Deployment, while for a PV with the ReadWriteMany or ReadOnlyMany access modes, we would rather gather the PersistentVolume and the PersistentVolumeClaim, as these are not specific to a single deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-local-pv-claim
  namespace: my-apps
spec:
  storageClassName: local-storage
  volumeName: redis-local-volume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-local-volume
  namespace: my-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-local-volume
  template: # pod template
    metadata:
      labels:
        app: redis-local-volume
    spec:
      containers:
      - image: redis:6.0.9-alpine3.12
        name: redis-local-volume
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        volumeMounts:
        - name: redis-local-volume
          mountPath: /data
      volumes:
      # reference the PersistentVolumeClaim by its name
      - name: redis-local-volume
        persistentVolumeClaim:
          claimName: redis-local-pv-claim
We create the resources:
kubectl apply -f k8s/redis-LocalVolume.yml
As a consequence, the PV and the PVC are now bound and the deployment/pod (the Redis app) can use the volume.
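We can check the binding by listing both resources again; the STATUS columns should now show Bound, and the PV's CLAIM column should reference my-apps/redis-local-pv-claim:

kubectl get persistentvolume redis-local-volume
kubectl -n my-apps get persistentvolumeclaim redis-local-pv-claim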
PersistentVolume, PersistentVolumeClaim and Deployment problems
The order of their creation may matter.
For example, fixing a PV or a PVC while the deployment already exists may leave the pod unaware that the PV or the PVC was fixed.
Generally, we can solve the issue by deleting the pod (or the whole deployment if that is not enough) and recreating it afterwards.
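For example, with the deployment above, a pod recreation can be forced like this (kubectl rollout restart recreates the pods without changing the deployment definition):

kubectl -n my-apps rollout restart deployment redis-local-volume

If that is not enough, delete the deployment itself and apply the manifest again:

kubectl -n my-apps delete deployment redis-local-volume
kubectl apply -f k8s/redis-LocalVolume.yml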
PersistentVolume and PersistentVolumeClaim resources update and delete constraints
These two resources are stricter than deployments, services or namespaces in terms of updating.
PersistentVolumeClaim or PersistentVolume resource update: forbidden
Commands:
kubectl apply -f foo-resource
with an existing PersistentVolume or PersistentVolumeClaim.
Issue:
The PersistentVolumeClaim "redis-local-pv-claim" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Or
The PersistentVolume "redis-local-volume" is invalid: nodeAffinity: Invalid value: core.VolumeNodeAffinity{Required:(*core.NodeSelector)(0xc005dcb040)}: field is immutable
Cause:
PersistentVolume and PersistentVolumeClaim resources are immutable after creation, except resources.requests for bound claims.
Solution:
Delete the PersistentVolume or the PersistentVolumeClaim resource, for example:
kubectl -n my-apps delete persistentvolumeclaims redis-local-pv-claim
and recreate it from scratch:
kubectl apply -f k8s/redis-LocalVolume.yml
PersistentVolume and PersistentVolumeClaim resource deletion: may hang if the resource is in the Available or Bound state (consider working with the Released state)
Commands:
kubectl delete persistentvolume redis-local-volume
or
kubectl -n my-apps delete persistentvolumeclaims redis-local-pv-claim
Issue:
The commands may hang and never return.
After interrupting the command, listing the persistentvolume or persistentvolumeclaims shows that both are no longer in the Bound state but in the Terminating state.
Cause:
The pv-protection and pvc-protection finalizers are executed, but they never finish.
To see them:
kubectl describe persistentvolume redis-local-volume
kubectl -n my-apps describe persistentvolumeclaims redis-local-pv-claim
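In the describe output, the finalizers appear on lines similar to these (illustrative excerpts):

Finalizers:    [kubernetes.io/pv-protection]
Finalizers:    [kubernetes.io/pvc-protection]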
Solution:
Remove the kubernetes.io/pvc-protection finalizer from the PVC.
kubectl -n my-apps patch pvc redis-local-pv-claim -p '{"metadata":{"finalizers": []}}' --type=merge
Output:
persistentvolumeclaim/redis-local-pv-claim patched
This makes the PVC effectively terminate, along with the PV.
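If the PV itself also stays stuck in the Terminating state, the same kind of patch can be applied to it:

kubectl patch persistentvolume redis-local-volume -p '{"metadata":{"finalizers": []}}' --type=merge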