Kubernetes resource deployments

Table of contents

Kind : Deployment

Deploy a resource

Restart, delete permanently or just disable a pod

Kind : Service (ports definitions)

Examples of resources.yml

Kubectl apply : common errors

Kubernetes pods : tricks to fix some runtime errors

Others


Kind : Deployment

The definitions

The pod definition is located in spec.template.spec of the Deployment.
It is the equivalent of the PodSpec part of the Pod kind resource.
It defines many things.
Among others, it contains the child element containers that defines the containers, similarly to docker run (image, container name, volumes, etc.)

Select the node for the pod deployment

There are multiple ways of specifying the node.
All are located as children of spec.template.spec :
– nodeName
– nodeSelector
– affinity

nodeName

The most direct way to specify the node : by its name.
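A minimal sketch (foo-node is a placeholder for a real node name) :

apiVersion: apps/v1
kind: Deployment
metadata:
   # ...
spec:
  template: # pod template
    spec:
      nodeName: foo-node # the pod is bound directly to that node, bypassing the scheduler
      # ...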

Labels on nodes

Labels are everywhere in K8s (not only on nodes) but they play an important role for pod deployments configured with node selectors (the nodeSelector or affinity way), as shown just below.
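For example, to attach a label to a node and then list the node labels (foo-node and redis-data are placeholders) :

kubectl label nodes foo-node redis-data=true
kubectl get nodes --show-labels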

nodeSelector

nodeSelector (located in deployment/spec/template/spec) provides a very simple way to constrain pods to nodes with particular labels.
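A minimal sketch reusing the redis-data=true label set above :

apiVersion: apps/v1
kind: Deployment
metadata:
   # ...
spec:
  template: # pod template
    spec:
      nodeSelector:
        redis-data: "true" # the pod is scheduled only on nodes carrying that label
      # ...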

affinity

The affinity alternative (also located in deployment/spec/template/spec) greatly expands the types of constraints you can express.
We have 3 types of affinity :
– nodeAffinity
– podAffinity
– podAntiAffinity

All of these 3 affinities allow specifying either a soft constraint (preferredDuringSchedulingIgnoredDuringExecution) or a hard constraint (requiredDuringSchedulingIgnoredDuringExecution) on the node to select.
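A minimal sketch of a soft nodeAffinity constraint, reusing the redis-data label above (a complete hard-constraint example is given below in the Redis deployment) :

spec:
  template: # pod template
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: redis-data
                    operator: In
                    values:
                      - "true"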

The containers element

imagePullPolicy

– imagePullPolicy: IfNotPresent : the image is pulled only if it is not already present locally.
– imagePullPolicy: Always : every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
– imagePullPolicy omitted and either the image tag is :latest or it is omitted : Always is applied.
– imagePullPolicy omitted and the image tag is present but not :latest : IfNotPresent is applied.
– imagePullPolicy: Never : the image is assumed to exist locally. No attempt is made to pull the image.
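A minimal sketch (foo-image is a placeholder) :

spec:
  template: # pod template
    spec:
      containers:
        - name: foo-app
          image: foo-image:1.0
          imagePullPolicy: Always # always resolve the tag against the registry digest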

The pod specification

We find it as the spec element (of PodSpec type) in the Pod kind and as the spec.template.spec element in the Deployment kind.

Add additional entries in the /etc/hosts of the pod :

apiVersion: apps/v1
kind: Deployment
metadata:
   # ...
spec:
  template: # pod template
    # ...
    spec:
      hostAliases:
        - ip: "192.168.0.2"
          hostnames:
            - "foo-machine"


Deploy a resource

Deploy or update a resource

kubectl apply -f fooFile.yml or, with an inline heredoc, kubectl apply -f - << EOF yml conf ... EOF

When is a deployment updated by the apply subcommand ?

A Deployment’s rollout is triggered only if the Deployment’s Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Otherwise no update occurs.

How to force a deployment update ?

We have two ways :
1) Delete and recreate the deployment :
kubectl delete deployment foo-app
kubectl apply -f fooFile.yml
Note:
If scripted, we could add an always-true condition to the delete operation :
kubectl delete deployment foo-app || true
About the kubectl replace --force -f fooFile.yml command : it may be very helpful, but be cautious.
It deletes and recreates all objects declared in the yml file, and a namespace deletion means a deletion of all deployments inside it.
So if fooFile.yml declares a namespace shared by other deployments, the replace command will delete all of them !

2) Update a label of the deployment template.
For example, we could define a label that changes at each build. It may contain the current git commit sha1 or the current date-time.
The idea is to update it before executing kubectl apply.
For a concrete example, see « Commands to deploy the resources » below.

How to restart, delete permanently or temporarily disable a pod

restart a pod

Delete the pod with kubectl or docker :
kubectl delete pod foo-pod-1234hash
or
docker rm -f k8s_foo
The pod is deleted and a new one is created automatically because the Deployment resource is still present.

delete permanently a pod

Delete the deployment with kubectl :
kubectl delete deployment foo
Once the deployment is deleted, the pod will be deleted too.
The pod deletion is quite fast but it is not necessarily immediate.

Disable temporarily a pod

Set its replicas count to 0, with either form :
kubectl scale --replicas=0 deployment/FooDeployment or
kubectl scale --replicas=0 deployment FooDeployment
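To re-enable the pod later, scale the replicas back up :
kubectl scale --replicas=1 deployment/FooDeployment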

Kind : Service (ports definitions)

How to expose pod ports ?
By deploying a Service object matching the Deployment object of the pod.
A service can expose several types of ports.
For each of them, a yml list is expected.


Exposing a port only inside the cluster : ClusterIP port

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  # the service name matters : it is the name by which other pods of the cluster may communicate with that pod
  name: my-app
  namespace: my-apps
spec:
  type: ClusterIP
  ports:
    - name: "application-port" # port.name is just informational
      targetPort: 6379 # port of the running app
      port: 6379       # Cluster IP Port
  # do the match with the pod
  selector:
    app: my-app

Exposing a port both inside and outside the cluster : NodePort port

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  # the service name matters : it is the name by which other pods of the cluster may communicate with that pod
  name: my-app
  namespace: my-apps
spec:
  type: NodePort
  ports:
    - name: "application-port" # port.name is just informational
      targetPort: 8090 # port of the running app
      port: 8090 # Cluster IP Port
      nodePort: 30000 # External port (has to be unique in the cluster).
                      # By default Kubernetes allocates a node port from a range (default: 30000-32767)
  # do the match with the pod
  selector:
    app: my-app

How to request the port of a pod

For NodePort ports:
External clients and pods can reach it thanks to a node hostname + the NodePort.
We can specify any node hostname of the cluster since NodePorts are unique across the whole cluster.
Example : curl node01:7070

Only pods can reach it via the clusterIp + the clusterIp port.
Example : curl 10.98.32.4:7070

For ClusterIp ports inside the same namespace:
Pods reach it with service_foo:clusterIpPort
Example : curl my-service:7070

About curl :
The ClusterIp is a virtual IP that cannot be pinged. So don't validate the communication between two pods with a ping if the target pod only exposes ClusterIp port(s).

For ClusterIp ports located in a distinct namespace:
Pods reach it with service_foo.namespace_foo:clusterIpPort
Example : curl my-service.my-namespace:7070
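The fully qualified service DNS name also works (assuming the default cluster domain cluster.local) :
Example : curl my-service.my-namespace.svc.cluster.local:7070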

Examples of resources.yml

Example : deploy a foo namespace, along with a foo-app deployment and service for a Spring Boot application, and a foo-redis deployment and service for a Redis database used by the Spring Boot app

Dockerfile

Interesting things for the Dockerfile
– we create layers by kind of resource : first the libraries (they change rarely), then the app configuration (it changes rarely too and is fast to copy), and finally the app itself, the least stable of the three.
The idea is to rebuild the first layers only when needed.
– we specify 0.0.0.0 as debug host instead of localhost because the app runs under Docker and the developer who wants to debug it is not always on localhost ; it may be a remote host (the developer machine).

# version where we rely on app build done before docker build
# gitlab : useful when we have a first pipeline stage that builds the component
# syntax=docker/dockerfile:experimental
FROM adoptopenjdk/openjdk11:jdk-11.0.11_9-alpine
RUN apk update
RUN apk add dos2unix
 
RUN echo "copying app binary and libraries ..."
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
RUN ls -la /app/
RUN cat /app/application.yml
 
ENTRYPOINT ["java", "-cp", "/app:/app/lib/*", "-Djava.security.egd=file:/dev/./urandom", \
             "-Xdebug",  "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8888", \
             "davidxxx.MyApplication"]

deployment.yml or k8s resources definition

Interesting things for deployment.yml

About spring boot deployment :
– we force the (re-)deploy thanks to a templatized label defined in the pod template metadata that changes at every build.
– we use the readiness and liveness healthchecks provided by Spring Boot 2.3+.
Spring Boot exposes these two sub health actuators when it detects that the app runs on Kubernetes.
The detection relies on the existence of the « *_SERVICE_HOST » and « *_SERVICE_PORT » env variables (added by Kubernetes for any service resource deployed).
– we define a service with the NodePort type for the web app as it is designed to be used from outside the K8s cluster.
– we define service ports for both the Java app and the Java debug port.

About redis deployment :
– we define a service with the ClusterIP type for redis as it is designed to be used only by the spring boot app (so inside the K8s cluster).

apiVersion: v1
kind: Namespace
metadata:
  name: my-apps
---
# spring boot app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-docker-kubernetes-example-sboot
  namespace: my-apps
spec:
  replicas: 1
  selector: # it defines how the Deployment finds which Pods to manage.
    matchLabels: # it may have more complex rules ; here we simply select a label that is defined just below in the pod template
      app: spring-boot-docker-kubernetes-example-sboot
  template: # pod template
    metadata:
      labels:
        #Optional : add a commit sha or a date-time label to make the deployment be updated by kubectl apply even if the docker image is the same
        app: spring-boot-docker-kubernetes-example-sboot
        commitSha: __COMMIT_SHA__
        dateTime: __DATE_TIME__
        jarVersion: __JAR_VERSION__
    spec:
      containers:
        - image: registry.david.org:444/spring-boot-docker-kubernetes-example-sboot:__IMAGE_VERSION__ # the image version is templatized
          name: spring-boot-docker-kubernetes-example-sboot
          command: ["java"]
          args: ["-cp", "/app:/app/lib/*", "-Djava.security.egd=file:/dev/./urandom", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8888", "davidxxx.SpringBootAndRedisApplication"]
          imagePullPolicy: Always # useful in dev because the source code may change but that we reuse the same dev docker image
#          imagePullPolicy: IfNotPresent # prevent pulling from a docker registry
          readinessProbe:
            httpGet:
              scheme: HTTP
              # We use the default spring boot readiness health check (spring boot 2.3 or +)
              path: /actuator/health/readiness
              port: 8090
            initialDelaySeconds: 15
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              scheme: HTTP
              # We use the default spring boot liveness health check (spring boot 2.3 or +)
              path: /actuator/health/liveness
              port: 8090
            initialDelaySeconds: 15
            timeoutSeconds: 15
          # informational only (only ports declared in the service are required). It is alike the docker EXPOSE instruction
#          ports:
#            - containerPort: 8090 # most of the time it has to be the same as the service targetPort
 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-boot-docker-kubernetes-example-sboot
  # the service name matters : it is the name by which other pods of the cluster may communicate with that pod
  name: spring-boot-docker-kubernetes-example-sboot
  namespace: my-apps
spec:
  type: NodePort
  ports:
    - name: "application-port" # port.name is just informational
      targetPort: 8090 # port of the running app
      port: 8090 # Cluster IP Port
      nodePort: 30000 # External port (has to be unique in the cluster). By default Kubernetes allocates a node port from a range (default: 30000-32767)
    - name: "debug-port" # port.name is just informational
      targetPort: 8888 # port of the running app
      port: 8888 # Cluster IP Port
      nodePort: 30001 # External port (has to be unique in the cluster). By default Kubernetes allocates a node port from a range (default: 30000-32767)
  selector:
    app: spring-boot-docker-kubernetes-example-sboot
 
 
###------- REDIS---------###
# Redis app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-docker-kubernetes-example-redis
  namespace: my-apps
 
spec:
  replicas: 1
  selector: # it defines how the Deployment finds which Pods to manage.
    matchLabels: # it may have more complex rules ; here we simply select a label that is defined just below in the pod template
      app: spring-boot-docker-kubernetes-example-redis
  template: # pod template
    metadata:
      labels:
        app: spring-boot-docker-kubernetes-example-redis
        commitSha: __COMMIT_SHA__ #Optional : add a commit sha label to make the deployment be updated by kubectl apply even if the docker image is the same
        dateTime: __DATE_TIME__
    spec:
# Optional : hard constraint : force deployment on a node that has the label redis-data=true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: redis-data
                    operator: In
                    values:
                      - "true"
# alternative : soft constraint : try to deploy on a node that has the label redis-data=true
#          preferredDuringSchedulingIgnoredDuringExecution:
#            - weight: 1
#              preference:
#                matchExpressions:
#                  - key: redis-data
#                    operator: In
#                    values:
#                      - "true"
 
      containers:
        - image: redis:6.0.9-alpine3.12
          name: spring-boot-docker-kubernetes-example-redis
          # the --appendonly flag makes redis persist data into a file
          command: ["redis-server"]
          args: ["--appendonly", "yes"]
          # it is informational (only ports declared in service are required). It is alike docker EXPOSE instruction
#          ports:
#            - containerPort: 6379
#              name: foo-redis
          volumeMounts:
            - mountPath: /data
              name: redis-data
      volumes:
        - name: redis-data
          hostPath:
            path: /var/kub-volume-mounts/spring-boot-docker-kubernetes-example-redis/_data
 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-boot-docker-kubernetes-example-redis
  # the service name matters : it is the name by which other pods of the cluster may communicate with that pod
  name: spring-boot-docker-kubernetes-example-redis
  namespace: my-apps
spec:
  type: ClusterIP
  ports:
    - name: "application-port" # port.name is just informational
      targetPort: 6379 # port of the running app
      port: 6379       # Cluster IP Port
 
  selector:
    app: spring-boot-docker-kubernetes-example-redis

Commands to deploy the resources :

#!/bin/bash
set -e
COMMIT_SHA1="123456"
EPOCH_SEC="$(date +%s)"
 
jar_filename=$(ls target/*.jar)
reg='(.*)([0-9]+\.[0-9]+\.[0-9].*)(\.jar)'
[[ $jar_filename =~ $reg ]]
jar_version="${BASH_REMATCH[2]}"
echo "jar_version=$jar_version"
 
cd k8s 
rm -f deployment-dynamic.yml
sed s/__COMMIT_SHA__/"${COMMIT_SHA1}"/ deployment.yml | \
sed s/__IMAGE_VERSION__/1.0/ | \
sed s/__DATE_TIME__/${EPOCH_SEC}/ | \
sed s/__JAR_VERSION__/${jar_version}/ \
> deployment-dynamic.yml
echo "---deploy the application to K8s---"
kubectl apply -f ./deployment-dynamic.yml

Kubectl apply : common errors

The yaml has indentation issues

Symptoms :
The deploy fails with an error like :

error: error parsing ./deployment.yml: error converting YAML to JSON: yaml: line 11: did not find expected key

Solution : sometimes the reported line number doesn't match the actual yaml syntax issue to fix.
To find it, we can run yamllint on the file, which should output an error such as :

 123:21    warning  wrong indentation: expected 22 but found 20  (indentation)

Then we just fix the indentation according to the error message.
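For reference, a possible way to install and run yamllint (assuming pip is available) :

pip install yamllint
yamllint deployment.yml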

The yaml contains a string value that is convertible into a number

Symptoms :
The deploy fails with an error like :

The request is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:
....
spec:map[template:map[metadata:map[labels:map[commitSha:123456]]]]]": cannot convert int64 to string

Solution : identify the key whose value looks like a number.
Example : commitSha: 123456
And enclose the value in quotes : commitSha: "123456"

Kubernetes pods: tricks to fix some runtime errors

Deployment/pod/service/container fails after kubectl apply

Symptoms :
The pod startup looks wrong,
with event messages such as : « Back-off restarting failed container » or « CrashLoopBackOff ».

General solution : inspect its events and logs, and delete the container/pod/deployment if required.

– First : inspect the events of the namespace, sorted by creation date :
kubectl -n FOO_NS get events --sort-by=.metadata.creationTimestamp

– If not enough, inspect the logs of the failed pod :
kubectl -n FOO_NS logs FOO_POD

– If not enough, inspect the logs of the previous instance of the failed pod :
kubectl -n FOO_NS logs -p FOO_POD

Fix the issue and redeploy the pod/deployment…
If the issue persists after the redeploy (ex: two containers instead of one), remove all failing objects first and only then redeploy.

– full removal : remove the pod, the service and the associated deployment, then re-apply. Remove the deployment :
kubectl delete deployment foo
And add the deployment again :
kubectl apply -f foo.yml

– remove only the pod. In that case, Kubernetes automatically creates a new pod for the associated deployment :
kubectl delete pod foo-generatedHash

– kill the docker container of the application (not the one with the Kubernetes POD prefix, which is the pause container).
By doing that, Kubernetes automatically recreates the container for that pod :
containerId=$(kubectl describe pods foo-generatedHash | grep -o "docker://.*" | cut -d '/' -f 3)
docker rm -f "$containerId"

Fix DiskPressure issue on a Node

Symptoms :
The pod cannot be deployed on a node : it has the « evicted » status, and the logs (kubectl events and kubelet logs) show that the node has a taint/condition related to DiskPressure.

Solution : clean useless data and/or change the hard eviction configuration

– First : inspect the node disk space :
the df command may help.

– Second : check the kubelet configuration of the node and adjust it if required.
See the official doc about out-of-resource handling in K8s.

The 4 eviction signals related to the DiskPressure condition, usable in the evictionHard property :

Node condition : DiskPressure
Eviction signals : nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree
Meaning : available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold

*) To check the kubelet configuration, we can look at how the process was started on the node :
ps aux | grep kubelet
Generally, we find the kubelet startup parameters there, such as : --config=/var/lib/kubelet/config.yaml

*) To be really sure of the kubelet configuration, or to get it from the master node, we can start a kubectl proxy and request the node configuration via the api :
kubectl proxy --port=8080 &
And then :

curl "localhost:8080/api/v1/nodes/MY_NODE/proxy/configz"


*) Try a very minimal threshold for hard eviction and gc cleanup.
You can look at the example here (to add link).
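For illustration, a possible sketch of such a minimal configuration in /var/lib/kubelet/config.yaml (the threshold values are assumptions to adapt to the node) :

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# hard eviction : evict pods as soon as one of these signals crosses its threshold
evictionHard:
  nodefs.available: "5%"
  nodefs.inodesFree: "5%"
  imagefs.available: "5%"
# image gc : start cleaning unused images above 90% of disk usage, stop at 80%
imageGCHighThresholdPercent: 90
imageGCLowThresholdPercent: 80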

Others

Taints

Remove the NoSchedule taint that is by default set on master :
kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
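To check the taints currently set on a node :
kubectl describe node $(hostname) | grep -i taint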
