GitLab pipelines with a Python application

Define a stage that creates a Docker image for the Python application and runs the Python tests in a container of that image

This approach is more verbose than executing the Python tests directly, without creating a Docker image for the application.
But it has multiple advantages:
– users may execute the tests directly with Docker, since the image is available outside GitLab CI.
– we can add further stages that reuse the Docker image we have created: for example, we can launch integration tests with this image, or use it to deploy the application on target environments.
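For example, once the image exists, a developer can reproduce the CI test run locally. This is a sketch assuming the `foo_application:1.0` tag and the `./tests` directory used in the rest of this article:

```shell
# Build the image locally, exactly as the CI stage does
docker build -f ./docker/Dockerfile --tag foo_application:1.0 .
# Run the test suite in a throwaway container of that image
docker run --rm foo_application:1.0 python -m pytest ./tests
```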

Define a Dockerfile that packages the application with all required dependencies

Some remarks:
– Depending on the Python modules our application uses, we may encounter errors at runtime during Python execution.
These errors are caused by OS dependencies that are present in a conventional OS but missing from the Docker base image.
For example, if our application depends on OpenCV, we can get this kind of error at runtime:

ImportError: libGL.so.1: cannot open shared object file: No such file or directory

To fix this problem, we install, in the first steps of our Dockerfile, the OS dependencies that may be missing from the Docker base image.
– pip does not like installing Python dependencies as the root user (it generally emits a warning if we do), so we create a dedicated user, perform the Dockerfile steps on behalf of this user, and store the application content and dependencies inside this user's home directory.
– Concerning the download of dependencies, there are multiple ways to address it.
In our case, we do it in a very simple way: we copy the requirements text file and install the referenced dependencies before copying the whole application.
That way, Docker's layer cache ensures the dependencies are downloaded again only when the requirements file changes.

Here is the Dockerfile:

FROM python:3.9.14
# Error during Python execution related to OpenCV:
# ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# Solution: install the OS dependencies of OpenCV that may be missing
RUN apt-get update && apt-get install -y ffmpeg libsm6 libxext6
# Create a non-root user and perform the remaining steps on its behalf
RUN useradd -m python_user
USER python_user
RUN mkdir /home/python_user/foo_application
WORKDIR /home/python_user/foo_application
# Copy the requirements file alone first, so that the pip install layer
# is rebuilt only when the requirements change
COPY --chown=python_user:python_user ./requirements_linux.txt ./requirements_linux.txt
RUN python -m pip install --user -r requirements_linux.txt
COPY --chown=python_user:python_user . .

Define the pipeline stage in GitLab CI that builds the Docker image and runs a container of it to execute the tests

Some remarks:
– We rely on the GitLab JUnit report feature, which analyzes a JUnit report (an XML file) and produces a report, viewable in the GitLab web interface, that lists successful and failing tests. The result of the stage itself (success or failure) is determined separately, by the exit codes of the script steps.
– That’s why we execute the tests in multiple steps: first we run the container, then we execute the tests in this container, after that we record through its exit code whether pytest succeeded or failed, and finally we copy the test report from the container to the GitLab stage’s working directory.
The trick with the exit code returned by pytest is required because GitLab does not analyze the JUnit report file to decide whether the stage is successful or failing: it relies on the exit code of each step.
Without this trick, the steps after docker exec would never be executed, so the report would never be collected by GitLab.
– Note that we don’t need the cache feature provided by GitLab to avoid downloading the pip dependencies at each build. Remember, the pip dependencies are downloaded not inside the GitLab stage but inside the docker build, so the GitLab cache would not help here.
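The exit-code trick itself is plain shell and can be sketched independently of Docker; here a `false` command stands in for the docker exec / pytest step that may fail:

```shell
# Sketch of the exit-code capture used in the pipeline script.
# 'false' stands in for the 'docker exec ... pytest ...' step.
is_test_execution_success=0
false || is_test_execution_success=1   # record the failure, keep the script going
echo "copying the report..."           # post-processing still runs after a failure
echo "final status: $is_test_execution_success"
```

The script only propagates the recorded status at its very end, after the report has been copied, so GitLab sees the test failure without losing the report artifact.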
The pipeline definition:

stages:
  - python-test
 
python test:
  stage: python-test
  rules:
    - when: on_success
  variables:
    CI_DEBUG_TRACE: "false"
  tags:
    - docker
#  cache:
#    paths:
#      - /home/python_user/.cache/pip/
  script:
    - docker build -f ./docker/Dockerfile --tag foo_application:1.0 .
    - docker rm -f foo_application_test || true
    - docker run --name foo_application_test -d -ti --entrypoint bash foo_application:1.0
    - is_test_execution_success=0
    - docker exec foo_application_test python -m pytest ./tests --junitxml=report.xml || is_test_execution_success=1
    - docker cp foo_application_test:/home/python_user/foo_application/report.xml .
    - exit $is_test_execution_success
 
  artifacts:
    when: always
    paths:
      - report.xml
    reports:
      junit: report.xml
    expire_in: 2 weeks