Angular on Docker

Docker choices for fundamental things to address

Here are the fundamental things to address when we want to run an Angular application on Docker:

– The package manager for JavaScript libraries
– How to build and run the Angular application

Here are my choices:
– The package manager: there are several options, but to keep things simple we will use npm, the default package manager that ships with Node.js.

– How to build and run the Angular application:
In recent Angular versions, the module bundler Webpack is used to build a packaged bundle of our application, whatever way we build/run it with the Angular CLI (indeed, both ng build and ng serve use Webpack under the hood).
But there is still a choice: using ng serve to run the application in memory, or using ng build to first build it into files/folders and host these files in an HTTP server.
In Docker, we will follow the same flow that we would have followed without it: ng serve in the dev environment, and the built application (static resources) hosted in an HTTP server in the other environments.
For the HTTP server: nginx and httpd are both reasonable choices; here I will use httpd.

As you can guess, we will define Docker build and run strategies according to the constraints of the target environment.

Our illustrative use case

We have one Angular application, foo, to build and run on Docker. We need to make it usable in three environments: local development, integration, and production. The local development environment is very different from the others in terms of needs and constraints, while the integration and production environments are very similar in these respects. A rather common situation.

The docker-compose file and the Dockerfile in integration/production-like environments

Make the image build vary according to the target environment

Angular implements the concept of environments. Some default files are present in src/environments, and we can update them or add others to customize the values of some properties/fields according to the target environment.
So we just need a way to pass the correct value when the application is built, such as: ng build --env=myenv.
On the Docker side, we need to feed in the value of that parameter. We do it by declaring ARG ENV in the Dockerfile and by defining, in each docker-compose template, the build args with the expected value. We can then use it in a Dockerfile instruction such as: ng build --env=$ENV
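
For illustration, the build section of such a docker-compose service could look like this (the service name, the file locations and the integration value are just assumptions for the example):

version: "3.7"
services:
  foo:
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        ENV: integration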

Use a multi-stage build

From a high-level view, we want to define a multi-stage build: a first stage, based on Node, that builds the application with Node and Angular CLI commands, and a second stage, based on httpd, that copies the built application from the first stage into the HTTP server.
It makes the final image smaller and it also improves the readability of the Dockerfile instructions.
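
As a sketch, the shape of such a Dockerfile is roughly the following (the full version is given further below):

FROM node:12.13.1-stretch as angular_build
# ... install the dependencies and run ng build here ...

FROM httpd:2.4
COPY --from=angular_build /app/build/ /usr/local/apache2/htdocs/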

Rely on cache during the image build

During the first stage (building the application), we don't want to download all the node_modules every time the package.json file is modified: that takes far too long. To avoid it, we use BuildKit (Docker's experimental builder) to cache the downloaded modules in the Docker builder area.
But because, with BuildKit, cached resources are available only during the instruction that declares the cache mount and not afterwards, we need to explicitly copy the cached resources somewhere into the image during that instruction, to be able to use them later in the build.
Another alternative is chaining multiple statements inside a single Docker instruction, such as RUN foo && bar && foobar. It is more concise, but it has the drawback of producing a single Docker image layer, which means all the chained statements would be re-executed whatever has changed. That is not very efficient.
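
Concretely, here is the pattern used in the Dockerfile below: the cache mount speeds up npm install across builds, and the explicit cp puts the modules into a regular image layer so that later instructions can use them.

RUN --mount=type=cache,target=/app-temp/node_modules \
    cd /app-temp && npm install --loglevel verbose && \
    cp -r /app-temp/node_modules /app/node_modules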

Copy into the httpd image a configuration suitable for a hosted SPA

That step allows us to define index.html as the default page for requested URLs that should not be considered as requests to the backends.
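
The configuration files copied from docker/apache2-conf/conf are not reproduced here; a minimal sketch of the SPA-related part, assuming the default htdocs document root and mod_dir enabled (as in the official httpd image), could be:

# Serve index.html for any requested path that does not match an existing file,
# so that the Angular router can handle the URL on the client side
FallbackResource /index.html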

The Dockerfile

# syntax=docker/dockerfile:experimental
 
########################################
FROM node:12.13.1-stretch as angular_build
########################################
ARG ENV
 
#1) Install ng-client globally 
RUN npm install --global @angular/cli@1.7.4
 
#2) Download the module dependencies and cache them
RUN mkdir /app
COPY ./package.json /app-temp/package.json
RUN --mount=type=cache,target=/app-temp/node_modules \
    cd /app-temp && npm install --loglevel verbose && \
    cp -r /app-temp/node_modules /app/node_modules
# Check that the dependencies really ended up in the image
RUN test -d /app/node_modules/@angular
 
WORKDIR /app
 
#3) Copy all sources files 
COPY ./src ./src
COPY *.json ./
 
#4) Build the app package
RUN ng build --env=$ENV
 
####################
FROM httpd:2.4
####################
 
# Minimal tooling to debug/adjust inside the container
RUN apt-get update && \
    apt-get install -y less procps vim iputils-ping net-tools wget
 
ARG APACHE_BASE_PATH=/usr/local/apache2
COPY --from=angular_build /app/build/ $APACHE_BASE_PATH/htdocs/
COPY docker/apache2-conf/conf $APACHE_BASE_PATH/conf

The Dockerfile in the local/dev environment

A distinct Dockerfile because of distinct requirements

The target is really different here compared to the integration-like environments. We don't need an httpd server, because the Angular CLI alone is enough, and the final image size matters much less. What matters here is making the development cycles as fast as possible.

Use a single-stage build

Here there is no readability issue in using a build with a single stage. Besides, the image size doesn't matter. So we keep it as simple as possible.

Mount a volume from the host with the whole folder of the Angular application

Mounting the source code (the src folder that contains the JS and HTML sources of our application) is, in theory, the only thing we need. But in fact, we cannot mount only that part, because volume mounting happens at runtime, not at build time.
At build time, the node_modules download (npm install) can only be done inside a folder of the image. So when the application starts with ng serve on a mounted volume of the source code, it could only work if we could point the application to the node_modules location inside the built image. But we cannot do that, because Node was not designed that way: the node_modules folder associated with a package.json must be located next to that package.json, at the same level.
It means that we need to mount the whole folder containing the application sources. That way, when the container starts, we "just" have to copy the node_modules created during the build to the root of the volume mounted from the host.
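
For example, a dev docker-compose service for this setup could look like the following (the service name, the Dockerfile location and the published port are assumptions for the illustration):

version: "3.7"
services:
  foo:
    build:
      context: .
      dockerfile: docker/dev/Dockerfile
    volumes:
      # mount the whole application folder so that node_modules can sit next to package.json
      - ./:/app
    ports:
      - "4200:4200"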

Rely on cache during the image build

The one point in common with the previous strategy is the use of the cache: we never want to download the same dependency twice, in any case. And as for the integration-like environments, we still have to cope with the BuildKit cache limitation of being restricted to the current instruction.
Here it may look even uglier, because we copy the cached modules not once but twice to make them available at runtime. That additional copy happens in the CMD instruction, because we need to add the JS dependency modules to the volume mounted from the host, and that volume only exists at runtime, when the container starts.

Watching source code modifications to update the application on the fly

That is not specific to Docker, but I added something to the Dockerfile concerning the watching of source code changes, in order to make sure that it works in any case.
Webpack has a --watch flag to enable it. With the Angular CLI, which uses Webpack under the hood, we don't even need to specify that flag anywhere: the CLI enables it for us by default.
But in fact, things are not so simple. On Linux, file watching relies on inotify, a Linux kernel subsystem that extends filesystems to report changes. Its default configuration may make the Webpack watch fail.
Indeed, the default value of /proc/sys/fs/inotify/max_user_watches is 8192 in many OS distributions, which may not be enough for Webpack given the size of your application's source tree.
To fix it locally (outside Docker), we can update that value like this:

sudo sysctl fs.inotify.max_user_watches=anotherValue
sudo sysctl -p --system

Unfortunately, doing that in a Dockerfile generates an error, since kernel parameters are by design protected against changes during an image build.
A Docker container may be run with the --privileged flag, but that is not possible during the build.
As a workaround, we can use the --poll flag of the ng command to enable a polling-based file watching that does not rely on the native filesystem features. --poll takes a value: the polling interval, in milliseconds.
Note that the flag was not initially designed for this requirement, but it is quite helpful in this case.
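
In the Dockerfile below, this ends up in the CMD instruction as:

ng serve --port 4200 --host 0.0.0.0 --poll 2000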

The Dockerfile

# syntax=docker/dockerfile:experimental
 
########################################
FROM node:12.13.1-stretch as angular_build
########################################
RUN apt-get update && \
    apt-get install -y less procps vim iputils-ping net-tools wget
RUN npm install --global @angular/cli@1.7.4
 
# RUN sysctl fs.inotify.max_user_watches=524288
# RUN sysctl -p --system
 
WORKDIR /app
#volumes
RUN echo "volume source code and deps mounting for hotswap ..."
VOLUME /app
 
# Install the package.json dependencies whenever package.json changes
COPY package.json /app-temp/package.json
RUN --mount=type=cache,target=/app-temp/node_modules \
    cd /app-temp && npm install && \
    mkdir /app-cache && \
    cp -r /app-temp/node_modules /app-cache
 
CMD cp -r /app-cache/node_modules/. /app/node_modules && \
     ng serve --port 4200 --host 0.0.0.0 --poll 2000