Networks: the available network drivers
Subnet range calculation
Subnet range of a Docker network
172.17.0.0/16 -> 65534 available addresses
Take an example range, 192.168.0.0/16. Here 16 of the 32 bits are used for network addressing, leaving 16 bits for the hosts on the network.
That gives 256 * 256 = 65536 host addresses but, as before, two are pre-allocated (the network and broadcast addresses), so 65534 addresses remain available for hosts on this network, all starting with 192.168.
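The arithmetic above can be checked directly in a shell, using standard POSIX arithmetic expansion:

```shell
# Usable host addresses in a /16 network:
# 32 total bits - 16 network bits = 16 host bits,
# 2^16 = 65536 addresses, minus the network and broadcast addresses.
prefix=16
host_bits=$(( 32 - prefix ))
usable=$(( (1 << host_bits) - 2 ))
echo "$usable"    # prints 65534
```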
Subnet ranges for docker networks
When Docker networks are created (e.g. using docker network create, or indirectly through docker-compose) without an explicit subnet range, dockerd allocates a new /16 network of the form 172.N.0.0/16, where N is incremented for each new network (e.g. N=17, N=18, N=19, N=20, …).
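As a sketch of that allocation order (assuming the classic default pool starting at 172.17.0.0/16; recent daemons may also draw from other pools, such as 192.168.x.0/20 ranges, once this one is exhausted):

```shell
# Hypothetical sketch: print the subnets dockerd would hand out, in order,
# for the first few networks created without an explicit subnet.
for N in 17 18 19 20; do
  echo "172.${N}.0.0/16"
done
```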
The default bridge network uses the subnet range 172.17.0.0/16.
Each network has a gateway.
The default bridge gateway is 172.17.0.1.
The bridge driver
The bridge driver allows communication between containers on the same network.
There is a default bridge network, and we can also create user-defined bridges.
The host driver
The host driver removes the network isolation between the container and the Docker host, so the container uses the host’s networking directly.
With this driver, publishing ports on the host is not possible (and not required either), because the container’s ports are already bound directly on the host.
With this driver, no IP address is allocated to the container.
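As a sketch (nginx is an arbitrary example image): a web server run with the host driver listens on the host’s port 80 directly, with no -p flag:

```shell
# The container shares the host's network stack: nginx binds the host's
# port 80 directly, so no port publishing is needed (or honored).
docker run --rm --network host nginx
```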
The overlay driver
The overlay driver connects multiple Docker daemons together and enables swarm services to communicate with each other.
When you deploy a Docker swarm stack, an overlay network is created by default for the services defined in the stack.
Each host of the cluster joins that overlay network; its members are the containers of the stack that run on it.
The none driver
The none driver disables all networking for a container.
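A quick way to see this (busybox is an arbitrary small example image): with the none driver the container only has a loopback interface and no IP address:

```shell
# Only the loopback interface (lo) is listed; no eth0, no IP address.
docker run --rm --network none busybox ip addr
```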
Networks: the built-in networks
Communication between applications matters, and since a container typically represents an application, the same applies to containers.
In a convention-over-configuration spirit, Docker chose to create several built-in networks (bridge, host and none) and to attach every started container, by default, to the "default" bridge network.
By typing docker network ls, we can see them:

NETWORK ID     NAME     DRIVER   SCOPE
561ef510e2e2   bridge   bridge   local
81269f1e8f47   host     host     local
92c6107f96ff   none     null     local
The default network: a permissive bridge
If we don’t specify any network for our container, this is the one used.
Useful information:
– The default network relies on the bridge driver, and every container created without an explicit network becomes part of it.
– As a consequence, it allows communication between any containers on the same Docker daemon. That is extremely permissive.
– Contrary to user-defined bridge networks, containers on the default network can by default only reach each other by IP address and not by name (look at NetworkSettings.IPAddress in docker inspect CONTAINER). That is neither portable nor scalable. To resolve a container by its name on the default network, we would need the --link option, which is deprecated.
– For all these reasons, we really want to avoid using the default network. Favor a user-defined bridge instead.
The host network
This pre-created network lets us use the host network driver without having to declare a network ourselves.
The none network
This pre-created network lets us use the none network driver without having to declare a network ourselves.
The user-defined bridge network with the bridge driver
Clear, strong isolation between containers
This is an explicit network that we declare with the bridge driver. It is called "user-defined" to contrast with the default network, which also uses the bridge driver.
With a user-defined bridge, we define network membership explicitly for each container: communication over networks a container is not attached to is disallowed.
This kind of network requires more configuration but provides much better isolation.
Note that we need to declare a network for every container we create, because otherwise it is attached to the permissive default network, and we don’t want that.
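A minimal sketch of the name-resolution benefit (the network and container names here are hypothetical): containers on the same user-defined bridge resolve each other by name, with no --link needed:

```shell
# Create a user-defined bridge and attach a container to it.
docker network create -d bridge my-network
docker run -d --name app-a --network my-network nginx
# app-a is resolvable by name from any container on the same network:
docker run --rm --network my-network busybox ping -c 1 app-a
```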
Example of a docker-compose file that uses a user-defined bridge network for some containers and none for another
Here is our use case: the frontend and backend containers have to be on the same network because they communicate, while the standalone application’s container doesn’t need to communicate with other containers.
version: '2.3'
services:
  spring-boot-backend:
    ...
    networks:
      - network-app
  react-frontend:
    ...
    networks:
      - network-app
  standalone-app:
    ...
    network_mode: "none"
networks:
  network-app:
    driver: bridge
Networks: how to access a container through the network
Without docker-compose, the hostname is the container’s ID (or its name, if one is defined).
With docker-compose, the hostname is the service name.
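For instance, with the docker-compose example above, the frontend can reach the backend by its service name (the port and path here are hypothetical):

```shell
# Run from inside the react-frontend container; "spring-boot-backend" is
# resolved by Docker's embedded DNS to the backend container's IP.
curl http://spring-boot-backend:8080/api/health
```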
Networks: create them manually
For automation and production-grade setups this is a bad practice, but for testing it is fine.
docker network create -d bridge my-network
Networks: connect containers to them
When we start a container:
$ docker run --network=NETWORK
For a running container :
docker network connect [OPTIONS] NETWORK CONTAINER
We can also specify the IP address to use within the specified network:
docker network connect --ip IP NETWORK CONTAINER
Networks: inspect them!
To list all networks on the host
Use the command docker network ls.
To get detailed information about a network
By default, a container has an IP address for every Docker network it connects to.
This means container network communication has to be considered at the level of the Docker networks, not only of the container.
Each IP address is assigned from the pool allocated to the network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.
With the docker network inspect fooNetwork command, we can find interesting network information, including:
1) the high-level configuration of the network (name, id, scope, driver) and its IPAM (IP Address Management) settings.
For example:
"Name": "bridge",
"Id": "52fdc54aa6f9b36a7a76ccafcd050e539d075617d16d48e6cecf92f6e5e0415c",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [
        {
            "Subnet": "172.17.0.0/16",
            "Gateway": "172.17.0.1"
        }
    ]
}
2) the containers belonging to that network, and in particular their IP addresses within that network.
For example:
"Containers": {
    "51093fa6226b954b43825be7cf3d81df9f69375b7d008a6e98ae51d1ac98069b": {
        "Name": "elastic_jepsen",
        "EndpointID": "cc53275cdcaeb0a5e5cb7f214dd2f0bf6eabc5927b601ad7ffb340a9db134386",
        "MacAddress": "02:42:ac:11:00:07",
        "IPv4Address": "172.17.0.7/16",
        "IPv6Address": ""
    },
    "802d13ce741ce6508d61b6b9114e8b0967c0a8ff30996226350a351eed1b98da": {
        "Name": "nexus",
        "EndpointID": "8ffd2f3bec35c1bafd8f2da357760aa469cef16e168c8d2fc454755a6996145d",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
    },
    "9ac07484d74077c996f6f77c160e530331965e616e927befaee8cd73dd84496b": {
        "Name": "intelligent_rosalind",
        "EndpointID": "ca4f2f05082de3a12180ec2ff5eaebb240c1368b57f9c5d6758600e1dd23c2b3",
        "MacAddress": "02:42:ac:11:00:05",
        "IPv4Address": "172.17.0.5/16",
        "IPv6Address": ""
    },
    "b1421fdb41d563b75f8ffb33c87f2b78d95cbf2506a68675ab8691ab6c2ef1e1": {
        "Name": "heuristic_spence",
        "EndpointID": "41828a604e30585b27a540b6ddc1a5e8660ea5f5fb14b675266ed0e957a7daa1",
        "MacAddress": "02:42:ac:11:00:06",
        "IPv4Address": "172.17.0.6/16",
        "IPv6Address": ""
    }
}
3) the Docker options of that network.
For example:
"Options": {
    "com.docker.network.bridge.default_bridge": "true",
    "com.docker.network.bridge.enable_icc": "true",
    "com.docker.network.bridge.enable_ip_masquerade": "true",
    "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
    "com.docker.network.bridge.name": "docker0",
    "com.docker.network.driver.mtu": "1500"
}
Published ports
By default, when a container is started, none of its ports is accessible to the outside world.
Only Docker containers connected to the container’s network can access its ports (though we can still access them from the host).
To make a port available to services outside of Docker, or to Docker containers not connected to the container’s network, we need to publish the port(s) via the -p argument when the container is started.
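As a sketch (nginx and the port numbers are arbitrary example choices):

```shell
# Publish container port 80 on host port 8080, reachable from outside:
docker run -d -p 8080:80 nginx
# Optionally bind to a specific host interface, e.g. localhost only:
docker run -d -p 127.0.0.1:8080:80 nginx
```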
iptables
Modifications performed on iptables by Docker
Docker installs two custom iptables chains named DOCKER-USER and DOCKER, and it ensures that incoming packets are always checked by these two chains first.
If you need to add rules which load before Docker’s rules, add them to the DOCKER-USER chain. These rules are applied before any rules Docker creates
automatically.
Warning
Rules added to the FORWARD chain (either manually, or by another iptables-based firewall) are evaluated after these chains. This means that if you expose a port through Docker, this port gets exposed no matter what rules your firewall has configured.
If you want those rules to apply even when a port gets exposed through Docker, you must add these rules to the DOCKER-USER chain.
Cases where we need to update them
– If your host is exposed to the Internet.
By default, all external source IPs are allowed to connect to the Docker host.
To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER-USER filter chain.
For example, the following rule restricts external access so that only 192.168.1.1 is allowed (ext_if being the external interface):
iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP
– If your host acts as a router.
Routing will no longer work once Docker is installed, because Docker sets the policy of the FORWARD chain to DROP.
If you want your system to continue functioning as a router, you can add explicit ACCEPT rules to the DOCKER-USER chain to allow it:
iptables -I DOCKER-USER -i src_if -o dst_if -j ACCEPT