Docker Networking basics and the types of networks

Docker networking comprises several network drivers and enables containers to communicate with each other and with outside resources.

There are five main types of built-in networking drivers: bridge, host, macvlan, overlay, and the null driver (no networking).
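A user-defined network with any of these drivers can be created using docker network create. As a quick illustration (my-bridge-net is just a placeholder name):

$ docker network create --driver bridge my-bridge-net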

The Docker Container Network Model (CNM) is the architecture that manages networking for Docker containers.

IPAM, which stands for IP Address Management, works within a single Docker node and aids in enabling network connectivity among the Docker containers. Its primary responsibilities are allocating the IP address space for subnets and assigning IP addresses to endpoints and networks.
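The IPAM settings such as the subnet and gateway can also be supplied explicitly when creating a network; if omitted, Docker allocates a free subnet automatically. A minimal sketch, where the network name app-net and the address ranges are placeholders:

$ docker network create --driver bridge --subnet 172.28.0.0/16 --gateway 172.28.0.1 app-net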

Networking in Docker is essentially an isolated sandbox environment; the isolation of the networking resources is made possible by Linux network namespaces.
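The path to a container's network namespace (its sandbox) can be read from docker inspect; here web-container is a hypothetical container name, and the command prints a path under /var/run/docker/netns/:

$ docker inspect --format '{{.NetworkSettings.SandboxKey}}' web-container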

The overlay network enables communication across a network spanning many Docker nodes in an environment such as a Docker Swarm cluster. The same networking logic is evident in bridge networking, but it is limited to a single Docker host, unlike the overlay network.
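For example, an overlay network can be created from a Swarm manager node; the --attachable flag additionally allows standalone containers to join it (my-overlay is a placeholder name):

$ docker network create --driver overlay --attachable my-overlay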

Here is an output snippet from the docker info command, listing the available network drivers:

# docker info
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay

Container networking enables connectivity between Docker containers, as well as from the host machine to a container and vice versa.
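Host-to-container connectivity is most commonly exercised by publishing a container port on the host. As a sketch, mapping host port 8080 to port 80 of an nginx container and testing it from the host:

$ docker run -d --name web -p 8080:80 nginx
$ curl http://localhost:8080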

Listing the default networks in Docker:

[vamshi@node01 nginx]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
68b2ffd36e8f        bridge              bridge              local
c1aca4c87a2b        host                host                local
d5e48683def8        none                null                local

When a container is created, it connects to the bridge network by default unless extra arguments are specified.
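The target network can be chosen at creation time with the --network flag; without it, the container joins the default bridge network. A sketch, where app-net is a hypothetical user-defined network:

$ docker run -d --name web1 nginx
$ docker run -d --name web2 --network app-net nginx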

When you install Docker, a docker0 virtual interface is created by default; it behaves as a bridge between the Docker containers and the host system.

[vamshi@node01 nginx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242654b42ef	no

To use the brctl command, the bridge-utils package needs to be installed.
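On hosts without bridge-utils, the same bridge can also be listed with the iproute2 ip command:

$ ip link show type bridge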

We now examine the various Docker networks with docker network inspect.

Inspecting the bridge network:

The bridge network enables network connectivity between containers on a single Docker host.

[vamshi@node01 nginx]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "68b2ffd36e8fcdc0c3b170dfdbdbc93bb58351d1b2c011abc80709928463f809",
        "Created": "2020-05-23T10:28:27.206979057Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

This bridge appears in the output of the ip addr command as follows:

# ip addr show docker0 
   docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:65:4b:42:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:65ff:fe4b:42ef/64 scope link 
       valid_lft forever preferred_lft forever
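As a quick sketch of this single-host connectivity, two containers attached to the same user-defined bridge network can reach each other by name, since Docker runs an embedded DNS server on user-defined networks (the names used here are placeholders; on the default bridge shown above, containers are reachable by IP only):

$ docker network create app-bridge
$ docker run -d --name web --network app-bridge nginx
$ docker run --rm --network app-bridge busybox ping -c 2 web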

Inspecting the host network:

[vamshi@node01 ~]$ docker network inspect host 
[
    {
        "Name": "host",
        "Id": "c1aca4c87a2b3e7db4661e0cdedc97245cd5dfdc8aa2c9e6fa4ff1d5ecf9f3c1",
        "Created": "2019-05-16T18:46:19.485377974Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
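Note the empty IPAM Config above: with the host driver a container shares the host's network stack directly instead of getting its own addressing. As a sketch, an nginx container on the host network binds straight to host port 80:

$ docker run -d --name web-host --network host nginx
$ curl http://localhost:80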

Inspecting the null driver network:

[vamshi@node01 ~]$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "d5e48683def80b2e739b3be95e58fb11abc580ce29a33ba0df679a7a3972f532",
        "Created": "2019-05-16T18:46:19.477155061Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
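A container attached to the none network receives only a loopback interface, which can be verified with a throwaway busybox container; only the lo interface appears in the output:

$ docker run --rm --network none busybox ip addr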


The following are special networking architectures that span multiple Docker hosts, enabling network connectivity among the Docker containers running on them:

1. Overlay network

2. Macvlan network

Let us inspect the multi-host networking:

The core components of the Docker inter-host network consist of the sandbox, the endpoint, and the network itself, as defined by the CNM.

Inspecting the overlay network:

[vamshi@docker-master ~]$ docker network inspect overlay-linuxcent 
[
    {
        "Name": "overlay-linuxcent",
        "Id": "qz5ucx9hthyva53cydei0y8yv",
        "Created": "2020-05-25T13:22:35.087032198Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "overlay-linuxcent-endpoint",
                "EndpointID": "165beb97b22c2857e3637119016ef88e462a05d3b3251c4f66aa0fc9176cfe67",
                "MacAddress": "02:42:0a:ff:00:03",
                "IPv4Address": "10.255.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node01.linuxcent.com-95ad856b6f56",
                "IP": "10.100.0.20"
            }
        ]
    }
]

The endpoint holds the virtual IP address that routes traffic to the respective containers running on the individual Docker nodes.
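For a swarm service attached to this overlay network, that virtual IP can be read from docker service inspect (web-svc here is a hypothetical service name):

$ docker service create --name web-svc --network overlay-linuxcent nginx
$ docker service inspect --format '{{.Endpoint.VirtualIPs}}' web-svc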

Inspecting the macvlan network:

[vamshi@docker-master ~]$ docker network inspect macvlan-linuxcent 
[
    {
        "Name": "macvlan-linuxcent",
        "Id": "99c6a20bd4029ce5a37139c6e6792ec4f8a075c94b5f3e71efc32d92d41f3f89",
        "Created": "2020-05-25T14:20:00.655299409Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
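A macvlan network like the one above is created by binding it to a parent interface on the host; the subnet, gateway, and parent interface below are placeholders and must match the physical LAN:

$ docker network create -d macvlan --subnet 172.20.0.0/16 --gateway 172.20.0.1 -o parent=eth0 macvlan-linuxcent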
