How to get the docker container IP?

The metadata of a docker container can be extracted using the docker inspect command, as demonstrated below.

The docker engine API returns JSON, and docker inspect accepts Go templates through its --format (-f) option, which lets us pull out specific fields such as the IP address.

[vamshi@node01 ~]$ docker inspect <container-name | container-id> -f '{{ .NetworkSettings.IPAddress }}'
172.17.0.2
[vamshi@node01 ~]$ docker inspect my-container --format='{{ .NetworkSettings.IPAddress }}'
172.17.0.2
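
If you need the addresses of several containers at once, the same template can be combined with docker ps; a minimal sketch, assuming the containers are attached to the default bridge network (where .NetworkSettings.IPAddress is populated):

docker ps -q | xargs docker inspect -f '{{ .Name }} {{ .NetworkSettings.IPAddress }}'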

RBAC in Kubernetes

Kubernetes provides Role Based Access Control (RBAC) as a built-in security mechanism.

A Role is a grouping of PolicyRules that defines the capabilities and limitations within a namespace.
The Identities (or) Subjects are the users or ServiceAccounts to which Roles are assigned.
RBAC is achieved by referencing a Role from a RoleBinding, which ties the Role to a Subject.

Kubernetes has the namespaced Role and RoleBinding objects, and their cluster-wide counterparts ClusterRole and ClusterRoleBinding.

There is no concept of a deny permission in RBAC.

A Role and a Subject combined together define a RoleBinding.

Now let's look at each of these terms in detail.

Subjects:

  • user
  • group
  • serviceAccount

Resources:

  • configmaps
  • pods
  • services

Verbs:

  • create
  • delete
  • get
  • list
  • patch
  • proxy
  • update
  • watch

You create a kind: Role with a name and then bind it to a Subject via roleRef by creating a kind: RoleBinding, as sketched below.
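
A minimal sketch of the same workflow using imperative kubectl commands (the role and binding names here are made up for illustration; builduser01 is the service account from the example that follows):

kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=default
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:builduser01 --namespace=default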

[vamshi@master01 k8s]$ kubectl describe serviceaccounts builduser01 
Name:                builduser01
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   builduser01-token-rmjsd
Tokens:              builduser01-token-rmjsd
Events:              <none>

The role builduser-role has permissions on all resources for the verbs create, delete, get, list, patch, update and watch.

[vamshi@master01 k8s]$ kubectl describe role builduser-role
Name:         builduser-role
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"builduser-role","namespace":"default"},"ru...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *          []                 []              [create delete get list patch update watch]

Using this you can limit a user's access to your cluster.

View the current cluster role bindings on your kubernetes cluster:

[vamshi@master01 :~] kubectl get clusterrolebinding
NAME                                                   AGE
cluster-admin                                          2d2h
kubeadm:kubelet-bootstrap                              2d2h
kubeadm:node-autoapprove-bootstrap                     2d2h
kubeadm:node-autoapprove-certificate-rotation          2d2h
kubeadm:node-proxier                                   2d2h
minikube-rbac                                          2d2h
storage-provisioner                                    2d2h
system:basic-user                                      2d2h

The clusterrole describes the Resources and the verbs that are accessible to the user.

[vamshi@linux-r5z3:~] kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

Listing the roles on Kubernetes:

[vamshi@master01 :~] kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   kubeadm:bootstrap-signer-clusterinfo             2d2h
kube-public   system:controller:bootstrap-signer               2d2h
kube-system   extension-apiserver-authentication-reader        2d2h
kube-system   kube-proxy                                       2d2h
kube-system   kubeadm:kubelet-config-1.15                      2d2h
kube-system   kubeadm:nodes-kubeadm-config                     2d2h
kube-system   system::leader-locking-kube-controller-manager   2d2h
kube-system   system::leader-locking-kube-scheduler            2d2h
kube-system   system:controller:bootstrap-signer               2d2h
kube-system   system:controller:cloud-provider                 2d2h
kube-system   system:controller:token-cleaner                  2d2h

We can further examine the rolebinding named system::leader-locking-kube-scheduler, which is associated with the service account kube-scheduler.

[vamshi@master01 :~]  kubectl describe rolebindings system::leader-locking-kube-scheduler -n kube-system
Name:         system::leader-locking-kube-scheduler
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  Role
  Name:  system::leader-locking-kube-scheduler
Subjects:
  Kind            Name                   Namespace
  ----            ----                   ---------
  User            system:kube-scheduler  
  ServiceAccount  kube-scheduler         kube-system

The resources in Kubernetes are organized under API groups; the following are some of the API group and core kind names you will come across:

apiextensions.k8s.io, apps, autoscaling, batch, Binding, certificates.k8s.io, events.k8s.io, extensions, networking.k8s.io, PodTemplate, policy, scheduling.k8s.io, Secret, storage.k8s.io

The complete list of resource kinds available in Kubernetes is as follows:

APIService, CertificateSigningRequest, ClusterRole, ClusterRoleBinding, ComponentStatus, ConfigMap, ControllerRevision, CronJob, CSIDriver, CSINode, CustomResourceDefinition, DaemonSet, Deployment, Endpoints, Event, HorizontalPodAutoscaler, Ingress, Job, Lease, LimitRange, LocalSubjectAccessReview, MutatingWebhookConfiguration, Namespace, NetworkPolicy, Node, PersistentVolume, PersistentVolumeClaim, Pod, PodDisruptionBudget, PodSecurityPolicy, PriorityClass, ReplicaSet, ReplicationController, ResourceQuota, Role, RoleBinding, RuntimeClass, SelfSubjectAccessReview, SelfSubjectRulesReview, Service, ServiceAccount, StatefulSet, StorageClass, SubjectAccessReview, TokenReview, ValidatingWebhookConfiguration and VolumeAttachment
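
The resource kinds, their API groups and the verbs they support on a given cluster can be listed with the following command (available on reasonably recent kubectl versions):

kubectl api-resources -o wide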

Setting up the puppet master and puppet client server

Make sure that you have set the hostname properly on the puppet master server. You can do it with the hostnamectl command.
The hostname assumed by default for your puppet master is “puppet”, but you can give it any name as long as it is reachable from the other servers over your network through the mapped FQDN.

It is good practice to set up /etc/hosts with an alias named puppet if you are just starting out for the first time, as in the example below.
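
A hedged example of that host preparation (the IP address below is a placeholder for your puppet master's real address):

sudo hostnamectl set-hostname puppetmaster.linuxcent.com
echo "10.100.0.10 puppetmaster.linuxcent.com puppet" | sudo tee -a /etc/hosts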

Installing the puppet yum repository sources to download the puppet packages.

[root@puppetmaster ~]# sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
Retrieving https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.ibJsVY: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppet5-release-5.0.0-12.el7     ################################# [100%]

Installing the Puppet Master service from the yum repository.

[root@puppetmaster ~]# yum install puppetserver

Verify which packages are installed on your machine
[root@puppetmaster ~]# rpm -qa | grep -i puppet
puppetserver-5.3.13-1.el7.noarch
puppet5-release-5.0.0-12.el7.noarch
puppet-agent-5.5.20-1.el7.x86_64

Ensure that the following entries are updated in the file /etc/puppetlabs/puppet/puppet.conf under the [master] section:

[master]
certname = puppetmaster.linuxcent.com
server = puppetmaster.linuxcent.com

Enabling the puppetserver Daemon and starting puppetserver

[root@puppetmaster ~]# systemctl enable puppetserver
[root@puppetmaster ~]# systemctl start puppetserver

The puppet server process starts on the port 8140.

[root@puppetmaster ~]# netstat -ntlp | grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      21084/java

Setting up the puppet client.
Installing the yum repository to download the puppet installation packages.

[vamshi@node01 ~]$ sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm

Installing the puppet agent.

[vamshi@node01 ~]$ sudo yum install puppet-agent

Once we have the puppet agent installed, we need to point the puppet client at the puppetmaster FQDN by updating the file /etc/puppetlabs/puppet/puppet.conf under the [main] section

[main]
server = puppetmaster.linuxcent.com

Running the puppet agent to setup communication with the puppet master

[vamshi@node01 ~]$ sudo puppet agent --test
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for node01.linuxcent.com
Info: Applying configuration version '1592492078'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.02 seconds

With this we have successfully raised the signing request to the master.
Listing the puppet agent certificate requests on the puppet master:

[root@puppetmaster ~]# puppet cert list --all
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
[root@puppetmaster ~]# puppet cert sign node01.linuxcent.com
Signing Certificate Request for:
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
Notice: Signed certificate request for node01.linuxcent.com
Notice: Removing file Puppet::SSL::CertificateRequest node01.linuxcent.com at '/etc/puppetlabs/puppet/ssl/ca/requests/node01.linuxcent.com.pem'

Now that we have successfully signed the puppet agent request, we are able to see the + sign on the left side of the agent host name as demonstrated in the following output.
[root@puppetmaster ~]# puppet cert list --all
+ "node01.linuxcent.com" (SHA256) 15:07:C2:C1:51:BA:C1:9C:76:06:59:24:D1:12:DC:E2:EE:C1:47:35:DD:BD:E8:79:1E:A5:9E:1D:83:EF:D1:61

The respective signed ssl certificates are saved under the location /etc/puppetlabs/puppet/ssl/ca/signed.

[root@node01 signed]# ls
node01.linuxcent.com.pem  puppet.linuxcent.com.pem

To clean up an agent's certificates:

puppet cert clean node01.linuxcent.com

This removes the agent's entries from the puppetmaster records, after which a new certificate request has to be raised towards this puppetmaster.

The autosign.conf file can also be used if you are going to manage a huge farm of puppet clients and manually signing each client becomes a tedious task. We can set up a wildcard like *.linuxcent.com to auto approve the signing requests originating from client hosts in the linuxcent.com network domain.
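
A minimal sketch of such an autosign entry on the puppet master (assuming the default confdir /etc/puppetlabs/puppet):

echo "*.linuxcent.com" | sudo tee /etc/puppetlabs/puppet/autosign.conf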

nginx reverse proxy setup for kibana dashboard

How to Setup Nginx Reverse proxy for Kibana.

In this demonstration we will see how to setup the reverse proxy using the nginx webserver to the backend kibana.

We begin by installing the latest version of nginx server on our centos server:

$ sudo yum install nginx -y

The nginx package is available in the EPEL repository, which you may have to enable.

$ sudo yum --enablerepo=epel install nginx -y

Once the nginx package is installed we need to enable the daemon and start it with the following command:

[vamshi@node01 ~]$ sudo systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

We now create the nginx configuration file for the kibana backend and place it under /etc/nginx/conf.d/kibana.conf as shown below.

We can set up the Restricted Access configuration for enhanced security, as shown below on the lines with auth_basic and auth_basic_user_file. You may skip the Restricted Access configuration if you believe it is not required.

[vamshi@node01 nginx]$ sudo cat conf.d/kibana.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

With the configuration in place, we now check the nginx config syntax using the -t option as shown below:

[vamshi@node01 nginx]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now restart the nginx server and head over to the browser.

$ sudo systemctl restart nginx

In your browser enter the server IP or FQDN, and you will be redirected to the url http://your-kibana-server.com/app/kibana#/home

Setup the htpasswd authorization config with user details.

We now install the htpasswd tool from the package httpd-tools as follows:

$ sudo yum install httpd-tools -y

Adding the Authorization details to our .htpasswd file.

[vamshi@node01 nginx]$ sudo htpasswd -c /etc/nginx/.htpasswd vamshi
New password: 
Re-type new password: 
Adding password for user vamshi

So we have now successfully added the Auth configuration.

[vamshi@node01 nginx]$ sudo htpasswd -n /etc/nginx/.htpasswd 
New password: 
Re-type new password: 
/etc/nginx/.htpasswd:$apr1$tlinuxcentMY-htpassEsHEEanL21

As the password is stored as a one-way hash we cannot decode it and would have to generate a new hash instead.
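
If you later need to add another user to the same file, htpasswd is run without the -c flag so the existing entries are preserved; the username below is hypothetical:

sudo htpasswd /etc/nginx/.htpasswd developer01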
Verifying the htpasswd configuration logins from the curl command:

$ curl http://kibana-url -u<htpasswd-username>
[vamshi@node01 ~]$ curl kibana.linuxcent.com -uvamshi -I
Enter host password for user 'vamshi':
HTTP/1.1 302 Found
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:48:35 GMT
Content-Length: 0
Connection: keep-alive
location: /spaces/enter
kbn-name: kibana
kbn-license-sig: 2778f2f7e07680b7aefa85db2e7ce7bd33da5592b84cefe62efa8
kbn-xpack-sig: ce2a76732a2f58fcf288db70ad3ea
cache-control: no-cache

If you enter invalid credentials you will encounter a 401 http error code, restricting the unauthorized access.

HTTP/1.1 401 Unauthorized
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:51:36 GMT
Content-Type: text/html
Content-Length: 179
Connection: keep-alive
WWW-Authenticate: Basic realm="Restricted Access"

Now we head over to the browser to check the htpasswd login prompt in action as follows:
http://your-kibana-server.com
Conclusion: With the htpasswd in place we have an extra layer of authorization in front of sensitive urls; in effect you now need to enter the htpasswd logins to access the same kibana dashboard.

Mounting the external volumes to jenkins docker container

Creating a docker volume

To use an external volume for our future container, we first need to create a filesystem on the volume.
We use the ext4 filesystem to format our block device, as demonstrated below:

vamshi@node03:~$ sudo mkfs.ext4 /dev/sdb
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: bc335e44-d8e9-4926-aa0a-fc7b954c28d1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Here is the command to create a volume, mentioning the path to the block device and the filesystem type, using the local driver:

docker volume create jenkins_vol1 --driver local --opt device=/dev/sdb --opt type=ext4
jenkins_vol1

We have successfully created a docker volume backed by a block device.

Inspecting the docker volume that is created:

vagrant@node03:~$ docker volume inspect jenkins_vol1
[
    {
        "CreatedAt": "2020-05-12T17:22:11Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins_vol1/_data",
        "Name": "jenkins_vol1",
        "Options": {
            "device": "/dev/sdb",
            "type": "ext4"
        },
        "Scope": "local"
    }
]

Creating the jenkins container, which will use the docker volume jenkins_vol1 and mount it at /var/jenkins_home/.m2:

docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=jenkins_vol1,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest

We have successfully started our container; now let's log in to the container and check our volume.

jenkins@fc2c49313ddb:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
/dev/sdb       ext4     2.0G  6.0M  1.8G   1% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware

As we can see from the output, the mount point /var/jenkins_home/.m2 is backed by the block device /dev/sdb with an ext4 filesystem:

/dev/sdb ext4 2.0G 6.0M 1.8G 1% /var/jenkins_home/.m2

Creating a 100MB tmpfs-backed volume named temp_vol:

docker volume create --name temp_vol --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=100m,uid=1000

The inspect of the temp_vol we created is as follows:

vamshi@node03:~$ docker volume inspect temp_vol
[
    {
        "CreatedAt": "2020-05-02T17:31:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/temp_vol/_data",
        "Name": "temp_vol",
        "Options": {
            "device": "tmpfs",
            "o": "size=100m,uid=1000",
            "type": "tmpfs"
        },
        "Scope": "local"
    }
]
We now start a jenkins container using the temp_vol volume and check the mount from inside the container:

docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=temp_vol,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest
vamshi@node03:~$ docker exec -it jenkins bash
jenkins@2267ba462aa2:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
tmpfs          tmpfs    100M     0  100M   0% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware
jenkins@2267ba462aa2:/$ exit

Here it shows the mount point details:

tmpfs          tmpfs    100M     0  100M   0% /var/jenkins_home/.m2

Please note that the mount point /var/jenkins_home/.m2 has the 100MB of space we defined.

Thus we can make use of docker volumes to attach persistent filesystems and block devices to a running container.

Initiating a docker swarm and getting the current docker swarm token

Creating a docker swarm cluster:

The docker swarm cluster can be created by using the following command; the syntax is defined as follows:

docker swarm init --advertise-addr [available interface IP address]

The --advertise-addr option is used to explicitly define the IP address the docker swarm advertises on. If you have a single interface this option is not needed, but it comes in really handy when you have more than one publicly accessible active interface.
Let us initialize our docker swarm environment.

[vamshi@docker-swarm ~]$ docker swarm init --advertise-addr 10.100.0.20
Swarm initialized: current node (nodeidofmastercdq7nmmq3kcmb5l85k2e) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup \
    10.100.0.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The docker swarm creation can be viewed from the docker info command as follows:

[vamshi@docker-swarm ~]$ docker info | grep -C 2 Swarm
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: nodeidofmastercdq7nmmq3kcmb5l85k2e
 Is Manager: true

The docker swarm uses the overlay and macvlan drivers to enable inter-host network connectivity between containers over a swarm network.

How to get the docker swarm join token:

This command can come in very handy when you have forgotten your docker swarm token and need to join a new docker node to this docker swarm cluster.

[vamshi@docker-swarm ~]$ docker swarm join-token manager -q
SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup
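
The same approach works for worker tokens; a hedged sketch of joining a new node (the manager IP is the one used earlier):

# on the manager: print the worker join token
docker swarm join-token worker -q
# on the new node: join using that token
docker swarm join --token <token-printed-above> 10.100.0.20:2377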

Docker Networking basics and the types of networks

Docker networking comprises a set of network drivers that enable communication among containers and with outside resources.

The main built-in networking drivers are bridge, host, macvlan, overlay and the null driver (no networking).

The Container Network Model (CNM) architecture manages the networking for Docker containers.

IPAM, which stands for IP Address Management, works within a single docker node and aids in enabling network connectivity among the docker containers. Its primary responsibility is to allocate the IP address space for the subnets and to assign IP addresses to the endpoints and the networks, as illustrated below.
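
As a quick illustration of IPAM at work, a user-defined network can be created with an explicit subnet and gateway; the network name and addresses below are hypothetical:

docker network create --driver bridge --subnet 172.28.0.0/16 --gateway 172.28.0.1 my-custom-net
docker network inspect my-custom-net -f '{{ json .IPAM.Config }}'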

The networking in docker is essentially an isolated sandbox environment; the isolation of the networking resources is made possible by namespaces.

The overlay network enables communication over a network spanning many docker nodes, in an environment like a Docker swarm. The same networking logic is evident in bridge networking, but a bridge is limited to a single docker host, unlike the overlay network.

Here’s the output snippet from the docker info command; Listing the available network drivers.

# docker info
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay

Container networking enables connectivity between the docker containers, and also from the host machine to a docker container and vice-versa.

Listing the default networks in docker:

[vamshi@node01 nginx]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
68b2ffd36e8f        bridge              bridge              local
c1aca4c87a2b        host                host                local
d5e48683def8        none                null                local

When a container is created it connects to the bridge network by default, unless extra arguments are specified, as in the sketch below.
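
For example (a hedged sketch; the container names are placeholders), the --network flag selects a different driver at run time:

docker run -d --name web-on-host --network host nginx
docker run -d --name web-no-net --network none nginx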

When you install docker, a virtual interface named docker0 is created by default; it behaves as a bridge between the docker containers and the host system.

[vamshi@node01 nginx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242654b42ef	no

The brctl command requires the bridge-utils package to be installed.
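
On CentOS/RHEL it can be installed as follows:

sudo yum install bridge-utils -y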

We now examine the docker networks with docker network inspect.

Inspecting the various docker networks:

Inspecting the bridge network:

The bridge network enables network connectivity between containers on a single docker host.

[vamshi@node01 nginx]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "68b2ffd36e8fcdc0c3b170dfdbdbc93bb58351d1b2c011abc80709928463f809",
        "Created": "2020-05-23T10:28:27.206979057Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

This bridge is shown with the ip addr command as follows:

# ip addr show docker0 
   docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:65:4b:42:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:65ff:fe4b:42ef/64 scope link 
       valid_lft forever preferred_lft forever

Inspecting the host network.

[vamshi@node01 ~]$ docker network inspect host 
[
    {
        "Name": "host",
        "Id": "c1aca4c87a2b3e7db4661e0cdedc97245cd5dfdc8aa2c9e6fa4ff1d5ecf9f3c1",
        "Created": "2019-05-16T18:46:19.485377974Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Inspecting the null driver network

[vamshi@node01 ~]$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "d5e48683def80b2e739b3be95e58fb11abc580ce29a33ba0df679a7a3972f532",
        "Created": "2019-05-16T18:46:19.477155061Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

 

The following are special networking architectures that span across multiple docker hosts, enabling network connectivity among the docker containers:

1. Overlay network

2. Macvlan network

Let us inspect the multi-host networking; the core components of the docker inter-host network are the overlay and macvlan drivers, which we inspect below.

Inspecting the overlay network:

[vamshi@docker-master ~]$ docker network inspect overlay-linuxcent 
[
    {
        "Name": "overlay-linuxcent",
        "Id": "qz5ucx9hthyva53cydei0y8yv",
        "Created": "2020-05-25T13:22:35.087032198Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "overlay-linuxcent-endpoint",
                "EndpointID": "165beb97b22c2857e3637119016ef88e462a05d3b3251c4f66aa0fc9176cfe67",
                "MacAddress": "02:42:0a:ff:00:03",
                "IPv4Address": "10.255.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node01.linuxcent.com-95ad856b6f56",
                "IP": "10.100.0.20"
            }
        ]
    }
]

The endpoint is the Virtual IP addressing that routes the traffic to the respective containers running on individual docker nodes.

Inspecting the macvlan network:

[vamshi@docker-master ~]$ docker network inspect macvlan-linuxcent
[
    {
        "Name": "macvlan-linuxcent",
        "Id": "99c6a20bd4029ce5a37139c6e6792ec4f8a075c94b5f3e71efc32d92d41f3f89",
        "Created": "2020-05-25T14:20:00.655299409Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

enabling ipv4 forwarding on docker server

When ipv4 forwarding is not enabled on the linux host it can lead to hard-to-diagnose issues; here is one such rare log entry from the system logs:

level=warning msg="IPv4 forwarding is disabled. N...t work."

It's good to check the current ipv4 forwarding rules as follows:

[root@LinuxCent ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0

 

You can also enable the changes for the current session using the -w option

sysctl -w net.ipv4.conf.all.forwarding=1

To make the changes persistent we need to write to a config file and enforce the system to read it.

[root@LinuxCent ~]# vi /etc/sysctl.d/01-rules.conf
net.ipv4.conf.all.forwarding=1

Then apply the changes to the system on the fly with the sysctl command to load the changes from systemwide config files.

# sysctl --system
The --system option tells sysctl to read all the configuration files system wide.

 

[root@Linux1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /etc/sysctl.d/01-rules.conf ...
net.ipv4.conf.all.forwarding = 1
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@Linux1 ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1

Docker container volumes

The concept of docker is to run a compressed image as a container which serves its purpose and can then be removed, leaving no trace of the data generated during the course of its runtime. This exact case is referred to as an ephemeral container.
Docker storage is traditionally non-persistent and a container retains only the data from its original image build; docker provides the facility to attach volume mounts to a running container for data storage, which addresses the persistence issue to a certain extent.
We therefore have the ability to store data on volumes and make them available to the container runtime environment as needed.

In practice, docker volumes are mounted to a docker image at runtime via command line arguments, which are demonstrated below.

This implementation gives the running container a volume binding capability, and docker volumes can be broadly categorized into the two types listed as follows.

(1) Volume mapping from the host to the target container: this is a directory mapping between the host and the container which happens at runtime.
(2) A named volume that can be shared among containers: it persists even if the container is deleted, and the same volume can be mounted again to another docker container.

The storage options offered by docker provide persistent storage of files and ensure the data survives docker container restarts and removal of the container.

The example for the host bind volume mapping is as follows:

# docker run -d -v src:target --name=container-name docker-image

The syntax of docker directory mount:

# docker run -it -v /usr/local/bin:/target/local --name=docker-container-with-vol  ubuntu:latest /bin/bash

We will invoke a container by mounting a host path to a target container path; here we mount the host directory nginx-html to /usr/share/nginx/html.

[vamshi@node01 ~]$ docker run -d --name my-nginx1 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 80:80 nginx:1.17.2-alpine
34797e6d8939e42bc8cfe36eed4b60521355edadc2fa6c74a26fe4172384575c

Now we log into the container and verify the contents

[vamshi@node01 ~]$ docker exec -it my-nginx1 sh
~ # hostname
34797e6d8939
~ # df -h /usr/share/nginx/html
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                40.0G     16.1G     23.9G  40% /usr/share/nginx/html

The df -h output from inside the container shows the path /usr/share/nginx/html as a mount point.

Checking the contents of the webroot directory at /usr/share/nginx/html inside the container.

~ # cat /usr/share/nginx/html/index.html
<H1>Hello from LinuxCent.com</H1>

This is the same file which we have on our host machine and it is shared through the volume mount.
We verify using the curl command as follows

~ # curl localhost
<H1>Hello from LinuxCent.com</H1>

We can also mount the same directory into as many containers as we like, which can be an effective way of updating the static content used by our containers; see the sketch below.
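
For instance, a second container (the name my-nginx2 and host port 8081 are just placeholders) can reuse the same host directory:

docker run -d --name my-nginx2 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 8081:80 nginx:1.17.2-alpine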

This is a bind mount operation offered by docker to mount a host directory into a container. This information can be identified by inspecting the container as follows:

Here is an extract from docker inspect of the container my-nginx1:

            {
                "Type": "bind",
                "Source": "/home/vamshi/nginx/nginx-html",
                "Destination": "/usr/share/nginx/html",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

As we can see the Type of mount is depicted as a bind here.

The --format option can also be used to filter out the mounts section from the json output:

[vamshi@node01 ~]$ docker container inspect my-nginx1 -f="{{.Mounts}}"
[{bind /home/vamshi/nginx-html /usr/share/nginx/html true rprivate}]

The Dockerfile VOLUME expression

Using the Persistent docker volumes.

We now focus our attention to the Docker volume mounts, which are isolated storage resources in docker and are a persistent storage which can be reused and mounted to the containers.

The VOLUME instruction can be used while writing a Dockerfile; it creates a docker image with the volume settings, which are then mounted into the container when it is started up.
We use the VOLUME expression in the Dockerfile as follows:

VOLUME [ "my-volume01" ]
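
For reference, a minimal Dockerfile matching the build output below might look like this (reconstructed from the build steps, so treat it as a sketch):

FROM nginx-linuxcent
COPY nginx-html/index.html /usr/share/nginx/html/
VOLUME [ "my-volume01" ]
RUN ls /tmp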

We now build the image; let's observe the output below.

Step 1/4 : FROM nginx-linuxcent
 ---> 55ceb2abad47
Step 2/4 : COPY nginx-html/index.html /usr/share/nginx/html/
 ---> c482aa15da5a
Removing intermediate container a621e114a01d
Step 3/4 : VOLUME my-volume01
 ---> Running in ac523d6a02f0
 ---> 72423fe5f27d
Removing intermediate container ac523d6a02f0
Step 4/4 : RUN ls /tmp
 ---> Running in 8fc1fbc0f0bb
 ---> 0f453e3cfff1
Removing intermediate container 8fc1fbc0f0bb

Here my-volume01 is mounted at the path /my-volume01 inside the container, i.e. as an absolute path from /.

/ # df -Th /my-volume01/
Filesystem           Type       Size      Used Available Use% Mounted on
/dev/sda1            xfs       40.0G     15.1G     24.9G  38% /my-volume01

The information can be extracted by inspecting the container as follows:

# docker container inspect nginx-with-vol -f="{{.Mounts}}"
[{volume 7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19 /var/lib/docker/volumes/7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19/_data my-volume01 local true }]

 

If you would like the volume to be mounted to some other path then you can declare that in Dockerfile VOLUME as below:

VOLUME [ "/mnt/my-volume" ]

The information can be extracted from the docker image by inspecting for the volumes.

            "Image": "sha256:72423fe5f27de1a495e5e875aec83fd5084abc6e1636c09d510b19eb711424cc",
            "Volumes": {
                "/mnt/my-volume": {},
                 }

The volumes defined with the Dockerfile VOLUME expression are created as anonymous volumes; they do not appear under the name used in the Dockerfile but show up with hashed names in the docker volume ls output, and they can be shared among other docker containers. We will look at sharing docker volumes in the following section. It is also important to understand that once a volume is explicitly deleted, its data cannot be recovered.

Using the volumes from one container and accessing them in another container.

This option is beneficial when container access and debug tools are disabled on an application container and you need to view its logs and run analysis on them.

Using the volumes from one container and mounting them into another container for auditing purposes:
We now create a container called view-logs which uses the volumes from another container called nginx-with-vol.

docker run -d --name view-logs --volumes-from nginx-with-vol debug-tools

Best Practices:
The view-logs container can have a set of debug and troubleshooting tools to view the logs of other app containers.
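
A hypothetical check from the view-logs container, assuming the shared volume from nginx-with-vol is mounted at /mnt/my-volume:

docker exec -it view-logs ls -l /mnt/my-volume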

Creating a Volume from the docker commandline:

The docker volume resource has to be initialized first and can be done as follows:
Command to create a docker volume:
# docker volume create my-data-vol
[vamshi@node01 ~]$ docker volume inspect my-data-vol

[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data-vol/_data",
        "Name": "my-data-vol",
        "Options": {},
        "Scope": "local"
    }
]

We can use this volume and then mount this to a container as we mounted the host volume in earlier sections.

We will be using the persistent volume name and mounting the docker volume to the target container path.
This option has a specific syntax which has to be followed exactly while running the mount operation.

Below is the syntax:
docker run --volume <volume-name>:</path/to/mountpoint/> image-name

We now have the docker volume available and mount it to the target container path /data as follows:
# docker run -d --name data-vol --volume my-data-vol:/data nginx-linuxcent:v4
2dfa965bbbc79a522e9c109ef8eee20bf47e2b61062f3b3df61d4eb677de4506

Verification:

The docker volume can be listed with the docker volume ls command.

Check the mount information with docker volume inspect.
Here is an extract from docker inspect of the container:

"Mounts": [
            {
                "Type": "volume",
                "Name": "my-data-vol",
                "Source": "/var/lib/docker/volumes/my-data-vol/_data",
                "Destination": "/var/log/nginx",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
             }

As we can see, the Type of mount is depicted as a Volume here.

Conclusion:
We have seen that mounting a host path to a target container path is a bind operation, and the dependency it creates is host affinity: it is bound to a particular host, which should be avoided if you are dealing with more dynamic data exchange between containers over a network. It is, however, very useful for host-to-container data exchange.
The option with volumes is very dynamic and has less binding dependency on the host machine. Volumes can be declared and used in two ways, as demonstrated in the earlier sections: the first is explicit volume creation and the other is volume creation from within the Docker build.
Explicit volume creation with a later binding provides the scope for choosing a mount point inside the container after the image is built.

The Dockerfile's VOLUME expression can be used to automatically define the volume and the desired mount point, and it supports the same volume sharing techniques for data exchange at runtime.

Managing Docker disk space

In the long run we come across the challenge of managing the docker engine and its disk space consumption.

To effectively manage its resources we have some of the best options, let us take a look at them in this tutorial.

How to identify the details of disk space usage in docker ?

[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   3                   1.442 GB            744.2 MB (51%)
Containers          3                   1                   2.111 MB            0 B (0%)
Local Volumes       7                   1                   251.9 MB            167.8 MB (66%)

This command prints the complete verbose details of image space usage, container space usage and local volume space usage.

How to Clean up space on Docker ?

[root@node02 vamshi]# docker system prune [ -a | -f ]

The option -a removes all the unused images and the stale containers/images,
and -f forcefully removes them without prompting for confirmation.

How to Remove docker images?

Docker images can be removed using the docker image rm <image-id | image-name> command.

The command docker rmi is most commonly used; docker image rm is an equivalent form that is easier to read and self explanatory.

[root@node02 vamshi]# docker rmi <image-id | image-name>

Docker images which are dangling (those without any tags) can be filtered out using the below syntax and removed to save some filesystem space.

We can list out the docker images that are dangling using the filter option as shown below:

# docker images -f "dangling=true"

From the list of images received by the above command, we pass only the image ids to the docker image rm command as shown below:

[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")

How to list multiple docker images with matching pattern ?

[vamshi@node02 ~]$ docker image ls mysql*
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rrskris/mysql       v1                  1ab47cba1d63        4 months ago        456 MB
rrskris/mysql       v2                  3bd34czc2b90        4 months ago        456 MB
docker.io/mysql     latest              d435eee2caa5        5 months ago        456 MB

How to remove multiple docker containers with matching pattern

Docker provides a good amount of flexibility on the command line, and the commands can be combined with regular expressions and awk formatting to yield the relevant results.

[vamshi@node02 ~]$  docker image rm $(docker image ls | grep -w "^old-image" | awk {'print $3'} )

 

Various states of a docker containers

Docker container lifecycle

A container passes through the following states from the moment it is created from an image until it is removed from the docker engine: created, running, restarting, removing, paused, dead and exited.
All states apart from running and created serve no live purpose and still consume system resources, unless the containers are brought back into action through the docker start command.

We can easily perform filtering operation on the containers using the status flag:

# docker ps -f status=[created | dead | exited | running | restarting | removing]

Docker also allows removal of individual containers using the rm command; we can use docker container rm (or docker rm) to have docker delete a container.

[vamshi@node01 ~]$ docker container rm <container-id | container-name>
[vamshi@node01 ~]$ docker rm <container-id | container-name>

You can also use the -f flag to forcefully remove it.

# docker rm -f <container-id | container-name>

On large docker farms containing hundreds of containers, it is often practical to continually scan for stale containers and clean them up.

Clean up the docker containers which are in exited state.

# docker ps -q -f status=exited | xargs -I {} docker rm {}
# docker container ls --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}

 

List docker containers currently running:

[vamshi@node01 ~]$ docker container ls -f status=running

The docker subsystem also offers some built-in system commands to get the job done using docker's garbage collection mechanism.

Docker builds also leave behind remnants of older build data which have to be cleaned up at regular intervals on the docker engine host; see the sketch below.
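
On recent docker releases the leftover build cache and dangling intermediate images can be reclaimed as follows (a hedged example; docker builder prune requires Docker 18.09 or later):

docker builder prune -f
docker image prune -f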

 

How to print out the docker container pids:

docker ps -qa -f status=running | xargs docker inspect --format='{{ .State.Pid }}'

A docker one liner to clear up some docker stale cache:

[root@node01 ~]# docker container ls --filter "status=exited" --filter=status="dead" | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}