Vamshi Krishna Santhapuri

Experienced Operations Engineer with a demonstrated history of working in the computer software industry. Skilled in OpenShift, Kubernetes, Docker, DevOps practices, and Linux system administration. Strong engineering professional with a Bachelor of Technology (B.Tech.) focused in Computer Science from JNTUH.

Nginx reverse proxy setup for Kibana dashboard

How to set up an Nginx reverse proxy for Kibana

In this demonstration we will see how to set up a reverse proxy using the nginx web server in front of a Kibana backend.

We begin by installing the latest version of the nginx server on our CentOS machine:

$ sudo yum install nginx -y

The nginx package is present in the EPEL repository, which you have to enable:

$ sudo yum --enablerepo=epel install nginx -y

Once the nginx package is installed, we need to enable the daemon and start it with the following command:

[vamshi@node01 ~]$ sudo systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

We now create the nginx configuration file for the Kibana backend and place it at /etc/nginx/conf.d/kibana.conf as shown below:

We can set up the Restricted Access configuration for enhanced security, as shown below on the lines with auth_basic and auth_basic_user_file. You may skip the Restricted Access configuration if you believe it is not required.

[vamshi@node01 nginx]$ sudo cat conf.d/kibana.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

With the configuration in place, we now check the nginx config syntax using the -t option as shown below:

[vamshi@node01 nginx]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now restart the nginx server and head over to the browser.

$ sudo systemctl restart nginx

In your browser, enter the server IP or FQDN; you will be redirected to the URL http://your-kibana-server.com/app/kibana#/home
[Screenshot: Kibana home page]

Set up the htpasswd authorization config with user details.

We now install the htpasswd tool from the package httpd-tools as follows:

$ sudo yum install httpd-tools -y

Adding the Authorization details to our .htpasswd file.

[vamshi@node01 nginx]$ sudo htpasswd -c /etc/nginx/.htpasswd vamshi
New password: 
Re-type new password: 
Adding password for user vamshi

We have now successfully added the auth configuration.

[vamshi@node01 nginx]$ sudo htpasswd -n /etc/nginx/.htpasswd 
New password: 
Re-type new password: 
/etc/nginx/.htpasswd:$apr1$tlinuxcentMY-htpassEsHEEanL21

As the password hash is one-way encrypted, we cannot decode it; to change a password we generate a new hash.
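
If you need to add or update a user non-interactively, for example from a provisioning script, htpasswd also accepts the password on the command line with the -b option; be aware that the password then lands in your shell history:

$ sudo htpasswd -b /etc/nginx/.htpasswd vamshi 'new-password'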
Verifying the htpasswd logins with the curl command:

$ curl http://kibana-url -u<htpasswd-username>
[vamshi@node01 ~]$ curl kibana.linuxcent.com -uvamshi -I
Enter host password for user 'vamshi':
HTTP/1.1 302 Found
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:48:35 GMT
Content-Length: 0
Connection: keep-alive
location: /spaces/enter
kbn-name: kibana
kbn-license-sig: 2778f2f7e07680b7aefa85db2e7ce7bd33da5592b84cefe62efa8
kbn-xpack-sig: ce2a76732a2f58fcf288db70ad3ea
cache-control: no-cache

If you enter invalid credentials, you will encounter a 401 HTTP error code, restricting the unauthorized access.

HTTP/1.1 401 Unauthorized
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:51:36 GMT
Content-Type: text/html
Content-Length: 179
Connection: keep-alive
WWW-Authenticate: Basic realm="Restricted Access"

Now we head over to the browser to check the basic auth login prompt in action, as shown below:
http://your-kibana-server.com
[Screenshot: htpasswd login prompt]
Conclusion: With the htpasswd in place, you have an extra layer of authorized access to your sensitive URLs; in effect you now need to enter the htpasswd logins to access the same Kibana dashboard.

Mounting external volumes to a Jenkins docker container

Creating a docker volume

To use the external volume for our future container, we need to create a filesystem on the volume.
We use the ext4 filesystem to format our block device, demonstrated as follows:

vamshi@node03:~$ sudo mkfs.ext4 /dev/sdb
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: bc335e44-d8e9-4926-aa0a-fc7b954c28d1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Here is the command to create a volume, specifying the path to the block device and the filesystem type, using the local driver:

docker volume create jenkins_vol1 --driver local --opt type=ext4 --opt device=/dev/sdb
jenkins_vol1

We have successfully created a docker volume using a block device.

Inspecting the docker volume that is created:

vagrant@node03:~$ docker volume inspect jenkins_vol1
[
    {
        "CreatedAt": "2020-05-12T17:22:11Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins_vol1/_data",
        "Name": "jenkins_vol1",
        "Options": {
            "device": "/dev/sdb",
            "type": "ext4"
        },
        "Scope": "local"
    }
]

Creating the Jenkins container, which will use the docker volume jenkins_vol1 and mount it at /var/jenkins_home/.m2:

docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=jenkins_vol1,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest

We have successfully started our container; now let's log in to the container and check our volume.

jenkins@fc2c49313ddb:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
/dev/sdb       ext4     2.0G  6.0M  1.8G   1% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware

As we can see from the output, the mount point /var/jenkins_home/.m2 is mounted from the block device /dev/sdb with an ext4 filesystem:

/dev/sdb ext4 2.0G 6.0M 1.8G 1% /var/jenkins_home/.m2

Creating a 200MB temp filesystem volume.

docker volume create --name temp_vol --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=200m

The inspect output of the temp_vol we created is as follows:

vamshi@node03:~$ docker volume inspect temp_vol
[
    {
        "CreatedAt": "2020-05-02T17:31:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/temp_vol/_data",
        "Name": "temp_vol",
        "Options": {
            "device": "tmpfs",
            "o": "size=100m,uid=1000",
            "type": "tmpfs"
        },
        "Scope": "local"
    }
]
docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=temp_vol,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest
vamshi@node03:~$ docker exec -it jenkins bash
jenkins@2267ba462aa2:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
tmpfs          tmpfs    200M     0  200M   0% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware
jenkins@2267ba462aa2:/$ exit

Here it shows the mount point details:

tmpfs tmpfs 200M 0 200M 0% /var/jenkins_home/.m2

Please note the mount point /var/jenkins_home/.m2 which has 200MB space as defined.

Thus we can make use of docker volumes for persistent filesystems and attach block devices to a running container.
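
When the container and its data are no longer needed, the cleanup is a sketch like the following; the container must be removed before its volumes can be deleted:

$ docker rm -f jenkins
$ docker volume rm jenkins_vol1 temp_vol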

Create a user and grant privileges in a MySQL database

MySQL database user creation

The user creation process in MySQL is one of the most important steps in database administration.
Below we list some of the important terms of authentication and authorization with a practical demonstration.
Authentication is the process of gaining access to the database engine with active login credentials and a login request from a trusted source network.

Authorization determines which databases, or which tables within them, the user is allowed to access, in whole or in part.

In SQL administration, the user creation process involves authentication and authorization, with a practical implementation of a unique username identified by a password; the critical component is the source network identification when logging in from remote hosts. Granting permissions to specific databases, ensuring least privilege based on the desired role, is one of the best practices.

Let’s connect using root access to the MySQL Command-Line Tool

[vamshi@mysql01 linuxcent]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.19 MySQL Community Server - GPL


Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>


How to create a new MySQL user and grant privileges

Sample Syntax:

CREATE USER 'mysql_user'@'hostname' IDENTIFIED BY 'user_password';

It is important to understand that 'username'@'hostname' is a unique identification pattern for authenticating to the MySQL engine.

The hostname field accepts values such as an IP address, a network range like 10.100.0.0/24, or localhost.

Only the incoming requests matching the username and hostname will be allowed.

The syntax for creating a user on mysql goes as follows:

Enabling access for a source of localhost identified by the authentication information

CREATE USER 'vamshi'@'localhost' IDENTIFIED BY 'user_password';

Enabling access for a source IP range of the 10.100.0.0 network, identified with a /24 CIDR, followed by the authentication information

CREATE USER 'vamshi'@'10.100.0.0/24' IDENTIFIED BY 'user_password';

Enabling access from a specific source hostname, followed by the authentication information

CREATE USER 'vamshi'@'hostname' IDENTIFIED BY 'user_password';
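
To verify the accounts just created, you can query the user grant table directly; a quick check from the shell, assuming root access:

$ mysql -u root -p -e "SELECT user, host FROM mysql.user;"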

The first step of user access is done. Now we need to grant access to the databases, which gives the new user the privileges to perform actions on the DB.

Granting Privileges

This section deals with authorization.

At the mysql CLI prompt, you need to issue the GRANT command with the appropriate access permissions.

What are the privilege types in MySQL?

The GRANT statement authorizes actions like the ability to CREATE tables and databases, read or write FILES, and even SHUTDOWN the server.

The most commonly used privileges are:

  • ALL PRIVILEGES: Grants all privileges to a user account.
  • SELECT: The user account is allowed to read a database.
  • INSERT: The user account is allowed to insert rows into a specific table.
  • UPDATE: The user account is allowed to update table rows.
  • CREATE: The user account is allowed to create databases and tables.
  • DROP: The user account is allowed to drop databases and tables.
  • DELETE: The user account is allowed to delete rows from a specific table.
  • PROCESS: The user is allowed to get the information about the threads executing within the server
  • SHUTDOWN: The user is allowed to use the SHUTDOWN and RESTART statements within the server.

Now it’s time to grant the privileges to the new user on tables belonging to a database, or on all the tables in a given database.

Here’s what the Simple GRANT SQL statement looks like:

GRANT ALL PRIVILEGES ON Database_name.Table_name TO 'user'@'hostname';

Let’s break this down and understand what we just told MySQL to do.

GRANT ALL PRIVILEGES (all types of privileges) on only the given Database_name and Table_name to the user identified by 'user'@'hostname'.

The Database_name and Table_name can be replaced by the wildcard *, meaning every database and every table within the database respectively.

*.* to specify all databases on the server

database_name.* to specify all tables in one database

database_name.table_name to specify all columns of one table

The privileges are assigned to the user connecting from the source hostname, which can be an IP address, an IP address range like 10.100.0.0/24, a DNS name, or simply '%' to allow access from anywhere.

Now, for simplicity's sake, we can simulate that the user joe only needs access to operate on the sales table of the reports database.

GRANT ALL PRIVILEGES ON reports.sales TO 'joe'@'mysql2.linuxcent.com';

The above command provides login access only to joe from mysql2.linuxcent.com, with access to the sales table in the reports database.

Replacing the database name with the wildcard * provides privileges equivalent to super-user level access.

This can be demonstrated as follows:

GRANT ALL PRIVILEGES ON *.* TO 'vamshi'@'%';

Or

GRANT INSERT, UPDATE, DELETE ON reports.* to 'vamshi'@'%';

How to create another non-root MySQL DB super user

This is useful as a security measure when disabling root login to the MySQL engine.

GRANT ALL PRIVILEGES ON *.* TO 'vamshi_superuser'@'%';

Display MySQL User Account Privileges

To display the privileges granted to a specific MySQL user account, use the SHOW GRANTS command.

mysql> SHOW GRANTS FOR 'root'@'localhost' \G
*************************** 1. row ***************************
Grants for root@localhost: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT OPTION

Now, to compare, we can display the grants for the newly created user.
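
For example, for the user created earlier (the output will vary with the privileges you actually granted):

mysql> SHOW GRANTS FOR 'vamshi'@'%';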

Saving your changes to the MySQL database

The changes made so far are saved to the special grant tables in the mysql database. In total there are five such tables:

user
db
host
tables_priv
columns_priv

We commit the changes by issuing the FLUSH PRIVILEGES command at the mysql prompt:

mysql> flush privileges ;
Query OK, 0 rows affected (0.01 sec)

Initiating a docker swarm and getting the current docker swarm token

Creating a docker swarm cluster:

The docker swarm can be created by using the following command, whose syntax is defined as follows:

docker swarm init --advertise-addr [available interface IP adress]

The --advertise-addr flag is used to explicitly define the docker swarm advertise IP. If you have a single interface this option will not be needed, but it is really handy if you have more than one publicly accessible interface.
Let us initialize our docker swarm environment.

[vamshi@docker-swarm ~]$ docker swarm init --advertise-addr 10.100.0.20
Swarm initialized: current node (nodeidofmastercdq7nmmq3kcmb5l85k2e) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup \
    10.100.0.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The docker swarm creation can be viewed from the docker info command as follows:

[vamshi@docker-swarm ~]$ docker info | grep -C 2 Swarm
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: nodeidofmastercdq7nmmq3kcmb5l85k2e
 Is Manager: true

The docker swarm explicitly uses the overlay and macvlan drivers to enable inter-host network connectivity between containers over a swarm network.

How to get the docker swarm join token:

This command can come in very handy when you have forgotten your docker swarm token and need to join a new docker node to this docker swarm cluster.

[vamshi@docker-swarm ~]$ docker swarm join-token manager -q
SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup
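
The same applies for worker nodes; the worker join token can be printed in the same way:

[vamshi@docker-swarm ~]$ docker swarm join-token worker -q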

Redirect the stderr and stdout with 2>&1

The redirect operator > works together with the file descriptors: 1 for stdout and 2 for stderr.

command > [/dev/null] 2>&1

Here 2 represents stderr, and &1 refers to file descriptor 1, i.e. wherever stdout currently points. So 2>&1 sends everything written to stderr to the same destination as stdout, which in this example is /dev/null.

So the command demonstration will be the following:

$ du -sh /* > /dev/null 2>&1

This redirect command will dump the errors and the output to /dev/null.

Explanation: the default behaviour of the redirection operator is to redirect stdout, and we redirect it to /dev/null; we then follow up with 2>&1, which duplicates stderr (descriptor 2) onto descriptor 1. Since descriptor 1 already points to /dev/null, the errors land there as well. Note that the order matters: the stdout redirection must come first.
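
To see the two streams independently, here is a minimal sketch; the file names are arbitrary:

$ du -sh /* 1> output.log 2> errors.log    # stdout to output.log, stderr to errors.log
$ du -sh /* > all.log 2>&1                 # stdout to all.log, stderr follows it there
$ du -sh /* 2>&1 > stdout-only.log         # order matters: stderr was duplicated before the redirect, so it still prints to the terminal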


Docker Networking basics and the types of networks

Docker networking comprises a set of network drivers, including the overlay network, that enable communication among containers and with outside resources.

There are the following main types of built-in networking drivers: bridge, host, macvlan, overlay, and the null driver with no network.

The Docker Container Networking Model (CNM) architecture manages the networking for Docker containers.

IPAM, which stands for IP address management, works within a single docker node and aids in enabling network connectivity among the docker containers. Its primary responsibility is to allocate the IP address space for the subnets and to assign IP addresses to the endpoints and networks.

The networking in docker is essentially an isolated sandbox environment; this isolation of the networking resources is made possible by namespaces.

The overlay network enables communication spanning many docker nodes in an environment like a Docker swarm network. The same networking logic is evident in a bridge network, but a bridge is limited to a single docker host, unlike the overlay network.

Here’s the output snippet from the docker info command, listing the available network drivers:

# docker info
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay

Container networking enables connectivity between the docker containers, and also from the host machine to a docker container and vice-versa.

Listing the default networks in docker:

[vamshi@node01 nginx]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
68b2ffd36e8f        bridge              bridge              local
c1aca4c87a2b        host                host                local
d5e48683def8        none                null                local

When a container is created, it connects to the bridge network by default, unless extra arguments are specified; a user-defined network can also be created and attached, as shown below.
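
For example, a user-defined bridge network can be created and a container attached to it as follows (linuxcent-net is a hypothetical name):

$ docker network create --driver bridge linuxcent-net
$ docker run -d --name web01 --network linuxcent-net nginx

Containers on the same user-defined bridge can reach each other by container name through docker's embedded DNS.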

When you install docker, a docker0 virtual interface is created by default, which behaves as a bridge between the docker containers and the host system.

[vamshi@node01 nginx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242654b42ef	no

For the brctl command we need to install the bridge-utils package.

We now examine the docker networks with docker network inspect.

Inspecting the various docker networks:

Inspecting the bridge network:

The bridge network enables network connectivity between containers on a single docker host.

[vamshi@node01 nginx]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "68b2ffd36e8fcdc0c3b170dfdbdbc93bb58351d1b2c011abc80709928463f809",
        "Created": "2020-05-23T10:28:27.206979057Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

This bridge is shown with the ip addr command as follows:

# ip addr show docker0 
   docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:65:4b:42:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:65ff:fe4b:42ef/64 scope link 
       valid_lft forever preferred_lft forever

Inspecting the host network.

[vamshi@node01 ~]$ docker network inspect host 
[
    {
        "Name": "host",
        "Id": "c1aca4c87a2b3e7db4661e0cdedc97245cd5dfdc8aa2c9e6fa4ff1d5ecf9f3c1",
        "Created": "2019-05-16T18:46:19.485377974Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Inspecting the null driver network

[vamshi@node01 ~]$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "d5e48683def80b2e739b3be95e58fb11abc580ce29a33ba0df679a7a3972f532",
        "Created": "2019-05-16T18:46:19.477155061Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
 

The following are special networking architectures that span across multiple docker hosts, enabling network connectivity among docker containers on different hosts:

1. Overlay network

2. Macvlan network

Let us inspect the multi-host networking. The core components of the docker inter-host network are the overlay and macvlan drivers, which we examine below.

Inspecting the overlay network:

[vamshi@docker-master ~]$ docker network inspect overlay-linuxcent 
[
    {
        "Name": "overlay-linuxcent",
        "Id": "qz5ucx9hthyva53cydei0y8yv",
        "Created": "2020-05-25T13:22:35.087032198Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "overlay-linuxcent-endpoint",
                "EndpointID": "165beb97b22c2857e3637119016ef88e462a05d3b3251c4f66aa0fc9176cfe67",
                "MacAddress": "02:42:0a:ff:00:03",
                "IPv4Address": "10.255.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node01.linuxcent.com-95ad856b6f56",
                "IP": "10.100.0.20"
            }
        ]
    }
]

The endpoint is the virtual IP address that routes the traffic to the respective containers running on the individual docker nodes.
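
For reference, an overlay network like overlay-linuxcent can be created from a swarm manager node with a command like the following (a sketch; the --attachable flag is only needed if standalone containers should be able to join it):

$ docker network create --driver overlay --attachable overlay-linuxcent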

Inspecting the macvlan network:

[vamshi@docker-master ~]$ docker network inspect macvlan-linuxcent
[
    {
        "Name": "macvlan-linuxcent",
        "Id": "99c6a20bd4029ce5a37139c6e6792ec4f8a075c94b5f3e71efc32d92d41f3f89",
        "Created": "2020-05-25T14:20:00.655299409Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
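
A network like macvlan-linuxcent above can be created by binding it to a parent interface on the host; eth0 here is an assumption, substitute your own uplink interface:

$ docker network create --driver macvlan \
    --subnet=172.20.0.0/16 --gateway=172.20.0.1 \
    -o parent=eth0 macvlan-linuxcent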

Enabling ipv4 forwarding on a docker server

When ipv4 forwarding is not enabled on the linux host, it leads to hard-to-identify issues. Here is one such rare log from the system logs:

level=warning msg="IPv4 forwarding is disabled. N...t work."

It's good to check the current ipv4 forwarding rules as follows:

[root@LinuxCent ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0


You can also enable the change for the current session using the -w option:

sysctl -w net.ipv4.conf.all.forwarding=1

To make the changes persistent we need to write to a config file and enforce the system to read it.

[root@LinuxCent ~]# vi /etc/sysctl.d/01-rules.conf
net.ipv4.conf.all.forwarding=1

Then apply the changes to the system on the fly with the sysctl command, loading them from the system-wide config files:

# sysctl --system
The --system option tells sysctl to read all the configuration files system-wide.


[root@Linux1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /etc/sysctl.d/01-rules.conf ...
net.ipv4.conf.all.forwarding = 1
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@Linux1 ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1

Docker container volumes

The concept of docker is to run a compressed image as a container which serves its purpose, after which the container can be removed, leaving no trace of the data generated during its runtime. This exact case is referred to as an ephemeral container.
Docker storage is traditionally non-persistent, retaining only the data from the original image build; docker provides the facility to attach volume mounts to a running container for data storage, which manages the persistence issue to a certain extent.
With volumes we can store data and make it available to the container runtime environment as needed.

In practice, docker volumes are mounted to a docker image at runtime via command line arguments, as demonstrated below.

This implementation of docker volumes gives a running container a volume-binding capability, and docker volumes can be broadly categorized into two abilities:

(1) Volume mapping from the host to the target container; this is a directory mapping between the host and the container that is established at runtime.
(2) A named volume that can be shared among containers; it persists even if the container is deleted, and the same volume can be mounted again to another docker container.

The storage options offered by docker provide persistent storage of files, preserving them across docker container restarts and removal of the container.

The example for the host bind volume mapping is as follows:

# docker run -d -v src:target --name=container-name docker-image

The syntax of docker directory mount:

# docker run -it -v /usr/local/bin:/target/local --name=docker-container-with-vol  ubuntu:latest /bin/bash

We will invoke a container by mounting a host path to a target container path; here we mount the host directory nginx-html to /usr/share/nginx/html.

[vamshi@node01 ~]$ docker run -d --name my-nginx1 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 80:80 nginx:1.17.2-alpine
34797e6d8939e42bc8cfe36eed4b60521355edadc2fa6c74a26fe4172384575c

Now we log into the container and verify the contents

[vamshi@node01 ~]$ docker exec -it my-nginx1 sh
~ # hostname
34797e6d8939
~ # df -h /usr/share/nginx/html
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                40.0G     16.1G     23.9G  40% /usr/share/nginx/html

The df -h output inside the container shows the path /usr/share/nginx/html as a mount point.

Checking the contents of the webroot directory at /usr/share/nginx/html inside the container.

~ # cat /usr/share/nginx/html/index.html
<H1>Hello from LinuxCent.com</H1>

This is the same file which we have on our host machine, shared through the volume mount.
We verify using the curl command as follows:

~ # curl localhost
<H1>Hello from LinuxCent.com</H1>

We can also mount the same directory into as many containers as needed on our system, which can be an effective way of updating the static content used by our containers.

This is a bind operation offered by docker to mount a host directory into a container. This information can be seen by inspecting the container as follows:

Here is an extract snippet from docker inspect on the container my-nginx1:

            {
                "Type": "bind",
                "Source": "/home/vamshi/nginx/nginx-html",
                "Destination": "/usr/share/nginx/html",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

As we can see, the type of mount is depicted as a bind here.

The format option can also be used to filter the mounts section of the inspect output:

[vamshi@node01 ~]$ docker container inspect my-nginx1 -f="{{.Mounts}}"
[{bind /home/vamshi/nginx-html /usr/share/nginx/html true rprivate}]

The Dockerfile VOLUME expression

Using the Persistent docker volumes.

We now turn our attention to docker volume mounts, which are isolated storage resources in docker: persistent storage that can be reused and mounted to containers.

The VOLUME instruction can be used while writing a Dockerfile; it creates a docker image with the volume settings, which are then mounted into the container when it is started up.
We use the VOLUME expression in the Dockerfile:

VOLUME [ "my-volume01" ]

We now build the image and observe the output below.

Step 1/4 : FROM nginx-linuxcent
 ---> 55ceb2abad47
Step 2/4 : COPY nginx-html/index.html /usr/share/nginx/html/
 ---> c482aa15da5a
Removing intermediate container a621e114a01d
Step 3/4 : VOLUME my-volume01
 ---> Running in ac523d6a02f0
 ---> 72423fe5f27d
Removing intermediate container ac523d6a02f0
Step 4/4 : RUN ls /tmp
 ---> Running in 8fc1fbc0f0bb
 ---> 0f453e3cfff1
Removing intermediate container 8fc1fbc0f0bb

Here the volume is mounted at the path /my-volume01, an absolute path from /.

That is, my-volume01 will be mounted at /my-volume01 inside the container:

/ # df -Th /my-volume01/
Filesystem     Type     Size  Used Available Use% Mounted on
/dev/sda1      xfs     40.0G 15.1G     24.9G  38% /my-volume01

The information can be extracted by inspecting the container as follows:

# docker container inspect nginx-with-vol -f="{{.Mounts}}"
[{volume 7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19 /var/lib/docker/volumes/7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19/_data my-volume01 local true }]


If you would like the volume to be mounted at some other path, you can declare that in the Dockerfile VOLUME as below:

VOLUME [ "/mnt/my-volume" ]

The information can be extracted from the docker image by inspecting it for volumes:

            "Image": "sha256:72423fe5f27de1a495e5e875aec83fd5084abc6e1636c09d510b19eb711424cc",
            "Volumes": {
                "/mnt/my-volume": {},
                 }

The volumes defined in the Dockerfile VOLUME expression will not be visible with docker volume ls, as they are present within the scope of the specific docker container; they are displayed with anonymous hash names in the output. They can, however, be shared among other docker containers; we will look at sharing docker volumes in the following section. It is also important to understand that once a volume is explicitly deleted, its data cannot be recovered again.

Using the container volume from one container and accessing it in another container.

This option is beneficial when container access and debug tools are disabled and you need to view the logs of the container and run analysis on them.

Using the volumes from one container and mounting them into another container for auditing purposes:
We now create a container called view-logs which uses the volumes from another container called nginx-with-vol.

docker run -d --name view-logs --volumes-from nginx-with-vol debug-tools

Best Practices:
The view-logs container can have a set of debug and troubleshooting tools to view the logs of other app containers.

Creating a Volume from the docker commandline:

The docker volume resource has to be initialized first and can be done as follows:
Command to create a docker volume:
# docker volume create my-data-vol
[vamshi@node01 ~]$ docker volume inspect my-data-vol

[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data-vol/_data",
        "Name": "my-data-vol",
        "Options": {},
        "Scope": "local"
    }
]

We can use this volume and mount it to a container, just as we mounted the host directory in the earlier sections.

We will be using the persistent volume name and then mounting the docker volume to the target container path.
These options have a specific syntax and have to be specified exactly while running the mount operation.

Below is the syntax:
docker run --volume <volume-name>:</path/to/mountpoint/> image-name

We now have the docker volume available and mount it to the target container path /var/log/nginx as follows:
# docker run -d --name data-vol --volume my-data-vol:/var/log/nginx nginx-linuxcent:v4
2dfa965bbbc79a522e9c109ef8eee20bf47e2b61062f3b3df61d4eb677de4506

Verification:

The docker volumes on the host can be listed with the following command:

# docker volume ls

Check the mount information with docker volume inspect. Here is an extract from docker inspect on the container:

"Mounts": [
            {
                "Type": "volume",
                "Name": "my-data-vol",
                "Source": "/var/lib/docker/volumes/my-data-vol/_data",
                "Destination": "/var/log/nginx",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
             }

As we can see, the type of mount is depicted as a volume here.

Conclusion:
We have seen that mounting a host path to a target container path is a bind operation, and the dependency it creates is host affinity: the data is bound to a particular host, which should be avoided if you are dealing with more dynamic data exchange between containers over a network. It is, however, very useful for host-to-container data exchange.
The option with volumes is very dynamic and has far less binding dependency on the host machine. Volumes can be declared and used in two ways, as demonstrated in the sections above: the first is explicit volume creation, and the other is volume creation from within the docker build.
Explicit volume creation and binding gives you the scope to choose a mount point inside the container after the image is built.

The Dockerfile's VOLUME expression can be used to automatically define the volume name and the desired mount point, and it supports the same volume-sharing techniques for data exchange at runtime.

BASH “switch case” in Linux with practical example

The switch case in BASH is widely used among Linux admins/DevOps folks to leverage the power of control flow in shell scripts.

As we have seen in the if..elif..else..fi Control Structure: Bash If then Else, the switch case really simplifies the control flow by running a specific block of bash code based on the user selection or the input parameters.

Let’s take a look at the simple Switch case as follows:

OPTION=$1
case $OPTION in
choice1)
Choice1 Statements
;;

choice2)
Choice2 Statements
;;

choiceN)
ChoiceN Statements
;;

*)
echo "User Selected Choice not present"
exit 1

esac

The OPTION is generally read from user input, and based on it the matching case block is invoked.

Explanation:
In the switch case, the control flow is forwarded to the case keyword, which checks for a suitable match to pass control to the relevant OPTION/CHOICE statement block. Upon execution of the relevant CHOICE statements, the case is exited when the control flow reaches the esac keyword at the end.
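
As a minimal runnable sketch, here is a hypothetical service-control script that selects a block based on its first argument:

#!/bin/bash
ACTION=$1
case $ACTION in
start)
    echo "Starting the service"
    ;;
stop)
    echo "Stopping the service"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac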

Using the pattern match
The control flow in bash identifies the case options and proceeds accordingly.
There can be cases where you match more than one pattern: note the glob patterns and the logical OR operator | used for the case options in the script below.

#! /bin/bash

echo -en "Enter your logins\nUsername: "
read user_name 
echo -en "Password: "
read user_pass 
while [ -n "$user_name" -a -n "$user_pass" ]
do

case $user_name in
    ro*|admin)
        if [ "$user_pass" = "Root" ];
        then
            echo -e "Authentication succeeded \ n You Own this Machine"
	    break
        else
            echo -e "Authentication failure"
            exit
        fi
    ;;
    jenk*)
	if [ "$user_pass" = "Jenkins" ];
	then
		echo "Your home directory is /var/lib/jenkins"
	    	break
	else
        	echo -e "Authentication failure"
	fi
        break
    ;;
    *)
        echo -e "An unexpected error has occurred."
        exit
    ;;
esac

done

Kindly note the glob patterns used for the cases: ro*|admin and jenk*.

We now demonstrate by entering the username jenkins; this gets matched by the jenk* case and the control flow enters the relevant block of code. Whether the password matches is not relevant here, as we are only concerned with the case selection.
We have saved the switch case into a script named switch-case.sh and run it. Here are the results.

OUTPUT :

[vamshi@node02 switch-case]$ sh switch-case.sh
Enter your logins
Username: jenkins
Password: Jenkins
Your home directory is /var/lib/jenkins

We have entered the correct password, and the jenkins case block statements run successfully.

We shall also look at the ro*|admin case, demonstrated as follows.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: root
Password: Root
Authentication succeeded
You Own this Machine

We now test the admin username and see the results.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: admin
Password: Root
Authentication succeeded
You Own this Machine

Here is a more advanced script used to deploy a python application using the switch case.
Please refer to the command line arguments section for user input.

A complete functional Bash switch case can be seen at https://github.com/rrskris/python-deployment-script/blob/master/deploy-python.sh

Please feel free to share your experiences in comments.

Control Structure: Bash If then Else

Bash, being a scripting language, does offer the conditional if else construct; we shall look at it in the following sections.

Firstly, a conditional check has to be performed in order for the corresponding block of code to be executed.

To break down the semantics of conditional control structures in BASH, we need to understand the conditional keyword that performs the validation: it is most commonly represented as “[” and very rarely as the “test” keyword.

It can be better understood by the following demonstration:

vamshi@linux-pc:~/Linux> [ 1 -gt 2 ]
vamshi@linux-pc:~/Linux> echo $?
1
vamshi@linux-pc:~/Linux>
vamshi@linux-pc:~/Linux> [ 1 -lt 2 ]
vamshi@linux-pc:~/Linux> echo $?
0

The [ is synonymous with the test command on Linux.

vamshi.santhapuri@linux-pc:~/Linux> test 1 -gt 2

vamshi.santhapuri@linux-pc:~/Linux> echo $?
1
vamshi.santhapuri@linux-pc:~/Linux> test 1 -lt 2
vamshi.santhapuri@linux-pc:~/Linux> echo $?
0

We shall now look at the different variations of conditional control structures.

  1. if then..fi

    if [ Condition ] ; then
    
    statement1...statementN
    
    fi
  2. if then..else..fi

    if [ Condition ] ; then
    
        If Block statements
    
    ...
    
    else
        else-Block statement
    
    fi
  3. if..then..elif then..elifN then..fi

    if [ Condition ] ; then
    
        If Block statement1
    
    ...
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    
    elif [ elif Condition ]; then    # 2nd elif Condition
    
        elif Block statements
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statements
    
    fi

    An else can also be appended for when all the if and elif conditions fail, which we will see below.


  4. if..then..elif then..elifN then..else..fi

    The “if elif elif else fi” control structure is like a multiple-test control diversion strategy in bash. It gives the user the power to write as many test conditions as needed until a test condition matches, resulting in the corresponding block of code being executed. Writing many elif branches can be a tedious task, and the switch case is mostly preferred instead.

    if [ Condition ] ; then
    
        If Block statement
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statement
    
    ...
    
    else    # the else block gets control when none of the if or elif conditions are true
    
        else Block statements
    
    fi

    At least one of the block statements is executed in this control flow, similar to a switch case. The else block here takes the default case when none of the if or elif conditions match; a concrete runnable example follows this list.

  5. Nested if then..fi Control structure Blocks

    Adding to if..elif..else, there is also the nested if block, wherein nested conditions are validated. This can be demonstrated as follows:

    if [ condition ]; then
    
        Main If Block Statements
    
        if [ condition ]; then # 1st inner if condition
    
            1st Inner If-Block statements
    
            if [ condition ]; then # 2nd inner if condition
    
                2nd Inner If-Block statements
              
                if [ condition ]; then 
                    Nth Inner If Block statements 
    
                fi
    
            fi
    
        fi
    
    fi

    This logic of nested ifs is used when dealing with scenarios where the outermost block of statements must be validated first; if that test succeeds, the control flow passes to the innermost if test statement execution. Thus the name nested if.
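
Here is the small runnable example of the if..then..elif..else..fi structure promised above; it classifies an integer passed as the first argument:

#!/bin/bash
NUM=$1
if [ "$NUM" -gt 0 ]; then
    echo "$NUM is positive"
elif [ "$NUM" -lt 0 ]; then
    echo "$NUM is negative"
else
    echo "$NUM is zero"
fi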


Here is the switch case bash script with practical explanation.
We will look at the exit codes within BASH in the next sections.

Managing Docker disk space

We come across the challenge of managing the docker engine and its disk space consumption in the long run.

To effectively manage its resources we have some of the best options; let us take a look at them in this tutorial.

How to identify the details of disk space usage in docker ?

[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   3                   1.442 GB            744.2 MB (51%)
Containers          3                   1                   2.111 MB            0 B (0%)
Local Volumes       7                   1                   251.9 MB            167.8 MB (66%)

These commands print the details of image, container, and local volume space usage; the -v flag adds the complete per-object verbose breakdown.

How to clean up space on Docker?

[root@node02 vamshi]# docker system prune [ -a | -f ]

The option -a removes all the unused images and the stale containers/images, and -f forcefully removes them without prompting for confirmation.
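
For example, a non-interactive cleanup could look as follows; note that docker system prune does not reclaim volumes by default, so they can be pruned separately (or with the --volumes flag on newer docker versions):

[root@node02 vamshi]# docker system prune -a -f
[root@node02 vamshi]# docker volume prune -f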

How to Remove docker images?

The docker images can be removed using the docker image rm <image-id | image-name> command.

The command docker rmi is most commonly used, as is docker image rm, which is easier to read and self-explanatory.

[root@node02 vamshi]# docker rmi

The docker images which are dangling, and those without any tags, can be filtered out using the syntax below and removed to save some file system space.

We can list out the docker images that are dangling using the filter option as shown below:

# docker images -f "dangling=true"

From the list of images received from the above command, we pass only the image IDs to the docker image rm command as shown below:

[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")

How to list multiple docker images with a matching pattern?

[vamshi@node02 ~]$ docker image ls mysql*
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rrskris/mysql       v1                  1ab47cba1d63        4 months ago        456 MB
rrskris/mysql       v2                  3bd34czc2b90        4 months ago        456 MB
docker.io/mysql     latest              d435eee2caa5        5 months ago        456 MB

How to remove multiple docker images with a matching pattern

Docker provides a good amount of flexibility on the command line, and its commands can be combined with grep and awk formatting to yield the relevant results.

[vamshi@node02 ~]$ docker image rm $(docker image ls | grep -w "^old-image" | awk '{print $3}')