Control Structure: Bash If then Else

Bash, being a scripting language, offers conditional if-else constructs. We shall look at them in the following sections.

Firstly, a conditional check has to be performed in order for the corresponding block of code to be executed.

To break down the semantics of conditional control structures in Bash, we need to understand the conditional keyword that performs the validation. It is most commonly represented as “[” and rarely as the “test” keyword.

It can be better understood by the following demonstration:

vamshi@linux-pc:~/Linux> [ 1 -gt 2 ]
vamshi@linux-pc:~/Linux> echo $?
1
vamshi@linux-pc:~/Linux>
vamshi@linux-pc:~/Linux> [ 1 -lt 2 ]
vamshi@linux-pc:~/Linux> echo $?
0

The [ is synonymous with the test command; on Linux both exist as shell builtins as well as standalone binaries.

vamshi.santhapuri@linux-pc:~/Linux> test 1 -gt 2

vamshi.santhapuri@linux-pc:~/Linux> echo $?
1
vamshi.santhapuri@linux-pc:~/Linux> test 1 -lt 2
vamshi.santhapuri@linux-pc:~/Linux> echo $?
0
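Both forms behave identically inside an if statement; a minimal sketch:

```shell
#!/bin/bash
# The "[" builtin and the "test" builtin evaluate the same expression
if [ 1 -lt 2 ]; then
  echo "bracket form: true"
fi

if test 1 -lt 2; then
  echo "test form: true"
fi
```

Note that [ requires a matching closing ] as its last argument, while test does not.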

We shall now look at the different variations of conditional control structures.

  1. if then..fi

    if [ Condition ] ; then
    
    statement1...statementN
    
    fi
  2. if then..else..fi

    if [ Condition ] ; then
    
        If Block statements
    
    ...
    
    else
        else-Block statement
    
    fi
  3. if..then..elif then..elifN then..fi

    if [ Condition ] ; then
    
        If Block statement1
    
    ...
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    
    elif [ elif Condition ]; then    # 2nd elif Condition
    
        elif Block statements
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statements
    
    fi

An else block can also be appended to handle the case when all the if and elif conditions fail, which we will see in the next variation.

     

  4. if..then..elif then..elifN then..else..fi

The “if elif elif else fi” control structure is a multiple-test control diversion strategy in Bash. It gives the user the power to write as many test conditions as needed until a condition matches, resulting in the corresponding block of code being executed. Writing many elif branches can be a tedious task, so a switch-case (case statement) is often preferred instead.

    if [ Condition ] ; then
    
        If Block statement
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statement
    
    ...
    
    else   # the else block gets control when none of the if or elif conditions are true
    
        else Block statements
    
    fi

    At least one of the block statements is executed in this control flow, similar to a switch case. The else block here serves as the default case when none of the if or elif conditions match.

  5. Nested if then..fi Control structure Blocks

    In addition to if..elif..else, there is also the nested if block, wherein inner conditions are validated, as demonstrated below:

    if [ condition ]; then
    
        Main If Block Statements
    
        if [ condition ]; then # 1st inner if condition
    
            1st Inner If-Block statements
    
            if [ condition ]; then # 2nd inner if condition
    
                2nd Inner If-Block statements
              
                if [ condition ]; then 
                    Nth Inner If Block statements 
    
                fi
    
            fi
    
        fi
    
    fi

    This logic of nested ifs is used in scenarios where the outermost condition must be validated first; if that test succeeds, control flow passes on to the inner if tests. Hence the name nested if.
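Tying the above forms together, here is a minimal runnable sketch of if..elif..else..fi (the score value is illustrative):

```shell
#!/bin/bash
# Conditions are walked top to bottom; the first match wins, else is the default
score=75

if [ "$score" -ge 90 ]; then
  echo "grade A"
elif [ "$score" -ge 70 ]; then
  echo "grade B"
elif [ "$score" -ge 50 ]; then
  echo "grade C"
else
  echo "fail"
fi
# prints: grade B
```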

 

A practical explanation of the Bash switch-case (case statement), and of exit codes within Bash, will follow in later sections.

Managing Docker disk space

In the long run we come across the challenge of managing the Docker engine and its disk space consumption.

To effectively manage its resources we have several good options; let us take a look at them in this tutorial.

How to identify the details of disk space usage in Docker?

[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   3                   1.442 GB            744.2 MB (51%)
Containers          3                   1                   2.111 MB            0 B (0%)
Local Volumes       7                   1                   251.9 MB            167.8 MB (66%)

This command prints complete, verbose details of image space usage, container space usage and local volume space usage.

How to Clean up space on Docker ?

[root@node02 vamshi]# docker system prune [ -a | -f ]

The -a option removes all the unused images and stale containers,
and -f forcefully removes the unused images and stale containers without prompting for confirmation.

How to Remove docker images?

The docker images can be removed using the docker image rm <image-id | image-name> command.

The docker rmi command is most commonly used, along with docker image rm, which is easier to read and self-explanatory.

[root@node02 vamshi]# docker rmi

The docker images which are dangling, and those without any tags, can be filtered out using the below syntax and removed to save some filesystem space.

We can list out the docker images that are dangling using the filter option as shown below:

# docker images -f "dangling=true"

From the list of images received from the above command, we pass only the image IDs to the docker image rm command, as shown below:

[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")

How to list multiple docker images with matching pattern ?

[vamshi@node02 ~]$ docker image ls mysql*
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rrskris/mysql       v1                  1ab47cba1d63        4 months ago        456 MB
rrskris/mysql       v2                  3bd34czc2b90        4 months ago        456 MB
docker.io/mysql     latest              d435eee2caa5        5 months ago        456 MB

How to remove multiple docker images with matching pattern

Docker provides a good amount of flexibility on the command line and can be combined with grep regular expressions and awk formatting to yield relevant results.

[vamshi@node02 ~]$ docker image rm $(docker image ls | grep -w "^old-image" | awk '{print $3}')

 

Various states of a docker containers

Docker container lifecycle

A container goes through the following stages from the moment it is created from an image until it is removed from the Docker engine: created, running, restarting, paused, removing, exited and dead.
All statuses apart from running and created serve no live purpose yet still consume system resources, unless the containers are brought back into action through the docker start command.

We can easily perform filtering operation on the containers using the status flag:

# docker ps -f status=[created | dead | exited | running | restarting | removing]

Docker also allows removal of individual containers using the rm command; we can use either docker container rm or docker rm to delete a container:

[vamshi@node01 ~]$ docker container rm <container-id | container-name>
[vamshi@node01 ~]$ docker rm <container-id | container-name>

You can also use the -f flag to forcefully remove it.

# docker rm -f <container-id | container-name>

On large Docker farms containing hundreds of containers, it is often a practical approach to continually scan for stale containers and clean them up.

Clean up the docker containers which are in exited state.

# docker ps -q -f status=exited | xargs -I {} docker rm {}
# docker container ls --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}

 

List docker containers currently running:

[vamshi@node01 ~]$ docker container ls -f status=running

The Docker subsystem also offers internal system commands to get the job done using Docker’s garbage collection mechanism.

Docker image builds also leave behind remnants of older build data, which have to be cleaned up at regular intervals on the Docker engine host.

 

How to print out the Docker container PIDs

docker ps -qa -f status=running | xargs docker inspect --format='{{ .State.Pid }}'

A Docker one-liner to clear up stale containers:

[root@node01 ~]# docker container ls -q --filter "status=exited" --filter "status=dead" | xargs --no-run-if-empty -I {} docker rm {}

What is Docker and what is inside a Docker image?

A Docker container is an isolated, independent user-space instance sharing the host kernel, which means any number of Docker instances can run independent applications.

Docker containers by design are isolated application runtime environments, using the common host system resources constrained through cgroups, with a filesystem built from the layer tarballs that make up a Docker image.

All this is possible because of kernel namespaces, which provision the PIDs and manage port ranges, filesystem mounts and networking, and enable the most astonishing feature: having root privileges inside the container but not outside of it, aided by chroot-like filesystem isolation.
The Docker storage implements the concept of the copy-on-write (COW) layered filesystems.

Each container gets its own network isolation.

Thus containers are more lightweight than a VM. On the back end this works by using a chroot-ed filesystem with its own hierarchy, much like its predecessor LXC.

Control groups (cgroups): group resources together and then apply limits on block I/O, memory and CPU.
Namespaces: take the system-wide resources, wrap them, and provide them as an isolated environment to the instances.

By using a container you don’t really have to install a full OS, avoiding repeated effort, and you are not consuming disk space repeatedly for the same OS files.

There’s only a single kernel which will be shared by multiple docker containers.

In this post we will explain some of the practical Docker use cases and commands :

There are two parts to the Docker Engine in terms of user interaction:
one being the Docker daemon, and the other being the Docker client, which sends commands to interact with the Docker daemon.

How to build a Docker image?

# docker build -t <name>:<version-number> -f Dockerfile .

The . at the end is important because it signifies the build context (the current directory); the context cannot span backwards, so paths outside it cannot be referenced.

The --no-cache option is important when building container images that depend on downloading the latest libraries from the internet, or, practically, from your on-premise code repository containing freshly compiled code artifacts.
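As an illustration, a hypothetical Dockerfile of this kind (the repository URL and artifact name are made up) shows why caching gets in the way: the RUN layer is reused from cache on rebuilds, so a newer artifact at the same URL is only fetched when --no-cache is passed.

```dockerfile
# Hypothetical build: the artifact URL always serves the newest build output.
FROM centos:7
# This layer is cached after the first build; a changed artifact at the same
# URL is only downloaded again when the build runs with --no-cache.
RUN curl -fsSL -o /opt/app.jar http://repo.example.local/artifacts/app-latest.jar
CMD ["java", "-jar", "/opt/app.jar"]
```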

Build the Docker image with no caching:

# docker build --no-cache -t frontend-centos-lc:dev0.1 -f Dockerfile .

Once the Docker image is successfully built, we can take a look at the newly created image.
Creating a Docker image from a scratch root filesystem is also a good option for building a base image, as it gives you the freedom to package the libraries you wish and have complete control over the image.

List docker images command

# docker images
# docker image ls

What is present inside a Docker image?

Images are composed of multiple layers which form a union filesystem, with each stage of the build command creating an interdependent layer. The base image is in most cases a minimal rootfs: a stripped-down version of a Linux root filesystem. You can find more details here: Building a Docker image from a root filesystem.
We run the docker image inspect command on an image to describe various build-related details:

# docker image inspect <image-name | image-id >

Example given:

root@node03:/home/vamshi# docker images nexusreg.linuxcent.com:8123/ubuntu-vamshi:v1 --no-trunc
REPOSITORY                                       TAG                 IMAGE ID                                                                  CREATED             SIZE
nexusreg.netenrichcloud.com:8088/ubuntu-vamshi   v1                  sha256:9a0b6e4f09562a0e515bb2a0ef2eca193437373fb3941c4956e13a281fe457d7   6 months ago        354MB
root@node03:/home/vamshi# 

The layers could historically be listed with the --tree option:

# docker container inspect 73caf780c813
# docker images --tree

The --tree option is deprecated; the history command is now used to provide the image layer details:

# docker history <image layer id>

The images are stored under /var/lib/docker/<storage-driver> and can be viewed there; the filesystem contains the layer hashes, with the container filesystems organized in sub-directories.

Use the docker tag command to tag existing Docker images with a meaningful repository name and append a version tag.

Example of the docker tag command:

# docker tag frontend-centos-nginx:dev0.1 my-repo:8123/frontend-nginx:v0.1

Run the docker images command again to see the newly tagged image present.

We use the docker push command to upload the image to the Docker registry, which is a remote Docker repository.

# docker push <docker Registry_name>/<image-name>:<version>
# docker push my-repo:8123/frontend-nginx:v0.1

 

Create multiple files and directories at once in Windows command line

The Windows command line, also known as cmd, provides CLI interaction with the Windows operating system.

Windows PowerShell, which has been made open source, provides cross-platform compatibility and can also be installed on Linux.

The command to create a file in Powershell:

PS /home/vamshi> New-Item -ItemType file testfile.txt

There is another, simpler way to create a file in Windows, also often used on Linux/Unix:

PS /home/vamshi> echo " " > testfile2.txt
PS /home/vamshi> dir

Directory: /home/vamshi

Mode LastWriteTime Length Name
---- ------------- ------ ----
----- 04/25/2020 07:40 2 testfile.txt
----- 04/25/2020 07:40 2 testfile2.txt

Command to create a Directory in Powershell.

PS /home/vamshi> New-Item -ItemType directory testDir

How to create multiple files in Windows using Command-line?

Write down the list of all filenames in a file filenames.txt as below:

PS /home/vamshi> cat filenames.txt
file1
file2
file3
file4

To achieve this we use a foreach loop:

PS /home/vamshi> cat filenames.txt | foreach-object -process { echo "" > $_ }
PS /home/vamshi> dir

Directory: /home/vamshi

Mode LastWriteTime Length Name
---- ------------- ------ ----
----- 04/25/2020 07:49 1 file1
----- 04/25/2020 07:49 1 file2
----- 04/25/2020 07:49 1 file3
----- 04/25/2020 07:49 1 file4

The foreach command can be used along with New-Item -ItemType as below:

PS /home/vamshi> cat filenames.txt | foreach-object -process { New-Item -ItemType file $_ }

Create multiple Directories in windows:

PS /home/vamshi> cat ./dirs.txt
dir1
dir2
dir3
PS /home/vamshi> cat ./dirs.txt | foreach-object -process { New-Item -ItemType directory $_ }
PS /home/vamshi> dir
Directory: /home/vamshi
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 04/25/2020 07:54 dir1
d---- 04/25/2020 07:54 dir2
d---- 04/25/2020 07:54 dir3

Reset Gitlab password from cli

How to change the user password on gitlab?

Here is the demonstration to reset the gitlab password:

We connect to the gitlab-rails console to reset the password of a user.

In this demonstration we are going to reset the password of the user called root, whose uid is 1.

[vamshi@gitlab ~]$ sudo gitlab-rails console -e production
-------------------------------------------------------------------------------------
GitLab: 11.10.4-ee (88a3c791734)
GitLab Shell: 9.0.0
PostgreSQL: 9.6.11
-------------------------------------------------------------------------------------
Loading production environment (Rails 5.0.7.2)
irb(main):001:0> User.where(id:1).first
=> #<User id:1 @root>

Now we can confirm that uid 1 belongs to the root user; this is the account whose password we want to reset, so we assign it to a variable:

irb(main):001:0> user=User.where(id: 1).first
=> #<User id:1 @root>

Now enter your new password and follow up with the password confirmation:

irb(main):003:0> user.password = 'AlphanumericPassword'
=> "AlphanumericPassword"
irb(main):004:0> user.password_confirmation = 'AlphanumericPassword'
=> "AlphanumericPassword"

Now save the password

irb(main):005:0> user.save
Enqueued ActionMailer::DeliveryJob (Job ID: 961d30a2b-df21-45c8-83e6-1993c85e6030) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x10007feafe3d64e0 @uri=#<URI::GID gid://gitlab/User/1>>
=> true

Then we exit the interactive Ruby shell, but only after saving the changes.

Connect to remote Docker server on tcp port

The Docker master is where the Docker server/engine daemon runs. There is strategic importance in maintaining a unique Docker server for build and deployment during continuous release cycles: Docker clients, such as the Jenkins CI/CD server and other Docker hosts, connect to this master, ensuring credibility and atomicity of the Docker build process. Most of the time a dynamic Docker agent from a Jenkins build can connect to it and execute the Docker builds.
The Docker master is the server where the build images are initially created when you run the docker build command during the continuous build process.

To make a Docker instance the Docker master, you need the following.

Have an up-to-date Docker daemon running, with a good amount of disk space for the mount point /var/lib/docker.

Next up, in the file /etc/sysconfig/docker, add the line OPTIONS="-H tcp://0.0.0.0:4243" at the end of the file.
As this Docker master is running on a CentOS machine, we have the file path /etc/sysconfig/docker;
on Ubuntu/Debian the file path would be /etc/default/docker.
Then restart the Docker daemon as follows:

[vamshi@docker-master01 ~]$ sudo systemctl restart docker

Confirm the changes with the ps command as follows:

[vamshi@docker-master01 ~]$  ps -ef | grep docker
root 2556 1 0 16:09 ? 00:00:05 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json -H tcp://0.0.0.0:4243 --storage-driver overlay2

Connecting to Docker master from client on TCP

Now the changes we got to make on the docker client are as follows:

Make sure the Docker daemon on the client is stopped and disabled; the following command does both at once:

[vamshi@jenkins01 ~]$ sudo systemctl disable docker --now

From the Docker client, we test and establish the connection to the Docker server over TCP port 4243:

[vamshi@jenkins01 ~]$ docker -H tcp://10.100.0.10:4243 version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: cccb291/1.13.1
Built: Tue Mar 3 17:21:24 2020
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64
Experimental: false

Now that we have confirmed the successful connection from the client to the Docker master server, we can make the change permanent by exporting DOCKER_HOST in the system profile.
On the Docker client (here: our Jenkins server) we export DOCKER_HOST as an environment variable:

[vamshi@jenkins01 ~]$ sudo sh -c 'echo "export DOCKER_HOST=\"tcp://10.100.0.10:4243\"" > /etc/profile.d/docker.sh'
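To sanity-check the drop-in, the file can be sourced in the current shell and the variable echoed (a quick sketch):

```shell
# Load the new profile drop-in and confirm the client now targets the master
. /etc/profile.d/docker.sh
echo "$DOCKER_HOST"
# prints: tcp://10.100.0.10:4243
```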

Now we see the results as our docker client is able to connect to the master.

[vamshi@jenkins01 ~]$ docker version

Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: cccb291/1.13.1
Built: Tue Mar 3 17:21:24 2020
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64
Experimental: false

You might generally face an error saying: Cannot connect to the Docker daemon at unix:///var/run/docker.sock
This is generally caused by not having the privileges to access /var/run/docker.sock; the socket must be owned by the docker group. See https://linuxcent.com/cannot-connect-to-the-docker-daemon-at-unix-var-run-docker-sock-is-the-docker-daemon-running/ on changing the group ownership for unix:///var/run/docker.sock
The solution is to add your user to the docker group:
# usermod -aG docker <username>

The best way to identify this issue is to run the docker info and docker version commands.

# docker version

The docker version command output has two sections.
The first section describes the client information, which is your workstation.

The second part of the output describes the server-side information.
When the daemon is unreachable, the server section of the output shows the error instead, for example:

# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
(or)
Cannot connect to the Docker daemon at tcp://<docker-server-ip>:4243. Is the docker daemon running?

Either of these means that the target server is not running.
Verify by running ps -ef | grep docker

# docker info

This presents complete information about the Docker system.
In case of a TCP connection outage, or if the server is not running, this command yields no system information; instead, the output describes the error details.

It is a best practice to have a docker group created on the server and to make the user part of the docker group.

# sudo groupadd docker

And add the current user as part of the docker group.

# sudo usermod -aG docker $USER

Rename files in Linux

The Linux mv command has a rich feature set. It can be used to rename files and directories, and also to relocate content, helping better organize the files and directories on a Linux OS.

Syntax of mv command:

$ mv [OPTIONS] </path/to/Source> </path/to/Destination>

How to rename a single file

The rename operation in Linux is done using the mv command:

[vamshi@linuxcent mv]$ ls
demo-today.txt
[vamshi@linuxcent mv]$ mv demo-today.txt demo-old.txt
[vamshi@linuxcent mv]$ ls
demo-old.txt

Here the file demo-today.txt has been renamed to demo-old.txt

How to move or relocate multiple files and directories at once into a Destination Directory

Our DemoProject directory contains the following content:

[vamshi@node02 DemoProject]$ ls
admin api core LICENSE mvnw mvnw.cmd pom.xml README.md site

We are interested in moving only the selected directories core/, site/, admin/ and the file pom.xml to the target destination /tmp/Demo-test/. We can achieve this using the -t (--target-directory=) option:

[vamshi@node02 DemoProject]$ mv -vi core/ site/ admin/ pom.xml -t /tmp/Demo-test/
‘core/’ -> ‘/tmp/Demo-test/core’
‘site/’ -> ‘/tmp/Demo-test/site’
‘admin/’ -> ‘/tmp/Demo-test/admin’
‘pom.xml’ -> ‘/tmp/Demo-test/pom.xml’

As a result we have successfully moved the selected content:

[vamshi@linuxcent DemoProject]$ ls /tmp/Demo-test/
admin core pom.xml site

Renaming multiple files with extensions

Here’s what we will be demonstrating in this tutorial: we will use a combination of tools like cut together with a for loop to accomplish the task iteratively.

For simplicity’s sake, let’s consider we have 10 files ending with the .txt extension, as seen below:

[vamshi@linuxcent ~]$ ls
file10.txt file1.txt file2.txt file3.txt file4.txt file5.txt file6.txt file7.txt file8.txt file9.txt
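As a side note, sample files like these can be created in one shot with bash brace expansion:

```shell
# Creates file1.txt through file10.txt in the current directory
touch file{1..10}.txt
```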

We will now rename them with an extension of .html for all the files, as demonstrated below:

[vamshi@node02 source]$ for i in *.txt; do mv "$i" "$(echo "$i" | cut -d'.' -f1).html"; done
[vamshi@linuxceent ~]$ ls
file10.html file1.html file2.html file3.html file4.html file5.html file6.html file7.html file8.html file9.html
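The subshell and cut call can also be replaced with bash parameter expansion, which strips the old suffix directly; a minimal sketch:

```shell
#!/bin/bash
# ${f%.txt} removes the trailing .txt so we can append .html instead
for f in *.txt; do
  [ -e "$f" ] || continue   # skip when the glob matches nothing
  mv -- "$f" "${f%.txt}.html"
done
```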

Using the rename command to rename the file extensions.

The Linux rename command (the util-linux variant found on Red Hat-based systems) takes three arguments: the expression to replace, the replacement, and the files to operate on.

We have here 10 files with the .html extension, which we rename to .doc:

[vamshi@linuxcent ~]$ rename .html .doc *
[vamshi@linuxcent ~]$ ls
file10.doc file1.doc file2.doc file3.doc file4.doc file5.doc file6.doc file7.doc file8.doc file9.doc

We might also have files in other extension formats, and we can convert their extensions in the following way.
Suppose we have 3 files with the .txt extension, file11.txt, file12.txt and file13.txt, and the remaining files with the .doc extension; they can all be renamed to .html. Since rename accepts only one expression/replacement pair per invocation, we run it once per extension:

[vamshi@linuxcent ~]$ ls
file10.doc file11.txt file12.txt file13.txt file1.doc file2.doc file3.doc file4.doc file5.doc file6.doc file7.doc file8.doc file9.doc
[vamshi@linuxcent ~]$ rename .doc .html *.doc
[vamshi@linuxcent ~]$ rename .txt .html *.txt
[vamshi@linuxcent ~]$ ls
file10.html file11.html file12.html file13.html file1.html file2.html file3.html file4.html file5.html file6.html file7.html file8.html file9.html
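Alternatively, a small shell loop converts both extensions in one pass; ${f%.*} drops whatever the last extension is (a sketch, assuming the loop runs in the directory holding the files):

```shell
#!/bin/bash
# Convert every .doc and .txt file to .html in a single loop
for f in *.doc *.txt; do
  [ -e "$f" ] || continue   # unmatched globs stay literal; skip them
  mv -- "$f" "${f%.*}.html"
done
```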

Check Linux version

The Linux OS has many distributions, and various versions with slight modifications in the kernel; identifying the major and minor version is crucial for practical administration of a Linux server. Linux being open source, the release cycles are continuous, resulting in many changes, with long-term and short-term release cycles in well-known distributions like CentOS and Debian. A special mention goes to the desktop-oriented versions of Fedora/Ubuntu/Arch/openSUSE Leap and Tumbleweed, where the fan following is heavy and the release cycles are very aggressive, with new versions releasing every few weeks.

In this section we will see how to check the Linux version on the most popular distributions.
This will be handy during the kernel patch process to identify the current Linux version.
Let’s list out some practical Linux commands and scenarios.

Checking the Linux OS Version using /etc/os-release

[vamshi@node02 ]$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
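Since /etc/os-release uses plain shell variable-assignment syntax, a script can source it directly; a minimal sketch (field names vary slightly between distributions):

```shell
#!/bin/bash
# Source the os-release file and report the distribution name and version
. /etc/os-release
echo "Running ${NAME} ${VERSION_ID}"
```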

Checking the Linux Version from the file /etc/issue

This file is being phased out in the latest systemd variants of Linux, and tends to give less adequate information; it mostly lingers for the familiarity of long-time Linux users.

vamshi@node03:/$ cat /etc/issue
Debian GNU/Linux 10 \n \l

Check Linux Version using lsb_release:

The lsb_release command prints the LSB (Linux Standard Base) and distribution information of the Linux host.

[vamshi@node02 cp-command]$ lsb_release -a
LSB Version:    :core-4.1-ia32:core-4.1-noarch
Distributor ID:    CentOS
Description:    CentOS Linux release 7.7.1908 (Core)
Release:    7.7.1908
Codename:    Core

How to check the running kernel version information ?

Check Linux Version information with uname command

[vamshi@node02 cp-command]$ uname -a
Linux node02.linuxcent.com 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Checking the Linux Version using hostnamectl

The command hostnamectl gives complete information on the underlying architecture, kernel version and Linux OS name:

vamshi@node03:/$ hostnamectl 
Static hostname: node03
Icon name: computer-vm
Chassis: vm
Machine ID: b4adcdb84c724856b577524ebbfa0003
Boot ID: 1a2db0a0ae8b4c5ba86b390c68af7024
Virtualization: oracle
Operating System: Debian GNU/Linux 10 (buster)
Kernel: Linux 4.19.0-5-amd64
Architecture: x86-64

How to create Temporary filesystem on Linux

The greatest advantage of tmpfs is that you can use faster-access volatile memory (RAM) to store files instead of the secondary storage system.

Let’s head to the demonstration using tmpfs.

Create a directory named test-docs in /mnt:

[vamshi@SERVER02 mnt]$ sudo mkdir /mnt/test-docs

We will mount a 1 GB tmpfs on the new directory; later we will use the dd command to dump data to a file with a given block size and compare the write speeds on tmpfs (the Linux temporary filesystem) versus the conventional storage filesystem.

[vamshi@SERVER02 mnt]$ sudo mount -t tmpfs -o size=1G tmpfs /mnt/test-docs

Output from the mount command:

[vamshi@SERVER02 mnt]$ df -hT /mnt/test-docs/
Filesystem     Type   Size Used Avail Use% Mounted on
tmpfs          tmpfs  1.0G     0  1.0G   0% /mnt/test-docs

Now add the entry to /etc/fstab to make the mount persistent across reboots (we will cover creating fstab entries in another article):

tmpfs       /mnt/test-docs tmpfs   nodev,nosuid,noexec,nodiratime,size=1G   0 0

With these changes, reads and writes to this mount point will be much faster at runtime.

Please exercise caution, as this consumes RAM.

Now let us test temporary-filesystem versus storage-filesystem write speeds.

We will see the temporary filesystem (tmpfs) on Linux in action with practical examples: writing a 500 MB file on the same server to secondary storage versus writing it to tmpfs.

Let us write a single 500 MB file, measure its write speed and time the operation.

[vamshi@SERVER02 mnt]$ time sudo dd if=/dev/zero of=/mnt/test-docs/dump.txt bs=1k count=500000
500000+0 records in
500000+0 records out
512000000 bytes (512 MB) copied, 0.815591 s, 628 MB/s

real    0m0.871s
user    0m0.103s
sys    0m0.760s

 

This operation finished in about 0.87 seconds, with the write completing at 628 MB/s.

Let’s check the space usage on the tmpfs mountpoint /mnt/test-docs/:

[vamshi@SERVER02 mnt]$ df -hT /mnt/test-docs/
Filesystem     Type   Size Used Avail Use% Mounted on
tmpfs          tmpfs  1.0G 489M  536M 48% /mnt/test-docs

Now let’s run the same write operation against the secondary storage filesystem:

[vamshi@SERVER02 mnt]$ time sudo dd if=/dev/zero of=/mnt/dump.txt bs=1k count=500000
500000+0 records in
500000+0 records out
512000000 bytes (512 MB) copied, 1.38793 s, 369 MB/s

real    0m1.510s
user    0m0.028s
sys    0m1.227s

The write operation to the disk took nearly twice as long, at a write speed of 369 MB/s.

The space usage on /mnt/test-docs/ after the operation remains 489 MB out of 1024 MB.

[vamshi@SERVER02 mnt]$ df -hT /mnt/test-docs/
Filesystem     Type   Size Used Avail Use% Mounted on
tmpfs          tmpfs  1.0G 489M  536M 48% /mnt/test-docs

CONCLUSION:

The write speeds to tmpfs were substantially faster compared to the secondary-storage block device mount point. Although tmpfs is the clear winner, it cannot be used for persistent data storage, as it simply sits on top of memory and offers no long-term data persistence.

So what is a practical usage of tmpfs in Linux?

A practical usage is storing static web content, serving images, CSS and JS faster to speed up request times, while secondary storage is kept, as we know, for persistence and long-term data storage.