Vamshi Krishna Santhapuri

Experienced Operations Engineer with a demonstrated history of working in the computer software industry. Skilled in OpenShift, Kubernetes, Docker, DevOps practices, and Linux system administration. Strong engineering professional with a Bachelor of Technology (B.Tech.) focused in Computer Science from JNTUH.

Docker container volumes

The concept of docker is to run a compressed image as a container which serves its purpose and can then be removed, leaving no trace of the data generated during the course of its runtime. This exact case is referred to as an ephemeral container.
Docker storage is traditionally non-persistent, and a container retains only the data originally packed in during the image build. Docker therefore provides the facility to attach volume mounts to a running container for data storage, which addresses the persistence issue to a certain extent.
We can store data on volumes and make them available to the container runtime environment as needed.

In practice, docker volumes are mounted to a docker image at runtime via command line arguments, as demonstrated below.

This implementation of docker volumes gives a running container volume-binding capability, and it can be broadly categorized into two abilities, listed as follows.

(1) Volume mapping from the host to the target container: this is a directory mapping between the host and the container which is established at runtime.
(2) A named volume that can be shared among containers: it persists even if the container is deleted, and the same volume can be mounted again to another docker container.

The storage options offered by docker provide persistent storage of files and preserve them across docker container restarts and removal of the container.

The example for the host bind volume mapping is as follows:

# docker run -d -v src:target --name=container-name docker-image

The syntax of docker directory mount:

# docker run -it -v /usr/local/bin:/target/local --name=docker-container-with-vol  ubuntu:latest /bin/bash

We will invoke a container by mounting a host path to a target container path: the host directory nginx-html is mounted at /usr/share/nginx/html inside the container.

[vamshi@node01 ~]$ docker run -d --name my-nginx -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 80:80 nginx:1.17.2-alpine
34797e6d8939e42bc8cfe36eed4b60521355edadc2fa6c74a26fe4172384575c

Now we log into the container and verify the contents

[vamshi@node01 ~]$ docker exec -it my-nginx sh
~ # hostname
34797e6d8939
~ # df -h /usr/share/nginx/html
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                40.0G     16.1G     23.9G  40% /usr/share/nginx/html

The df -h output from inside the container shows the path /usr/share/nginx/html as a mount point.

Checking the contents of the webroot directory at /usr/share/nginx/html inside the container:

~ # cat /usr/share/nginx/html/index.html
<H1>Hello from LinuxCent.com</H1>

This is the same file which we have on our host machine, shared through the volume mount.
We verify using the curl command as follows:

~ # curl localhost
<H1>Hello from LinuxCent.com</H1>

We can also mount the same directory into as many containers as needed on our system, which can be an effective way of updating static content used by several containers at once.
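As a quick illustration (assuming the same /home/vamshi/nginx-html directory and free host ports 8081 and 8082), two more containers can serve the very same content:

[vamshi@node01 ~]$ docker run -d --name my-nginx-2 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 8081:80 nginx:1.17.2-alpine
[vamshi@node01 ~]$ docker run -d --name my-nginx-3 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 8082:80 nginx:1.17.2-alpine

Editing index.html on the host is then reflected in all of these containers at once.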

This is a bind operation offered by docker to mount a host directory into a container. This information can be identified by inspecting the container as follows:

Here is an extract snippet from docker inspect of the container my-nginx:

            {
                "Type": "bind",
                "Source": "/home/vamshi/nginx-html",
                "Destination": "/usr/share/nginx/html",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

As we can see, the Type of the mount is depicted as bind here.

The format option can also be used to filter the inspect output:

[vamshi@node01 ~]$ docker container inspect my-nginx -f="{{.Mounts}}"
[{bind /home/vamshi/nginx-html /usr/share/nginx/html true rprivate}]
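The Mounts data can also be rendered as proper JSON with the json template function supported by docker inspect:

[vamshi@node01 ~]$ docker container inspect my-nginx -f '{{json .Mounts}}'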

The Dockerfile VOLUME expression

Using persistent docker volumes.

We now turn our attention to docker volume mounts, which are isolated storage resources in docker and provide persistent storage that can be reused and mounted to containers.

VOLUME can be used while writing a Dockerfile; it creates a docker image with the volume settings and mounts the volume to the container when it is started up.
We use the VOLUME expression in the Dockerfile as follows.

VOLUME [ "my-volume01" ]

We now build the image and observe the output below.

Step 1/4 : FROM nginx-linuxcent
 ---> 55ceb2abad47
Step 2/4 : COPY nginx-html/index.html /usr/share/nginx/html/
 ---> c482aa15da5a
Removing intermediate container a621e114a01d
Step 3/4 : VOLUME my-volume01
 ---> Running in ac523d6a02f0
 ---> 72423fe5f27d
Removing intermediate container ac523d6a02f0
Step 4/4 : RUN ls /tmp
 ---> Running in 8fc1fbc0f0bb
 ---> 0f453e3cfff1
Removing intermediate container 8fc1fbc0f0bb

Here my-volume01 is treated as an absolute path from /, so the volume is mounted at /my-volume01 inside the container.

/ # df -Th /my-volume01/
Filesystem Type Size Used Available Use% Mounted on
/dev/sda1 xfs 40.0G 15.1G 24.9G 38% /my-volume01

The information can be extracted by inspecting the container as follows:

# docker container inspect nginx-with-vol -f="{{.Mounts}}"
[{volume 7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19 /var/lib/docker/volumes/7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19/_data my-volume01 local true }]


If you would like the volume to be mounted at some other path, you can declare it in the Dockerfile VOLUME expression as below:

VOLUME [ "/mnt/my-volume" ]

The information can be extracted from the docker image by inspecting it for the volumes.

            "Image": "sha256:72423fe5f27de1a495e5e875aec83fd5084abc6e1636c09d510b19eb711424cc",
            "Volumes": {
                "/mnt/my-volume": {}
            },

The volumes defined in the Dockerfile VOLUME expression are not given a friendly name: they are created per container and appear in the docker volume ls output as anonymous volumes with hashed names. They can nevertheless be shared with other docker containers, and we will look at sharing docker volumes in the following section. It is also important to understand that once a volume is explicitly deleted, its data cannot be recovered.
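These anonymous volumes can still be listed from the command line; the dangling filter narrows the output to volumes not referenced by any container:

# docker volume ls -f dangling=true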

Using the volumes from one container and accessing them in another container.

This option is beneficial when shell access or debug tools are disabled in a container and you need to view the container's logs and run analysis on them.

We mount the volumes from one container into another container for auditing purposes.
Below we create a container called view-logs which uses the volumes from another container called nginx-with-vol:

docker run -d --name view-logs --volumes-from nginx-with-vol debug-tools

Best Practices:
The view-logs container can have a set of debug and troubleshooting tools to view the logs of other app containers.
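For example, with the volumes from nginx-with-vol mounted in, the auditing container can inspect their contents directly (assuming a shell and ls are available in the debug-tools image):

# docker exec view-logs ls -l /my-volume01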

Creating a Volume from the docker commandline:

The docker volume resource has to be initialized first and can be done as follows:
Command to create a docker volume:
# docker volume create my-data-vol
[vamshi@node01 ~]$ docker volume inspect my-data-vol

[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data-vol/_data",
        "Name": "my-data-vol",
        "Options": {},
        "Scope": "local"
    }
]

We can use this volume and then mount this to a container as we mounted the host volume in earlier sections.

We will be using the persistent volume name and mounting the docker volume to a target container path.
These options have a specific syntax which has to be specified exactly while running the mount operation.

Below is the syntax:
docker run --volume <volume-name>:</path/to/mountpoint/> image-name

We now have the docker volume available and mount it to the target container path /data as follows:
# docker run -d --name data-vol --volume my-data-vol:/data nginx-linuxcent:v4
2dfa965bbbc79a522e9c109ef8eee20bf47e2b61062f3b3df61d4eb677de4506

Verification:

The docker volumes can be listed with the following command:

# docker volume ls

Check the mount information by inspecting the container; here is an extract from the docker inspect output:

"Mounts": [
            {
                "Type": "volume",
                "Name": "my-data-vol",
                "Source": "/var/lib/docker/volumes/my-data-vol/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
             }
]

As we can see, the Type of the mount is depicted as volume here.

Conclusion:
We have seen that mounting a host path to a target container path is a bind operation, and the dependency it creates is host affinity: the data is bound to a particular host, which should be avoided if you are dealing with more dynamic data exchange between containers over a network. But it is very useful for host-container data exchange.
The option with volumes is very dynamic and carries less binding dependency on the host machine. Volumes can be declared and used in two ways, as demonstrated in the previous sections: the first being explicit volume creation and the other being volume creation from within the Docker build.
Explicit volume creation and subsequent binding provides the scope of choosing a mount point inside the container after the image is built.

The Dockerfile's VOLUME expression can be used to automatically define the volume name and the desired mount point, and it supports all the same volume-sharing techniques for data exchange at run time.

BASH “switch case” in Linux with practical example

The switch case in BASH is widely used among Linux admins/DevOps folks to leverage the power of control flow in shell scripts.

We have already seen the if..elif..else..fi control structure in Control Structure: Bash If then Else. The switch case is a stronger construct in that it really simplifies the control flow by running a specific block of bash code based on the user selection or the input parameters.

Let’s take a look at the simple Switch case as follows:

OPTION=$1
case $OPTION in
choice1)
Choice1 Statements
;;

choice2)
Choice2 Statements
;;

choiceN)
ChoiceN Statements
;;

*)
echo "User Selected Choice not present"
exit 1

esac

The OPTION is generally read from user input and upon this the specific choice case block is invoked.

Explanation:
In the switch construct, control flow is forwarded to the case keyword, which checks for a suitable match and passes control to the matching OPTION/CHOICE statement block. Upon execution of the relevant CHOICE statements, the case is exited when control flow encounters the esac keyword at the end.
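Here is a minimal runnable sketch of the above skeleton; the start/stop actions are purely illustrative:

#! /bin/bash
# Dispatch on the first command line argument
case $1 in
    start)
        echo "Starting the service"
        ;;
    stop)
        echo "Stopping the service"
        ;;
    *)
        echo "Usage: $0 [start|stop]"
        exit 1
        ;;
esac

Running the script with start prints Starting the service, while any unknown argument falls through to the default * case.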

Using pattern matching
The control flow in bash identifies the matching case option and proceeds accordingly.
The case options can be glob patterns, and multiple alternatives can be combined with the logical | operator, as you can observe in the user-input cases of the following example.

#! /bin/bash

echo -en "Enter your logins\nUsername: "
read user_name 
echo -en "Password: "
read user_pass 
while [ -n "$user_name" -a -n "$user_pass" ]
do

case $user_name in
    ro*|admin)
        if [ "$user_pass" = "Root" ];
        then
            echo -e "Authentication succeeded\nYou Own this Machine"
	    break
        else
            echo -e "Authentication failure"
            exit
        fi
    ;;
    jenk*)
	if [ "$user_pass" = "Jenkins" ];
	then
		echo "Your home directory is /var/lib/jenkins"
	    	break
	else
        	echo -e "Authentication failure"
	fi
        break
    ;;
    *)
        echo -e "An unexpected error has occurred."
        exit
    ;;
esac

done

You should kindly note the glob patterns used for the cases: ro*|admin and jenk*.

We now demonstrate by entering the username jenkins; this gets matched by the jenk* case and the control flow successfully enters the relevant block of code. Whether the password matches or not is not relevant for us here, as we are only concerned with the case choice selection.
We have saved the switch case into a script named switch-case.sh and run it. Here are the results.

OUTPUT :

[vamshi@node02 switch-case]$ sh switch-case.sh
Enter your logins
Username: jenkins
Password: Jenkins
Your home directory is /var/lib/jenkins

We have entered the correct password, so the jenkins case block statements run successfully.

We shall also see the ro*|admin case, demonstrated as follows.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: root
Password: Root
Authentication succeeded
You Own this Machine

We now test the admin username and see the results.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: admin
Password: Root
Authentication succeeded
You Own this Machine

Here is a more advanced script used to deploy a python application using the switch case.
Please refer to the Command line arguments section for user input.

A complete functional Bash switch case can be seen at https://github.com/rrskris/python-deployment-script/blob/master/deploy-python.sh

Please feel free to share your experiences in comments.

Control Structure: Bash If then Else

Bash, being a scripting language, does offer the conditional if else; we shall look at it in the following sections.

Firstly, a conditional check has to be performed in order for the corresponding block of code to be executed.

To break down the semantics of conditional control structures in BASH, we need to understand the conditional keyword that performs the validation. It is most commonly represented as "[" and rarely represented as the "test" keyword.

It can be better understood by the following demonstration:

vamshi@linux-pc:~/Linux> [ 1 -gt 2 ]
vamshi@linux-pc:~/Linux> echo $?
1
vamshi@linux-pc:~/Linux>
vamshi@linux-pc:~/Linux> [ 1 -lt 2 ]
vamshi@linux-pc:~/Linux> echo $?
0

The [ is synonymous with the test command on Linux.

vamshi.santhapuri@linux-pc:~/Linux> test 1 -gt 2

vamshi.santhapuri@linux-pc:~/Linux> echo $?
1
vamshi.santhapuri@linux-pc:~/Linux> test 1 -lt 2
vamshi.santhapuri@linux-pc:~/Linux> echo $?
0
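The same exit-status convention applies to the other test operators; for instance, the -f file-existence check:

vamshi@linux-pc:~/Linux> [ -f /etc/passwd ] && echo "file exists"
file exists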

We shall now look at the different variations of conditional control structures.

  1. if then..fi

    if [ Condition ] ; then
    
    statement1...statementN
    
    fi
  2. if then..else..fi

    if [ Condition ] ; then
    
        If Block statements
    
    ...
    
    else
        else-Block statement
    
    fi
  3. if..then..elif then..elifN then..fi

    if [ Condition ] ; then
    
        If Block statement1
    
    ...
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    
    elif [ elif Condition ]; then    # 2nd elif Condition
    
        elif Block statements
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statements
    
    fi

    An else can also be appended accordingly for when all the if and elif conditions fail, which we will see in the next variation.


  4. if..then..elif then..elifN then..else..fi

    The "if..elif..else..fi" control structure is like a multiple-test control diversion strategy in bash: it gives the user the power to write as many test conditions as needed until one matches, resulting in the corresponding block of code being executed. Writing many elif branches can be a tedious task, and the switch case is then mostly preferred.

    if [ Condition ] ; then
    
        If Block statement
    
    elif [ elif Condition ]; then   # 1st elif Condition
    
        elif Block statement1
    
    elif [ elif Condition ]; then    # nth elif Condition
    
        elif Block statement
    
    ...
    
    else    # the else block gets control when none of the if or elif conditions are true
    
        else Block statements
    
    fi

    At least one of the block statements is executed in this control flow, similar to a switch case. The else block takes the default case when none of the if or elif conditions match.

  5. Nested if then..fi Control structure Blocks

    Adding to if..elif..else, there is also the nested if block wherein nested conditions are validated, demonstrated as follows:

    if [ condition ]; then
    
        Main If Block Statements
    
        if [ condition ]; then # 1st inner if condition
    
            1st Inner If-Block statements
    
            if [ condition ]; then # 2nd inner if condition
    
                2nd Inner If-Block statements
              
                if [ condition ]; then 
                    Nth Inner If Block statements 
    
                fi
    
            fi
    
        fi
    
    fi

    This logic of nested ifs is used in scenarios where the outermost test must be validated first; if it succeeds, the control flow passes to the inner if test statements. Thus the name nested if.
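Below is a short runnable sketch combining if, elif, else and a nested if; the numeric thresholds are illustrative:

#! /bin/bash
# number-check.sh - combines if, elif, else and a nested if
num=$1
if [ -z "$num" ]; then
    echo "Usage: $0 <number>"
    exit 1
elif [ "$num" -gt 0 ]; then
    echo "$num is positive"
    if [ "$num" -gt 100 ]; then
        # nested if: only reached when the outer test succeeds
        echo "$num is also greater than 100"
    fi
elif [ "$num" -lt 0 ]; then
    echo "$num is negative"
else
    echo "You entered zero"
fi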


Here is the switch case bash script with practical explanation.
We will look at the Exit codes within the BASH in the next sections.

Managing Docker disk space

We come across the challenge of managing the docker engine and its disk space consumption in the long run.

To effectively manage its resources, docker offers some handy options; let us take a look at them in this tutorial.

How to identify the details of disk space usage in docker?

[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   3                   1.442 GB            744.2 MB (51%)
Containers          3                   1                   2.111 MB            0 B (0%)
Local Volumes       7                   1                   251.9 MB            167.8 MB (66%)

This command prints the complete verbose details of image space usage, container space usage and local volume space usage.

How to clean up space on Docker?

[root@node02 vamshi]# docker system prune [ -a | -f ]

The option -a removes all the unused images and the stale containers/images,
and -f forcefully removes the unused images and the stale containers without prompting for confirmation.
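Note that docker system prune leaves volumes alone unless explicitly asked; on newer docker releases the --volumes flag extends the cleanup to unused volumes:

[root@node02 vamshi]# docker system prune -a -f --volumes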

How to Remove docker images?

The docker images can be removed using the docker image rm <image-id | image-name> command.

The command docker rmi is most commonly used; docker image rm is equivalent and is easier to read and self-explanatory.

[root@node02 vamshi]# docker rmi <image-id | image-name>

Docker images which are dangling (those without any tags) can be filtered out using the below syntax and removed to reclaim some file system space.

We can list out the docker images that are dangling using the filter option as shown below:

# docker images -f "dangling=true"

From the list of images received by the above command, we pass only the image IDs to the docker image rm command as shown below:

[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")
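Newer docker releases also provide a dedicated prune subcommand which removes the dangling images in one go:

[root@node02 vamshi]# docker image prune -f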

How to list multiple docker images with a matching pattern?

[vamshi@node02 ~]$ docker image ls "mysql*"
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rrskris/mysql       v1                  1ab47cba1d63        4 months ago        456 MB
rrskris/mysql       v2                  3bd34czc2b90        4 months ago        456 MB
docker.io/mysql     latest              d435eee2caa5        5 months ago        456 MB

How to remove multiple docker images with a matching pattern

Docker provides a good amount of flexibility with the command line and can be combined with regular expressions and awk formatting to yield relevant results.

[vamshi@node02 ~]$ docker image rm $(docker image ls | grep -w "^old-image" | awk '{print $3}')


Various states of docker containers

Docker container lifecycle

A container passes through the following states from the moment it is created from an image till it is removed from the docker engine: created, running, restarting, removing, paused, dead and exited.
All statuses apart from running and created serve no live purpose and still consume system resources, unless the containers are brought back into action through the docker start command.

We can easily perform filtering operation on the containers using the status flag:

# docker ps -f status=[created | dead | exited | running | restarting | removing]

Docker also allows removal of individual containers using the rm command; we can use docker container rm (or the shorter docker rm) to have docker delete the container.

[vamshi@node01 ~]$ docker container rm <container-id | container-name>
[vamshi@node01 ~]$ docker rm <container-id | container-name>

You can also use the -f flag to forcefully remove it.

# docker rm -f <container-id | container-name>

On large docker farms containing hundreds of containers, it's often a practical approach to continually scan for stale containers and clean them up.

Clean up the docker containers which are in exited state.

# docker ps -q -f status=exited | xargs -I {} docker rm {}
# docker container ls --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}
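On docker releases that ship the prune subcommands, the same cleanup can also be expressed with a time-based filter; the 168h below (roughly one week) is illustrative:

# docker container prune -f --filter "until=168h"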


List docker containers currently running:

[vamshi@node01 ~]$ docker container ls -f status=running

The docker subsystem also offers some built-in system commands to get the job done using docker's garbage collection mechanism.

Docker image builds also leave behind remnants of older build data which have to be cleaned up at regular intervals on the docker engine host.
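On docker releases that include the builder subcommand, this stale build cache can be reclaimed as follows:

# docker builder prune -f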


How to print out the docker container PIDs

docker ps -qa -f status=running | xargs docker inspect --format='{{ .State.Pid }}'

A docker one liner to clear up some docker stale cache:

[root@node01 ~]# docker container ls --filter "status=exited" --filter=status="dead" | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}

How to identify if a docker container's files have been modified

A docker container is simply a runtime copy of a docker image's resources: the container utilizes the filesystem structure originally packed into the image via the union filesystem, assembled from the various image layers during the docker image creation.

Docker provides a standard diff command which compares the filesystem data in the docker image with that in the container.

Syntax:

# docker diff [CONTAINER ID | CONTAINER NAME]

Before jumping in, let's examine a docker container below and take a look at its filesystem by logging into it.

root@node03:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e3de85aaf61c        9a0b6e4f0956        "sh"                2 months ago        Up 3 minutes                            jovial_hertz

We have a running container with the random name jovial_hertz, and we log in to the container as follows:

root@node03:~# docker exec -it jovial_hertz bash

We are now inside the container; we will create a directory linuxcent, create an ASCII text file test inside it, and then exit out of the container.

root@e3de85aaf61c:~# mkdir linuxcent
root@e3de85aaf61c:~# cd linuxcent
root@e3de85aaf61c:~# touch test
root@e3de85aaf61c:~# exit

We created a directory called linuxcent, touched a test file, and logged out from the container with the exit command.
Running the docker diff command against the container should reveal the modified data; let us examine the results.

root@node03:~# docker diff jovial_hertz 
C /root
A /root/linuxcent
A /root/linuxcent/test
C /root/.bash_history

The flag A in front of /root/linuxcent and /root/linuxcent/test indicates that this directory and file were new additions to the container, and the flag C indicates that the other two paths were changed.
Thus docker diff helps us compare and contrast the new changes to a container filesystem for better auditing.

Docker login to private registry

STEP 1: Docker login to private registry

Let's see the syntax of the docker login command, followed by the authorized username and the repository URL.
Syntax:

[root@docker03:~]# docker login [DOCKER-REGISTRY-SERVER] -u <username> [-p][your password will be seen here]

The -p option supplies the password along with the docker command; alternatively you can omit it and type the password at the prompt after hitting enter on the docker login command.

Example given:

[root@docker03:~]# docker login nexusreg.linuxcent.com:5000 -u vamshi
Password:
Login Succeeded
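To keep the password out of the shell history and the process list, newer docker versions can also read it from stdin; the password file path here is illustrative:

[root@docker03:~]# cat ~/nexus-pass.txt | docker login nexusreg.linuxcent.com:5000 -u vamshi --password-stdin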

Once the docker login succeeds, a json file is generated under your home directory at the following path, containing the auth metadata information.

[root@docker03:~]# cat $HOME/.docker/config.json
{
	"auths": {
		"nexusreg.linuxcent.com:5000": {
			"auth": "1234W46TmV0ZW5yaWNoMjAxOQ=="
		}
	},
	"HttpHeaders": {
		"User-Agent": "Docker-Client/18.09.1 (linux)"
	}
}

The docker registry URL currently in use can be found on your docker client machine with the docker info command, if you had previously logged in, as we see below:

[root@docker03:~]# docker info | grep Registry
Registry: https://index.docker.io/v1/

To log out from a specific docker registry, use the docker logout command.
The syntax is shown below:
docker logout [DOCKER-REGISTRY-SERVER]

Example given:

[root@docker03:~]# docker logout 
Removing login credentials for https://index.docker.io/v1/
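To log out from our private registry instead of the default Docker Hub, pass the registry URL explicitly:

[root@docker03:~]# docker logout nexusreg.linuxcent.com:5000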

Build context in Dockerfile; Best practices

Best practices while building the Dockerfile.

The context of a Dockerfile build is the directory handed to the docker build command; commonly the location where the Dockerfile is present becomes its context.

This means we can create a directory with some content, place our Dockerfile inside it, then traverse any number of directories away from it and still execute the build command by pointing at that directory.

Here is an example of our general approach to building an image from a Dockerfile with the . context:

# docker build --tag nginx-linuxcent .

And we list the image as follows:

[vamshi@docker01 ~]$ docker images
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
nginx-linuxcent                              latest              0b0a4ea4d48a        3 minutes ago      210.1 MB

The build context is the . dot and the Dockerfile is present in the same directory.
If you are working locally you don't really need a repository name; specifying just the image name is sufficient, and adding a tag is optional. In such cases the latest tag is appended to the newly built image.

As a standard practice, the docker build doesn't traverse backward from the current working directory. Let's see a demonstration of building an image by giving the relative path to its Dockerfile.

Example given:

[vamshi@node01 ~]$ ls nginx/
default.conf Dockerfile-nginx index.html nginx.conf Portal.tar.gz

Here is our nginx/Dockerfile-nginx.

cat Dockerfile-nginx
FROM nginx:1.17.2-alpine
COPY index.html /usr/share/nginx/html/
ADD Portal.tar.gz /tmp/new1/portal1/
CMD ["/usr/sbin/nginx"]

Our command to build this Dockerfile-nginx now becomes:

[vamshi@docker01 ~]$ docker build -t nginx-linuxcent -f nginx/Dockerfile-nginx nginx/
Sending build context to Docker daemon 155.4 MB
Step 1/4 : FROM nginx:1.17.2-alpine
---> 55ceb2abad47
Step 2/4 : COPY index.html /usr/share/nginx/html/
---> cc652d0fc2b7
Removing intermediate container 11f195a0e2ac
Step 3/4 : ADD Portal.tar.gz /tmp/new1/portal1/
---> b18a86545c47
Removing intermediate container 1e1849be08b4
Step 4/4 : CMD /usr/sbin/nginx
---> Running in fdac087b636b
---> 02e2795eab12
Removing intermediate container fdac087b636b
Successfully built 02e2795eab12

Or you can also mention the absolute path as shown below.

[vamshi@docker01 ~]$ docker build -t nginx-linuxcent -f /home/vamshi/nginx/Dockerfile-nginx /home/vamshi/nginx/

The above example successfully builds a docker image. The directory nginx/ is its build context, and nginx/Dockerfile-nginx is the relative path of the input Dockerfile given to the docker build command.

Dockerfile and the context being different

Here we place the Dockerfile-nginx inside the nginx directory while the context is one directory above Dockerfile-nginx.

We now need to carefully adjust the ADD/COPY commands relative to the context directory for the build to work properly. The context being one directory above, the source paths must be prefixed with the directory name as we see below:

FROM nginx:1.17.2-alpine
COPY nginx/index.html /usr/share/nginx/html/
ADD nginx/Portal.tar.gz /tmp/new1/portal1/
CMD ["/usr/sbin/nginx"]

Now our docker build command takes the following syntax:

[vamshi@docker01 ~]$ docker build -t nginx-linuxcent -f nginx/Dockerfile-nginx .
Sending build context to Docker daemon 155.4 MB
Step 1/4 : FROM nginx:1.17.2-alpine
 ---> 55ceb2abad47
Step 2/4 : COPY nginx/index.html /usr/share/nginx/html/
 ---> Using cache
 ---> cc652d0fc2b7
Step 3/4 : ADD nginx/Portal.tar.gz /tmp/new1/portal1/
 ---> Using cache
 ---> b18a86545c47
Step 4/4 : CMD /usr/sbin/nginx
 ---> Using cache
 ---> 02e2795eab12
Successfully built 02e2795eab12

Here the context remains outside the directory while the Dockerfile is present inside the subdirectory, so the ADD/COPY sources are prefixed with the relative path of the subdirectory.

Common error encountered with a context mismatch:

unable to prepare context: The Dockerfile must be within the build context
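This error typically appears on docker releases that require the Dockerfile to reside inside the context directory, for instance when the -f path points elsewhere (the context path below is illustrative):

[vamshi@docker01 ~]$ docker build -t nginx-linuxcent -f /home/vamshi/nginx/Dockerfile-nginx /tmp/empty-context/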

How to tag a Docker image with a repository name during build process?

You can name your Dockerfile anything and it doesn't matter to the build process, as long as you refer to it with -f.

The standard naming convention is as shown below.

# docker build -t <DOCKER_IMAGE-NAME>:<TAG> -f Dockerfile .

Syntax:

# docker build -t <REPOSITORY/REGISTRY NAME>/<DOCKER_IMAGE-NAME>:<TAG> -f Dockerfile .
# docker build --tag mydocker-registry-name/nginx-linuxcent:version1.0 -f Dockerfile .

Here the Dockerfile need not be explicitly mentioned with -f as the name of the file is Dockerfile and the context being .

# docker build --tag mydocker-registry-name/nginx-linuxcent:version1.0 .

The build context . at the end is important because it signifies the current directory, and the context cannot span backward.

A tag name is a must as a best practice: it helps in identifying newly built images, and tagging enables visible versioning and better identification of images.

Docker build with no-cache

Docker images can be created with the --no-cache option when you do not want to use the cache during the build. The default is to use the cache; this option explicitly enforces a no-cache build.

This can at times be important when building container images which depend upon downloading the latest libraries from the internet, or practically from your on-premise code repository which contains freshly compiled code artifacts.

Build the Docker image with no cache:

# docker build --no-cache -t mydocker-registry-name/nginx:version0.1 -f Dockerfile .

Once the docker image is successfully built, we can take a look at the newly created image as below.

# docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
mydocker-registry-name/nginx                     version0.1          bcdd25553d01        3 minutes ago       298 MB

Conclusion:

The docker build context becomes the present path of the Dockerfile. The docker image build is a simple process if things are neatly organized, but the context can be quite tricky if you are managing multiple Docker builds. You have the flexibility to give an absolute or relative path to the docker build.
It's always advised to use the relative path, with the . dot as context while in the same directory where your Dockerfile is present, to run the Docker builds.

Ensure you use the --no-cache option when the build must pick up fresh dependencies.

And have proper tagging in place to enable better version identity of your docker images.

If at all you need to build an image from a different context, always write the Dockerfile paths relative to the directory of the current context.

How to Inject Variables into Docker image in build time without modifying the Dockerfile


We may have a requirement wherein we need to modify a specific Dockerfile with build information, resorting to string replace operations like sed to modify the data in the Dockerfile. But we can make use of docker build time arguments to achieve the same result more efficiently.

Here’s a snippet of Dockerfile

FROM ubuntu:17.04
LABEL maintainer="vamshi" version="1.0.0" description="JRUBY Docker image"
WORKDIR /app
ARG JRUBY_VER=9.2.11.1
ENV JRUBY_VER ${JRUBY_VER}
ADD https://repo1.maven.org/maven2/org/jruby/jruby-dist/${JRUBY_VER}/jruby-dist-${JRUBY_VER}-bin.tar.gz .

We now build this Dockerfile as follows, passing the --build-arg:

# docker build --build-arg JRUBY_VER=9.2.8.0 -t jruby:v2  -f ./Dockerfile .

Here we have passed the build argument --build-arg JRUBY_VER=9.2.8.0; this assigns the value to the ARG, which is exported via ENV and used by the ADD command that downloads the tar.gz.

When the command is run without any build-args, it reads the predefined default ARG JRUBY_VER=9.2.11.1 from the Dockerfile.
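A quick way to verify which version actually landed in the image (assuming the jruby:v2 tag built above) is to echo the ENV at run time:

# docker run --rm jruby:v2 sh -c 'echo $JRUBY_VER'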

Conclusion:
A good usecase is when you have a continuous Docker build pipeline and want to pass build time arguments from user input; it's best to use ARG statements while writing your Dockerfile.

Environmental Variables in Dockerfile.

Docker environment variables can be declared (1) during the docker image creation inside the Dockerfile and (2) during the docker container run time.

The Dockerfile ENV syntax is shown as follows:

ENV VAR_NAME value

We will look at the Dockerfile syntax in the following snippet:

# Exporting the Environment Variables
ENV MAVEN_HOME /usr/share/maven/
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
# Added as per Java team
ENV JAVA_OPTS -Xms256m -Xmx768m

We can even reference the ENV variable after its declaration in a subsequent statement.

RUN ${JAVA_HOME}/bin/java -version
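The same variables can also be supplied or overridden at container run time with the -e flag of docker run; the image name below is illustrative:

# docker run -e JAVA_OPTS="-Xms512m -Xmx1024m" my-java-image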

Conclusion:

Setting an ENV is a lot better than hard coding the values into the Dockerfile, as it makes maintaining the Dockerfile a simpler task.

It's a good practice for storing path names and versions of packages which are liable to be modified over a longer course of time.

Best practices in creating a Dockerfile – build docker images

The Dockerfile is a very simple to understand format consisting of statements, often referred to as the Docker DSL (Domain Specific Language). It can nevertheless become quite complex and difficult to understand over time.

[vamshi@docker01 ~]$ cat Dockerfile
# Dockerfile which runs a Latest Ubuntu image and sleeps for 100 seconds
FROM ubuntu:18.04
LABEL maintainer="vamshi" version="1.0.0" description="My First Docker Image"
RUN apt-get update && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get install -y tomcat
RUN apt-get remove --purge -y $(apt-mark showauto) && rm -rf /var/lib/apt/lists/*
WORKDIR /data
ENTRYPOINT ["sleep", "100"]

Let’s examine the Dockerfile and the Statements as follows:

The First line starting with # is a comment.
The second line with the FROM statement determines the base image and its tag.
Eg:

FROM ubuntu:latest, FROM ubuntu:18.04 or FROM centos:7

The LABEL is a descriptive tag and contains information such as the original author credits.
The RUN statement executes a shell command; such statements are executed as shell commands inside the container.
WORKDIR determines the directory context inside the running container.
ENTRYPOINT is the invocation of the container's main process, which runs when the docker container starts; its failure to run means the termination or the end of that particular container.
From the above Dockerfile you would have noticed 2 formats of instruction. Now let's discuss them in detail:
Shell Form
The instructions are written as shell commands

RUN apt-get update, for instance, is in turn executed as /bin/sh -c "apt-get update", which enables command expansion, inclusion of special characters, and combining of multiple commands.

Exec Form
It is the JSON array style: these instructions are also shell commands, but they are represented as elements of a list.

["command-name","arg1","arg2"]

This format has the following drawbacks:
no shell is provided,
there is no scope for variable expansion, and
special characters like &&, ||, > cannot be included in the command expression statements.

While running the docker container, the CMD supplies the run time arguments, and the JSON list format works as a preventive measure against unintended shell interpretation.

Advantage of CMD and ENTRYPOINT using square bracket JSON array notation

The most advantageous point of writing CMD or ENTRYPOINT in JSON list format is that during container runtime the CMD can take arguments which alter the main container process. By not providing a shell we shield against variable expansion and the injection of special characters, as a security practice.
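A quick sketch of the difference, using an arbitrary variable:

# Shell form: executed as /bin/sh -c "echo $HOME", so the variable expands
CMD echo $HOME
# Exec form: no shell is involved, so the literal string $HOME is printed
CMD ["echo", "$HOME"]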

How to build a Dockerfile?

From the current working directory navigate to the location where Dockerfile is present and run the below command.

# docker build --tag first-docker-image -f Dockerfile .

How to extract Build description from docker image?

We are able to extract the docker LABEL description and MAINTAINER information with the docker command, which helps in identifying an image's purpose when we have some hundreds of docker images.

[vamshi@node01 ~]$ docker image inspect first-image --format='{{.Config.Labels}}'
map[description:My First Docker Image maintainer:vamshi version:1.0.0]

Best Practices while building Docker images.

  • While writing the Dockerfile, Its a good practice to include the version information.
  • Include the description of the image which can be easily understood while inspecting the docker image from the inspect command.
  • Group inter-dependent and relevant RUN shell commands together with &&. Docker by design stores a single statement as one image layer; this enables better compression and storage of docker image layers. Combining apt-get update with the install in the same RUN is known as cache busting.
    RUN apt-get update && apt-get dist-upgrade -y
  • It's a good practice to clean up the installed package sources and build files, which lightens the size of the docker image layers.
    RUN apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
  • Docker statements that require little to no modification should be placed ahead of frequently changing ones; this leverages the build cache, saves network bandwidth and decreases the docker build time, as shown in the sketch below.
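A hedged sketch of this ordering principle (a Python application is assumed purely for illustration): the rarely changing dependency installation is placed before the frequently changing application code, so the earlier layers are served from the build cache on rebuilds.

FROM python:3.8-slim
WORKDIR /app
# Dependencies change rarely; this layer is usually served from the cache
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often; only the layers from here on are rebuilt
COPY . .
CMD ["python", "app.py"]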