In this blog we will set up a basic Filebeat configuration to export logs to Elasticsearch and create an index on the Elasticsearch server.
We will use the standard Linux tar file installation of Filebeat.
To use the external volume for our future container, we need to format a filesystem on the volume.
We use the ext4 filesystem to format our block device, as demonstrated below:
vamshi@node03:~$ sudo mkfs.ext4 /dev/sdb
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: bc335e44-d8e9-4926-aa0a-fc7b954c28d1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Here is the command to create a volume, specifying the path to the block device and using the local driver scope:
docker volume create jenkins_vol1 --driver local --opt type=ext4 --opt device=/dev/sdb
jenkins_vol1
We have successfully created a Docker volume using a block device.
Inspecting the docker volume that is created:
vagrant@node03:~$ docker volume inspect jenkins_vol1
[
{
"CreatedAt": "2020-05-12T17:22:11Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/jenkins_vol1/_data",
"Name": "jenkins_vol1",
"Options": {
"device": "/dev/sdb",
"type": "ext4"
},
"Scope": "local"
}
]
Creating the Jenkins container, which will use the Docker volume jenkins_vol1 and mount it at /var/jenkins_home/.m2:
docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=jenkins_vol1,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest
We have successfully started our container; now let's log in to the container and check our volume.
jenkins@fc2c49313ddb:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
/dev/sdb       ext4     2.0G  6.0M  1.8G   1% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware
As we can see from the output, the mount point /var/jenkins_home/.m2 is backed by the block device /dev/sdb with an ext4 filesystem:
/dev/sdb ext4 2.0G 6.0M 1.8G 1% /var/jenkins_home/.m2
Creating a 200MB tmpfs volume:
docker volume create --name temp_vol --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=200m
The inspect of the temp_vol we created is as follows:
vamshi@node03:~$ docker volume inspect temp_vol
[
{
"CreatedAt": "2020-05-02T17:31:01Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/temp_vol/_data",
"Name": "temp_vol",
"Options": {
"device": "tmpfs",
"o": "size=100m,uid=1000",
"type": "tmpfs"
},
"Scope": "local"
}
]
Now we start the Jenkins container mounting the temp_vol volume:
docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=temp_vol,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest
vamshi@node03:~$ docker exec -it jenkins bash
jenkins@2267ba462aa2:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
tmpfs          tmpfs    200M     0  200M   0% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware
jenkins@2267ba462aa2:/$ exit
Here it shows the mount point details:
tmpfs tmpfs 200M 0 200M 0% /var/jenkins_home/.m2
Please note the mount point /var/jenkins_home/.m2 which has 200MB space as defined.
Thus we can make use of Docker volumes to attach persistent filesystems and block disks to a running container.
The user creation process in mysql is one of the most important steps in Database administration.
Below we will list some of the important terms of Authentication and Authorization with a practical demonstration.
Authentication is the process of gaining access to the database engine with valid login credentials and a login request from a trusted source network.
Authorization determines which databases, or which parts and tables of those databases, the user is allowed to access.
In SQL administration, the user creation process involves both Authentication and Authorization: a unique username identified by a password, and, critically, identification of the source network when logging in from remote hosts. Granting permissions to specific databases while ensuring the least privilege required for the desired role is one of the best practices.
Let’s connect using root access to the MySQL Command-Line Tool
[vamshi@mysql01 linuxcent]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.19 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Sample Syntax:
CREATE USER 'mysql_user'@'hostname' IDENTIFIED BY 'user_password';
It is important to understand that 'username'@'hostname' forms a unique identification pattern for authenticating to the MySQL engine.
The hostname field accepts values such as an IP address (for example 10.100.0.0/24) or localhost.
Only incoming requests matching the username and host will be allowed.
The syntax for creating a user on mysql goes as follows:
Enabling access for a source of localhost identified by the authentication information
CREATE USER 'vamshi'@'localhost' IDENTIFIED BY 'user_password';
Enabling access for a source IP range of the 10.100.0.0 network, identified with a /24 CIDR, followed by the authentication information:
CREATE USER 'vamshi'@'10.100.0.0/24' IDENTIFIED BY 'user_password';
Enabling access from a specific source hostname, followed by the authentication information:
CREATE USER 'vamshi'@'hostname' IDENTIFIED BY 'user_password';
The first step of user access is done. Now we need to grant access to the databases, which grants the privileges for the new user to perform actions on the DB.
Granting Privileges
This section deals with the Authorization;
On the mysql CLI prompt, you need to issue the GRANT command with the appropriate access permissions.
What are Privileges types in mysql?
The GRANT statement authorizes actions such as the ability to CREATE tables and databases, read or write FILES, and even SHUTDOWN the server.
The most commonly used privileges are:
ALL PRIVILEGES: Grants all privileges to a user account.
SELECT: The user account is allowed to read a database.
INSERT: The user account is allowed to insert rows into a specific table.
UPDATE: The user account is allowed to update table rows.
CREATE: The user account is allowed to create databases and tables.
DROP: The user account is allowed to drop databases and tables.
DELETE: The user account is allowed to delete rows from a specific table.
PROCESS: The user is allowed to get information about the threads executing within the server.
SHUTDOWN: The user is allowed to use the SHUTDOWN and RESTART statements.
Now it's time to grant the privileges to the new user on the tables belonging to a database, or on all the tables of a given database.
Here’s what the Simple GRANT SQL statement looks like:
GRANT ALL PRIVILEGES ON Database_name.Table_name TO 'user'@'hostname';
Let’s break this down and understand what we just told MySQL to do.
GRANT ALL PRIVILEGES (all types of privileges) on only the given Database_name and Table_name to the user identified by 'user'@'hostname'.
The Database_name and Table_name can be replaced by the wildcard *, meaning every database and every table in the database respectively.
*.* to specify all databases on the server
database_name.* to specify all tables in one database
database_name.table_name to specify all columns of one table
The source hostname the user connects from can be an IP address, an IP address range such as 10.100.0.0/24, a DNS name, or simply '%' to allow access from anywhere.
Now for simplicity's sake, we can simulate that the user joe needs access to operate only on the sales table of the reports database.
GRANT ALL PRIVILEGES ON reports.sales TO 'joe'@'mysql2.linuxcent.com';
The above command provides login access for joe only from mysql2.linuxcent.com, with access to the sales table in the reports database.
Replacing the database name with the wildcard * will provide privileges equivalent to superuser-level access.
This can be demonstrated as follows:
GRANT ALL PRIVILEGES ON *.* TO 'vamshi'@'%';
Or
GRANT INSERT, UPDATE, DELETE ON reports.* to 'vamshi'@'%';
This can serve as a security measure whilst disabling direct root login to the MySQL engine.
GRANT ALL PRIVILEGES ON *.* TO 'vamshi_superuser'@'%';
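Putting the pieces together, a typical least-privilege workflow can be sketched as follows (the user name report_ro, its password and the source subnet are made-up examples for illustration):

```sql
-- Create a read-only reporting user restricted to the local subnet
-- (user name, password and subnet are illustrative).
CREATE USER 'report_ro'@'10.100.0.0/24' IDENTIFIED BY 'S3cret!pass';

-- Authorize only SELECT on every table in the reports database:
GRANT SELECT ON reports.* TO 'report_ro'@'10.100.0.0/24';

-- Verify what was granted and persist the changes:
SHOW GRANTS FOR 'report_ro'@'10.100.0.0/24';
FLUSH PRIVILEGES;
```

This keeps the account unable to write or alter data, which is usually the right default for reporting users.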
To display the privileges granted to a specific MySQL user account, use the SHOW GRANTS command.
mysql> SHOW GRANTS FOR 'root'@'localhost' \G
*************************** 1. row ***************************
Grants for root@localhost: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT OPTION
*************************** 2. row ***************************
The changes made so far are saved to special system tables known as the grant tables. In total there are 5 such tables in the mysql database:
user, db, host, tables_priv and columns_priv
We commit the changes by issuing the FLUSH PRIVILEGES command at the mysql prompt:
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
The redirect operator > is used in conjunction with stdout (file descriptor 1) and stderr (file descriptor 2).
command > /dev/null 2>&1
Here 2 represents stderr. The &1 refers to file descriptor 1 (stdout), so 2>&1 redirects stderr to wherever stdout currently points, which in this case is /dev/null.
So the command demonstration will be the following:
$ du -sh /* > /dev/null 2>&1
This redirect command will dump the errors and the output to /dev/null.
Explanation: the default behaviour of the redirection operator is to redirect stdout, and here we redirect it to /dev/null. We then follow the command with 2>&1, which redirects stderr (fd 2) to the same destination as stdout (fd 1), which is /dev/null. Note that the order matters: 2>&1 must come after the stdout redirection for both streams to land in /dev/null.
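The behaviour described above can be verified with a tiny experiment (out.log is a scratch file created only for this illustration):

```shell
# Emit one line on stdout and one on stderr; redirect stdout to out.log,
# then point stderr (fd 2) at the same place stdout (fd 1) points:
{ echo "normal output"; echo "error output" >&2; } > out.log 2>&1

cat out.log   # the file contains both lines
```

Swapping the order (`2>&1 > out.log`) would send stderr to the terminal instead, because fd 2 would be duplicated before fd 1 was repointed at the file.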
The installation of docker can be done from the distribution repositories or from manually installing the packages.
First up we need to get some basic packages on our debian server as follows:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Adding the repository for the docker
Using the distribution for Docker installation.
Adding the apt-key from the trusted docker repository.
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Verify the key details using the apt-key command as follows:
root@node03:~# apt-key list docker
pub rsa4096 2017-02-22 [SCEA]
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid [ unknown] Docker Release (CE deb) <[email protected]>
sub rsa4096 2017-02-22 [S]
Now let's add the Docker stable repo:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
If you want to install the nightly package of docker then use the following command.
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) nightly"
Now let's install the Docker package from the command line.
sudo apt-get update
It is a must to run apt-get update, since we have newly added the Docker repo; follow it up with the installation:
# apt-get install docker-ce docker-ce-cli containerd.io
Enable the dockerd engine daemon process to auto start.
vagrant@node03:~$ sudo systemctl enable docker --now
vamshi@node03:~$ id
uid=1000(vamshi) gid=1000(vamshi) groups=1000(vamshi),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev)
vamshi@node03:~$ grep docker /etc/group
docker:x:999:
# usermod -aG docker vamshi
vamshi@node03:~$ id
uid=1000(vamshi) gid=1000(vamshi) groups=1000(vamshi),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),1001(docker)
The user is now part of the docker group (after logging out and back in so the new group takes effect) and has access to /var/run/docker.sock.
vamshi@node03:~$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 May 22 08:40 /var/run/docker.sock
The docker.sock file is automatically created with root:docker as its owner and group respectively.
And now you will be able to successfully run the docker from your user account.
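The group-membership check above can also be scripted. This is a minimal sketch: the group list below is a simulated sample string, since the real output depends on your host; on a real machine you would pipe `id -nG` instead.

```shell
# Simulated `id -nG`-style output (sample data for illustration):
groups_line="vamshi cdrom floppy audio docker"

# Split on spaces and look for an exact "docker" entry:
if echo "$groups_line" | tr ' ' '\n' | grep -qx docker; then
    echo "user is in the docker group"
else
    echo "user is NOT in the docker group; run usermod -aG docker and re-login"
fi
```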
A common error when IPv4 forwarding is not enabled on the Linux host leads to hard-to-identify issues; here is one such rare log entry from the system logs:
level=warning msg="IPv4 forwarding is disabled. N...t work."
It's good to check the current IPv4 forwarding rules as follows:
[root@LinuxCent ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
You can also enable the setting for the current session using the -w option:
sysctl -w net.ipv4.conf.all.forwarding=1
To make the changes persistent we need to write to a config file and enforce the system to read it.
[root@LinuxCent ~]# vi /etc/sysctl.d/01-rules.conf
net.ipv4.conf.all.forwarding=1
Then apply the changes on the fly with the sysctl command, loading them from the system-wide config files.
# sysctl --system
--system : tells sysctl to read all the configuration files system-wide
[root@Linux1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /etc/sysctl.d/01-rules.conf ...
net.ipv4.conf.all.forwarding = 1
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@Linux1 ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
Bash, being a scripting language, offers conditional if-else constructs; we shall look at them in the following sections.
Firstly, a conditional check has to be performed in order for the corresponding block of code to be executed.
To break down the semantics of conditional control structures in Bash, we need to understand the conditional keyword that performs the validation. It is most commonly represented as "[" and more rarely as the "test" keyword.
It can be better understood by the following demonstration:
vamshi@linux-pc:~/Linux> [ 1 -gt 2 ]
vamshi@linux-pc:~/Linux> echo $?
1
vamshi@linux-pc:~/Linux> [ 1 -lt 2 ]
vamshi@linux-pc:~/Linux> echo $?
0
The [ is synonymous with the test command in the shell.
vamshi.santhapuri@linux-pc:~/Linux> test 1 -gt 2
vamshi.santhapuri@linux-pc:~/Linux> echo $?
1
vamshi.santhapuri@linux-pc:~/Linux> test 1 -lt 2
vamshi.santhapuri@linux-pc:~/Linux> echo $?
0
We shall now look at the different variations of conditional control structures.
if [ Condition ] ; then
statement1
...
statementN
fi
if [ Condition ] ; then
If Block statements
...
else
else-Block statement
fi
if [ Condition ] ; then
If Block statement1
...
elif [ elif Condition ]; then # 1st elif Condition
elif Block statement1
elif [ elif Condition ]; then # 2nd elif Condition
elif Block statements
elif [ elif Condition ]; then # nth elif Condition
elif Block statements
fi
An else can also be appended, which is executed when all the if and elif conditions fail; we will see this in this section.
The "if elif elif else fi" control structure is a multiple-test checking strategy in Bash. It gives the user the power to write as many test conditions as needed until one condition matches, resulting in that block of code being executed. Writing many elifs can be a tedious task, and a switch case is often preferred instead.
if [ Condition ] ; then
If Block statement
elif [ elif Condition ]; then # 1st elif Condition
elif Block statement1
elif [ elif Condition ]; then # nth elif Condition
elif Block statement
...
else # the else block gets control when none of the if or elif conditions are true
else Block statements
fi
At least one of the block statements is executed in this control flow, similar to a switch case. The else block takes the default case when none of the if or elif conditions match.
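The skeleton above can be turned into a small runnable script; this is a minimal sketch where the usage value and the thresholds are illustrative:

```shell
#!/bin/bash
# Classify a disk-usage percentage with if/elif/else.
# The value and thresholds are made up for this example.
usage=72

if [ "$usage" -ge 90 ]; then
    echo "critical"
elif [ "$usage" -ge 70 ]; then
    echo "warning"
else
    echo "ok"
fi
# prints: warning
```

Exactly one branch runs: 72 fails the first test but passes the second, so the warning branch executes and the else is skipped.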
Adding to if..elif..else, there is also the nested if block, wherein nested conditions are validated, demonstrated as follows:
if [ condition ]; then
Main If Block Statements
if [ condition ]; then # 1st inner if condition
1st Inner If-Block statements
if [ condition ]; then # 2nd inner if condition
2nd Inner If-Block statements
if [ condition ]; then
Nth Inner If Block statements
fi
fi
fi
fi
This logic of nested ifs is used in scenarios where the outermost condition must be validated first; only if that test succeeds does control flow pass to the inner if test. Hence the name nested if.
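A minimal runnable sketch of a nested if (demo.txt is a scratch file created just for this illustration):

```shell
#!/bin/bash
# Nested if: the inner test is evaluated only when the outer test succeeds.
echo "hello" > demo.txt

if [ -e demo.txt ]; then           # outer: does the file exist?
    if [ -s demo.txt ]; then       # inner: is it non-empty?
        echo "demo.txt exists and is non-empty"
    fi
fi
# prints: demo.txt exists and is non-empty
```

If the outer test failed (file missing), the inner test would never run at all.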
Here is the switch case bash script with practical explanation.
We will look at the Exit codes within the BASH in the next sections.
# docker run -d -p <host port>:<Container port> --name frontend-lc rrskris/frontend-lc:v0.1
To run the docker container interactively use the below command:
docker run -it -p <Host port>:<Container port> --name frontend-lc rrskris/frontend-lc:v0.1
In the long run we come across the challenge of managing the Docker engine and its disk space consumption.
To effectively manage its resources we have some of the best options, let us take a look at them in this tutorial.
[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE            TOTAL  ACTIVE  SIZE      RECLAIMABLE
Images          7      3       1.442 GB  744.2 MB (51%)
Containers      3      1       2.111 MB  0 B (0%)
Local Volumes   7      1       251.9 MB  167.8 MB (66%)
The -v variant prints the complete verbose details of image space usage, container space usage and local volume space usage.
[root@node02 vamshi]# docker system prune [ -a | -f ]
The option -a removes all the unused images and the stale containers,
and -f forcefully removes them without prompting for confirmation.
Docker images can be removed using the docker image rm <image-id | image-name> command.
The command docker rmi is most commonly used; docker image rm is equivalent and easier to read, being self-explanatory.
[root@node02 vamshi]# docker rmi <image-id | image-name>
The docker images which are dangling, i.e. those without any tags, can be filtered out using the below syntax and removed to save some filesystem space.
We can list out the docker images that are dangling using the filter option as shown below:
# docker images -f "dangling=true"
From the list of images received from the above command, we pass only the image IDs to the docker image rm command as shown below:
[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")
How to list multiple docker images with matching pattern ?
[vamshi@node02 ~]$ docker image ls mysql*
REPOSITORY        TAG     IMAGE ID      CREATED        SIZE
rrskris/mysql     v1      1ab47cba1d63  4 months ago   456 MB
rrskris/mysql     v2      3bd34czc2b90  4 months ago   456 MB
docker.io/mysql   latest  d435eee2caa5  5 months ago   456 MB
Docker provides a good amount of flexibility on the command line and can be combined with grep and awk formatting commands to yield relevant results.
[vamshi@node02 ~]$ docker image rm $(docker image ls | grep -w "^old-image" | awk {'print $3'} )
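The grep/awk extraction used above can be illustrated on a captured sample of `docker image ls` output. The repository names and image IDs below are fabricated for this sketch, and no Docker daemon is needed to try it:

```shell
# Sample `docker image ls`-style output (fabricated for illustration):
listing='REPOSITORY        TAG     IMAGE ID      CREATED        SIZE
old-image         v1      1ab47cba1d63  4 months ago   456 MB
old-image         v2      3bd34czc2b90  4 months ago   456 MB
new-image         latest  d435eee2caa5  5 months ago   456 MB'

# Keep only rows whose repository is exactly "old-image" and print
# column 3, the image ID -- the same list fed to `docker image rm` above:
echo "$listing" | grep -w "^old-image" | awk '{print $3}'
# prints:
# 1ab47cba1d63
# 3bd34czc2b90
```

The -w flag anchors a whole-word match so a repository like old-image-backup would not be swept up by accident.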
A container goes through the following states from the moment it is created from an image until it is removed from the Docker engine: created, running, restarting, paused, removing, exited and dead.
All states apart from running and created serve no live purpose and tend to consume system resources, unless the containers are brought back into action through the docker start command.
We can easily perform filtering operation on the containers using the status flag:
# docker ps -f status=[created | dead | exited | running | restarting | removing]
Docker also allows removal of individual containers using the rm command; we can use docker container rm to have Docker delete a container.
[vamshi@node01 ~]$ docker container rm <container-id | container-name>
[vamshi@node01 ~]$ docker rm <container-id | container-name>
You can also use the -f flag to forcefully remove it.
# docker rm -f <container-id | container-name>
On large Docker farms containing hundreds of containers, it is often a practical approach to continually scan for stale containers and clean them up.
Clean up the docker containers which are in exited state.
# docker ps -q -f status=exited | xargs -I {} docker rm {}
# docker container ls --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}
[vamshi@node01 ~]$ docker container ls -f status=running
The Docker subsystem also offers some internal system commands to get the job done using Docker's garbage collection mechanism.
Docker image builds also leave behind remnants of older build data, which have to be cleaned up at regular intervals on the Docker engine host.
To list the host PIDs of the running containers:
docker ps -q -f status=running | xargs docker inspect --format='{{ .State.Pid }}'
A docker one liner to clear up some docker stale cache:
[root@node01 ~]# docker container ls --filter "status=exited" --filter=status="dead" | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}
A Docker Container is an isolated, independent instance of userspace sharing the host kernel, which means any number of Docker instances can run independent applications.
Docker containers are by design isolated application runtime environments that use the common host system resources, exposed through cgroups, and the host filesystem through the tarball filesystem obtained from generating a Docker image.
All of this is made possible by kernel namespaces, which provision the PIDs and manage port ranges, filesystem partitions and networking. The most astonishing feature, having root privileges inside the container but not outside it, is achieved with the help of chroot functionality.
The Docker storage implements the concept of the copy-on-write (COW) layered filesystems.
Each container gets its own network isolation.
Thus containers are more lightweight than a VM. On the back end this functions by using a chroot filesystem, much like predecessors such as LXC, with its own hierarchy.
It also uses control groups (cgroups), which group together resources and then apply limits on block I/O, memory and CPU.
Namespace: takes system-wide resources, wraps them, and provides those resources as an isolated environment to the instances.
By using a container you don't really have to install an OS, avoiding repetition of similar work, and you are not using the whole disk space repetitively for similar OS files.
There’s only a single kernel which will be shared by multiple docker containers.
In this post we will explain some of the practical Docker use cases and commands :
There are two parts to the Docker Engine in terms of user interaction:
one is the Docker daemon, and the other is the Docker client, which sends commands to interact with the Docker daemon.
# docker build -t <name>:<version-number> -f Dockerfile <.>
The . at the end is important because it signifies the current build context, and the context cannot span backward (upward in the directory tree).
The option of --no-cache is important when building container images which are dependent upon downloading latest libraries from the internet or practically from your on-premise code repository which contains the freshly compiled code artifacts.
Build the Docker image with no caching:
# docker build --no-cache -t frontend-centos-lc:dev0.1 -f Dockerfile .
Once the Docker image is successfully built, we can take a look at the newly created image:
Creating a Docker image from a scratch root filesystem is also a good option for a base image, which gives you the freedom to package the libraries you wish and have complete control over it.
# docker images
# docker image ls
The images are composed of multiple layers which form a union filesystem, each stage of the build command creating an interdependent layer. The base image is in most cases a minimal rootfs: a stripped-down version of a Linux root filesystem. You can find more details here: Building a Docker image from rootfilesystem.
We run the docker inspect command on the docker image to describe various build related details.
# docker image inspect <image-name | image-id >
Example given:
root@node03:/home/vamshi# docker images nexusreg.linuxcent.com:8123/ubuntu-vamshi:v1 --no-trunc
REPOSITORY                                      TAG  IMAGE ID                                                                 CREATED       SIZE
nexusreg.netenrichcloud.com:8088/ubuntu-vamshi  v1   sha256:9a0b6e4f09562a0e515bb2a0ef2eca193437373fb3941c4956e13a281fe457d7  6 months ago  354MB
root@node03:/home/vamshi#
The image layers could be listed by the --tree option.
# docker container inspect 73caf780c813
# docker images --tree
This --tree option is deprecated, and the history command is now used to provide the image layer details.
# docker history <image layer id>
The images are stored under /var/lib/docker/<storage driver> and can be viewed there; the directory contains the container hash name, followed by the container filesystem organized in sub-directories.
Use the docker tag command to tag existing Docker images with a meaningful repository name and append a version tag.
Example given for docker tag command.
# docker tag frontend-centos-nginx:dev0.1 my-repo:8123/frontend-nginx:v0.1
Run the docker command again to check the images, and see the newly tagged image present.
We use docker push to upload the image to a Docker registry, which is a remote Docker repository.
# docker push <docker Registry_name>/<image-name>:<version>
# docker push my-repo:8123/frontend-nginx:v0.1