Docker Community Edition installation and best practices

Docker can be installed from the distribution repositories or by installing the packages manually.

First, we need to get some prerequisite packages onto our Debian server as follows:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common


Adding the Docker repository

We use the distribution's package manager for the Docker installation.

Add the apt key from the trusted Docker repository:

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Verify the key details using the apt-key command as follows:

root@node03:~# apt-key list docker
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <[email protected]>
sub   rsa4096 2017-02-22 [S]

Now let's add the Docker stable repository:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

If you want to install the nightly channel of Docker instead, use the following command:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) nightly"

Now let's install the Docker packages from the command line.

sudo apt-get update

Running apt-get update here is a must, as we have newly added the Docker repository; follow it up with the installation:

# apt-get install docker-ce docker-ce-cli containerd.io

Post-installation procedures for Docker

Enable the dockerd engine daemon to start automatically at boot, and start it now:

vagrant@node03:~$ sudo systemctl enable docker --now

Add the users to the docker group so they can access the Docker daemon socket without sudo.

vamshi@node03:~$ id
uid=1000(vamshi) gid=1000(vamshi) groups=1000(vamshi),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev)
vamshi@node03:~$ grep docker /etc/group
docker:x:999:

# usermod -aG docker vamshi

vamshi@node03:~$ id
uid=1000(vamshi) gid=1000(vamshi) groups=1000(vamshi),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),1001(docker)

The user is now part of the docker group (after logging in again) and has access to /var/run/docker.sock:

vamshi@node03:~$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 May 22 08:40 /var/run/docker.sock

The docker.sock file is owned by root:docker, so members of the docker group can read from and write to it.

And now you will be able to run docker commands from your user account.

Enabling IPv4 forwarding on the Docker server

When IPv4 forwarding is not enabled on the Linux host, containers lose outbound connectivity, leading to hard-to-identify issues. Here is one such warning from the system logs:

level=warning msg="IPv4 forwarding is disabled. N...t work."

It's good to check the current IPv4 forwarding setting as follows:

[root@LinuxCent ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0


You can enable the change for the current session using the -w option:

sysctl -w net.ipv4.conf.all.forwarding=1

To make the change persistent, we write it to a config file and have the system read it:

[root@LinuxCent ~]# vi /etc/sysctl.d/01-rules.conf
net.ipv4.conf.all.forwarding=1

Then apply the changes on the fly with the sysctl command, loading them from the system-wide config files.

# sysctl --system
--system : tells sysctl to read all the configuration files system-wide


[root@Linux1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /etc/sysctl.d/01-rules.conf ...
net.ipv4.conf.all.forwarding = 1
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@Linux1 ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1

Managing Docker disk space

In the long run, the Docker engine's disk space consumption becomes a challenge to manage.

Docker provides some good options for reclaiming that space; let us take a look at them in this tutorial.

How to identify the details of disk space usage in Docker?

[root@node01 ~]# docker system df -v
[root@node01 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   3                   1.442 GB            744.2 MB (51%)
Containers          3                   1                   2.111 MB            0 B (0%)
Local Volumes       7                   1                   251.9 MB            167.8 MB (66%)

The docker system df command summarizes image, container and local-volume space usage; the -v flag prints the complete verbose details.

How to clean up space on Docker?

[root@node02 vamshi]# docker system prune [ -a | -f ]

The -a option removes all unused images (not only dangling ones) and stale containers, and -f forcefully removes them without prompting for confirmation.
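Since prune is destructive, it often pays to run it only when space is actually tight. A minimal cron-friendly sketch, assuming GNU coreutils df; the helper name, the 80% threshold and the /var/lib/docker path are example values:

```shell
# prune_if_low_on_space: run a full prune only when the Docker data
# directory's filesystem usage crosses the threshold.
prune_if_low_on_space() {
    threshold=80
    # df --output=pcent is GNU-specific; tr keeps only the digits.
    usage=$(df --output=pcent /var/lib/docker 2>/dev/null | tail -n 1 | tr -dc '0-9')
    if [ -n "$usage" ] && [ "$usage" -ge "$threshold" ]; then
        docker system prune -af
    fi
}
```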

How to remove docker images?

The docker images can be removed using the docker image rm <image-id | image-name> command.

The command docker rmi is most commonly used, but docker image rm is easier to read and self-explanatory.

[root@node02 vamshi]# docker rmi <image-id>

Dangling images, i.e. those without any tags, can be filtered out using the syntax below and removed to reclaim some filesystem space.

We can list out the docker images that are dangling using the filter option as shown below:

# docker images -f "dangling=true"

From the list produced by the above command, we pass only the image IDs to the docker image rm command as shown below:

[root@node02 vamshi]# docker image rm $(docker images -qf "dangling=true")
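On Docker 1.13 and later, the same cleanup is also available as a built-in prune subcommand. A sketch of both forms wrapped as helpers; the function names are my own:

```shell
# list_dangling: print the IDs of untagged (dangling) images, useful for
# reviewing what would be deleted.
list_dangling() {
    docker images -qf "dangling=true"
}

# remove_dangling: the built-in equivalent; -f skips the confirmation prompt.
remove_dangling() {
    docker image prune -f
}
```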

How to list multiple docker images with a matching pattern?

[vamshi@node02 ~]$ docker image ls "mysql*"
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rrskris/mysql       v1                  1ab47cba1d63        4 months ago        456 MB
rrskris/mysql       v2                  3bd34czc2b90        4 months ago        456 MB
docker.io/mysql     latest              d435eee2caa5        5 months ago        456 MB

How to remove multiple docker images with a matching pattern

The Docker command line offers a good amount of flexibility and can be combined with grep regular expressions and awk formatting to yield the relevant results.

[vamshi@node02 ~]$ docker image rm $(docker image ls | grep -w "^old-image" | awk '{print $3}')


Various states of a docker container

Docker container lifecycle

A container passes through the following states from the moment it is created from an image until it is removed from the Docker engine: created, running, restarting, paused, removing, exited and dead.
Containers in any state other than running serve no live purpose yet still consume system resources, unless they are brought back into action with the docker start command.

We can easily perform filtering operations on the containers using the status flag:

# docker ps -f status=[created | dead | exited | running | restarting | removing]

Docker also allows removal of individual containers with the rm command; both docker container rm and docker rm delete a container:

[vamshi@node01 ~]$ docker container rm <container-id | container-name>
[vamshi@node01 ~]$ docker rm <container-id | container-name>

You can also use the -f flag to forcefully remove it.

# docker rm -f <container-id | container-name>

On large Docker farms containing hundreds of containers, it is often practical to continually scan for stale containers and clean them up.

Clean up the docker containers which are in the exited state:

# docker ps -q -f status=exited | xargs -I {} docker rm {}
# docker container ls --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty -I {} docker rm {}
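On newer Docker releases, docker container prune accepts an until filter, which expresses the same age-based cleanup more directly than grepping the CREATED column. A sketch; the helper name and the 7-day default are example values:

```shell
# prune_old_containers: remove stopped containers older than the given
# age (defaults to 168h, i.e. 7 days); -f skips the confirmation prompt.
prune_old_containers() {
    docker container prune -f --filter "until=${1:-168h}"
}
```

Scheduled from cron, e.g. prune_old_containers 336h for a two-week window.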


List docker containers currently running:

[vamshi@node01 ~]$ docker container ls -f status=running

The Docker subsystem also offers some internal system commands that get the job done using Docker's garbage-collection mechanism.

Docker image builds also leave behind remnants of older build data, which have to be cleaned up at regular intervals on the Docker engine host.
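On BuildKit-era Docker releases (roughly 18.09 onward, to my knowledge) the build cache has its own prune subcommand; a sketch, with the helper name being my own:

```shell
# clean_build_cache: drop the intermediate build cache left behind by
# docker build; -f skips the confirmation prompt.
clean_build_cache() {
    docker builder prune -f
}
```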


How to print out the docker container PIDs

docker ps -qa -f status=running | xargs docker inspect --format='{{ .State.Pid }}'

A Docker one-liner to clear out stale exited and dead containers:

[root@node01 ~]# docker container ls -aq --filter "status=exited" --filter "status=dead" | xargs --no-run-if-empty -I {} docker rm {}

What is Docker, and what is inside a Docker image?

A Docker container is an isolated, independent user-space instance, which means any number of Docker instances can run independent applications on a single kernel.

Docker containers are by design isolated application runtime environments, sharing the host's system resources through cgroups and receiving their filesystem from the layered tarballs that make up a Docker image.

This is all possible because of kernel namespaces, which partition PIDs, port ranges, filesystem mounts and networking, and provide the striking feature of having root privileges inside the container but not outside of it, helped along by chroot-style functionality.
Docker storage implements the concept of copy-on-write (COW) layered filesystems.

Each container gets its own network isolation.

Containers are thus more lightweight than a VM. On the back end this works with a chroot-style filesystem, much like its predecessor LXC, with its own hierarchy.

Control groups (cgroups) group resources together and then apply limits on block I/O, memory and CPU.
Namespaces: they take system-wide resources, wrap them and provide those resources as an isolated environment to the instances.
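These cgroup limits map directly onto docker run flags. A hedged illustration; the image name and the limit values are examples:

```shell
# run_limited: start a container capped at 256 MB of memory, one CPU and
# a reduced block-I/O weight, so it cannot starve the host.
run_limited() {
    docker run --rm -m 256m --cpus 1 --blkio-weight 500 alpine:latest true
}
```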

By using a container you don't really have to install an OS for each workload, so you avoid duplicating similar work and you are not using disk space repetitively for the same OS files.

There is only a single kernel, shared by all the Docker containers on the host.

In this post we will explain some practical Docker use cases and commands:

There are two parts to the Docker Engine in terms of user interaction:
one is the Docker daemon, and the other is the Docker client, which sends commands to interact with the Docker daemon.

How to build a Docker image?

# docker build -t <name>:<version> -f Dockerfile .

The . at the end is important because it specifies the build context, the directory sent to the daemon, and the context cannot span backward.
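To illustrate, a minimal hypothetical Dockerfile; the base image and the paths are example values:

```Dockerfile
# The build context is the directory passed to docker build (the trailing ".").
FROM debian:stable-slim
# COPY paths resolve relative to the context; "COPY ../secret.txt /" would
# fail, because the context cannot span backward.
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]
```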

The --no-cache option is important when building images that depend on downloading the latest libraries from the internet, or from your on-premise code repository containing freshly compiled artifacts.

Build the Docker image with no caching:

# docker build --no-cache -t frontend-centos-lc:dev0.1 -f Dockerfile .

Once the image is successfully built, we can take a look at it.
Creating a base image from scratch from a root filesystem is also a good option; it gives you the freedom to package only the libraries you want and complete control over the result.

Listing the docker images:

# docker images
# docker image ls

What is present inside the Docker image?

The images are composed of multiple layers that form a union filesystem, with each stage of the build command creating a new, interdependent layer. The base image is in most cases a minimal rootfs: a stripped-down version of a Linux root filesystem. You can find more details here: Building a Docker image from a root filesystem.
We run the docker image inspect command on an image to describe various build-related details:

# docker image inspect <image-name | image-id >

For example:

root@node03:/home/vamshi# docker images nexusreg.linuxcent.com:8123/ubuntu-vamshi:v1 --no-trunc
REPOSITORY                                       TAG                 IMAGE ID                                                                  CREATED             SIZE
nexusreg.netenrichcloud.com:8088/ubuntu-vamshi   v1                  sha256:9a0b6e4f09562a0e515bb2a0ef2eca193437373fb3941c4956e13a281fe457d7   6 months ago        354MB
root@node03:/home/vamshi# 

The layers could once be listed with the --tree option:

# docker container inspect 73caf780c813
# docker images --tree

The --tree option is deprecated; the history command now provides the image layer details:

# docker history <image layer id>

The images are stored under /var/lib/docker/<storage-driver> and can be viewed there: the layer hashes appear as directory names, with each layer's filesystem organized in sub-directories.

Use the docker tag command to tag existing images with a meaningful repository name and append a version tag.

An example of the docker tag command:

# docker tag frontend-centos-nginx:dev0.1 my-repo:8123/frontend-nginx:v0.1

Run the docker images command again and see the newly tagged image present.

We use docker push to upload the image to a Docker registry, which is a remote image repository:

# docker push <docker Registry_name>/<image-name>:<version>
# docker push my-repo:8123/frontend-nginx:v0.1


Reset GitLab password from the CLI

How to change a user's password on GitLab?

Here is the demonstration to reset the gitlab password:

We connect to the gitlab-rails console to reset the user's password.

In this demonstration we are going to reset the password of the user called root, whose id is 1.

[vamshi@gitlab ~]$ sudo gitlab-rails console -e production
-------------------------------------------------------------------------------------
GitLab: 11.10.4-ee (88a3c791734)
GitLab Shell: 9.0.0
PostgreSQL: 9.6.11
-------------------------------------------------------------------------------------
Loading production environment (Rails 5.0.7.2)
irb(main):001:0> User.where(id:1).first
=> #<User id:1 @root>

Now we can confirm that id 1 belongs to the root user, the account whose password we want to reset, and assign it to a variable:

irb(main):001:0> user=User.where(id: 1).first
=> #<User id:1 @root>

Now enter the new password and follow up with the password confirmation:

irb(main):003:0> user.password = 'AlphanumericPassword'
=> "AlphanumericPassword"
irb(main):004:0> user.password_confirmation = 'AlphanumericPassword'
=> "AlphanumericPassword"

Now save the change:

irb(main):005:0> user.save
Enqueued ActionMailer::DeliveryJob (Job ID: 961d30a2b-df21-45c8-83e6-1993c85e6030) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x10007feafe3d64e0 @uri=#<URI::GID gid://gitlab/User/1>>
=> true

Then we exit the interactive Ruby shell, only after saving the changes.

Connect to a remote Docker server on a TCP port

The Docker master is where the Docker server/engine daemon runs. Maintaining a single Docker server has strategic importance for build and deployment during continuous release cycles: Docker clients such as the Jenkins CI/CD server and other Docker hosts connect to this master, ensuring the credibility and atomicity of the Docker build process. Most of the time, a dynamic Docker agent from a Jenkins build can connect to it and execute the Docker builds.
The Docker master is the server where the build images are initially created when you run the docker build command during the continuous build process.

To make a Docker instance the Docker master, you need the following:

An up-to-date Docker daemon running, with a good amount of disk space for the /var/lib/docker mount point.

Next, in the file /etc/sysconfig/docker, add the line OPTIONS="-H tcp://0.0.0.0:4243" at the end of the file. Note that exposing the daemon on an unauthenticated TCP port should only be done on a trusted network.
This Docker master runs on a CentOS machine, hence the path /etc/sysconfig/docker; on Ubuntu/Debian the file is usually /etc/default/docker.
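On distributions where dockerd is launched purely through systemd and no OPTIONS file is consulted, the equivalent change is a systemd drop-in. A sketch; the drop-in path follows the usual systemd convention, and the dockerd binary path may differ per release:

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243
```

After editing a drop-in, run systemctl daemon-reload before restarting the service.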
And then restart the Docker daemon as follows:

[vamshi@docker-master01 ~]$ sudo systemctl restart docker

Confirm the changes with the ps command as follows:

[vamshi@docker-master01 ~]$  ps -ef | grep docker
root 2556 1 0 16:09 ? 00:00:05 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json -H tcp://0.0.0.0:4243 --storage-driver overlay2

Connecting to Docker master from client on TCP

Now, the changes to make on the Docker client are as follows:

Make sure the Docker daemon on the client is stopped and disabled; the following command does both at once:

[vamshi@jenkins01 ~]$ sudo systemctl disable docker --now

From the Docker client, we should test and establish the connection to the Docker server over TCP port 4243:

[vamshi@jenkins01 ~]$ docker -H tcp://10.100.0.10:4243 version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: cccb291/1.13.1
Built: Tue Mar 3 17:21:24 2020
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64
Experimental: false

Now that we have confirmed the successful connection from the client to the Docker master, we can make the change permanent by exporting DOCKER_HOST into the system profile on the Docker client (here, our Jenkins server):

[vamshi@jenkins01 ~]$ sudo sh -c 'echo "export DOCKER_HOST=\"tcp://10.100.0.10:4243\"" > /etc/profile.d/docker.sh'

After logging in again so the profile is sourced, the docker client connects to the master without any extra flags:

[vamshi@jenkins01 ~]$ docker version

Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: cccb291/1.13.1
Built: Tue Mar 3 17:21:24 2020
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64
Experimental: false

You might face an error saying: Cannot connect to the Docker daemon at unix:///var/run/docker.sock
This is generally caused by not having the privileges to access /var/run/docker.sock; the socket must be group-owned by docker. See https://linuxcent.com/cannot-connect-to-the-docker-daemon-at-unix-var-run-docker-sock-is-the-docker-daemon-running/ on changing the group ownership for /var/run/docker.sock.
The solution is to add your user to the docker group:
# usermod -aG docker <username>

The best way to identify this issue is to run the docker info and docker version commands.

# docker version

The docker version command output has two sections.
The first section describes the client information, i.e. your workstation.
The second part of the output describes the server-side information.
When the daemon is not reachable, the server section is replaced by an error, as shown below:

# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
(or)
Cannot connect to the Docker daemon at tcp://<docker-server-ip>:4243. Is the docker daemon running?

Either of them can mean that the target server is not running.
Verify by running ps -ef | grep docker.
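For scripting such checks, the Go-template output of docker version is handy for querying just the daemon side; a sketch, with the helper name being my own:

```shell
# server_version: print only the daemon's version string; the command exits
# non-zero when the daemon is unreachable, which scripts can branch on.
server_version() {
    docker version --format '{{.Server.Version}}'
}
```

For example: server_version || echo "daemon unreachable".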

# docker info

This presents the complete information about the Docker system.
In the case of a TCP connection outage, or if the server is not running, this command yields no system information; instead, its output describes the error details.

It is a best practice to create a docker group on the server and make the user part of the docker group.

# sudo groupadd docker

And add the current user to the docker group:

# sudo usermod -aG docker $USER