How to create a user account in Linux

The Linux system provides a couple of command line utilities to create new users on the system.

As we are aware, a Linux login account has the essential fields listed as follows:

  • A unique system-wide username,
  • A strong password,
  • A home directory and
  • A login shell.

These are the mandatory fields to enable account creation.

The other fields are the UID and GID numbers, the numerical IDs associated with the user name and group name, which are allocated sequentially by the useradd utility.

We can broadly categorize login accounts into two types: the privileged account and the normal user account.

The absolute privileged account is root, which is present by default on all Linux machines.

A normal account can be granted root privileges by assigning the user to certain groups and providing elevated access within that scope, as illustrated below.
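For example, on a sudo-based setup where the wheel group is granted sudo rights (as on CentOS/RHEL), an existing user can be given elevated access like this:

$ sudo usermod -aG wheel newuser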

What is the Process to create a User account in Linux?

The user creation has to be done with root privileges, using the useradd command:

$ sudo useradd newuser

Now set the password for the new user:

$ sudo passwd newuser

How to check if the userid is present and active on the system?

The new user's details are added to the /etc/passwd file and the encrypted password information is stored in /etc/shadow.

Now let's check whether the user account was created and has a valid shell:

vamshi@node03:/$ grep vamshi /etc/passwd

vamshi:x:1001:1001::/home/vamshi:/bin/bash

How to Add the user to new groups in Linux?

The usermod command line utility is used to manage group membership: it can append an existing user to additional groups or overwrite the user's group membership entirely.

$ sudo usermod -aG dockerroot,wheel vamshi

The -a option appends the user to the two new groups dockerroot and wheel without overwriting the groups already assigned to the user; omitting -a would leave the user a member of only the groups mentioned in the command.

How to check and verify if the user is a member of group in Linux?

[vamshi@node02 Linux-blog]$ id vamshi
uid=1001(vamshi) gid=1001(vamshi) groups=1001(vamshi),0(root),10(wheel),992(dockerroot)

How to Verify the Login Confirmation in Linux?

From the root user account run the command: su - newuser to check the new login account environment.
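A minimal verification might look like this (the prompt and home directory path shown are illustrative):

# su - newuser
$ whoami
newuser
$ pwd
/home/newuser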

How to find the group names assigned to the user

The user can run the groups command to list the groups in which they have active membership:

[vamshi@linuxcent ~]$ groups
vamshi root wheel dockerroot

Login to the server remotely using SSH

You may now use the ssh command to login with the new username and enter your password at login prompt.

$ ssh [email protected]

How to connect to server with SSH running on non-standard port like 2202?

[vamshi@linuxcent ~]$ ssh localhost -p 2202
Last login: Mon Mar 13 17:57:56 2020 from 10.100.0.1

How to create a user account in Linux using the useradd command?

The user creation can also be done with a parameterized command as demonstrated below:

$ sudo useradd vamshi -b /home/ -m -s /bin/bash

Alternatively you can be more elaborate as mentioned below:

$ sudo useradd vamshi -c "Vamshi's user account" -d /home/vamshi -m -s /bin/bash -G dockerroot

The useradd command options are described as follows:

-b or --base-dir : the base directory in which the new user's home directory is created.

-c or --comment : a description of the user; as a standard practice this is used for the user's full name.

-d or --home-dir : the path of the user's home directory.

-m or --create-home : create the user's home directory at the path given by the -d option.

-s or --shell : the login shell.

-u or --uid : the unique UID for the user on the machine.

-G or --groups : list of secondary groups to be assigned.

-k or --skel : the skeleton directory whose contents are copied into the new home directory (the default is /etc/skel). The useradd defaults that apply when no options are passed are kept in /etc/default/useradd, and can be reviewed as shown below.
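To review the current useradd defaults (including the skeleton directory) you can run either of the following:

$ sudo useradd -D
$ cat /etc/default/useradd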

With the skel properties finely tuned, you can proceed to use the adduser command, which relies on these defaults and the skel behavior, as shown below:

$ sudo adduser vamshi

How to use an SSH key pair to log in:
Use the -i option followed by the /path/to/id_rsa private key file:

$ ssh -i ~/.ssh/id_rsa [email protected]
$ ssh -i ~/.ssh/id_rsa -l <username> linuxcent.com

-l : the login name to use on the remote host

-i : the identity file, i.e. the RSA private key file

 

Troubleshooting the SSH connection in Verbose mode printing Debug information

Using the -v option with the ssh command will print debug information while logging in.

The verbosity can be increased by repeating the option up to three times, i.e. -v, -vv or -vvv.

$ ssh -i ~/.ssh/id_rsa [email protected] -vvv

How to make a file or Folder/Directory un-deletable on Linux

The Linux operating system, as we know, is famous for the phrase "Everything is a file". It is therefore interesting to explore the possibility of making a file undeletable, even by the owner of the file and, for that matter, even by the root user, who is the power user in the Linux ecosystem.

In this section we will see the potential of this feature.

We have already seen how to delete files in the section on removing files in Linux.

We will now demonstrate the power of Linux where you can restrict the deletion of a file on Linux.

Linux offers the chattr command line utility which, as the name suggests, changes file attributes; one practical use is to make a file undeletable.

Sample command syntax:

$ sudo chattr +i <samplefile>

Now we set the immutable attribute on samplefile2.txt and then run ls -l on it:

[vamshi@linuxcent delete-dir]$ sudo chattr +i samplefile2.txt
[vamshi@linuxcent delete-dir]$ ls -l samplefile2.txt
-rw-rw-r--. 1 vamshi vamshi 4 Apr 8 15:42 samplefile2.txt

Now we shall try to write some content to this file; note that there is no change in the basic file permissions (see changing ownership of files).

[vamshi@linuxcent delete-dir]$ echo "New content" > samplefile2.txt
-bash: samplefile2.txt: Permission denied

Deleting the file forcefully with the -f (--force) option?

[vamshi@linuxcent delete-dir]$ sudo /bin/rm -f samplefile2.txt

/bin/rm: cannot remove ‘samplefile2.txt’: Operation not permitted

The Linux command lsattr offers the ability to view the attributes set by the chattr command.
The current file attributes can be listed using lsattr followed by the filename samplefile2.txt as below:

[vamshi@linuxcent delete-dir]$ lsattr samplefile2.txt
----i----------- samplefile2.txt

Even the root user on the host is unable to delete the file or modify its contents.

The file can be deleted only when the attribute is unset, as demonstrated below:

[vamshi@linuxcent delete-dir]$ sudo chattr -i samplefile2.txt
[vamshi@linuxcent delete-dir]$ lsattr samplefile2.txt
---------------- samplefile2.txt

As we can see, lsattr no longer shows the immutable attribute on our file samplefile2.txt, and it is now treated as any other normal file with basic attributes.
The - operation removes the special Linux file attribute from the mentioned file.

The chattr / lsattr command line utilities currently support popular filesystems such as ext3, ext4, XFS, Btrfs, etc.

Linux Find command – with Practical examples

The Linux find command utility is a powerful command for locating files and directories system-wide.

The basic syntax of the find command is very straightforward. The number of filters and the types of options you choose while forming the find command determine the complexity of the operation and are directly proportional to its run time.

Let's examine the general syntax of the Linux find command line utility:

$ find [path] [Options/filters] [expression]

How to Find all the files that are modified within 1 day.

$ find . -mtime -1

Use find command to find all the files that are modified within last x Minutes

$ find . -mmin -360

How to find all the files whose status changed within a given number of minutes (-cmin matches the inode change time, not creation time)

$ find . -cmin -360

How to Find all the files under a specific size on the filesystem.

$ find . -type f -size -100G

NOTE: the - prefix matches files smaller than the mentioned size

How to find really big size files with a given size using Linux Find Command.

$ find . -type f -size +100G

NOTE: the + prefix matches files larger than the mentioned size

How to Find all the files with a pattern match using Find command?

vamshi.santhapuri@linux-pc:~/Linux/Programs> find *.py
my_program.py
myProgram.py
My_program.py

OutPut Demonstrated:

vamshi@linuxcent:~/Linux/Programs> ls -l
total 0
-rwxr-xr-x 1 vamshi users 0 Apr 8 16:33 my_program.py
-rwxr-xr-x 1 vamshi users 0 Apr 8 16:32 myProgram.py
-rwxr-xr-x 1 vamshi users 0 Apr 8 16:32 My_program.py
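Note that find *.py relies on the shell expanding the glob in the current directory before find runs; the more portable form, which also descends into subdirectories and lets find itself do the matching, quotes the pattern:

$ find . -name "*.py"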

 

Find Files Based on file-type using option -type

Find only the regular file type

$ sudo find / -type f -print

Find only the Directory

$ sudo find / -type d -print

Let's look at some special file types, such as character device files and block device files (the latter representing storage devices).

$ sudo find /dev/ -type b -print
/dev/sdb8
/dev/sdb7
/dev/sdb6
/dev/sdb5
/dev/sdb4
/dev/sdb1
/dev/sdb
/dev/sda6
/dev/sda5

How to find empty files and Directories in Linux using Find command?

$ find . -empty # Finds all files and directories that are empty
$ find . -type f -empty # Find only empty files
$ find . -type d -empty # Find only empty directories

How to Find all the hidden files and directories using Find command

You can also add the type flag as required

$ find . -name ".*" //Prints all the files and directories that are hidden

Find the character device files and identify them yourself:

$ sudo find / -type c -print

 

How to exclude some files or directories while using the find command?

The negation Operator ! in find command line

[vamshi@linuxcent ~]$ find / -type d  ! -path /mount ! -path /mnt/3TB/

How to find the files with .py extension and then setting the execute permission on them ?

-exec is a special option which runs the specified command on each of the selected files.

[vamshi@linuxcent ~]$ find *.py -exec chmod +x {} \;

A practical example of Finding and Deleting Big size files using Find Command

We demonstrate finding a big thread dump file, generated in our test environment, on our server and then deleting it.

[vamshi@linuxcent ~]$ find /var -type f -size +100G -exec rm {} \;

 

To find the files with matching file permissions

[vamshi@linuxcent ~]$ sudo find / -type f -perm 0600 -print

More complex find operations can be performed by combining the negation operator, excluding certain mount points, and using the logical AND/OR operators.

For example:

sudo find / -xdev -type d \( -perm -0002 -a ! -perm -1000 \) -exec ls -ld {} \;

Ignore case using the find command

The -iname filter enables find to perform a case-insensitive search.

[vamshi@node02 Linux-blog]$ find . -iname "rEadMe*" 
./Debian-Distro/README-Debian-Distro
./OpenSuse-Distro/README-Opensuse-Distro
./README
./Redhat-Distro/Centos/README-CentOS
./Redhat-Distro/README-Redhat-Distro

How to use Maxdepth Option in the find command to explore the directory depth?

[vamshi@node02 Linux-blog]$ find . -maxdepth 1 -name "README*" 
./README

By increasing the -maxdepth filter we can extend the results one subdirectory level deeper, as demonstrated below:

[vamshi@node02 Linux-blog]$ find . -maxdepth 2 -name "README*" 
./Debian-Distro/README-Debian-Distro
./OpenSuse-Distro/README-Opensuse-Distro
./README
./Redhat-Distro/README-Redhat-Distro

How to use mindepth Option in the find command to restrict the results based on the subdirectory depth?

The -mindepth filter option restricts the find search to start at the nth subdirectory level below the given path.

[vamshi@node02 Linux-blog]$ find . -mindepth 3 -name "README*" 
./Redhat-Distro/Centos/README-CentOS

 

How to use grep command in Linux and Unix

The grep command is one of the most essential commands in the Unix/Linux ecosystem. It is used to match and find the occurrence of string literals and patterns in text files.

In this section we will focus on the important scenarios and explain the various grep command options offered on the Linux command line.

Below is the content of a text file called intro.txt, which we will use throughout this section.

[vamshi@linuxcent grep]$ cat intro.txt
the first line is in lower case
THE SECOND LINE IS IN UPPER CASE


There following are Most popular Linux server Distributions:
1) Redhat Enterprise Linux
2) Ubuntu
3) Centos
4) Debian
5) SUSE Linux Enterprise Server
This is a Demonstration of Linux grep command

Let’s start with search of a string pattern in the given file

The grep command sample search syntax is

$ grep "string pattern" <filename>
$ cat <filename> | grep "string pattern"

Lets see the Demonstration

[vamshi@linuxcent grep]$ grep "Linux" intro.txt
There following are Most popular Linux server Distributions:
1) Redhat Enterprise Linux
5) SUSE Linux Enterprise Server
This is a Demonstration of Linux grep command

Searching for a pattern on multiple files using grep command

Lets see the Demonstration

[vamshi@linuxcent grep]$ grep "Linux" intro.txt intro-demo.txt
intro.txt:There following are Most popular Linux server Distributions:
intro.txt:1) Redhat Enterprise Linux
intro.txt:5) SUSE Linux Enterprise Server
intro.txt:This is a Demonstration of Linux grep command
intro-demo.txt:There following are Most popular Linux server Distributions:
intro-demo.txt:1) Redhat Enterprise Linux
intro-demo.txt:5) SUSE Linux Enterprise Server
intro-demo.txt:This is a Demonstration of Linux grep command

 

How to print the line numbers of matched patterns in a file using the grep command in Linux?

Printing the line number of each line that contains the matched string pattern can be achieved using the -n option:

[vamshi@linuxcent grep]$ grep -n "Linux" intro.txt
4:There following are Most popular Linux server Distributions:
5:1) Redhat Enterprise Linux
9:5) SUSE Linux Enterprise Server
11:This is a Demonstration of Linux grep command

The -b option prints the byte offset at which each matching line starts within the given file.

[vamshi@linuxcent grep]$ grep -b "Linux" intro.txt
66:There following are Most popular Linux server Distributions:
127:1) Redhat Enterprise Linux
184:5) SUSE Linux Enterprise Server
217:This is a Demonstration of Linux grep command

So far we have matched string patterns; if you want to find the occurrence of a whole word, use the grep -w option.

Lets us see the Demonstration below:

[vamshi@linuxcent grep]$ grep -w "Cent" intro.txt


As expected, there is no occurrence of the word Cent in the file; although the word Centos is present, it didn't match because the -w option matches Cent only as a whole word.

But when we grep for the word Linux, the exact occurrences of the word are found and the matching lines are printed.

[vamshi@linuxcent grep]$ grep -w "Linux" intro.txt
There following are Most popular Linux server Distributions:
1) Redhat Enterprise Linux
5) SUSE Linux Enterprise Server
This is a Demonstration of Linux grep command

This can be used in conjunction with -o, which prints only the matched word itself rather than the whole line.

[vamshi@linuxcent grep]$ grep -w -o "Linux" intro.txt
Linux
Linux
Linux
Linux

How to use the Invert Option in Linux grep command

The -v option prints all the lines not matching the given string pattern.

[vamshi@linuxcent grep]$ grep -v "Linux" intro.txt
the first line is in lower case
THE SECOND LINE IS IN UPPER CASE


2) Ubuntu
3) Centos
4) Debian

 

How to print the names of all the files matching the string pattern using Linux Grep command.

Using the -l Option of grep command will print all the file names containing the matching string pattern/string literal

[vamshi@node02 grep]$ grep -l Linux *
intro-demo.txt
intro.txt

How to get the number of Count Matches using Grep command ?

With the grep command we can use the -c option to get the count of lines matching the given pattern/string.

Lets see the Demonstration below

[vamshi@linuxcent grep]$ grep -c "Linux" intro.txt
4

How to Exclude specific Directories while searching with grep command ?

The --exclude-dir="directory name" option skips the contents of the mentioned directory. Let's see the demonstration:

$ grep -w -E "data" * -R --exclude-dir="backup" --exclude-dir="adb-fastboot" --exclude-dir="IntelliJ-IDEA"

How to use the Linux grep command to ignore case or perform a case-insensitive search?

The grep command offers the -i option to ignore case while searching.

[vamshi@linuxcent grep]$ grep -i Cent intro.txt
3) Centos

Let's search for the word Upper in the file with the -i option:

[vamshi@linuxcent grep]$ grep Upper -i intro-demo.txt
THE SECOND LINE IS IN UPPER CASE

How to Grep for the lines before and After the pattern occurrence in a file

We can use the options below to extract N lines before/after the line containing the matching pattern from the given file. Let's look at the options in detail.

Generic Syntax:

grep -A<N> | -B<N> | -C<N> "string pattern" <filename>

Lets see the Demonstration as follows:

-A: Prints N number of lines After the pattern match and the line containing the match

[vamshi@linuxcent grep]$  grep -A2 docker /etc/passwd
dockerroot:x:996:992:Docker User:/var/lib/docker:/sbin/nologin
mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin

-B: Prints N number of lines Before the pattern match and the line containing the match

[vamshi@linuxcent grep]$  grep -B2 docker /etc/passwd
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
builduser1:x:1002:1002::/home/builduser1:/bin/bash
dockerroot:x:996:992:Docker User:/var/lib/docker:/sbin/nologin

-C: Prints the line Containing the pattern and the N number of lines Before and After

[vamshi@linuxcent grep]$ grep -C2 docker /etc/passwd
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
builduser1:x:1002:1002::/home/builduser1:/bin/bash
dockerroot:x:996:992:Docker User:/var/lib/docker:/sbin/nologin
mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin

Linux Grep command using the Regular Expression (regex) search pattern

How to use the Special Meta-characters in Grep Command

The special meta-character ^ (caret) is used to match an expression at the start of a line.
Practical examples of grep command using meta-characters are as follows:

[vamshi@linuxcent grep]$ grep ^root /etc/services
rootd           1094/tcp                # ROOTD
rootd           1094/udp                # ROOTD

The $ (dollar) symbol matches an expression at the end of a line:

[vamshi@linuxcent grep]$ grep "bash$" /etc/passwd
root:x:0:0:root:/root:/bin/bash
vagrant:x:1000:1000:vagrant:/home/vagrant:/bin/bash
vamshi:x:1001:1001::/home/vamshi:/bin/bash
jenkins:x:997:994:Jenkins Automation Server:/var/lib/jenkins:/bin/bash
builduser1:x:1002:1002::/home/builduser1:/bin/bash
linuxcent:x:1003:1004::/home/linuxcent:/bin/bash

Another bonus regex for excluding empty lines from a text file uses the meta-characters ^ (caret) and $ (dollar) together.

[vamshi@node02 grep]$ grep -Ev "^$" intro.txt
the first line is in lower case
THE SECOND LINE IS IN UPPER CASE
There following are Most popular Linux server Distributions:
1) Redhat Enterprise Linux
2) Ubuntu
3) Centos
4) Debian
5) SUSE Linux Enterprise Server
This is a Demonstration of Linux grep command

As the result of the above command shows, the empty lines in our text file are excluded by inverting the match of the empty-line pattern ^$.

Digit Matching operation using Linux Grep Command

The grep option -P offers Perl-compatible regex matching for trickier pattern matching conditions. Let's see the demonstration.

[vamshi@linuxcent grep]$ grep -P "[\W][\d][\d]/" /etc/services

Now we demonstrate numerical pattern matching to pick out lines containing IPv4-like addresses (note that the unescaped . matches any character, so this is an approximate rather than an exact match):

[vamshi@linuxcent grep]$ ip a |grep -E "[0-9].[0-9].[0-9].[0-9]"
inet 127.0.0.1/8 scope host lo
inet 10.100.0.20/24 brd 10.100.0.255 scope global noprefixroute eth1

A similar result can be obtained using -P, as demonstrated below:

[vamshi@linuxcent grep]$ ip a  | grep -P "[\d].[\d].[\d].[\d]"
inet 127.0.0.1/8 scope host lo
inet 10.100.0.20/24 brd 10.100.0.255 scope global noprefixroute eth1

Highlighting the Search patterns with Color codes

If you want to highlight the search pattern in the output, we can make use of the --color option in grep:

[vamshi@linuxcent ~]$ ps -ef |grep java --color=yes

 

grep command and the Meta-characters with practical examples

In extended regular expressions (grep -E) the metacharacters ?, +, {, |, ( and ) are special without a backslash (in basic regular expressions the backslashed forms \?, \+, \{, \|, \( and \) are used instead); these metacharacters turn a plain search string into a rich pattern.

The square brackets [ ] enclose a list of characters and match a single character from that list. Within the brackets we can specify a range, like [a-z], [0-9] or [abcd], but the expression still matches only a single character from the given list or range.
A bracket expression can be combined with the ^ and $ anchors and with the repetition metacharacters to find repetitive patterns.
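As a small illustration combining a bracket expression with the ^ anchor (using the intro.txt file shown earlier), the following matches only the numbered distribution lines:

$ grep -E "^[0-9]+\)" intro.txt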

Here we shall see a demonstration of a few metacharacters combined into a single grep command.

Using the Infix Operator | with the meta-characters

$ dmesg -H | egrep '(s|h)d[a-z]'

A quick practical look at the dmesg to find about the errors and warnings using grep command.

$ dmesg -H | grep -Ei "error|warn"

How to find the exact word match for the word error or err, with a whitespace boundary at the start of the word, from dmesg?
$ dmesg -H | grep -Ei "(er)[r]{1,2}(or)"
$ dmesg -H | grep -Ei "(\We)[r]+(or)|(\Werr)"

Explanation: the regular expression matches the string literal "error" while allowing the literal r to occur a minimum of one and a maximum of two times, so both "eror" and "error" would match.

The output only prints lines containing the word “error“ in a case insensitive grep search.

 

Another practical example: here we apply the same logic to search for occurrences of error without a trailing colon ":" symbol (note that [:]{0} matches zero colons at that position and so does not strictly exclude a following colon; a negated class such as error[^:] would be stricter):

$ dmesg -H | grep -Ei "(error)[:]{0}"

 

How to create symbolic Link or Softlinks in Linux and differentiate between Softlink vs Hardlink

The concept of links in Linux/Unix based systems is unique and gives a deeper understanding of Linux internals at various levels.

The symbolic link, also known as a soft link, is a special type of link that acts as a pointer to the original file present on the disk.

Softlinks can span different filesystems and are widely used during software package installation and configuration.

Lets look at the example of java command linking:

[root@node02 ]# which java
/bin/java
[root@linuxcent ]# ls -l /bin/java
lrwxrwxrwx. 1 root root 22 Apr 10 11:52 /bin/java -> /etc/alternatives/java
[root@node02 boot]# ls -l /etc/alternatives/java
lrwxrwxrwx. 1 root root 73 Apr 10 11:52 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre/bin/java
[root@linuxcent ]#

If you upgrade java on your system, the /bin/java command then points to the newly installed version at /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre/bin/java.

We will see the demonstration of a Symbolic link.

In Linux to make the links between files, the command-line utility is “ln”.

Softlink Creation syntax using the Linux command line utility ln -s option:

ln -s <source file|Directory > <destination Link file|Directory>

Below is an example

[vamshi@linuxcent html]$ ln -sf /var/www/html/index.html /tmp/index.html

[vamshi@linuxcent html]$ ls -l /tmp/index.html

lrwxrwxrwx. 1 vamshi vamshi 24 Apr  1 18:23 /tmp/index.html -> /var/www/html/index.html

 

Now the second file /tmp/index.html is called a symbolic link to the first file on the disk, /var/www/html/index.html.

It means that the second file is just pointing to the first file’s location on disk without actually copying the contents of the file.

[vamshi@linuxcent html]$ cat /var/www/html/index.html
Welcome to LinuxCent.com
[vamshi@linuxcent html]$ cat /tmp/index.html
Welcome to LinuxCent.com

When you edit either of the files, the contents of the original file on disk are directly modified.

 

How to create Softlinks to Directories in Linux ?
The same logic applies to creating soft links to directories. Let's see the demonstration below:

[vamshi@linuxcent html]$  ln -sf /var/www/html/linuxcent/static/ /tmp/static
[vamshi@linuxcent html]$ ls -l /tmp/static
lrwxrwxrwx. 1 vamshi vamshi 32 Apr  8 18:37 /tmp/static -> /var/www/html/linuxcent/static/


 

Removing the link between the files is even simpler.

Simply run the unlink command on the destination link:

Sample command:

unlink <destination Link file | Directory>

Lets see the Demonstration

unlink /tmp/data-dir

 

Understanding Symbolic / Hard link concept better with improving knowledge on Inode.

To better understand symbolic linking we need to understand the concept of inode numbers in Linux. I will give a brief overview here; for a detailed review, please see the What is Inode in Linux section.

We can list the inode number of any file on a Linux system using ls -i <filename>.

The extreme left column indicates the inode number, which is unique within a filesystem:

[root@node02 ~]# ls -li /var/www/html/index.html /tmp/index.html 
67290702 lrwxrwxrwx. 1 root root 24 Apr 8 19:09 /tmp/index.html -> /var/www/html/index.html
33557754 -rw-r--r--. 1 root root 25 Apr 8 18:19 /var/www/html/index.html

We can see here that the inode numbers of the two files are different, because the second file is just a pointer to the original source file.

So why do we create the Symbolic Links ?

The advantages of softlinks are as follows:

  1. One advantage of softlinks/symbolic links is that they can span different filesystems.
  2. Many softlinks can point to a single file/directory.
  3. Symbolic links can be created to both directories and files.
  4. The rsync program preserves symbolic links by default.
  5. A softlink becomes usable again immediately if the source is recreated or recovered, for example after a network outage.

What are the best practices when using Softlinks ?

The best practice while creating softlinks is to use absolute paths for the source and destination links.

On the other hand you should also understand the disadvantages or shortcomings:

  1. It is important to observe that if the source file is deleted, the symbolic link becomes useless (a dangling link).
  2. Network filesystem issues can make softlinks that point into those filesystems unavailable.

It is also essential to understand hardlinks to get the overall picture.

 

Creating a hard link is a simple operation using ln command with no options

Sample Command:

$ ln /path/to/source/ /path/to/HardLink/

So lets start of by creating a hard link of our file /var/www/html/index.html

[vamshi@linuxcent linuxcent]$ ln /var/www/html/index.html /tmp/index-hl.html

The command ls -il lists the inode number of the files.

[vamshi@linuxcent  ]$ ls -i
33557752 index-hl.html  33557752 index.html

To conclude, hard linking results in the same inode number, however many times a hardlink to the same file is created.
The data continues to persist in the same storage location on the filesystem even if one of the hardlinked files is deleted.
As long as the inode number is identical, renaming either file does not matter to the filesystem and there is no change in hardlink behaviour.

Let's add some content to the hardlink file here and see the demonstration:

[vamshi@linuxcent ]$ echo "We are updating the file to check out the Hardlinks" >> /tmp/index-hl.html

The new line is added in the original file.

[vamshi@linuxcent linuxcent]$ cat index.html
Welcome to LinuxCent.com
We are updating the file to check out the Hardlinks

Content manipulations to either of the files will be treated as a same file.

How to Identify how many times a particular file was linked ?

Note that the Linux command ls -li provides the link count in the column immediately after the permission bits, represented by the number 2 here, which means the file has a second hardlink reference and both names are interchangeable.
It should also be noted that a file has a link count of 1 by default, as it references itself.

$ls -li
33557752 -rw-rw-r--. 2 vamshi vamshi 59 Apr  8 19:48 index-hl.html
33557752 -rw-rw-r--. 2 vamshi vamshi 59 Apr  8 19:48 index.html

In case either one of the file is deleted, the other file survives and continues to function properly.

Lets see some hard facts on the Hardlinks.

  1. Hardlinks can’t be created to Directories.
  2. They do not span across filesystems.

Linux Copy File Command for Files and Directories – cp Command Examples

The Linux copy command cp is generally used for organizing data on the Linux operating system; it copies files and directories.

We shall take a deeper look at the Linux cp command utility in this section.

In order to copy files and directories, you must have read permissions on the source file(s) and write permissions on the destination directory

How do I copy files under Linux operating systems?

How do I make a 2nd copy of a file on a Linux bash shell?

How can I copy files and directories on Linux?

Linux Copy File command Syntax

cp sourcefile destinationfile
cp sourcefile DESTDIR
cp sourcefile1 sourcefile2 DESTDIR
cp [OPTION] SOURCE DESTFILE
cp [OPTION] SOURCE DESTDIR

How to Copy a Directory if the destination does not exist?

To achieve this we can make use of the following cp command options -R or -r: Copy directories recursively.

Linux cp command Syntax with -R option:

cp -R SOURCE DESTINATION

If the destination doesn’t exist, it will be created.

It can also be used to Copy the contents Recursively

Lets see the demonstration as follows:

[vamshi@linuxcent ]$ cp -R dir1/ dir1-Recursive
[vamshi@linuxcent ]$ ls -l 
total 0
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:37 dir1-Recursive

Use the verbose option -v to print the copy activity information on the screen, as shown in the illustration below.
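A minimal illustration (the file names here are only examples):

$ cp -v file1.txt file1-copy.txt
‘file1.txt’ -> ‘file1-copy.txt’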

How to Preserve the Source file and Directory permission?

Linux Copy command Syntax with -p option:

-p option preserves the mode, ownership and timestamps from the source to the destination

cp -p file1 file1-copy

Lets us see the Demonstration as Below

[vamshi@node02 cp-command]$ cp -Rp dir1/ dir1-copy
[vamshi@node02 cp-command]$ ls -ld dir1*
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1-copy
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:37 dir1-Recursive

From the output we can conclude that the cp command with the -p option preserves the original timestamp information when copying to the destination.

Linux cp command with Force copy -f Option, It forcefully overwrites the destination content
Sample Syntax:

cp -f file1 file1-copy

How to Copy Multiple files at once ?

The asterisk/wildcard (*) character is used to copy multiple files matching the same pattern.

[vamshi@linuxcent ]$ cp -varpf file* DEST/
‘file10.txt’ -> ‘DEST/file10.txt’
‘file1.txt’ -> ‘DEST/file1.txt’
‘file2.txt’ -> ‘DEST/file2.txt’
‘file3.txt’ -> ‘DEST/file3.txt’
‘file4.txt’ -> ‘DEST/file4.txt’
‘file5.txt’ -> ‘DEST/file5.txt’
‘file6.txt’ -> ‘DEST/file6.txt’
‘file7.txt’ -> ‘DEST/file7.txt’
‘file8.txt’ -> ‘DEST/file8.txt’
‘file9.txt’ -> ‘DEST/file9.txt’

The -d option preserves links and -p preserves mode, ownership and timestamps; they can be used in conjunction with the -R option to copy contents recursively from the source directory.

How to Copy Files and Folders on Linux Using the cp Command recursively to Destination Directory

How to preserve the links with cp command?

Using the options: -a (archive mode) preserves links and file attributes, -r copies the content recursively (same as -R), -p preserves mode, ownership and timestamps, and -v prints verbose information.

[vamshi@node02 Linux-blog]$ cp -varpf Redhat-Distro/ /tmp/DEST
‘Redhat-Distro/’ -> ‘/tmp/DEST’
‘Redhat-Distro/Fedora’ -> ‘/tmp/DEST/Fedora’
‘Redhat-Distro/Fedora/fedora.txt’ -> ‘/tmp/DEST/Fedora/fedora.txt’
‘Redhat-Distro/Centos’ -> ‘/tmp/DEST/Centos’
‘Redhat-Distro/Centos/centos.txt’ -> ‘/tmp/DEST/Centos/centos.txt’
‘Redhat-Distro/Centos/CentOS-versions’ -> ‘/tmp/DEST/Centos/CentOS-versions’
‘Redhat-Distro/Centos/CentOS-versions/centos7.txt’ -> ‘/tmp/DEST/Centos/CentOS-versions/centos7.txt’
‘Redhat-Distro/Centos/CentOS-versions/centos6.1.txt’ -> ‘/tmp/DEST/Centos/CentOS-versions/centos6.1.txt’
‘Redhat-Distro/Centos/README-CentOS’ -> ‘/tmp/DEST/Centos/README-CentOS’
‘Redhat-Distro/README-Redhat-Distro’ -> ‘/tmp/DEST/README-Redhat-Distro’
‘Redhat-Distro/RHEL-Versions’ -> ‘/tmp/DEST/RHEL-Versions’
‘Redhat-Distro/RHEL-Versions/redhat5.txt’ -> ‘/tmp/DEST/RHEL-Versions/redhat5.txt’
‘Redhat-Distro/RHEL-Versions/redhat8.txt’ -> ‘/tmp/DEST/RHEL-Versions/redhat8.txt’
‘Redhat-Distro/redhat.txt’ -> ‘/tmp/DEST/redhat.txt’

How to make a symbolic link to a file with the Linux cp command?

As we know, the ln command is used to create symbolic links, but the cp command can also do that for files with the -s option, which creates symbolic links:

cp -s SOURCE DESTINATION

Linux copy command demonstration with a softlink:

[vamshi@linuxcent ~]$ cp -s file1.txt file2.txt
[vamshi@linuxcent ~]$ ls -l
total 0
-rw-rw-r--. 1 vamshi vamshi 0 Apr 11 06:39 file1.txt
lrwxrwxrwx. 1 vamshi vamshi 9 Apr 11 06:39 file2.txt -> file1.txt

Linux cp command with interactive prompt using -i option

Sample Syntax:

cp -i file1 file1-copy

Also, you can make it a best practice to set up an alias for the cp command enabling the -av options:

cp -av SOURCE DESTINATION
alias cp="cp -av"

How can I copy hidden files?

To copy hidden files we can use the cp command with the -a option; let us see a practical example.

$ cp -av source/ destination/
‘source/.config1’ -> ‘destination/source/.config1’
‘source/.config2’ -> ‘destination/source/.config2’
‘source/.config3’ -> ‘destination/source/.config3’

Generally, hidden files in Linux are prefixed with a dot (.), so we can also use the wildcard character * to copy them; below is another practical example.

[vamshi@linuxcent cp-command]$ cp -av source/.conf* destination/
‘source/.config1’ -> ‘destination/.config1’
‘source/.config2’ -> ‘destination/.config2’
‘source/.config3’ -> ‘destination/.config3’

How to Copy Files from One Location to Another on Linux Using the cp Command

Assuming we have a couple of users on our linux server called Alice and Bob

[alice@linuxcent ~]$ sudo cp -avrpf /home/alice/djangoproject1/ /home/bob/
‘djangoproject1/’ -> ‘/home/bob/djangoproject1’
‘djangoproject1/__init__.py’ -> ‘/home/bob/djangoproject1/__init__.py’
‘djangoproject1/asgi.py’ -> ‘/home/bob/djangoproject1/asgi.py’
‘djangoproject1/settings.py’ -> ‘/home/bob/djangoproject1/settings.py’
‘djangoproject1/urls.py’ -> ‘/home/bob/djangoproject1/urls.py’
‘djangoproject1/wsgi.py’ -> ‘/home/bob/djangoproject1/wsgi.py’
‘djangoproject1/__pycache__’ -> ‘/home/bob/djangoproject1/__pycache__’
‘djangoproject1/__pycache__/__init__.cpython-36.pyc’ -> ‘/home/bob/djangoproject1/__pycache__/__init__.cpython-36.pyc’
‘djangoproject1/__pycache__/settings.cpython-36.pyc’ -> ‘/home/bob/djangoproject1/__pycache__/settings.cpython-36.pyc’

How to backup files using cp command?

The Linux cp command offers the --backup option to back up existing destination files before they are overwritten; below is the command:

cp --backup source destination

Check the Open Ports in Linux

The Ports on a Linux OS are used for exchange and transfer of data on the network connected devices.

A very high number of security exploits happen due to a lack of surveillance of inbound connections targeting specific ports. It is essential to identify the underlying Linux process that opens a specific port for listening on a shared network.

Thus, it is important to identify which ports are open on your Linux machine.

For basic administration tasks, identifying a port and its correlating application keeps you aware of the open sockets and lets you enhance network security by preventing intrusions with appropriate firewall rules.

In this tutorial we will look at some of the most popular Linux network tools, see how to gather information, and identify processes such as the Apache httpd or Nginx web servers so that you avoid conflicts while configuring and troubleshooting them.

We will discuss some of the popular tools and their general commands syntax.

Netstat Command and its Syntax

netstat

Netstat offers multiple features and is a must-know tool for your day-to-day activities.

It gives information about the open ports on your Linux machine along with connections in the Established, TimeWait and Closed states.

A commonly used invocation is netstat -ntlp.
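For example, run it with sudo so that the owning process of each socket is shown:

$ sudo netstat -ntlp

Here -n prints numeric addresses, -t limits the output to TCP sockets, -l shows only listening sockets and -p prints the owning PID/program name.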

ss

It stands for Socket Statistics; it is a command line utility which provides information about open ports and the corresponding process ID that opened each port.

The ss command is the successor to the netstat command in Linux and has similar options to its predecessor; in fact it is an enhancement over the old netstat command.

First up lets run ss -tua, which lists all the TCP and UDP Sockets.

[vamshi@linuxcent ~]$ ss -tua

Netid  State    Recv-Q Send-Q  Local Address:Port     Peer Address:Port
udp    UNCONN   0      0       127.0.0.1:domain       *:*
udp    UNCONN   0      0       127.0.0.2:domain       *:*
udp    UNCONN   0      0       *:bootpc               *:*
udp    UNCONN   0      0       *:sunrpc               *:*
udp    UNCONN   0      0       127.0.0.1:323          *:*
udp    UNCONN   0      0       *:lanserver            *:*
udp    UNCONN   0      0       [::]:sunrpc            [::]:*
udp    UNCONN   0      0       [::1]:323              [::]:*
udp    UNCONN   0      0       [::]:lanserver         [::]:*
tcp    LISTEN   0      100     127.0.0.1:smtp         *:*
tcp    LISTEN   0      128     *:sunrpc               *:*
tcp    LISTEN   0      128     *:ssh                  *:*
tcp    ESTAB    0      0       10.100.0.20:ssh        10.100.0.1:45662
tcp    LISTEN   0      100     [::1]:smtp             [::]:*
tcp    LISTEN   0      70      [::]:33060             [::]:*
tcp    LISTEN   0      128     [::]:mysql             [::]:*
tcp    LISTEN   0      128     [::]:sunrpc            [::]:*
tcp    LISTEN   0      128     [::]:http              [::]:*
tcp    LISTEN   0      128     [::]:ssh               [::]:*

 

How to get the Established connection information using ss command in linux

$ ss -tua state established

 

[vamshi@node02 ~]$ ss -l sport = :80

The detailed options ss offers are as follows:

-a : Displays all Sockets:

-i : Displays internal TCP information

-t : Displays only TCP Sockets

-l : Displays Only Listening Sockets

-u : Displays only UDP Sockets

-r : Resolves host names

-n : Doesn’t Resolve the Hostnames

-p : Display process information using the Socket

 

The ss command takes the following state filters:

established | syn-sent | syn-recv | fin-wait-1 | fin-wait-2 | time-wait | closed | close-wait | last-ack | listening | closing

 

nmap

It is one of the most popular open source tools for exploring networks and is mainly used for security auditing.

Nmap is used extensively for host discovery and for port and protocol scanning, such as ICMP, TCP and UDP port scanning.

It also performs service and version detection as well as operating system detection.

When used effectively it serves as an intrusion prevention aid and is used for security reporting.

Run the command sudo nmap localhost, or scan a host or network as shown below:

$ sudo nmap 10.100.0.0/24
vamshi@linuxcent:~$ nmap 10.100.0.20

Starting Nmap 7.70 ( https://nmap.org ) at 2020-04-10 13:45 GMT
Nmap scan report for 10.100.0.20
Host is up (0.00014s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
21/tcp   open  ftp
22/tcp   open  ssh
53/tcp   open  domain
80/tcp   open  http
111/tcp  open  rpcbind
3306/tcp open  mysql
8009/tcp open  ajp13
MAC Address: 08:00:27:5A:26:BD (Oracle VirtualBox virtual NIC)

Nmap done: 1 IP address (1 host up) scanned in 1.51 seconds

It lists all the ports that are Open and even lists up the MAC Address of the Target Host.

How to scan a list of hosts from a file using nmap command

You can do it and run the command as demonstrated below:

$ nmap -iL /etc/hosts # where you have a list of ip/dns names

To increase the verbosity of nmap output use -v<n> where n is a number ranging from 0 – 9

Nmap offers a lot of options to gather information; let's see some of them listed below:

 

-sn : Lists and does Ping Scan on the network; Least aggressive

-sL : Lists hosts to scan on the network

-O : Enables Target Host OS detection.

-sS : Does the syn Scan, in stealth mode.

-sT : Performs TCP port Scan

-sU : Performs UDP port Scan.

-p : Scans only for Target Port Listed Eg: # sudo nmap -v  -sS -p80 10.100.0.0/24

-PA : TCP Ack Flag is set

-sV : Extracts the service and version information from target hosts.

 

lsof

The lsof command gives information on the list of open files on the system, as the abbreviation suggests. It is one of the most valuable tools when troubleshooting under fire, as it shows you the practical behaviour of the Linux system.

Start by running the lsof command; it prints a lot of information about all the programs currently running and owned by you, including block/filesystem files, network stream and character files, virtual memory paging files and temporary data files.

Listing the openfiles using lsof

Now run lsof -u <Your login username>

$ lsof -u vamshi

# Prints a bunch of information about the open files for a particular user.

Now lsof has a lot of options to extract relevant information

To list the total number of open files on the system

$ sudo lsof | wc -l
$ sudo lsof -i TCP:22
$ sudo lsof -P -i:22
vamshi@linuxcent:~$ sudo lsof -P -i:22

COMMAND  PID   USER   FD  TYPE DEVICE SIZE/OFF NODE NAME
sshd     380   root    3u IPv4  13672      0t0  TCP *:22 (LISTEN)
sshd     380   root    4u IPv6  13683      0t0  TCP *:22 (LISTEN)
sshd    1295   root    3u IPv4  20367      0t0  TCP 10.100.0.30:22->10.100.0.1:39054 (ESTABLISHED)
sshd    1307 vamshi    3u IPv4  20367      0t0  TCP 10.100.0.30:22->10.100.0.1:39054 (ESTABLISHED)
Lists the information of open and Established connections of the given port

lsof offer a lot of options and filters, lets list some of the most commonly used ones below:

 

-u : takes a username as a filter and lists the open files owned by that user

-i : lists open network files (sockets), optionally filtered by protocol, host and port

-P : inhibits the translation of port numbers into service names and prints the numeric port for simplicity

 

Change file(s) and Directory/Folder permissions – chmod command in Linux

The Linux command utility chmod is used to set permissions on files and directories

The command chmod, or change mode, is widely used to modify the access permissions of files and directories; this helps users keep their data secure and properly organized.
There are three ways the file permissions can be specified with the chmod command:

1) Symbolic Notation

2) Octal Notation

3) Reference operator

The Owners, Groups and Others have different permissions to access a particular file.
Lets see the Sample syntax of chmod

$ chmod [option] [mode] < file | directory>

The user categories are symbolically represented as follows:

u is used for the User/Owner

g is used for the Group

o is used for the Others

The permission modes used to modify the read, write and execute permissions on a file or directory are symbolically represented as follows:

r is for Read permission

w is for Write permission

x is for Execute permission

The octal notation is simpler in terms of representation, with r=4, w=2 and x=1 adding up to an aggregate value of 7 for each of the placeholders Owner (u), Group (g) and Others (o) respectively.

Let's see the file permissions for a sample file with complete access for every user category, i.e. owner, group and others, meaning that anyone who has access to the file can modify, execute (run) or delete file.txt.

[vamshi@node02 linuxcent]$ ls -l file.txt
-rwxrwxrwx. 1 vamshi vamshi 3 Apr 10 19:13 file.txt

From the permissions listed above we would like to remove the ability of "others" to write to this file and limit them to reading it.

The command to remove the write permission from others is:

[vamshi@linuxcent ~]$ chmod o-w file.txt
[vamshi@linuxcent ~]$ ls -l file.txt
-rwxrwxr-x. 1 vamshi vamshi 3 Apr 10 19:13 file.txt

We use the + and - operators to grant or remove the read, write and execute permissions for the user categories, as demonstrated below:

[vamshi@node02 linuxcent]$ chmod og-rw file.txt
[vamshi@node02 linuxcent]$ ls -l file.txt
-rwx------. 1 vamshi vamshi 3 Apr 10 20:23 file.txt

Combining operations: adding read for others and group, adding write for group, and removing execute for the owner.

[vamshi@node02 linuxcent]$ chmod og+r,g+w,u-x file.txt
[vamshi@node02 linuxcent]$ ls -l file.txt
-rw-rw-r--. 1 vamshi vamshi 3 Apr 10 20:23 file.txt

Creating a new file newfile.txt and working with it

[vamshi@node02 linuxcent]$ echo "Welcome to LinuxCent" > newfile.txt

Here are the file's permissions when it is newly created; they are the result of the umask, which we look at below.

Let's look at a practical example of adding the execute permission for all user categories:

$ chmod +x  /etc/profile.d/java-path.sh

Listing out the System defined Default umask in Symbolic notation.

The Default Umask permissions can be demonstrated as below:

[vamshi@node02 linuxcent]$ umask -S
u=rwx,g=rwx,o=rx

Umask in Octal Notation as below:

[vamshi@node02 linuxcent]$ umask
0002

Now let's see the new file's permissions:

[vamshi@node02 linuxcent]$ ls -l newfile.txt
-rw-rw-r--. 1 vamshi vamshi 21 Apr 10 19:49 newfile.txt

Our file has the octal permissions 664, which is the result of the system-wide umask value (002) being subtracted from the default 666 for new regular files.

The Octal mode of Permissions

These permissions carry numerical values, adding up to a maximum of 7 per user category, as follows:

r=4 w=2 x=1

For example: chmod 754 newfile.txt

[vamshi@node02 linuxcent]$ ls -lth newfile.txt
-rwxr-xr--. 1 vamshi vamshi 3 Apr 10 19:13 newfile.txt
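Breaking 754 down: the owner gets 7 = 4+2+1 (rwx), the group gets 5 = 4+1 (r-x) and others get 4 (r--), which matches the listing above.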

How to apply reference permissions in Linux using chmod ?

We shall use the --reference option, which takes the permissions of an existing file and applies them to the target file.

Lets see the demonstration below:

vamshi@linuxcent ~]$ ls -l anotherfile.txt
-rwxrwx---. 1 vamshi vamshi  33 Apr 8 19:52 anotherfile.txt
vamshi@linuxcent ~]$ chmod --reference=anotherfile.txt newfile.txt
[vamshi@linuxcent ~ ]$ ls -l newfile.txt 
-rwxrwx---. 1 vamshi vamshi 21 Apr 10 19:49 newfile.txt

With this reference operator, the permissions of newfile.txt are modified to be identical to those of anotherfile.txt.

 

Docker container volumes

The concept of Docker is to run a compressed image as a container which serves its purpose, after which the container can be removed, leaving no trace of the data generated during its runtime. This case is referred to as an ephemeral container.
Docker storage is traditionally non-persistent and a container retains only the data baked in at image build time; Docker therefore provides the facility to attach volume mounts to a running container for data storage, which manages the persistence issue to a certain extent.
Volumes give us the ability to store data and make it available to the container runtime environment as needed.

In practice, Docker volumes are mounted into the container at runtime via command line arguments, which are demonstrated below.

This implementation gives the running container a volume binding capability, and it can be broadly categorized into two approaches, which are listed as follows.

(1) Volume mapping from the host to the target container; this is a directory mapping between the host and the container which happens at runtime.
(2) A named volume that can be shared among containers; it persists even if the container is deleted, and the same volume can be mounted again into another Docker container.

The storage options offered by Docker provide persistent storage of files and preserve the data across container restarts and removal.

The example for the host bind volume mapping is as follows:

# docker run -d -v src:target --name=container-name docker-image

The syntax of docker directory mount:

# docker run -it -v /usr/local/bin:/target/local --name=docker-container-with-vol  ubuntu:latest /bin/bash

We will invoke a container by mounting the host directory nginx-html onto the target container path /usr/share/nginx/html.

[vamshi@node01 ~]$ docker run -d --name my-nginx1 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 80:80 nginx:1.17.2-alpine
34797e6d8939e42bc8cfe36eed4b60521355edadc2fa6c74a26fe4172384575c

Now we log into the container and verify the contents

[vamshi@node01 ~]$ docker exec -it my-nginx1 sh
~ # hostname
34797e6d8939
~ # df -h /usr/share/nginx/html
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                40.0G     16.1G     23.9G  40% /usr/share/nginx/html

The df -h output from inside the container shows the path /usr/share/nginx/html as a mount point.

Checking the contents of the webroot directory at /usr/share/nginx/html inside the container.

~ # cat /usr/share/nginx/html/index.html
<H1>Hello from LinuxCent.com</H1>

This is the same file which we have on our host machine and it is shared through the volume mount.
We verify using the curl command as follows

~ # curl localhost
<H1>Hello from LinuxCent.com</H1>

We can also mount the same directory into as many containers as we like, which can be an effective way of updating the static content used by our containers.

This is a bind mount operation offered by Docker to mount a host directory into a container. This can be confirmed by inspecting the container as follows:

Here is an extract from the docker container inspect my-nginx1 output:

            {
                "Type": "bind",
                "Source": "/home/vamshi/nginx/nginx-html",
                "Destination": "/usr/share/nginx/html",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

As we can see the Type of mount is depicted as a bind here.

The Go template formatter can also be used with filtering options to extract the mount information:

[vamshi@node01 ~]$ docker container inspect my-nginx1 -f="{{.Mounts}}"
[{bind /home/vamshi/nginx-html /usr/share/nginx/html true rprivate}]

The Dockerfile VOLUME expression

Using the Persistent docker volumes.

We now focus our attention to the Docker volume mounts, which are isolated storage resources in docker and are a persistent storage which can be reused and mounted to the containers.

The VOLUME instruction can be used while writing a Dockerfile; it creates a docker image with the volume settings, and the volume is mounted when a container is started from that image.
We use the VOLUME expression in the Dockerfile as follows:

VOLUME [ "my-volume01" ]

We now build the image; let's observe the output below.

Step 1/4 : FROM nginx-linuxcent
 ---> 55ceb2abad47
Step 2/4 : COPY nginx-html/index.html /usr/share/nginx/html/
 ---> c482aa15da5a
Removing intermediate container a621e114a01d
Step 3/4 : VOLUME my-volume01
 ---> Running in ac523d6a02f0
 ---> 72423fe5f27d
Removing intermediate container ac523d6a02f0
Step 4/4 : RUN ls /tmp
 ---> Running in 8fc1fbc0f0bb
 ---> 0f453e3cfff1
Removing intermediate container 8fc1fbc0f0bb

Here the volume name my-volume01 is treated as an absolute path from /, so it is mounted at /my-volume01 inside the container.

/ # df -Th /my-volume01/
Filesystem Type Size Used Available Use% Mounted on
/dev/sda1 xfs 40.0G 15.1G 24.9G 38% /my-volume01

The information can be extracted by inspecting the container as follows:

# docker container inspect nginx-with-vol -f="{{.Mounts}}"
[{volume 7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19 /var/lib/docker/volumes/7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19/_data my-volume01 local true }]

 

If you would like the volume to be mounted to some other path then you can declare that in Dockerfile VOLUME as below:

VOLUME [ "/mnt/my-volume" ]

The information can be extracted from the docker image by inspecting for the volumes.

            "Image": "sha256:72423fe5f27de1a495e5e875aec83fd5084abc6e1636c09d510b19eb711424cc",
            "Volumes": {
                "/mnt/my-volume": {},
                 }

The volumes defined in the Dockerfile VOLUME expression are not given a friendly name; they show up as anonymous volumes with hashed names, scoped to the specific docker container, but they can still be shared with other docker containers. We will look at sharing docker volumes in the following section. It is also important to understand that once a volume is explicitly deleted, its data cannot be recovered.

Using the container volume from one container and accessing them in another container.

This option is beneficial when shell access and debug tools are disabled in a container and you need to view that container's logs and run analysis on them.

Using the volumes from one container and mounting them into another container for auditing purposes, we create a container called view-logs which uses the volumes from the container called nginx-with-vol:

docker run -d --name view-logs --volumes-from nginx-with-vol debug-tools

Best Practices:
The view-logs container can have a set of debug and troubleshooting tools to view the logs of other app containers.

Creating a Volume from the docker commandline:

The docker volume resource has to be initialized first and can be done as follows:
Command to create a docker volume:
# docker volume create my-data-vol
[vamshi@node01 ~]$ docker volume inspect my-data-vol

[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data-vol/_data",
        "Name": "my-data-vol",
        "Options": {},
        "Scope": "local"
    }
]

We can use this volume and then mount this to a container as we mounted the host volume in earlier sections.

We will be using the persistent volume name and mounting the docker volume onto a target path inside the container.
This option has a specific syntax which has to be specified exactly while running the mount operation.

Below is the syntax:
docker run –volume :</path/to/mountpoint/> image-name.

We now have the docker volume available and mount it to the target container path /data as follows:
# docker run -d --name data-vol --volume my-data-vol:/data nginx-linuxcent:v4
2dfa965bbbc79a522e9c109ef8eee20bf47e2b61062f3b3df61d4eb677de4506

Verification:

The docker volumes can be listed with the following command:
# docker volume ls

Check the volume details with docker volume inspect (shown above), and the mount information with docker container inspect.
Here is an extract from the docker container inspect output:

"Mounts": [
            {
                "Type": "volume",
                "Name": "my-data-vol",
                "Source": "/var/lib/docker/volumes/my-data-vol/_data",
                "Destination": "/var/log/nginx",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
             }

As we can see, the Type of the mount is depicted as volume here.

Conclusion:
We have seen that mounting a host path to a target container path is a bind operation, and the dependency it creates is host affinity: the data is tied to a particular host, which should be avoided if you are dealing with more dynamic data exchange between containers over a network. It is, however, very useful for host-to-container data exchange.
The option with volumes is very dynamic and has less binding dependency on the host machine. Volumes can be declared and used in two ways, as demonstrated in the earlier sections: the first is explicit volume creation, and the other is volume creation from within the Docker build.
Explicit volume creation and then binding it at run time gives you the scope of choosing a mount point inside the container after the image is built.

The Dockerfile's VOLUME expression can be used to automatically define the desired mount point, and it supports the same volume-sharing techniques for data exchange at run time.

What is docker container volume?

Docker volumes are file systems mounted on Docker containers to preserve data generated by the running container. Without a volume, the data doesn't persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it. … The data cannot easily be moved somewhere else.

How do I find the volume of a docker container?

We can find out where the volume lives on the host by using the docker inspect command on the host (open a new terminal and leave the previous container running if you're following along): docker inspect -f "{{json .Mounts}}" vol-test | jq.

What is the purpose of docker volume?

docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.

How many volumes are there in docker?

Docker volumes are used to persist data from within a Docker container. There are a few different types of Docker volumes: host, anonymous, and named. Knowing what the difference is and when to use each type can be difficult, but hopefully, I can ease that pain here.
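
As an illustrative sketch (the host path, volume name, and nginx image below are arbitrary examples, not taken from this text), the three types correspond to the following -v forms:

# host (bind) mount - an absolute host path on the left side
$ docker run -d -v /opt/app-data:/usr/share/nginx/html nginx
# anonymous volume - only a container path is given, docker generates a hashed name
$ docker run -d -v /usr/share/nginx/html nginx
# named volume - a volume name on the left side
$ docker run -d -v app-data:/usr/share/nginx/html nginx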

How do I create a docker volume?

docker volume create

  1. Description. Create a volume. …
  2. Usage. $ docker volume create [OPTIONS] [VOLUME]
  3. Extended description. Creates a new volume that containers can consume and store data in. …
  4. Options. Name, shorthand. …
  5. Examples. Create a volume and then configure the container to use it: …
  6. Parent command. Command. …
  7. Related commands.
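
Putting those steps together, a minimal usage sketch could look like this; the volume name hello and the busybox image are arbitrary choices for illustration:

$ docker volume create hello
$ docker run --rm -v hello:/world busybox ls /world
$ docker volume inspect hello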

What is Docker desktop volume?

Docker volumes on Windows are always created in the path of the graph driver, which is where Docker stores all image layers, writeable container layers and volumes. By default the root of the graph driver in Windows is C:\ProgramData\docker , but you can mount a volume to a specific directory when you run a container.

How do I add volume to a running docker container?

Step 1 – Copy. If the path where we are going to add the volume is not empty, make sure to copy the content to the host system, as adding a volume will overwrite the container data at that location. …
Step 2 – Create a new image. …
Step 3 – Delete the container. …
Step 4 – Create a new container (a command sketch of these steps follows below).
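
A sketch of those four steps, using entirely hypothetical names (the webapp container, the webapp:with-vol image tag, the /var/www/html data path and the webapp-data volume):

# Step 1 - copy the existing data out of the container path that will become the volume
$ docker cp webapp:/var/www/html ./html-backup
# Step 2 - commit the running container to a new image
$ docker commit webapp webapp:with-vol
# Step 3 - delete the old container
$ docker rm -f webapp
# Step 4 - create a new container from the committed image, this time with a named volume mounted
$ docker run -d --name webapp -v webapp-data:/var/www/html webapp:with-vol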

How do I know my container size?

To view the approximate size of a running container, you can use the command docker container ls -s . Running docker image ls shows the sizes of your images.

Can we attach volume to running container?

To attach a volume into a running container, we are going to: use nsenter to mount the whole filesystem containing this volume on a temporary mountpoint; create a bind mount from the specific directory that we want to use as the volume, to the right location of this volume; umount the temporary mountpoint.

Linux rsync command

The Linux command-line utility rsync is a very robust and fast content-copy command which can be used within the same Linux host and over a connected network between two Linux hosts. It is a special program with built-in intelligence: based on file checksum calculations, it does not copy data repetitively if the destination already has the same copy as the source.

We shall explore some of the practical rsync command features and demonstrate them

Syntax of command

$ rsync [OPTIONS] /source/path/ /dest/path/

Running rsync on the same host?

[vamshi@linuxcent ~]$ rsync -avx newfile.txt /tmp/
sending incremental file list
newfile.txt

sent 133 bytes  received 35 bytes  336.00 bytes/sec
total size is 21  speedup is 0.12

Running rsync between two hosts in a network

$ rsync [OPTIONS] host:/source/path/ /dest/path/
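
As a concrete sketch (the user name, host IP and paths below are placeholders, not taken from any output above):

$ rsync -avx vamshi@192.168.1.10:/var/www/html/ /backup/html/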

Run rsync in dry-run mode by using the -n option

$ rsync -avn source_host:/source/path /dest/path

Note that in a single rsync invocation only one of the two paths can be remote; the other must be local.

This generally runs over the SSH protocol and you are required to enter the login credentials appropriately.

How to invoke SSH remote shell in rsync?

In case you are using SSH keys, you have to invoke the remote shell to authenticate to the remote server with your private key pair. This is demonstrated as follows:

$ rsync -avxn --rsh="ssh -i ~/.ssh/vamshi_id_rsa" vamshi@<Your.Source.IP.DNS>:"/<Source_Path>" "/<Destination_Path>"

For more information about the SSH key setup, please refer to our SSH keys section.

How to preserve hard links using rsync? The demonstration follows.

Flag: -H. Using this option preserves hard links in the destination copy of the data.

$ rsync -avHx /path/to/source/ /path/to/destination/

The most practical examples of working with rsync are replicating mission-critical data and transferring database dumps between DB servers.

How to exclude certain directories in rsync in Linux?

Using the --exclude filter option is demonstrated as follows:

$ rsync -avx /source/path/to/backup-v31/ /dest/databackups/backup-v31/ --exclude="DontTouchMyData/"

Using the --delete option removes files and directories from the destination that no longer exist on the source, keeping the destination an exact mirror of the source.

Note: If you instead want to remove the copied files from the source after a successful transfer (an effect similar to the mv command on Linux, but performed over the network between source and destination hosts), rsync provides the --remove-source-files option.

$ rsync -avx --delete /source/path/to/backup-v31/ /dest/databackups/backup-v31/ --exclude="data/" --exclude="data/board" --exclude="cache/apt" --exclude="opt"

Redirect the rsync output to a file by appending the output redirection symbol and a file path:

$ rsync -avx --delete /source/path/to/backup-v31/ /dest/databackups/backup-v31/ --exclude="data/" --exclude="data/board" --exclude="cache/apt" --exclude="opt"  >>/tmp/rsync.log

What is rsync command Linux?

rsync or remote synchronization is a software utility for Unix-like systems that efficiently syncs files and directories between two hosts or machines. … Copying/syncing to/from another host over any remote shell like ssh or rsh.

How do I use rsync in Linux?

Syntax of rsync command:

  • -v, --verbose : verbose output.
  • -q, --quiet : suppress message output.
  • -a, --archive : archive files and directories while synchronizing (-a is equal to the following options: -rlptgoD)
  • -r, --recursive : sync files and directories recursively.
  • -b, --backup : take a backup during synchronization.

What is rsync in bash?

rsync is a fast and versatile command-line utility for synchronizing files and directories between two locations over a remote shell, or from/to a remote Rsync daemon. … Rsync can be used for mirroring data, incremental backups, copying files between systems, and as a replacement for scp , sftp , and cp commands.

How do I transfer files using rsync?

You can use SecureShell (SSH) or Remote Sync (Rsync) to transfer files to a remote server. Secure Copy (SCP) uses SSH to copy only the files or directories that you select. On first use, Rsync copies all files and directories and then it copies only the files and directories that you have changed.

What is rsync command do?

Rsync is typically used for synchronizing files and directories between two different systems. For example, if the command rsync local-file user@remote-host:remote-file is run, rsync will use SSH to connect as user to remote-host.


What is rsync in RHEL?

Rsync can be used to quickly move large amounts of data to both local and remote destinations. For this reason, rsync is often used to copy data, make backups, migrate hosts, and bridge the gap between site staging and production environments.

How does rsync work in Linux?

An rsync process operates by communicating with another rsync process, a sender and a receiver. At startup, an rsync client connects to a peer process. If the transfer is local (that is, between file systems mounted on the same host) the peer can be created with fork, after setting up suitable pipes for the connection.

How do I rsync a file in Linux?

Copy a single file locally: If you want to copy a file from one location to another within your system, you can do so by typing rsync followed by the source file name and the destination directory. Note: Instead of "/home/tin/file1.txt", we can also type "file1" as we are currently working in the home directory.


Best Linux Text Editors

You can choose between several text editors in Linux. Each editor has its own advantages and disadvantages.

1. Vi/Vim
Vi is a powerful and the most popular command-line-based editor, commonly used for writing code and editing configuration files. Its first advantage is availability: Vi is installed by default on virtually every distribution. The second advantage is its low consumption of system resources. One of the cons is its non-intuitive, though short, commands.

Vi has 3 modes: command, input, and last line mode. Command mode is the default.
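
A quick sketch of moving between the three modes while editing an arbitrary file:

$ vi notes.txt     # the file opens in command mode
i                  # press i to enter input (insert) mode and type your text
<Esc>              # press Esc to return to command mode
:wq                # last line mode: write the changes and quit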

2. Nano
Nano is a WYSIWYG (what you see is what you get) editor and is installed by default in Ubuntu and many other Linux distributions. Actions/commands are performed in a CTRL + key manner; for example, CTRL + O writes (saves) the file and CTRL + X exits the editor. Features: Autoconf support, case-sensitive search, auto-indent ability, and regular expression search and replace.
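
A minimal nano session, with an arbitrary file name:

$ nano notes.txt   # open (or create) the file and type your text
                   # CTRL + O then Enter writes the file, CTRL + X exits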

3. Gedit
Gedit is the default text editor for the GNOME desktop environment. Its aim is to be simple and easy to use for beginner Linux users. Useful features include syntax highlighting, clipboard support, bracket matching, and search and replace with support for regular expressions.

4. GNU Emacs
Emacs is an extensible, self-documenting editor. It provides an interpreter for Emacs Lisp. Its main function is text editing, but it also includes a project planner, a mail and news reader, a debugger interface, and a calendar.

5. Leafpad
Leafpad is a GTK+ based editor that is popular among new Linux users because it is easy to use. It supports the codeset option, auto codeset detection, and drag & drop. It does not provide syntax coloring.

Which text editor is best Linux?

12 Best Text Editors For Linux Distros

  • Sublime Text. Sublime Text is a feature-packed text editor built for "code, markup, and prose." It natively supports tons of programming languages and markup languages. …
  • Atom. …
  • Vim. …
  • Gedit. …
  • GNU Emacs. …
  • Visual Studio Code. …
  • nano. …
  • KWrite.

What are the most common text editors in Linux?

Top 10 Text Editors for Linux Desktop

  • VIM. If you are bored of using the default "vi" editor in linux and want to edit your text in an advanced text editor that is packed with powerful performance and lots of options, then vim is your best choice. …
  • Geany. …
  • Sublime Text Editor. …
  • Brackets. …
  • Gedit. …
  • Kate. …
  • Eclipse. …
  • Kwrite.

What text editor comes with Linux?

Almost all Linux distributions, even older versions, come with the Vim editor installed.

What is the best text editor 2020?

10 best code editors for 2020

  • Visual studio code. Visual studio code commonly referred to as VS code, is one of the best code editors in the market. …
  • Sublime text. If you are looking for a very lightweight yet robust code editor, the sublime text is your option. …
  • Atom Editor. …
  • Notepad++ …
  • Bluefish. …
  • Brackets. …
  • Phpstorm. …
  • GNU Emacs.

What text editor should I use for Linux?

There are two command-line text editors in Linux®: vim and nano. You can use one of these two available options should you ever need to write a script, edit a configuration file, create a virtual host, or jot down a quick note for yourself. These are but a few examples of what you can do with these tools.

What is the best text editor to use?

Best text editors in 2021: for Linux, Mac, and Windows coders and programmers

  • Sublime Text.
  • Atom.
  • Visual Studio Code.
  • Espresso.
  • Brackets.
  • Notepad++
  • Vim.
  • BBedit.

What is the best IDE for Linux in 2020?

10 Best IDEs For Linux In 2020!

  • NetBeans.
  • zend Studio.
  • Komodo IDE.
  • Anjuta.
  • MonoDevelop.
  • CodeLite.
  • KDevelop.
  • Geany.

Is VI the best text editor?

Vim is the best text editor/IDE out there. It is the "editor of choice of old-time Unix hackers". Vim is one of the most popular programming editors out there. It's loved by geeks for its speed, extensive feature set, and flexibility.

Which text editor is used in Linux?

A Linux system supports multiple text editors. There are two types of text editors in Linux, which are given below: Command-line text editors such as Vi, nano, pico, and more. GUI text editors such as gedit (for Gnome), Kwrite, and more.

Which is the most common text editor?

The 15 Most Popular Text Editors for Developers

  • UltraEdit.
  • Dreamweaver.
  • Komodo Edit / Komodo IDE.
  • Aptana.
  • PSPad.
  • Vim.
  • TextMate.
  • Notepad++