Vamshi Krishna Santhapuri

Experienced Operations Engineer with a demonstrated history of working in the computer software industry. Skilled in OpenShift, Kubernetes, Docker, DevOps practices, and Linux system administration. A strong engineering professional with a Bachelor of Technology (B.Tech.) focused in Computer Science from JNTUH.

How to get the docker container ip ?

The metadata of docker containers can be extracted using the docker inspect command. We demonstrate it as follows:

The docker CLI output formatting is based on Go templates; the --format (-f) option lets you extract specific fields from the JSON that docker inspect returns.

[vamshi@node01 ~]$ docker inspect <container-name | container-id> -f '{{ .NetworkSettings.IPAddress }}'
172.17.0.2
[vamshi@node01 ~]$ docker inspect my-container --format='{{ .NetworkSettings.IPAddress }}'
172.17.0.2
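If the container is attached to a user-defined network instead of the default bridge, the .NetworkSettings.IPAddress field may be empty; in that case the addresses live under .NetworkSettings.Networks. A minimal sketch, reusing the example container name my-container, which prints one IP per attached network:

[vamshi@node01 ~]$ docker inspect my-container -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'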

RBACs in kubernetes

Kubernetes provides Role-Based Access Control (RBAC) as a built-in security mechanism.

Roles are groupings of PolicyRules that define the capabilities and limitations within a namespace.
The Identities (or Subjects) are the users, groups, or ServiceAccounts that are assigned Roles; together these constitute RBAC.
This is achieved by referencing a Role from a RoleBinding.

Kubernetes has the namespaced Role and RoleBinding objects and the cluster-wide ClusterRole and ClusterRoleBinding objects.

There is no concept of a deny permission in RBAC.

A RoleBinding ties a Role to one or more Subjects.

Now let's look at each of these terms in detail.

Subjects:

  • user
  • group
  • serviceAccount

Resources:

  • configmaps
  • pods
  • services

Verbs:

  • create
  • delete
  • get
  • list
  • patch
  • proxy
  • update
  • watch

You create a kind: Role with a name and then bind it to a Subject via a roleRef by creating a kind: RoleBinding.
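As a rough sketch, the same Role and RoleBinding can also be created imperatively with kubectl; the names pod-reader and pod-reader-binding are example names of my own, and builduser01 is the service account described next:

kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=default
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:builduser01 --namespace=default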

[vamshi@master01 k8s]$ kubectl describe serviceaccounts builduser01 
Name:                builduser01
Namespace:           default
Labels:              
Annotations:         
Image pull secrets:  
Mountable secrets:   builduser01-token-rmjsd
Tokens:              builduser01-token-rmjsd
Events:              

The role builduser-role has permissions on all resources to create, delete, get, list, patch, update and watch.

[vamshi@master01 k8s]$ kubectl describe role builduser-role
Name:         builduser-role
Labels:
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"builduser-role","namespace":"default"},"ru...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *          []                 []              [create delete get list patch update watch]

Using this, you can limit a user's access to your cluster.

View the current ClusterRoleBindings on your kubernetes cluster:

[vamshi@master01 :~] kubectl get clusterrolebinding
NAME                                                   AGE
cluster-admin                                          2d2h
kubeadm:kubelet-bootstrap                              2d2h
kubeadm:node-autoapprove-bootstrap                     2d2h
kubeadm:node-autoapprove-certificate-rotation          2d2h
kubeadm:node-proxier                                   2d2h
minikube-rbac                                          2d2h
storage-provisioner                                    2d2h
system:basic-user                                      2d2h

The ClusterRole describes the resources and the verbs that are accessible to the user.

[vamshi@linux-r5z3:~] kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

Listing the roles on Kubernetes:

[vamshi@master01 :~] kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   kubeadm:bootstrap-signer-clusterinfo             2d2h
kube-public   system:controller:bootstrap-signer               2d2h
kube-system   extension-apiserver-authentication-reader        2d2h
kube-system   kube-proxy                                       2d2h
kube-system   kubeadm:kubelet-config-1.15                      2d2h
kube-system   kubeadm:nodes-kubeadm-config                     2d2h
kube-system   system::leader-locking-kube-controller-manager   2d2h
kube-system   system::leader-locking-kube-scheduler            2d2h
kube-system   system:controller:bootstrap-signer               2d2h
kube-system   system:controller:cloud-provider                 2d2h
kube-system   system:controller:token-cleaner                  2d2h

We can further examine the RoleBinding named system::leader-locking-kube-scheduler, which is associated with the service account kube-scheduler.

[vamshi@master01 :~]  kubectl describe rolebindings system::leader-locking-kube-scheduler -n kube-system
Name:         system::leader-locking-kube-scheduler
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  Role
  Name:  system::leader-locking-kube-scheduler
Subjects:
  Kind            Name                   Namespace
  ----            ----                   ---------
  User            system:kube-scheduler  
  ServiceAccount  kube-scheduler         kube-system

The resources are organized under API groups; some of the API groups and core kinds include:

apiextensions.k8s.io, apps, autoscaling, batch, Binding, certificates.k8s.io, events.k8s.io, extensions, networking.k8s.io, PodTemplate, policy, scheduling.k8s.io, Secret, storage.k8s.io

The complete list of resource kinds available in Kubernetes is as follows:

APIService, CertificateSigningRequest, ClusterRole, ClusterRoleBinding, ComponentStatus, ConfigMap, ControllerRevision, CronJob, CSIDriver, CSINode, CustomResourceDefinition, DaemonSet, Deployment, Endpoints, Event, HorizontalPodAutoscaler, Ingress, Job, Lease, LimitRange, LocalSubjectAccessReview, MutatingWebhookConfiguration, Namespace, NetworkPolicy, Node, PersistentVolume, PersistentVolumeClaim, Pod, PodDisruptionBudget, PodSecurityPolicy, PriorityClass, ReplicaSet, ReplicationController, ResourceQuota, Role, RoleBinding, RuntimeClass, SelfSubjectAccessReview, SelfSubjectRulesReview, Service, ServiceAccount, StatefulSet, StorageClass, SubjectAccessReview, TokenReview, ValidatingWebhookConfiguration and VolumeAttachment
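As a quick check, you can list the resource kinds known to your own cluster (the exact set varies with the Kubernetes version) using:

kubectl api-resources -o name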

Generate SSL certificates using openssl

In this section we generate SSL certificates using openssl by creating a Certificate Signing Request (CSR) and signing it with a Certificate Authority (CA).

The files ca.key and ca.crt are the Certificate Authority's key and certificate.

We will generate the .key and .csr (Certificate Signing Request) files with the command below.

[root@node01 ssl]# openssl req -new -sha256 -newkey rsa:2048 -nodes -keyout linuxcent.com.key -days 365 -out linuxcent.com.csr -sha256 -subj "/C=IN/ST=TG/L=My Location/O=Company Ltd./OU=IT/CN=linuxcent.com/subjectAltName=DNS.1=linuxcent.com"

Verify the .csr file that is generated as shown below:

[root@node01 ssl]# openssl req -in linuxcent.com.csr -noout -text
Certificate Request:
Data:
Version: 0 (0x0)
Subject: C=IN, ST=TG, L=MY Location, O=Company Ltd., OU=IT, CN=linuxcent.com/subjectAltName=DNS.1=linuxcent.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:00:e4:b4:24:d7:22:ec:5d:c1:37:8c:d1:a0:62:17:
96:24:77:8d:75:4e:d5:74:15:4d:61:e0:8b:66:d6:
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha256WithRSAEncryption
         87:ef:83:b2:a6:f5:3a:f3:6f:1c:e4:02:ec:bf:5d:75:64:1d:
-- OUTPUT TRUNCATED --

Now we will use the root ca.key and ca.crt to digitally sign this .csr and generate a .crt.

[root@node01 ssl]# openssl x509 -req -in linuxcent.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out linuxcent.com.crt -days 365 -sha256
Signature ok
subject=/C=IN/ST=TG/L=My Location/O=Company Ltd./OU=IT/CN=linuxcent.com/subjectAltName=DNS.1=linuxcent.com

We have generated the .crt file from the .csr

[root@node01 ssl]# ls linuxcent.com.crt linuxcent.com.key 
linuxcent.com.crt linuxcent.com.key

Therefore we have successfully generated the linuxcent.com.key and linuxcent.com.crt files and digitally signed the certificate with the root CA key and certificate.
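As a sanity check, we can verify the new certificate against the CA (openssl verify should report OK) and print its subject, issuer and validity dates:

[root@node01 ssl]# openssl verify -CAfile ca.crt linuxcent.com.crt
[root@node01 ssl]# openssl x509 -in linuxcent.com.crt -noout -subject -issuer -dates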

Generating Self Signed SSL certificates using openssl

Here we use the openssl req command with the -x509 option, which produces a self-signed certificate directly instead of a CSR.

We generate the self-signed SSL certificate using the following request, as demonstrated below.

openssl req -x509 -days 365 -newkey rsa:2048 -nodes -keyout linuxcent.com.key -out linuxcent.com.crt -sha256 -subj "/C=IN/ST=State/L=My Location/O=Company Ltd./OU=IT/CN=linuxcent.com/subjectAltName=DNS.1=linuxcent.com"

The -days parameter can be set to any number of days, depending on your requirement.

Self-signed certificates are most commonly used within an internal network or among a small group of familiar users, for example inside an office, for specific purposes. They are not advisable for use on the public internet, because browsers cannot verify the authenticity of the certificate or the genuineness of the website concerned. Self-signed certificates are not validated by any third party unless you import them into your browsers beforehand.
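For a self-signed certificate the subject and the issuer are identical, which can be confirmed with:

openssl x509 -in linuxcent.com.crt -noout -subject -issuer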

Setting up the puppet master and puppet client server

Make sure that you have set the hostname properly on the puppet master server. You can do it with the hostnamectl command.
The hostname assumed by default for your puppet master is "puppet", but you can give it any name as long as it is reachable from the other servers over your network via the mapped FQDN.

It's good practice to set up /etc/hosts with an alias named puppet if you are just starting out for the first time.

Installing the puppet yum repository sources to download the puppet packages.

[root@puppetmaster ~]# sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
Retrieving https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.ibJsVY: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppet5-release-5.0.0-12.el7     ################################# [100%]

Installing the Puppet Master service from the yum repository.

[root@puppetmaster ~]# yum install puppetserver

Verify which packages are installed on your machine
[root@puppetmaster ~]# rpm -qa | grep -i puppet
puppetserver-5.3.13-1.el7.noarch
puppet5-release-5.0.0-12.el7.noarch
puppet-agent-5.5.20-1.el7.x86_64

Ensure that the following entries are updated in the file /etc/puppetlabs/puppet/puppet.conf under the [master] section:

[master]
certname = puppetmaster.linuxcent.com
server = puppetmaster.linuxcent.com

Enabling the puppetserver Daemon and starting puppetserver

[root@puppetmaster ~]# systemctl enable puppetserver
[root@puppetmaster ~]# systemctl start puppetserver

The puppet server process starts on the port 8140.

[root@puppetmaster ~]# netstat -ntlp | grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      21084/java
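If firewalld happens to be active on the master (an assumption, it is not shown above), port 8140 also needs to be opened so that the agents can reach it:

[root@puppetmaster ~]# firewall-cmd --permanent --add-port=8140/tcp
[root@puppetmaster ~]# firewall-cmd --reload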

Setting up the puppet client.
Installing the yum repository to download the puppet installation packages.

[vamshi@node01 ~]$ sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm

Installing the puppet agent.

[vamshi@node01 ~]$ sudo yum install puppet-agent

Once we have the puppet agent installed, we need to point the puppet client at the puppetmaster FQDN by updating the file /etc/puppetlabs/puppet/puppet.conf under the [main] section:

[main]
certname = node01.linuxcent.com
server = puppetmaster.linuxcent.com

Running the puppet agent to set up communication with the puppet master:

[vamshi@node01 ~]$ sudo puppet agent --test
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for node01.linuxcent.com
Info: Applying configuration version '1592492078'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.02 seconds

With this we have successfully raised the signing request to the master.

Listing the puppet agent certificate requests on the puppet master:

[root@puppetmaster ~]# puppet cert list --all
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
[root@puppetmaster ~]# puppet cert sign node01.linuxcent.com
Signing Certificate Request for:
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
Notice: Signed certificate request for node01.linuxcent.com
Notice: Removing file Puppet::SSL::CertificateRequest node01.linuxcent.com at '/etc/puppetlabs/puppet/ssl/ca/requests/node01.linuxcent.com.pem'

Now that we have successfully signed the puppet agent request, we are able to see the + sign on the left side of the agent host name as demonstrated in the following output.
[root@puppetmaster ~]# puppet cert list --all
+ "node01.linuxcent.com" (SHA256) 15:07:C2:C1:51:BA:C1:9C:76:06:59:24:D1:12:DC:E2:EE:C1:47:35:DD:BD:E8:79:1E:A5:9E:1D:83:EF:D1:61

The signed certificates are saved under the location /etc/puppetlabs/puppet/ssl/ca/signed.

[root@node01 signed]# ls
node01.linuxcent.com.pem  puppet.linuxcent.com.pem

To clean up an agent's certificates:

puppet cert clean node01.linuxcent.com

This removes the agent's entries from the puppetmaster records; a new certificate request then has to be submitted to this puppetmaster.

The autosign.conf file can also be used if you are going to manage a huge farm of puppet clients and the manual signing of clients becomes a tedious task. We can set up a wildcard like *.linuxcent.com to auto-approve the signing requests originating from client hosts in the linuxcent.com network domain.
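A minimal autosign.conf sketch for that wildcard, placed at /etc/puppetlabs/puppet/autosign.conf (assuming the default confdir), is simply one pattern per line:

*.linuxcent.com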

nginx reverse proxy setup for kibana dashboard

How to Setup Nginx Reverse proxy for Kibana.

In this demonstration we will see how to setup the reverse proxy using the nginx webserver to the backend kibana.

We begin by installing the latest version of nginx server on our centos server:

$ sudo yum install nginx -y

The nginx package is present in the epel repository, which you have to enable:

$ sudo yum --enablerepo=epel install nginx -y

Once the nginx package is installed we need to enable the daemon and start it with the following command:

[vamshi@node01 ~]$ sudo systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

We now create the nginx configuration file for the kibana backend and place it under /etc/nginx/conf.d/kibana.conf as shown below.

We can also set up a Restricted Access configuration for enhanced security, as shown below on the lines with auth_basic and auth_basic_user_file. You may skip the Restricted Access configuration if you believe it is not required.

[vamshi@node01 nginx]$ sudo cat conf.d/kibana.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

With the configuration in place, we now check the nginx config syntax using the -t option as shown below:

[vamshi@node01 nginx]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now restart the nginx server and head over to the browser.

$ sudo systemctl restart nginx

On your browser, enter the server IP or FQDN and you will be redirected to the URL http://your-kibana-server.com/app/kibana#/home

Set up the htpasswd authorization config with user details.

We now install the htpasswd tool from the package httpd-tools as follows:

$ sudo yum install httpd-tools -y

Adding the Authorization details to our .htpasswd file.

[vamshi@node01 nginx]$ sudo htpasswd -c /etc/nginx/.htpasswd vamshi
New password: 
Re-type new password: 
Adding password for user vamshi
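Note that the -c option creates (and overwrites) the file; to add further users to the existing file, drop the -c flag. For example, with a hypothetical user guest:

[vamshi@node01 nginx]$ sudo htpasswd /etc/nginx/.htpasswd guest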

So we have now successfully added the auth configuration.

[vamshi@node01 nginx]$ sudo htpasswd -n /etc/nginx/.htpasswd 
New password: 
Re-type new password: 
/etc/nginx/.htpasswd:$apr1$tlinuxcentMY-htpassEsHEEanL21

As the password is stored as a one-way hash, it cannot be decoded; to change it you have to generate a new hash.
Verifying the htpasswd login with the curl command:

$ curl http://kibana-url -u<htpasswd-username>
[vamshi@node01 ~]$ curl kibana.linuxcent.com -uvamshi -I
Enter host password for user 'vamshi':
HTTP/1.1 302 Found
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:48:35 GMT
Content-Length: 0
Connection: keep-alive
location: /spaces/enter
kbn-name: kibana
kbn-license-sig: 2778f2f7e07680b7aefa85db2e7ce7bd33da5592b84cefe62efa8
kbn-xpack-sig: ce2a76732a2f58fcf288db70ad3ea
cache-control: no-cache

If you enter invalid credentials you will encounter a 401 HTTP error code, restricting unauthorized access.

HTTP/1.1 401 Unauthorized
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:51:36 GMT
Content-Type: text/html
Content-Length: 179
Connection: keep-alive
WWW-Authenticate: Basic realm="Restricted Access"

Now we head over to the browser to check the htpasswd login prompt in action at http://your-kibana-server.com
Conclusion: the htpasswd configuration provides an extra layer of authorized access to your sensitive URLs; in effect you now need to enter the htpasswd login to access the same kibana dashboard.

Mounting the external volumes to jenkins docker container

Creating a docker volume

To use an external volume for our future container, we first need to create a filesystem on the volume.
We format our block device with the ext4 filesystem, as demonstrated below:

vamshi@node03:~$ sudo mkfs.ext4 /dev/sdb
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: bc335e44-d8e9-4926-aa0a-fc7b954c28d1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Here is the command to create a volume by specifying the path to the block device, using the local driver and scope:

docker volume create jenkins_vol1 --driver local --opt type=ext4 --opt device=/dev/sdb
jenkins_vol1

We have successfully created a docker volume backed by a block device.

Inspecting the docker volume that is created:

vagrant@node03:~$ docker volume inspect jenkins_vol1
[
    {
        "CreatedAt": "2020-05-12T17:22:11Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins_vol1/_data",
        "Name": "jenkins_vol1",
        "Options": {
            "device": "/dev/sdb",
            "type": "ext4"
        },
        "Scope": "local"
    }
]

Creating my jenkins container, which will use the docker volume jenkins_vol1 and mount it at /var/jenkins_home/.m2:

docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=jenkins_vol1,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest

We have successfully started our container; now let's log in to the container and check our volume.

jenkins@fc2c49313ddb:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
/dev/sdb       ext4     2.0G  6.0M  1.8G   1% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware

As we can see from the output, the mount point /var/jenkins_home/.m2 is backed by the block device /dev/sdb using an ext4 filesystem.

/dev/sdb ext4 2.0G 6.0M 1.8G 1% /var/jenkins_home/.m2

Creating a 200MB temp filesystem volume.

docker volume create --name temp_vol --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=200m

The inspect of the temp_vol we created is as follows:

vamshi@node03:~$ docker volume inspect temp_vol
[
    {
        "CreatedAt": "2020-05-02T17:31:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/temp_vol/_data",
        "Name": "temp_vol",
        "Options": {
            "device": "tmpfs",
            "o": "size=100m,uid=1000",
            "type": "tmpfs"
        },
        "Scope": "local"
    }
]
docker run -d -p 8080:8080 --name jenkins --mount 'type=volume,src=temp_vol,dst=/var/jenkins_home/.m2,volume-driver=local' jenkins:latest
vamshi@node03:~$ docker exec -it jenkins bash
jenkins@2267ba462aa2:/$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   29G  4.9G   23G  18% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    970M     0  970M   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/sda3      ext4      29G  4.9G   23G  18% /var/jenkins_home
tmpfs          tmpfs    100M     0  100M   0% /var/jenkins_home/.m2
tmpfs          tmpfs    970M     0  970M   0% /proc/acpi
tmpfs          tmpfs    970M     0  970M   0% /sys/firmware
jenkins@2267ba462aa2:/$ exit

Here it shows the mount point details:

tmpfs tmpfs 200M 0 200M 0% /var/jenkins_home/.m2

Please note the mount point /var/jenkins_home/.m2 which has 200MB space as defined.

Thus we can make use of docker volumes to attach persistent filesystems and block disks to our containers.
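When the container is no longer needed, the volumes can be released and removed; a short cleanup sketch using the container and volume names from above:

docker stop jenkins
docker rm jenkins
docker volume rm jenkins_vol1 temp_vol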

Create a user and Grant privileges in mysql database

mysql database user creation

The user creation process in mysql is one of the most important steps in Database administration.
Below we will cover the important concepts of Authentication and Authorization with a practical demonstration.
Authentication is the process of gaining access to the database engine with valid login credentials and a login request coming from a trusted source network.

Authorization determines which databases, or which tables within them, the user is allowed to access, in whole or in part.

In SQL administration, the user creation process involves Authentication and Authorization: in practice, a unique username identified by a password and, critically, identification of the source network when logging in from remote hosts. Granting permission to specific databases, ensuring the least privileges needed for the desired role, is one of the best practices.

Let’s connect using root access to the MySQL Command-Line Tool

[vamshi@mysql01 linuxcent]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.19 MySQL Community Server - GPL


Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>


How to create a new mysql user and Grant the privileges

Sample Syntax:

CREATE USER 'mysql_user'@'hostname' IDENTIFIED BY 'user_password';

It is important to understand that 'username'@'hostname' is a unique identification pattern for authenticating to the mysql engine.

The hostname field accepts values such as an IP address, a network range like 10.100.0.0/24, or localhost.

Only incoming requests matching both the user name and the host will be allowed.

The syntax for creating a user on mysql goes as follows:

Enabling access from the source localhost, identified by the authentication information:

CREATE USER 'vamshi'@'localhost' IDENTIFIED BY 'user_password';

Enabling access from the 10.100.0.0 source network with a /24 CIDR, followed by the authentication information:

CREATE USER 'vamshi'@'10.100.0.0/24' IDENTIFIED BY 'user_password';

Enabling access from a specific source hostname, identified by the authentication information:

CREATE USER 'vamshi'@'hostname' IDENTIFIED BY 'user_password';

The first step of user access is done. Now we need to grant access to the databases, which gives the new user the privileges to perform actions on the DB.

Granting Privileges

This section deals with Authorization.

At the mysql CLI prompt, you need to issue the GRANT command with the appropriate access permissions.

What are the privilege types in mysql?

The GRANT statement authorizes actions such as the ability to CREATE tables and databases, read or write FILES, and even SHUTDOWN the server.

The most commonly used privileges are:

  • ALL PRIVILEGES: Grants all privileges to a user account.
  • SELECT: The user account is allowed to read a database.
  • INSERT: The user account is allowed to insert rows into a specific table.
  • UPDATE: The user account is allowed to update table rows.
  • CREATE: The user account is allowed to create databases and tables.
  • DROP: The user account is allowed to drop databases and tables.
  • DELETE: The user account is allowed to delete rows from a specific table.
  • PROCESS: The user account is allowed to get information about the threads executing within the server.
  • SHUTDOWN: The user account is allowed to use the SHUTDOWN and RESTART statements on the server.

Now it's time to grant the privileges to the new user on a table belonging to a database, or on all the tables of a given database.

Here’s what the Simple GRANT SQL statement looks like:

GRANT ALL PRIVILEGES ON Database_name.Table_name TO 'user'@'hostname';

Let’s break this down and understand what we just told MySQL to do.

GRANT ALL PRIVILEGES (all types of privileges) on only the given Database_name and Table_name to the user identified by 'user'@'hostname'.

The Database_name and the Table_name can be replaced by the wildcard *, which means every database and every table within a database respectively.

*.* to specify all databases on the server

database_name.* to specify all tables in one database

database_name.table_name to specify all columns of one table

The host part of the account being granted privileges can be an IP address, an IP address range such as 10.100.0.0/24, a DNS name, or simply '%' to allow access from anywhere.

Now, for simplicity's sake, suppose the user joe only needs to operate on the sales table of the reports database.

GRANT ALL PRIVILEGES ON reports.sales TO 'joe'@'mysql2.linuxcent.com';

The above command grants joe, connecting from mysql2.linuxcent.com, full privileges on the sales table of the reports database.

Replacing the database name with the wildcard * provides privileges equivalent to super-user level access.

This can be demonstrated as follows:

GRANT ALL PRIVILEGES ON *.* TO 'vamshi'@'%';

Or

GRANT INSERT, UPDATE, DELETE ON reports.* to 'vamshi'@'%';

How to Create Another non-root MYSQL DB Super User

This is useful as a security measure when disabling root login to the mysql engine.

GRANT ALL PRIVILEGES ON *.* TO 'vamshi_superuser'@'%';

Display MySQL User Account Privileges

To display the privileges granted to a specific MySQL user account, use the SHOW GRANTS command.

mysql> SHOW GRANTS FOR 'root'@'localhost' \G;
*************************** 1. row ***************************
Grants for root@localhost: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT OPTION
*************************** 2. row ***************************

You can run the same command for any other user account to compare the results.

Saving Your Changes to the MySQL database

The changes made so far are saved to the special grant tables in the mysql system database. In total there are five such tables:

user
db
host
tables_priv
columns_priv
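These grant tables can be queried directly; for example, to list the accounts and the hosts they are allowed to connect from:

SELECT user, host FROM mysql.user;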

We commit the changes by issuing the FLUSH PRIVILEGES command at the mysql prompt:

mysql> flush privileges ;
Query OK, 0 rows affected (0.01 sec)

Initiating a docker swarm and getting the current docker swarm token

Creating a docker swarm cluster:

The docker swarm can be created by using the following command:

The syntax is defined as follows:

docker swarm init --advertise-addr [available interface IP adress]

The --advertise-addr option is used to explicitly define the IP address the swarm advertises. If you have a single interface this option is not needed, but it comes in really handy when you have more than one publicly accessible interface.
Let us initialize our docker swarm environment.

[vamshi@docker-swarm ~]$ docker swarm init --advertise-addr 10.100.0.20
Swarm initialized: current node (nodeidofmastercdq7nmmq3kcmb5l85k2e) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup \
    10.100.0.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The docker swarm creation can be viewed from the docker info command as follows:

[vamshi@docker-swarm ~]$ docker info | grep -C 2 Swarm
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: nodeidofmastercdq7nmmq3kcmb5l85k2e
 Is Manager: true

Docker swarm uses the overlay (and macvlan) drivers to enable inter-host network connectivity between containers over a swarm network.

How to get the docker swarm join token:

This command comes in very handy when you have forgotten your docker swarm token and need to join a new docker node to the docker swarm cluster.

[vamshi@docker-swarm ~]$ docker swarm join-token manager -q
SWMTKN-1-verylongstringofcharactercontainingthedockerswarmjoinstring-70bouyqwhfgdcgtw6o0fw6wup
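Similarly, the worker join token (and the full join command) can be printed on a manager and then run on the new node; node02 below is a hypothetical worker, and the manager address 10.100.0.20:2377 is the one from the output above:

[vamshi@docker-swarm ~]$ docker swarm join-token worker
[vamshi@node02 ~]$ docker swarm join --token <worker-token> 10.100.0.20:2377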

Redirect the Std error and std output 2>&1

The redirect operator > is used in conjunction with stdout (file descriptor 1) and stderr (file descriptor 2).

command > [/dev/null] 2>&1

2 represents stderr. The &1 refers to file descriptor 1, i.e. stdout.

2>&1 therefore redirects stderr to wherever stdout is currently pointing, which in this case is /dev/null, so all the errors printed to the screen end up in /dev/null as well.

So the command demonstration will be the following:

$ du -sh /* > /dev/null 2>&1

This redirect command will dump the errors and the output to /dev/null.

Explanation: the default behaviour of the redirection operator > is to redirect stdout, and here we redirect it to /dev/null; we then follow up with 2>&1, which redirects stderr (fd 2) to the same destination as stdout (fd 1), which is /dev/null.
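A few related variations using the same du example; errors.log and usage.txt are just example file names:

$ du -sh /* 2> errors.log        # keep stdout on the terminal, send only stderr to a file
$ du -sh /* > usage.txt 2>&1     # send both stdout and stderr to the same file
$ du -sh /* &> /dev/null         # bash shorthand for > /dev/null 2>&1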



Docker Networking basics and the types of networks

Docker networking provides several drivers and enables containers to communicate with each other and with outside resources.

The main built-in networking drivers are bridge, host, macvlan, overlay, and the null driver (no networking).

The Container Networking Model (CNM) architecture manages the networking for Docker containers.

IPAM, which stands for IP address management, works within a single docker node and aids in enabling network connectivity among the docker containers. Its primary responsibility is to allocate the IP address space for the subnets and to allocate IP addresses to the endpoints and the network.

The networking in docker is essentially an isolated sandbox environment; the isolation of the networking resources is made possible by network namespaces.

The overlay network enables communication over a network spanning many docker nodes, for example in a Docker swarm. The same networking logic applies to bridge networking, but a bridge is limited to a single docker host, unlike the overlay network.

Here’s the output snippet from the docker info command; Listing the available network drivers.

# docker info
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay

Container networking enables connectivity between docker containers, and also from the host machine to a docker container and vice versa.

Listing the default networks in docker:

[vamshi@node01 nginx]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
68b2ffd36e8f        bridge              bridge              local
c1aca4c87a2b        host                host                local
d5e48683def8        none                null                local

When a container is created it connects to the bridge network by default, unless extra arguments are specified.

When you install docker, a docker0 virtual interface is created by default, which behaves as a bridge between the docker containers and the host system.

[vamshi@node01 nginx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242654b42ef	no

For the brctl command we need to install the bridge-utils package.

We now examine the docker networks with docker network inspect.

Inspecting the various docker networks:

Inspecting the bridge network:

The bridge network enables network connectivity between the containers on a single docker host.

[vamshi@node01 nginx]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "68b2ffd36e8fcdc0c3b170dfdbdbc93bb58351d1b2c011abc80709928463f809",
        "Created": "2020-05-23T10:28:27.206979057Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

This bridge is shown with the ip addr command as follows:

# ip addr show docker0 
   docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:65:4b:42:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:65ff:fe4b:42ef/64 scope link 
       valid_lft forever preferred_lft forever
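Containers can also be attached to a user-defined bridge network instead of the default docker0 bridge; a minimal sketch, where linuxcent-bridge and web01 are example names:

docker network create --driver bridge linuxcent-bridge
docker run -d --name web01 --network linuxcent-bridge nginx
docker network inspect linuxcent-bridge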

Inspecting the host network.

[vamshi@node01 ~]$ docker network inspect host 
[
    {
        "Name": "host",
        "Id": "c1aca4c87a2b3e7db4661e0cdedc97245cd5dfdc8aa2c9e6fa4ff1d5ecf9f3c1",
        "Created": "2019-05-16T18:46:19.485377974Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Inspecting the null driver network

[vamshi@node01 ~]$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "d5e48683def80b2e739b3be95e58fb11abc580ce29a33ba0df679a7a3972f532",
        "Created": "2019-05-16T18:46:19.477155061Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]


The following are special networking architectures that span multiple docker hosts, enabling network connectivity among the docker containers:

1. Overlay network
2. Macvlan network

Let us inspect the multi-host networking. The core components of the docker inter-host network are the overlay and macvlan networks, inspected below.

Inspecting the overlay network:

[vamshi@docker-master ~]$ docker network inspect overlay-linuxcent 
[
    {
        "Name": "overlay-linuxcent",
        "Id": "qz5ucx9hthyva53cydei0y8yv",
        "Created": "2020-05-25T13:22:35.087032198Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "overlay-linuxcent-endpoint",
                "EndpointID": "165beb97b22c2857e3637119016ef88e462a05d3b3251c4f66aa0fc9176cfe67",
                "MacAddress": "02:42:0a:ff:00:03",
                "IPv4Address": "10.255.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node01.linuxcent.com-95ad856b6f56",
                "IP": "10.100.0.20"
            }
        ]
    }
]

The endpoint is the virtual IP address that routes the traffic to the respective containers running on the individual docker nodes.

Inspecting the macvlan network:

[vamshi@docker-master ~]$ docker network inspect macvlan-linuxcent
[
    {
        "Name": "macvlan-linuxcent",
        "Id": "99c6a20bd4029ce5a37139c6e6792ec4f8a075c94b5f3e71efc32d92d41f3f89",
        "Created": "2020-05-25T14:20:00.655299409Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
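For reference, a macvlan network like the one above is created by binding it to a parent interface on the host; eth0 below is an assumption and should be replaced with your actual interface name:

docker network create -d macvlan --subnet=172.20.0.0/16 --gateway=172.20.0.1 -o parent=eth0 macvlan-linuxcent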

enabling ipv4 forwarding on docker server

Common errors occur when ipv4 forwarding is not enabled on the linux host, leading to hard-to-diagnose issues. Here is one such log entry from the system logs:

level=warning msg="IPv4 forwarding is disabled. N...t work."

It is good to check the current ipv4 forwarding setting as follows:

[root@LinuxCent ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0


You can also enable the changes for the current session using the -w option

sysctl -w net.ipv4.conf.all.forwarding=1

To make the change persistent we need to write it to a config file and have the system read it.

[root@LinuxCent ~]# vi /etc/sysctl.d/01-rules.conf
net.ipv4.conf.all.forwarding=1

Then apply the changes to the system on the fly with the sysctl command, loading the changes from the system-wide config files.

# sysctl --system
--system : tells sysctl to read all the configuration files system-wide


[root@Linux1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /etc/sysctl.d/01-rules.conf ...
net.ipv4.conf.all.forwarding = 1
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@Linux1 ~]# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1