Environment Variables in Dockerfile

Docker environment variables can be declared in two places: (1) during the Docker image creation, inside the Dockerfile, and (2) at docker container run time.

The Dockerfile ENV syntax is shown as follows:

ENV VAR_NAME value

We will look at the Dockerfile syntax in the following snippet:

# Exporting the Environment Variables
ENV MAVEN_HOME /usr/share/maven/
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
# Added as per Java team
ENV JAVA_OPTS -Xms256m -Xmx768m

We can also reference an ENV variable in any statement that follows its declaration:

RUN ${JAVA_HOME}/bin/java -version
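Environment variables can also be passed at container run time with the -e flag, or with --env-file for a whole file of KEY=value pairs; a value given at run time overrides the one baked into the image. A quick sketch, assuming a hypothetical image called my-java-app:

# docker run -d -e JAVA_OPTS="-Xms512m -Xmx1024m" my-java-app
# docker run -d --env-file ./app.env my-java-app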

Conclusion:

Setting values through ENV is much better than hard-coding them throughout the Dockerfile, as it makes maintaining the Dockerfile a simpler task.

It is a good practice for storing path names and package versions that are likely to change over time.

How to identify if a docker container's files have been modified

A docker container is simply a runtime copy of a docker image's resources: the container uses the filesystem structure originally packed into it, via the union filesystem assembled from the various image layers during the docker image creation.

Docker provides a diff command which compares the filesystem data in the docker image with that of the running container.

Syntax:

# docker diff [CONTAINER ID | CONTAINER NAME]

Before jumping in, let's examine a running docker container and take a look at its filesystem by logging into it.

root@node03:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e3de85aaf61c        9a0b6e4f0956        "sh"                2 months ago        Up 3 minutes                            jovial_hertz

We have a running container with the auto-generated name jovial_hertz, and we log in to the container as follows:

root@node03:~# docker exec -it jovial_hertz bash

We are now inside the container; we will create a directory called linuxcent, create an empty file called test inside it, and then exit the container.

root@e3de85aaf61c:~# mkdir linuxcent
root@e3de85aaf61c:~# cd linuxcent
root@e3de85aaf61c:~# touch test
root@e3de85aaf61c:~# exit

We created a directory called linuxcent, touched a test file, and logged out from the container with the exit command.
Running the docker diff command against the container should now report the modified data; let's examine the results.

root@node03:~# docker diff jovial_hertz 
C /root
A /root/linuxcent
A /root/linuxcent/test
C /root/.bash_history

The A flag in front of /root/linuxcent and /root/linuxcent/test indicates that this directory and file are new additions to the container, and the C flag indicates that the other two entries were changed.
Thus docker diff helps us compare and audit the changes made to a container's filesystem.

Docker login to private registry

STEP 1: Docker login to private registry

Let's see the syntax of the docker login command, followed by the authorized username and the registry URL.
Syntax:

[root@docker03:~]# docker login [DOCKER-REGISTRY-SERVER] -u <username> [-p <password>]

The -p option supplies the password on the command line (where it will be visible); alternatively, omit it and type the password at the prompt after running the docker login command.

Example given:

[root@docker03:~]# docker login nexusreg.linuxcent.com:5000 -u vamshi
Password:
Login Succeeded

Once the docker login succeeds, a json file containing the auth metadata is generated under your home directory at the following path:

[root@docker03:~]# cat $HOME/.docker/config.json
{
	"auths": {
		"nexusreg.linuxcent.com:5000": {
			"auth": "1234W46TmV0ZW5yaWNoMjAxOQ=="
		}
	},
	"HttpHeaders": {
		"User-Agent": "Docker-Client/18.09.1 (linux)"
	}
}

If you have previously logged in, the registry URL can be found from your docker client machine using the docker info command, as shown below:

[root@docker03:~]# docker info | grep Registry
Registry: https://index.docker.io/v1/

To log out from a specific docker registry, use the docker logout command.
The syntax is shown as below:
docker logout [DOCKER-REGISTRY-SERVER]

Example given:

[root@docker03:~]# docker logout 
Removing login credentials for https://index.docker.io/v1/
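To log out of the private registry from the earlier example, pass its URL explicitly; the output should look along these lines:

[root@docker03:~]# docker logout nexusreg.linuxcent.com:5000
Removing login credentials for nexusreg.linuxcent.com:5000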

Setup and configure Zookeeper and Kafka on Linux

Kafka and Zookeeper need a Java runtime environment to install and operate on our linux system.
So let's quickly check the current java version on the system with java -version.
If it is not present, let's begin with the Java setup and install the stable OpenJDK package from the yum repository.

# yum install java-1.8.0-openjdk

Once java is installed, we verify it with the java -version command.
Creating the kafka user account:

# useradd kafka -m -d /opt/kafka

Setting the kafka user's password, adding the user to the wheel group, and switching to the kafka user:

# echo "kafka" | passwd kafka --stdin
# usermod -aG wheel kafka
# su -l kafka

Now login to the system as kafka user.

# cd /opt

Navigate to the url https://downloads.apache.org/kafka/ to find the latest kafka version, and download it:

# curl https://downloads.apache.org/kafka/2.5.0/kafka_2.13-2.5.0.tgz -o /opt/kafka.tgz
# tar -xvzf /opt/kafka.tgz -C /opt/kafka --strip 1
# cd /opt/kafka
# cp -a /opt/kafka/config/server.properties /opt/kafka/config/server.properties.bkp
# echo -e "\ndelete.topic.enable = true\nauto.create.topics.enable = true" >> /opt/kafka/config/server.properties

Adding the Zookeeper and Kafka start/stop scripts as systemd services

# sudo vim /etc/systemd/system/zookeeper.service
Add the following lines to /etc/systemd/system/zookeeper.service
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Now adding the Kafka control scripts to systemd service

# sudo vim /etc/systemd/system/kafka.service
Add the following lines to /etc/systemd/system/kafka.service
[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties > /opt/kafka/kafka.log 2>&1'
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
# sudo chown kafka.kafka -R /opt/kafka/logs
# sudo mkdir /var/log/kafka-logs
# sudo chown kafka.kafka -R /var/log/kafka-logs

Now start the zookeeper and kafka services as follows:

# sudo systemctl enable zookeeper --now

# sudo systemctl enable kafka --now
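To confirm that both services came up, check their status and try creating a test topic; the topic name test-topic is just an example, and Kafka is assumed to be listening on its default port 9092:

# sudo systemctl status zookeeper kafka
# /opt/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test-topic
# /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092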





Docker container volumes

The concept of docker is to run a compressed image as a container which serves its purpose and can then be removed, leaving no trace of the data generated during the course of its runtime. Such a case is referred to as an ephemeral container.
Docker storage is traditionally non-persistent: a container retains only the data that came from its image build. Docker does, however, provide the facility to attach volume mounts to a running container, which addresses the persistence problem to a large extent.
Volumes let us store data outside the container and make it available to the container at runtime.

In practice, docker volumes are mounted into a container at run time via command line arguments, as demonstrated below.

This volume binding capability for running containers can be broadly categorized into two approaches, listed as follows:

(1) Volume mapping from the host to the target container: a directory mapping between the host and the container, established at container run time.
(2) A named volume that can be shared among containers; it persists even if the container is deleted, and the same volume can be mounted again to another docker container.

These storage options offered by docker provide persistent storage of files and keep the data safe across container restarts and removal.

The example for the host bind volume mapping is as follows:

# docker run -d -v src:target --name=container-name docker-image

The syntax of docker directory mount:

# docker run -it -v /usr/local/bin:/target/local --name=docker-container-with-vol  ubuntu:latest /bin/bash

We will start a container by mounting the host directory nginx-html onto the container path /usr/share/nginx/html.

[vamshi@node01 ~]$ docker run -d --name my-nginx1 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 80:80 nginx:1.17.2-alpine
34797e6d8939e42bc8cfe36eed4b60521355edadc2fa6c74a26fe4172384575c

Now we log into the container and verify the contents

[vamshi@node01 ~]$ docker exec -it my-nginx1 sh
~ # hostname
34797e6d8939
~ # df -h /usr/share/nginx/html
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                40.0G     16.1G     23.9G  40% /usr/share/nginx/html

The df -h output from inside the container shows the path /usr/share/nginx/html as a mount point.

Checking the contents of the webroot directory at /usr/share/nginx/html inside the container.

~ # cat /usr/share/nginx/html/index.html
<H1>Hello from LinuxCent.com</H1>

This is the same file which we have on our host machine and it is shared through the volume mount.
We verify using the curl command as follows

~ # curl localhost
<H1>Hello from LinuxCent.com</H1>

We can also mount the same directory into as many containers as needed on our system, which can be an effective way of updating static content used by several containers at once. See the example below.
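For example, a second nginx container can reuse the very same host directory; the container name my-nginx2 and host port 8080 below are arbitrary:

[vamshi@node01 ~]$ docker run -d --name my-nginx2 -v /home/vamshi/nginx-html:/usr/share/nginx/html -p 8080:80 nginx:1.17.2-alpine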

This is the bind mount operation offered by docker for mounting a host directory into a container. This information can be seen by inspecting the container, as follows:

Here is an extract from docker inspect on the container my-nginx1:

            {
                "Type": "bind",
                "Source": "/home/vamshi/nginx/nginx-html",
                "Destination": "/usr/share/nginx/html",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }

As we can see the Type of mount is depicted as a bind here.

The Go template formatter can also be used with filtering options to extract the mount information:

[vamshi@node01 ~]$ docker container inspect my-nginx1 -f="{{.Mounts}}"
[{bind /home/vamshi/nginx-html /usr/share/nginx/html true rprivate}]

The Dockerfile VOLUME expression

Using the Persistent docker volumes.

We now turn our attention to docker volume mounts, which are isolated storage resources managed by docker; they are persistent and can be reused and mounted to containers.

VOLUME can be used while writing a Dockerfile: it creates a docker image with the volume settings, and the volume is mounted into the container when it is started up.
We use the following expression in the Dockerfile:

VOLUME [ "my-volume01" ]

We now build the image and observe the output below.

Step 1/4 : FROM nginx-linuxcent
 ---> 55ceb2abad47
Step 2/4 : COPY nginx-html/index.html /usr/share/nginx/html/
 ---> c482aa15da5a
Removing intermediate container a621e114a01d
Step 3/4 : VOLUME my-volume01
 ---> Running in ac523d6a02f0
 ---> 72423fe5f27d
Removing intermediate container ac523d6a02f0
Step 4/4 : RUN ls /tmp
 ---> Running in 8fc1fbc0f0bb
 ---> 0f453e3cfff1
Removing intermediate container 8fc1fbc0f0bb

Here my-volume01 is treated as an absolute path from /, so the volume is mounted at /my-volume01 inside the container.

/ # df -Th /my-volume01/
Filesystem Type Size Used Available Use% Mounted on
/dev/sda1 xfs 40.0G 15.1G 24.9G 38% /my-volume01

The information can be extracted by inspecting the container as follows:

# docker container inspect nginx-with-vol -f="{{.Mounts}}"
[{volume 7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19 /var/lib/docker/volumes/7d6d92cffac1d216ca062032c99eb105b120d769331a2008d8cad1a2c086ad19/_data my-volume01 local true }]

 

If you would like the volume to be mounted to some other path then you can declare that in Dockerfile VOLUME as below:

VOLUME [ "/mnt/my-volume" ]

The information can be extracted from the docker image by inspecting for the volumes.

            "Image": "sha256:72423fe5f27de1a495e5e875aec83fd5084abc6e1636c09d510b19eb711424cc",
            "Volumes": {
                "/mnt/my-volume": {},
                 }

The volumes defined through the Dockerfile VOLUME expression appear in docker volume ls only under anonymous hash names, since they are created per container rather than explicitly; they can nevertheless be shared with other docker containers. We will look at sharing docker volumes in the following section. It is also important to understand that once a volume is explicitly deleted, its data cannot be recovered.

Using the volumes from one container and accessing them in another container.

This option is beneficial when shell access or debug tools are disabled inside a container, and you need to view the container's logs and run analysis on them.

We mount the volumes from one container into another container for auditing purposes.
Below we create a container called view-logs which uses the volumes from another container called nginx-with-vol:

docker run -d --name view-logs --volumes-from nginx-with-vol debug-tools
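A quick way to confirm that the shared volume is visible from the new container, assuming the hypothetical debug-tools image stays running and ships a shell, would be:

# docker exec view-logs ls -l /my-volume01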

Best Practices:
The view-logs container can have a set of debug and troubleshooting tools to view the logs of other app containers.

Creating a Volume from the docker commandline:

The docker volume resource has to be initialized first and can be done as follows:
Command to create a docker volume:
# docker volume create my-data-vol
[vamshi@node01 ~]$ docker volume inspect my-data-vol

[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data-vol/_data",
        "Name": "my-data-vol",
        "Options": {},
        "Scope": "local"
    }
]

We can use this volume and mount it to a container, just as we mounted the host directory in the earlier sections.

We will use the persistent volume name and mount the docker volume to a target path inside the container.
Note that the syntax differs from the bind mount: a volume name is given instead of a host path, and it has to be specified exactly while running the mount operation.

Below is the syntax:
docker run --volume <volume-name>:</path/to/mountpoint/> image-name

We now have the docker volume available and mount it to the target container path /var/log/nginx as follows:
# docker run -d --name data-vol --volume my-data-vol:/var/log/nginx nginx-linuxcent:v4
2dfa965bbbc79a522e9c109ef8eee20bf47e2b61062f3b3df61d4eb677de4506

Verification:

The docker volume can be listed by the following command
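The listing below is illustrative and trimmed to the volume created above:

# docker volume ls
DRIVER              VOLUME NAME
local               my-data-vol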

Check the mount information by inspecting the container with docker inspect.
Here is an extract from docker inspect on the data-vol container:

"Mounts": [
            {
                "Type": "volume",
                "Name": "my-data-vol",
                "Source": "/var/lib/docker/volumes/my-data-vol/_data",
                "Destination": "/var/log/nginx",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
             }

As we can see, the Type of mount is depicted as volume here.

Conclusion:
We have seen that mounting a host path onto a target container path is a bind operation, and the dependency it creates is host affinity: the data is bound to a particular host. This should be avoided if you are dealing with more dynamic data exchange between containers over a network, but it is very useful for host-to-container data exchange.
The option with volumes is more dynamic and has less binding dependency on the host machine. Volumes can be declared and used in two ways, as demonstrated in the previous sections: the first is explicit volume creation and the other is volume creation from within the Docker build.
Explicit volume creation followed by binding gives you the scope to choose a mount point inside the container after the image is built.

The Dockerfile's VOLUME expression can be used to automatically define the volume and the desired mount point, and it supports the same volume-sharing techniques for data exchange at run time.

What is docker container volume?

Docker volumes are file systems mounted on Docker containers to preserve data generated by the running container. Without a volume, the data doesn't persist once the container no longer exists, and it can be difficult to get the data out of the container if another process needs it. … The data cannot be easily moved somewhere else.

How do I find the volume of a docker container?

We can find out where the volume lives on the host by using the docker inspect command on the host (open a new terminal and leave the previous container running if you’re following along): docker inspect -f "{{json .Mounts}}" vol-test | jq.

What is the purpose of docker volume?

docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.

How many volumes are there in docker?

Docker volumes are used to persist data from within a Docker container. There are a few different types of Docker volumes: host, anonymous, and, named. Knowing what the difference is and when to use each type can be difficult, but hopefully, I can ease that pain here.

How do I create a docker volume?

docker volume create

  1. Description. Create a volume. …
  2. Usage. $ docker volume create [OPTIONS] [VOLUME]
  3. Extended description. Creates a new volume that containers can consume and store data in. …
  4. Options. Name, shorthand. …
  5. Examples. Create a volume and then configure the container to use it: …
  6. Parent command. Command. …
  7. Related commands.

What is Docker desktop volume?

Docker volumes on Windows are always created in the path of the graph driver, which is where Docker stores all image layers, writeable container layers and volumes. By default the root of the graph driver in Windows is C:\ProgramData\docker , but you can mount a volume to a specific directory when you run a container.

How do I add volume to a running docker container?

Step 1 – Copy. If the path where we are going to add the volume is not empty, make sure to copy the content to the host system, as adding a volume will overwrite container data at that location. …
Step 2 – Create new Image. …
Step 3 – Delete container. …
Step 4 – Create a new container.

How do I know my container size?

To view the approximate size of a running container, you can use the command docker container ls -s . Running docker image ls shows the sizes of your images.

Can we attach volume to running container?

To attach a volume into a running container, we are going to: use nsenter to mount the whole filesystem containing this volume on a temporary mountpoint; create a bind mount from the specific directory that we want to use as the volume, to the right location of this volume; umount the temporary mountpoint.

Apple: How to Work with Terminal in Mac

macOS is basically a modified UNIX with beautiful graphics. This means that if you need to set something up or automate it, you can use the command line, the UNIX shell.

The shell is available on Apple iMac and MacBooks. On the iPhones and iPads, the command line is hidden.

How to start the Mac terminal?

Click the launchpad on the bottom bar, find the terminal icon and launch it.

First commands

This command displays the current work directory you are in.

pwd

Since macOS is de facto UNIX, the file system follows a layout similar to the File Hierarchy Standard (FHS), which specifies the directory tree. At the top of the tree there is the “/” symbol. Under “/” are individual directories and files in a tree structure.

Use this command to list the contents of the directory in which you are currently:

ls -l

The first column shows the type (d = directory) and the permissions (r = read, w = write, x = execute) in triads, in the following order: owner, group, all. The third column shows the owner, the fourth column the group that owns the file, the fifth the size, the sixth the time of the last modification, and the seventh the file name.
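For illustration, a single line of ls -l output might look like this (the file name, size and date are made up):

drwxr-xr-x   5 vamshi  staff   160 Apr  7 17:48 Documents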

Use this command to change the current work directory (to /home):

cd /home

If you specify this command without parameters, cd changes to your home directory, which is stored in the $HOME variable.

Do you want to view the contents of any Mac shell variable? For example, $HOME? It is simple:

echo $HOME

How to use terminal: basic advice

Hint#1: Arrow up to view the last commands.

Hint#2: Ctrl A takes you to the beginning of the line, Ctrl E to the end of the line.

Hint#3: Hold down the left mouse button to select the text and then right-click on the menu to select “copy” or “paste” as needed.

Hint#4: Ctrl C interrupts the execution of the current command and gets you back to the command prompt. For example, you can try this by entering the yes command in the terminal, which prints the letter y endlessly. You interrupt this infinite program with Ctrl C.

Most useful commands in terminal

Overview of hard disk usage
This command displays the current use of disks that are “assembled” to your computer:

df -aH

Option H means human-readable output, and option a specifies that we want to display all mounted filesystems.

Which folder does the disk space take?
Use the cd command to change into the folder whose disk usage you want to see. You can check the entire file system using cd /. With the following command, you can print out, in a comprehensible manner, how much space each item takes:

du -sh /* 2>/dev/null

The 2>/dev/null redirection means that we do not want to see any error messages.

What does MAC do now? Which processes are most active?

This command displays the most active processes.

top

Each process also has a PID (Process ID), by which the process can be uniquely identified and, for example, shut down. Press the q key to quit top.

How to find and destroy a particular process by name?

This command looks for the bash process, the command line you are running:

ps aux | grep bash | grep -v grep

The second column indicates the PID. In my case, it is 2335. Use this command to kill the program. Beware, the terminal will disappear! Muhaha 😀

kill 2335

What is currently happening in the system? What is bothering the Mac?
With this command, you continuously monitor what the system reports:

tail -f /var/log/system.log

To quit the tail command, use Ctrl C.

How To Restart Jenkins Safely

Jenkins provides a frontend user interface and an API to access the Jenkins server; API calls can also be sent directly via the URL.

Process to restart jenkins server safely

Here is our jenkins server hosted on our url: http://jenkins.linuxcent.com:8080

And the API request to restart Jenkins safely is to run http://YourJenkins-url-or-ip/safeRestart
http://jenkins.linuxcent.com:8080/safeRestart
See the below screenshot for more information.

This option is reliable, as the restart operation will wait for the currently running jobs to complete and then proceed with the restart.

Safe Restart jenkins from UI API

Force restart option in jenkins

http://jenkins.linuxcent.com:8080/restart
This option will restart Jenkins forcefully, and the currently running jobs will be forcefully terminated.
Forcefully Restart jenkins from UI API

Restart jenkins server from commandline

From the command line you can also initiate a restart, but this will be a forceful restart of the Jenkins server.

It stops and starts the jenkins service; you can also run stop and then start separately with the same result.

[vamshi@jenkins jenkins]$ sudo systemctl restart jenkins

On older SysV init servers you can also initiate the restart using the service command:

[vamshi@jenkins jenkins]$ sudo service jenkins restart
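If you want a safe restart from the command line as well, the Jenkins CLI jar offers one; the URL and the credentials below are placeholders, and the jar itself can be downloaded from your Jenkins server at /jnlpJars/jenkins-cli.jar:

[vamshi@jenkins ~]$ java -jar jenkins-cli.jar -s http://jenkins.linuxcent.com:8080/ -auth admin:api-token safe-restart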

How do you restart Jenkins?

Go to the Jenkins installation, open the cmd and run:

  • To stop: jenkins.exe stop.
  • To start: jenkins.exe start.
  • To restart: jenkins.exe restart.

What is the command used to restart Jenkins manually?

To restart Jenkins manually, you can use either of the following commands (by entering their URL in a browser). jenkins_url/safeRestart – Allows all running jobs to complete. … jenkins_url/restart – Forces a restart without waiting for builds to complete.

What is the command to restart Jenkins service on Windows?

  • To stop: jenkins.exe stop.
  • To start: jenkins.exe start.
  • To restart: jenkins.exe restart.

How long does it take to restart Jenkins?

I also restarted the Jenkins service and it worked. It did take 3-4 minutes after I restarted the service for the page to load up, though. So make sure you’re patient before moving on to something else.

How do I restart Jenkins in Kubernetes?

Just kubectl delete pods -l run=jenkins-ci – Will delete all pods with this label (your jenkins containers). Since they are under Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).

How do I set Jenkins to restart itself?

Jenkins -> Manage Jenkins -> Manage Plugins -> Search for Safe Restart -> Install it. Then Restart Safely appear on the Dashboard.

How do I start Jenkins on port 8080?

  • Go to the directory where you installed Jenkins (by default, it’s under Program Files/Jenkins)
  • Open the Jenkins.xml configuration file.
  • Search for --httpPort=8080 and replace the 8080 with the new port number that you wish.
  • Restart Jenkins for changes to take effect.

How do I start Jenkins on Mac?

Terminal and Start / Stop daemon
Start Jenkins: sudo launchctl load /Library/LaunchDaemons/org.jenkins-ci.plist.
Stop Jenkins: sudo launchctl unload /Library/LaunchDaemons/org.jenkins-ci.plist.

How do I run Jenkins job daily?

The steps to schedule jobs in Jenkins:

  • click on “Configure” of the job requirement.
  • scroll down to “Build Triggers” – subtitle.
  • Click on the checkBox of Build periodically.

How do I open Jenkins?

To start Jenkins from command line

  1. Open command prompt.
  2. Go to the directory where your war file is placed and run the following command: java -jar jenkins.war.

How to get the docker container IP?

The metadata information from the docker containers can be extracted using the docker inspect command.
We see the demonstration as follows:

The docker engine API output is rendered with Go templates, and the inspect command supports extensive formatting options, including a json helper function.

[vamshi@node01 ~]$ docker inspect <container-name | container-id> -f '{{ .NetworkSettings.IPAddress }}'
172.17.0.2
[vamshi@node01 ~]$ docker inspect my-container --format='{{ .NetworkSettings.IPAddress }}'
172.17.0.2
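The .NetworkSettings.IPAddress field is populated for containers on the default bridge network; for containers attached to user-defined networks, iterating over the Networks map is the safer option:

[vamshi@node01 ~]$ docker inspect my-container -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
172.17.0.2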

RBACs in kubernetes

Kubernetes provides Role Based Access Control (RBAC) as a built-in security mechanism.

Roles are groupings of PolicyRules, the capabilities and limitations within a namespace.
The identities (or) subjects are the users/ServiceAccounts which are assigned Roles; together these constitute RBAC.
This is achieved by referencing a Role from a RoleBinding.

In kubernetes there are Roles and RoleBindings, and their cluster-wide counterparts ClusterRoles and ClusterRoleBindings.

There is no concept of a deny permission in RBAC.

A Role and a Subject combined together define a RoleBinding.

Now lets look at each of the terms in detail.

Subjects:

  • user
  • group
  • serviceAccount

Resources:

  • configmaps
  • pods
  • services

Verbs:

  • create
  • delete
  • get
  • list
  • patch
  • proxy
  • update
  • watch

You create a kind: Role with a name, and then bind it to a Subject via a roleRef by creating a kind: RoleBinding, as sketched below.
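A minimal sketch of such a pair, granting the builduser01 service account access in the default namespace, could look like the following; the apiGroups value and the RoleBinding name are assumptions, while the resources and verbs match the builduser-role described below:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: builduser-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: builduser-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: builduser01
  namespace: default
roleRef:
  kind: Role
  name: builduser-role
  apiGroup: rbac.authorization.k8s.io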

[vamshi@master01 k8s]$ kubectl describe serviceaccounts builduser01 
Name:                builduser01
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   builduser01-token-rmjsd
Tokens:              builduser01-token-rmjsd
Events:              <none>

The role builduser-role has the permissions to all the resources to create, delete, get, list, patch, update and watch.

[vamshi@master01 k8s]$ kubectl describe role builduser-role
Name:         builduser-role
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"builduser-role","namespace":"default"},"ru...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *          []                 []              [create delete get list patch update watch]

Using this mechanism you can limit user access to your cluster.

View the current cluster role bindings on your kubernetes cluster:

[vamshi@master01 :~] kubectl get clusterrolebinding
NAME                                                   AGE
cluster-admin                                          2d2h
kubeadm:kubelet-bootstrap                              2d2h
kubeadm:node-autoapprove-bootstrap                     2d2h
kubeadm:node-autoapprove-certificate-rotation          2d2h
kubeadm:node-proxier                                   2d2h
minikube-rbac                                          2d2h
storage-provisioner                                    2d2h
system:basic-user                                      2d2h

The clusterrole describes the resources and the verbs that are accessible to the user.

[vamshi@linux-r5z3:~] kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

Listing the roles on Kubernetes:

[vamshi@master01 :~] kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   kubeadm:bootstrap-signer-clusterinfo             2d2h
kube-public   system:controller:bootstrap-signer               2d2h
kube-system   extension-apiserver-authentication-reader        2d2h
kube-system   kube-proxy                                       2d2h
kube-system   kubeadm:kubelet-config-1.15                      2d2h
kube-system   kubeadm:nodes-kubeadm-config                     2d2h
kube-system   system::leader-locking-kube-controller-manager   2d2h
kube-system   system::leader-locking-kube-scheduler            2d2h
kube-system   system:controller:bootstrap-signer               2d2h
kube-system   system:controller:cloud-provider                 2d2h
kube-system   system:controller:token-cleaner                  2d2h

We can further examine the rolebinding named system::leader-locking-kube-scheduler, which is associated with the service account kube-scheduler.

[vamshi@master01 :~]  kubectl describe rolebindings system::leader-locking-kube-scheduler -n kube-system
Name:         system::leader-locking-kube-scheduler
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  Role
  Name:  system::leader-locking-kube-scheduler
Subjects:
  Kind            Name                   Namespace
  ----            ----                   ---------
  User            system:kube-scheduler  
  ServiceAccount  kube-scheduler         kube-system

There is a category of API groups which contains the following entries:

apiextensions.k8s.io, apps, autoscaling, batch, Binding, certificates.k8s.io, events.k8s.io, extensions, networking.k8s.io, PodTemplate, policy, scheduling.k8s.io, Secret, storage.k8s.io

The complete list of resource kinds available in Kubernetes is as follows:

APIService, CertificateSigningRequest, ClusterRole, ClusterRoleBinding, ComponentStatus, ConfigMap, ControllerRevision, CronJob, CSIDriver, CSINode, CustomResourceDefinition, DaemonSet, Deployment, Endpoints, Event, HorizontalPodAutoscaler, Ingress, Job, Lease, LimitRange, LocalSubjectAccessReview, MutatingWebhookConfiguration, Namespace, NetworkPolicy, Node, PersistentVolume, PersistentVolumeClaim, Pod, PodDisruptionBudget, PodSecurityPolicy, PriorityClass, ReplicaSet, ReplicationController, ResourceQuota, Role, RoleBinding, RuntimeClass, SelfSubjectAccessReview, SelfSubjectRulesReview, Service, ServiceAccount, StatefulSet, StorageClass, SubjectAccessReview, TokenReview, ValidatingWebhookConfiguration and VolumeAttachment

Setting up the puppet master and puppet client

Make sure that you have set the hostname properly on the puppet master server. You can do it with the hostnamectl command.
The default hostname assumed for your puppet master is "puppet", but you can give it any name, as long as it is reachable over your network from the other servers via the mapped FQDN.

It is good practice to set up an alias called puppet in /etc/hosts if you are just starting out for the first time.

Installing the puppet yum repository sources to download the puppet packages.

[root@puppetmaster ~]# sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
Retrieving https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.ibJsVY: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppet5-release-5.0.0-12.el7     ################################# [100%]

Installing the Puppet Master service from the yum repository.

[root@puppetmaster ~]# yum install puppetserver

Verify which packages are installed on your machine
[root@puppetmaster ~]# rpm -qa | grep -i puppet
puppetserver-5.3.13-1.el7.noarch
puppet5-release-5.0.0-12.el7.noarch
puppet-agent-5.5.20-1.el7.x86_64

Ensure that the following entries are updated in the file /etc/puppetlabs/puppet/puppet.conf under the [master] section:

[master]
certname = puppetmaster.linuxcent.com
server = puppetmaster.linuxcent.com

Enabling the puppetserver Daemon and starting puppetserver

[root@puppetmaster ~]# systemctl enable puppetserver
[root@puppetmaster ~]# systemctl start puppetserver

The puppet server process starts on the port 8140.

[root@puppetmaster ~]# netstat -ntlp | grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      21084/java

Setting up the puppet client.
Installing the yum repository to download the puppet installation packages.

[vamshi@node01 ~]$ sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm

Installing the puppet agent.

[vamshi@node01 ~]$ sudo yum install puppet-agent

Once we have the puppet agent installed, we need to point the puppet client at the puppetmaster FQDN by updating the file /etc/puppetlabs/puppet/puppet.conf under the [main] section:

[main]
certname = node01.linuxcent.com
server = puppetmaster.linuxcent.com

Running the puppet agent to set up communication with the puppet master:

[vamshi@node01 ~]$ sudo puppet agent --test
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for node01.linuxcent.com
Info: Applying configuration version '1592492078'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.02 seconds

With this we have successfully raised the certificate signing request to the master.
Listing the puppet agent certificate requests on the puppet master:

[root@puppetmaster ~]# puppet cert list --all
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
[root@puppetmaster ~]# puppet cert sign node01.linuxcent.com
Signing Certificate Request for:
  "node01.linuxcent.com" (SHA256) 88:08:8A:CF:E3:5B:57:1C:AA:1C:A3:E5:36:47:60:0A:55:6F:C2:CC:9C:09:E1:E7:85:63:2D:29:36:3F:BF:34
Notice: Signed certificate request for node01.linuxcent.com
Notice: Removing file Puppet::SSL::CertificateRequest node01.linuxcent.com at '/etc/puppetlabs/puppet/ssl/ca/requests/node01.linuxcent.com.pem'

Now that we have successfully signed the puppet agent request, we are able to see the + sign on the left side of the agent host name as demonstrated in the following output.
[root@puppetmaster ~]# puppet cert list --all
+ "node01.linuxcent.com" (SHA256) 15:07:C2:C1:51:BA:C1:9C:76:06:59:24:D1:12:DC:E2:EE:C1:47:35:DD:BD:E8:79:1E:A5:9E:1D:83:EF:D1:61

The signed certificates for the respective requests are saved under /etc/puppetlabs/puppet/ssl/ca/signed:

[root@node01 signed]# ls
node01.linuxcent.com.pem  puppet.linuxcent.com.pem

To clean up an agent's certificate:

puppet cert clean node01.linuxcent.com

This removes the agent's entries from the puppetmaster records; a new certificate request has to be raised for the agent to be added to this puppetmaster again.

The autosign.conf file can be used if you are going to manage a huge farm of puppet clients and the manual signing of clients becomes a tedious task. We can set up a wildcard like *.linuxcent.com to auto-approve signing requests originating from client hosts in the linuxcent.com network domain.
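A minimal /etc/puppetlabs/puppet/autosign.conf on the puppet master then contains just the whitelisted glob, one entry per line:

*.linuxcent.com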

nginx reverse proxy setup for kibana dashboard

How to Setup Nginx Reverse proxy for Kibana.

In this demonstration we will see how to setup the reverse proxy using the nginx webserver to the backend kibana.

We begin by installing the latest version of nginx server on our centos server:

$ sudo yum install nginx -y

The nginx package is going to be present in the epel-repo and you have to enable it.

$ sudo yum --enablerepo=epel install nginx -y

Once the nginx package is installed, we need to enable the daemon and start it with the following command:

[vamshi@node01 ~]$ sudo systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

We now create the nginx configuration file for the kibana backend and place it at /etc/nginx/conf.d/kibana.conf as shown below.

We can set up the Restricted Access configuration for enhanced security, as shown below on the lines with auth_basic and auth_basic_user_file. You may skip the Restricted Access configuration if you believe it is not required.

[vamshi@node01 nginx]$ sudo cat conf.d/kibana.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

With the configuration in place, we now check the nginx config syntax using the -t option as shown below:

[vamshi@node01 nginx]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now restart the nginx server and head over to the browser.

$ sudo systemctl restart nginx

In your browser, enter the server IP or FQDN, and you will be auto-redirected to the url http://your-kibana-server.com/app/kibana#/home
kibana-home

Set up the htpasswd authorization config with user details.

We now install the htpasswd tool from the package httpd-tools as follows:

$ sudo yum install httpd-tools -y

Adding the Authorization details to our .htpasswd file.

[vamshi@node01 nginx]$ sudo htpasswd -c /etc/nginx/.htpasswd vamshi
New password: 
Re-type new password: 
Adding password for user vamshi

We have now successfully added the auth configuration.

[vamshi@node01 nginx]$ sudo htpasswd -n /etc/nginx/.htpasswd 
New password: 
Re-type new password: 
/etc/nginx/.htpasswd:$apr1$tlinuxcentMY-htpassEsHEEanL21

As the password is stored as a one-way hash, we cannot decode it and are required to generate a new hash to change it.
Verifying the htpasswd configuration logins from the curl command:

$ curl http://kibana-url -u<htpasswd-username>
[vamshi@node01 ~]$ curl kibana.linuxcent.com -uvamshi -I
Enter host password for user 'vamshi':
HTTP/1.1 302 Found
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:48:35 GMT
Content-Length: 0
Connection: keep-alive
location: /spaces/enter
kbn-name: kibana
kbn-license-sig: 2778f2f7e07680b7aefa85db2e7ce7bd33da5592b84cefe62efa8
kbn-xpack-sig: ce2a76732a2f58fcf288db70ad3ea
cache-control: no-cache

If you enter invalid credentials, you will encounter a 401 http error code, restricting unauthorized access.

HTTP/1.1 401 Unauthorized
Server: nginx/1.16.1
Date: Thu, 07 Apr 2020 17:51:36 GMT
Content-Type: text/html
Content-Length: 179
Connection: keep-alive
WWW-Authenticate: Basic realm="Restricted Access"

Now we head over to the browser to check the htpasswd login prompt in action, as shown below:
http://your-kibana-server.com
kibana-htpasswd-prompt
Conclusion: With the htpasswd in place, you get an extra layer of authorized access to your sensitive urls; in effect, you now need to enter the htpasswd logins to access the same kibana dashboard.