BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes

~2,400 words · Reading time: 9 min · Series: eBPF: From Kernel to Cloud, Episode 2 of 18

In Episode 1, we established what eBPF is and why it gives Linux admins and DevOps engineers kernel-level visibility without sidecars or code changes. The obvious follow-up question is the one every experienced engineer should ask before running anything in kernel space:

Is it actually safe to run on production nodes?

The answer is yes — and the reason is one specific component of the Linux kernel called the BPF verifier. This post explains what the verifier is, what it protects your cluster from, and why it changes the risk calculus for eBPF-based tools entirely.


The Fear That Holds Most Teams Back

When I first explain eBPF to Linux admins and DevOps engineers, the reaction is almost always the same:

“So it runs code inside the kernel? On our production nodes? That sounds like a disaster waiting to happen.”

It is a completely reasonable concern. The Linux kernel is not a place where mistakes are tolerated. A buggy kernel module can take down a server instantly — no warning, no graceful shutdown, just a hard panic and a 3 AM phone call.

I know this from personal experience. During 2012–2014, I worked briefly with Linux device driver code. That period taught me one thing clearly: kernel space does not forgive careless code.

So when people started talking about running programs inside the kernel via eBPF, my instinct was scepticism too. Then I understood the BPF verifier. And everything changed.


What the Verifier Actually Is

Think of the BPF verifier as a strict safety gate that sits between your eBPF program and the kernel. Before your eBPF program is allowed to run — before it touches a single system call, network packet, or container event — the verifier analyses every instruction of it and asks one question:

“Could this program crash or compromise the kernel?”

If the answer is yes, or even maybe, the program is rejected. It does not load. Your cluster stays safe. If the answer is a provable no, the program loads and runs.

This is not a runtime check that catches problems after the fact. It is a load-time guarantee — the kernel proves the program is safe before it ever executes. Here is what that looks like when you deploy Cilium:

You run: kubectl apply -f cilium-daemonset.yaml
         └─► Cilium loads its eBPF programs onto each node
                   └─► Kernel verifier checks every program
                             ├─► SAFE   → program loads, starts observing
                             └─► UNSAFE → rejected, cluster untouched

This is why Cilium can replace kube-proxy on your nodes, why Falco can watch every syscall in every container, and why Tetragon can enforce security policy at the kernel level — all without putting your cluster at risk.


What the Verifier Protects You From

You do not need to know how the verifier works internally. What matters is what it prevents — and why each protection matters specifically in Kubernetes environments.

Infinite loops

An eBPF program that never terminates would stall the kernel code path it is attached to — potentially hanging every container on that node. The verifier rejects any program it cannot prove will finish executing within a bounded number of instructions.

Why this matters: Every eBPF-based tool on your K8s nodes — Cilium, Falco, Tetragon, Hubble — is verified to terminate on every code path each time its programs load. You are not trusting the vendor’s claim. The kernel enforces it.

Memory safety violations

An eBPF program cannot read or write memory outside the boundaries it is explicitly granted. No reaching into another container’s memory space. No accessing kernel data structures it was not given permission to touch.

Why this matters: This is the property that makes eBPF safe for multi-tenant clusters. A Falco rule monitoring one namespace cannot accidentally read data from another namespace’s containers. The verifier makes this impossible at the program level, not just at the policy level.

Kernel crashes

The verifier checks that every pointer is valid before it is dereferenced, that every function call uses correct arguments, and that the program cannot corrupt kernel data structures. Programs that could cause a kernel panic are rejected before they load.

Why this matters: Running Cilium or Tetragon on a production node is not the same risk as loading an untested kernel module. The verifier has already proven these programs cannot crash your nodes — before they ever ran on your infrastructure.

Privilege escalation and kernel pointer leaks

eBPF programs cannot leak kernel memory addresses to userspace. This closes a class of container escape and privilege escalation attacks that have historically been possible through kernel module vulnerabilities.

Why this matters: Security tools built on eBPF — like Tetragon, which detects and blocks container escape attempts in real time — are not themselves a vector for the attacks they protect against.


eBPF vs Traditional Observability Agents

To appreciate what the verifier gives you operationally, compare the two main approaches to K8s observability.

Traditional agent — DaemonSet sidecar approach

Your K8s cluster
└─► Node
    ├─► App Pod (your service)
    ├─► Sidecar container (injected into every pod)
    │   └─► Reads /proc, intercepts syscalls via ptrace
    │       └─► 15–30% CPU/memory overhead per pod
    └─► Agent DaemonSet Pod
        └─► Aggregates data from all sidecars

Problems with this model:

  • Sidecar injection requires modifying every pod spec and typically an admission webhook
  • ptrace-based interception adds 50–100% overhead to the traced process and is blocked in hardened containers
  • The agent runs in userspace with elevated privileges — a larger attack surface
  • Updating the agent requires pod restarts across your fleet

eBPF-based tool — Cilium / Falco / Tetragon

Your K8s cluster
└─► Node
    ├─► App Pod (your service — completely unmodified)
    ├─► App Pod (another service — also unmodified)
    └─► eBPF programs (inside the kernel, verifier-checked)
        └─► See every syscall, network packet, file access
            └─► Forward events to userspace agent via ring buffer

Benefits:

  • No sidecar injection — pod specs stay clean, no admission webhook required
  • Kernel-level visibility with near-zero overhead (typically 1–3%)
  • The verifier guarantees the eBPF programs cannot harm your nodes
  • Works identically with Docker, containerd, and CRI-O

Tools You Are Probably Already Running — All Verifier-Protected

You may already be running eBPF on your nodes without thinking about it explicitly. In each case below, the verifier ran before the tool ever touched your cluster.

  • Cilium: Every network policy decision, service load-balancing operation, and Hubble flow log is handled by eBPF programs that passed the verifier at node startup.
  • Falco: Every Falco rule is enforced by a verifier-checked eBPF program attached to syscall hooks. Sub-millisecond detection is only possible because the program runs in kernel space.
  • AWS VPC CNI: On EKS, networking operations have progressively moved to eBPF for performance at scale. If you are on a recent EKS AMI, eBPF is already doing work on your nodes.
  • systemd: Modern systemd uses eBPF for cgroup-based resource accounting and network traffic control. Active on most current Ubuntu, RHEL, and Amazon Linux 2023 installations.

Questions to Ask When Evaluating eBPF Tools

When a vendor tells you their tool uses eBPF, these three questions will quickly tell you how mature their implementation is.

1. What kernel version do you require?

The verifier’s capabilities have expanded significantly across kernel versions. Tools targeting kernel 5.8+ can use more powerful features safely. Tools claiming to work on kernel 4.x are constrained by an older, more limited verifier. The list below shows exactly where each major distribution stands: distribution, default kernel, eBPF support level, and notes.

  • Ubuntu 16.04 LTS (kernel 4.4): Basic eBPF only. No BTF. kprobes and socket filters work, but modern tooling such as Cilium and the Falco eBPF driver will not run. EOL — do not use for new deployments.
  • Ubuntu 18.04 LTS (kernel 4.15): eBPF, no BTF. No CO-RE. Tools must be compiled against the exact running kernel headers. The HWE kernel (5.4) improves this but BTF still varies by build.
  • Ubuntu 20.04 LTS (kernel 5.4): BTF available, verify before use. CO-RE capable on most deployments. CONFIG_DEBUG_INFO_BTF was absent on some early builds. Verify with ls /sys/kernel/btf/vmlinux before deploying eBPF tooling. Cloud images generally have it enabled.
  • Ubuntu 20.10+ (kernel 5.8): Full BTF + CO-RE. First Ubuntu release where BTF was consistently enabled by default. Ring buffers available. Not an LTS release — use 22.04 for production.
  • Ubuntu 22.04 LTS (kernel 5.15): Full modern eBPF — production ready. BTF embedded. Ring buffers, global variables, LSM hooks. Default baseline for EKS-optimised Ubuntu AMIs. Recommended for new deployments.
  • Ubuntu 24.04 LTS (kernel 6.8): Full modern eBPF + latest features. Open-coded iterators, improved verifier precision, enhanced LSM support. Best Ubuntu option for cutting-edge eBPF tooling today.
  • Debian 10 (Buster, kernel 4.19): Basic eBPF, no BTF. eBPF programs load but CO-RE is unavailable. Must compile against exact kernel headers. EOL — migrate to Debian 11 or 12.
  • Debian 11 (Bullseye, kernel 5.10 LTS): Full BTF + CO-RE. BTF enabled. CO-RE works. Cilium, Falco, and Tetragon all fully supported. Solid production baseline for Debian environments through 2026.
  • Debian 12 (Bookworm, kernel 6.1 LTS): Full modern eBPF — production ready. Same kernel generation as Amazon Linux 2023. LSM hooks, ring buffers, full CO-RE. Recommended Debian version for eBPF workloads today.
  • Debian 13 (Trixie, kernel 6.12 LTS): Full modern eBPF + latest features. Released August 2025. Same kernel generation as RHEL 10 / Rocky 10 / AlmaLinux 10. Maximum eBPF feature availability across all program types.
  • RHEL 7.6 (kernel 3.10, backported): Tech Preview only — not production safe. First RHEL release to enable eBPF but explicitly marked as Tech Preview. Limited to kprobes and tracepoints. No XDP, no socket filters, no BTF. Do not use for eBPF in production.
  • RHEL 8 / Rocky 8 / AlmaLinux 8 (kernel 4.18, heavily backported): Full BPF + BTF — functionally 5.4-equivalent. Red Hat backports make RHEL 8 kernels functionally comparable to upstream 5.4 for most eBPF use cases. BTF enabled across all releases. CO-RE works. Cilium treats RHEL 8.6+ as its minimum supported RHEL-family version.
  • RHEL 9 / Rocky 9 / AlmaLinux 9 (kernel 5.14, heavily backported): Full modern eBPF — production ready. BTF embedded. XDP, tc, kprobe, tracepoint, and LSM hooks all supported. Falco, Cilium, and Tetragon fully supported. Recommended RHEL-family version for eBPF deployments today. Supported until 2032.
  • RHEL 10 / Rocky 10 / AlmaLinux 10 (kernel 6.12): Full modern eBPF + latest features. Same kernel generation as Debian 13 and upstream 6.12 LTS. Rocky 10 released June 2025, AlmaLinux 10 released May 2025. Enhanced eBPF functionality throughout.
  • Amazon Linux 2023 (kernel 6.1+): Full modern eBPF — production ready. BTF embedded. Full CO-RE. Recommended for EKS. Also resolves the NetworkManager deprecation issues in EKS 1.33+ — see the EKS 1.33 post.

Quick check for any distro: Run ls /sys/kernel/btf/vmlinux on your node. If the file exists, your kernel has BTF enabled and CO-RE-based eBPF tools will work correctly. If it does not exist, you are limited to tools that compile against your specific kernel headers. Run uname -r to confirm the exact kernel version.
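
A quick copy-paste version of that check (assuming a systemd-based node you have shell access to):

# Does the running kernel expose BTF? CO-RE-based eBPF tools need it.
if [ -e /sys/kernel/btf/vmlinux ]; then
    echo "BTF present on kernel $(uname -r): CO-RE eBPF tools should work"
else
    echo "No BTF on kernel $(uname -r): limited to header-compiled tools"
fi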

Rocky Linux and AlmaLinux note: Both distros rebuild directly from RHEL sources. Their kernel versions and eBPF capabilities are effectively identical to the corresponding RHEL release. When Cilium or Falco document “RHEL 9 support”, that applies equally to Rocky 9 and AlmaLinux 9 without any additional configuration.

2. Do you use CO-RE?

CO-RE (Compile Once, Run Everywhere) means the tool’s eBPF programs work correctly across different kernel versions without recompilation. Tools using CO-RE are more portable and significantly less likely to break after a routine node OS update. This is a reliable signal of engineering maturity in the vendor’s eBPF implementation.

3. What eBPF program types do you use?

Different program types have different privilege levels and access scopes. A tool that only needs kprobe access is asking for considerably less privilege than one requiring lsm hooks.

  • kprobe / tracepoint — observability and debugging
  • tc (traffic control) — network policy enforcement
  • xdp (eXpress Data Path) — high-performance packet processing
  • lsm (Linux Security Module) — security policy enforcement (used by Tetragon)

Understanding the program type tells you what the tool can and cannot see on your nodes, and how much kernel access you are granting it.
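
A quick way to see this on a live node is bpftool, which ships in most distributions’ bpftool or linux-tools packages. A minimal sketch, assuming root access:

# List every eBPF program currently loaded on this node.
# Each entry typically shows the program id, type (kprobe, xdp, tc, lsm, ...),
# name or tag, and load time, i.e. which hooks your tools actually hold.
sudo bpftool prog list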


How Falco Uses the Verifier — A Step-by-Step Walkthrough

Here is exactly what happens when Falco starts on one of your K8s nodes, and where the verifier fits in:

1. Falco pod starts on the node (via DaemonSet)

2. Falco loads its eBPF programs into the kernel:
   └─► BPF verifier checks each program
       ├─► Can it crash the kernel?            No → continue
       ├─► Can it loop forever?                No → continue
       ├─► Can it access out-of-bounds memory? No → continue
       └─► PASS → program loads

3. Falco's eBPF programs attach to syscall hooks:
   └─► sys_enter_execve   (every process execution in every container)
   └─► sys_enter_openat   (every file open)
   └─► sys_enter_connect  (every outbound network connection)

4. A container runs an unexpected shell (potential attack):
   └─► execve() called inside the container
   └─► Falco's eBPF hook fires in kernel space
   └─► Event forwarded to Falco userspace via ring buffer
   └─► Falco rule matches: "shell spawned in container"
   └─► Alert fired in under 1 millisecond

5. Your container, your other pods, your node: completely unaffected

Step 2 is what the verifier makes safe. Without it, attaching eBPF hooks to every syscall on your production node would be an unacceptable risk. With it, Falco can offer this level of visibility with a mathematical safety guarantee.


The Bottom Line

You do not need to understand BPF bytecode, register states, or static analysis to use eBPF tools safely in production. What you do need to understand is this:

The BPF verifier is the reason eBPF is fundamentally different from kernel modules. It does not just make eBPF “safer” in a vague sense — it provides a mathematical proof that each program cannot crash your kernel before that program ever runs.

This is why eBPF-based tools can deliver deep kernel-level visibility into every container, every syscall, and every network flow — with near-zero overhead, no sidecar injection, and production safety that kernel modules could never guarantee.

The next time someone on your team hesitates about running Cilium, Falco, or Tetragon on production nodes because “it runs code in the kernel” — you now know what to tell them. The verifier already checked it. Before it ever touched your cluster.


Questions or corrections? Reach me on LinkedIn. If this was useful, the full series index is on linuxcent.com — search the eBPF Series tag for all episodes.

Cloud AMI Security Risks & How Custom OS Images Fix Them: What’s Wrong with Defaults

~2,800 words  ·  Reading time: 12 min  ·  Series: OS Image Security, Post 1 of 6

When you launch an EC2 instance from an AWS Marketplace AMI, or spin up a VM from a cloud-provider base image on GCP or Azure, you’re trusting a decision someone else made months ago about what your server should contain. That decision was made for the widest possible audience — not for your workload, your threat model, or your compliance requirements.

This post tears open what’s actually inside a default cloud image, compares it against what a production-hardened image should contain, and explains why the calculus changes depending on whether you’re deploying to AWS, an on-prem KVM host, or a Nutanix AHV cluster.


What a cloud provider is actually optimising for

AWS, Canonical, Red Hat, and every other publisher shipping to cloud marketplaces are solving a distribution problem, not a security problem. Their images need to:

  • Boot successfully on any instance type in any region
  • Work for the first-time user running their first workload
  • Support every possible use case — web servers, databases, ML training jobs, bastion hosts, everything

That constraint produces images that are, by design, permissive. Permissive gets out of the way. Permissive doesn’t break anything on day one. Permissive is also the opposite of what you want on a production server.

Let’s look at what “permissive” actually means in concrete terms.


Dissecting a default AWS AMI

Take Amazon Linux 2023 (AL2023), one of the more intentionally stripped-down cloud images available. Even with Amazon’s effort to reduce its footprint compared to AL2, a fresh AL2023 instance ships with more than most workloads need.

Services running at boot that most workloads don’t need

chronyd.service            # Fine — you need NTP
systemd-resolved.service   # Fine
dbus-broker.service        # Fine
amazon-ssm-agent.service   # Arguably fine if you use SSM
NetworkManager.service     # Debatable — most cloud workloads don't need NM

On a RHEL 8/9 or Ubuntu 22.04 Marketplace image, the list is longer. You’ll find avahi-daemon (mDNS/DNS-SD service discovery — on a server), bluetooth.service in some configurations, cups on some RHEL variants, and on Ubuntu, snapd running and occupying memory along with its associated mount units.

Every running service is an attack surface. Every socket it opens is a listening endpoint you didn’t ask for.
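
Two commands make this concrete on any systemd-based instance:

# Enumerate every service currently running on the node
systemctl list-units --type=service --state=running
# And every TCP/UDP port something is already listening on
sudo ss -tulpn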

SSH configuration out of the box

The default sshd_config on most Marketplace images is not hardened. You’ll typically find:

PermitRootLogin prohibit-password   # Better than 'yes', but not 'no'
PasswordAuthentication no           # Usually disabled by cloud-init — good
X11Forwarding yes                   # On a headless server. Why?
AllowAgentForwarding yes            # Unnecessary for most workloads
PrintLastLog yes                    # Minor, but generates audit noise
MaxAuthTries 6                      # CIS recommends 4 or fewer
ClientAliveInterval 0               # No idle timeout

CIS Benchmark Level 1 for RHEL 9 has 40+ SSH-specific controls. A default image satisfies perhaps a third of them.
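
You can audit the effective values, including compiled-in defaults, with sshd’s test mode:

# Dump the effective sshd configuration and pull out the controls above
sudo sshd -T | grep -Ei '^(permitrootlogin|x11forwarding|allowagentforwarding|maxauthtries|clientaliveinterval)'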

Kernel parameters that aren’t tuned

# Not set, or not set correctly, on most default images:
net.ipv4.conf.all.send_redirects = 1        # Should be 0
net.ipv4.conf.default.accept_redirects = 1  # Should be 0
net.ipv4.ip_forward = 0                     # Correct if not a router, but often left unset
kernel.randomize_va_space = 2               # Usually correct — verify anyway
fs.suid_dumpable = 0                        # Often not set
kernel.dmesg_restrict = 1                   # Rarely set

These live in /etc/sysctl.d/ and need to be explicitly applied. In a default AMI, they are not.
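
To see what your image actually applies, query the live values and compare them against the hardened targets:

# Current runtime values, as applied from /etc/sysctl.d/ and kernel defaults
sysctl net.ipv4.conf.all.send_redirects net.ipv4.conf.default.accept_redirects \
       net.ipv4.ip_forward kernel.randomize_va_space fs.suid_dumpable kernel.dmesg_restrict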

No audit daemon configured

auditd is installed on most RHEL-family images. It is not configured. The default audit.rules file is essentially empty — the daemon runs but captures almost nothing. On Ubuntu, auditd isn’t even installed by default.

CIS Benchmark Level 2 for RHEL 9 specifies 30+ auditd rules covering file access, privilege escalation, user management changes, network configuration changes, and more. None of them are present in a default AMI.
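
Verifying this on a node takes two commands:

# Show the audit rules actually loaded into the kernel right now
sudo auditctl -l          # prints "No rules" on a typical default AMI
# Confirm the daemon itself is present and running
systemctl status auditd --no-pager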

Package surface

Run rpm -qa | wc -l or dpkg -l | grep -c ^ii on a fresh instance. AL2023 comes in around 350 packages. Ubuntu 22.04 Server minimal sits around 500. RHEL 9 from Marketplace — depending on the variant — lands between 400 and 600.

How many of those packages does your application actually need? For a Python web service: Python, your runtime dependencies, and a handful of system libraries. The rest is exposure.


The on-prem story is different — and often worse

Cloud images at least get regular updates from their publishers. On-prem KVM and Nutanix environments tell a different story.

The KVM / QCOW2 situation

Most teams running KVM get their base images one of three ways:

  1. Download a cloud image (cloud-init enabled QCOW2) from the distro vendor and use it directly
  2. Convert an existing VMware VMDK or OVA and hope for the best
  3. Run a manual Kickstart/Preseed install once, then treat the result as the “golden image” forever

Option 1 gives you the same problems as the cloud image analysis above, plus you’re now responsible for handling cloud-init in an environment that might not have a metadata service — so you either ship a seed ISO with every VM, or you rip out cloud-init and manage first-boot differently.

Option 3 is the most common and the most dangerous. That “golden image” was created by someone who’s possibly no longer at the company, contains packages pinned to versions from 18 months ago, and has sshd configured however was convenient at the time. Worse, it gets cloned hundreds of times and none of those clones are ever individually updated at the image level.

The Nutanix AHV specifics

Nutanix AHV images have additional considerations that cloud images don’t deal with:

  • AHV uses a custom paravirtualised SCSI controller (virtio-scsi or the Nutanix variant). Images imported from VMware need pvscsi drivers removed and virtio_scsi added to the initramfs before the disk will be detected at boot.
  • The Nutanix guest tools agent (ngt) is separate from the kernel and needs to be installed inside the image for snapshot quiescence, VSS integration, and in-guest metrics.
  • cloud-init works on AHV but requires the ConfigDrive datasource — not the EC2 datasource that most cloud QCOW2 images default to. An unconfigured datasource means cloud-init times out at boot, costing 3–5 minutes on every first start. A snippet pinning the datasource follows this list.
  • NUMA topology on large AHV nodes affects memory allocation in ways that need kernel tuning (vm.zone_reclaim_mode, kernel.numa_balancing) — parameters no generic cloud image sets.
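
A minimal sketch of pinning the datasource at image build time (the drop-in file name is hypothetical; the datasource_list key is standard cloud-init):

# /etc/cloud/cloud.cfg.d/99-ahv-datasource.cfg  (hypothetical file name)
# Probe only ConfigDrive and fall back to None instead of walking the
# full datasource list, eliminating the 3–5 minute first-boot timeout
datasource_list: [ ConfigDrive, None ]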

The result is that most Nutanix environments end up with a patchwork: partially converted images, manually applied guest tools, and hardening that was done once per environment rather than once per image.


What a hardened image actually looks like

A properly built hardened image isn’t just “a default image with some hardening applied at the end.” The hardening is architectural — decisions made at build time that change the fundamental shape of what’s inside the image.

Package set — minimal by design

Start from a minimal install group — @minimal-environment on RHEL/Rocky, --variant=minbase on Debian derivatives. Then add only what the image class requires. For a web server image: your runtime, a process supervisor, and nothing else. No man-db, no X11-common, no avahi.

Every package you don’t install is a CVE that can never affect you.

Filesystem hardening

Separate mount points with restrictive options prevent a class of privilege escalation attacks that depend on executing binaries from world-writable locations:

/tmp      nodev,nosuid,noexec
/var      nodev,nosuid
/var/tmp  nodev,nosuid,noexec
/home     nodev,nosuid
/dev/shm  nodev,nosuid,noexec

These are not applied by any default cloud image.
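
Checking a running instance against this list is a one-liner per mount point:

# Print the mount options actually in effect for each hardened path
for m in /tmp /var /var/tmp /home /dev/shm; do
    findmnt -no TARGET,OPTIONS "$m"
done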

Kernel parameters — baked in at build time

# /etc/sysctl.d/99-hardening.conf

net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
net.ipv6.conf.all.accept_redirects = 0
kernel.randomize_va_space = 2
fs.suid_dumpable = 0
kernel.dmesg_restrict = 1
kernel.kptr_restrict = 2
net.core.bpf_jit_harden = 2

Applied at image build time. Present on every instance, every time, before your application code runs.

SSH locked down

Protocol 2
PermitRootLogin no
MaxAuthTries 4
LoginGraceTime 60
X11Forwarding no
AllowAgentForwarding no
AllowTcpForwarding no
PermitUserEnvironment no
Ciphers [email protected],[email protected],aes256-ctr
MACs [email protected],[email protected]
KexAlgorithms curve25519-sha256,diffie-hellman-group16-sha512
ClientAliveInterval 300
ClientAliveCountMax 3
Banner /etc/issue.net

This is approximately CIS Level 1 SSH hardening. It lives in the image — not in a post-deploy playbook.

auditd rules embedded

# Privilege escalation
-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k setuid

# Sudo usage
-w /etc/sudoers -p wa -k sudoers

# User and group management
-w /etc/passwd -p wa -k identity
-w /etc/group  -p wa -k identity

# Kernel module loading
-a always,exit -F arch=b64 -S init_module -S delete_module -k modules

The full CIS L2 auditd ruleset runs to ~60 rules. They’re all committed to the image. Every instance generates audit logs from minute one of its existence.

Services disabled at build time

systemctl disable avahi-daemon
systemctl disable cups
systemctl disable postfix
systemctl disable bluetooth
systemctl disable rpcbind
systemctl mask debug-shell.service

The service list varies by distro. The principle is the same: if it’s not required by the image’s purpose, it doesn’t run.


The platform dimension: why you can’t use one image everywhere

This is where the complexity gets real. A CIS-hardened RHEL 9 image built for AWS doesn’t directly work on KVM, and it doesn’t directly work on Nutanix either. The security controls are the same — the platform-specific layer underneath them is not.

Here’s what needs to differ per target platform:

Concern                  AWS (AMI)                   KVM (QCOW2)                 Nutanix AHV
Disk format              Raw / VMDK → AMI            QCOW2                       QCOW2 / VMDK
Boot mechanism           GRUB2 + PVGRUB2 or UEFI     GRUB2                       GRUB2 + UEFI
Network driver           ENA (ena kernel module)     virtio-net                  virtio-net
Storage driver           NVMe or xen-blkfront        virtio-blk / virtio-scsi    virtio-scsi
cloud-init datasource    ec2                         NoCloud / ConfigDrive       ConfigDrive
Guest agent              AWS SSM / CloudWatch        qemu-guest-agent            Nutanix Guest Tools
Metadata service         169.254.169.254             None (seed ISO) or local    Nutanix AOS

A single pipeline needs to produce platform-specific artefacts from a single hardened source. The hardening doesn’t change. The drivers, datasources, and agents do.


Where this sits relative to CIS and NIST

The controls described above aren’t arbitrary. They map directly to published frameworks.

CIS Benchmark Level 1 covers controls with low operational impact and high security return — SSH configuration, kernel parameters, filesystem mount options, service reduction. Almost everything in the “what a hardened image looks like” section above is CIS Level 1.

CIS Benchmark Level 2 adds auditd configuration, PAM controls, additional filesystem protections, and more aggressive service disablement. It trades some operational flexibility for a significantly smaller attack surface.

NIST SP 800-53 CM-6 (Configuration Settings) directly requires that systems be configured to the most restrictive settings consistent with operational requirements. Baking hardening into the image is a stronger implementation of CM-6 than applying it post-deploy — because it’s guaranteed, auditable at build time, and consistent across every instance regardless of how it was launched.

NIST SP 800-53 SI-2 (Flaw Remediation) maps to your image patching cadence. An image rebuilt monthly against the latest package repositories satisfies SI-2 more completely than runtime patching alone, because it also eliminates packages you don’t need — packages that would need patching if they were present.

The full CIS and NIST control mapping will be covered in depth later in this series.


The build-time vs runtime hardening distinction

This is the most important concept in the entire post.

Hardening applied at runtime — via Ansible, Chef, cloud-init user-data, or a shell script — is conditional. It runs if the automation runs. It applies if nothing fails. It’s consistent only if every deployment goes through exactly the same path.

Hardening embedded in the image is unconditional. It cannot be skipped. It doesn’t depend on connectivity to an Ansible control node. It doesn’t require cloud-init to succeed. It cannot be accidentally omitted by a new team member who doesn’t know the runbook.

This distinction matters most at incident response time. When you’re investigating a compromised instance, the first question you want to answer confidently is: was this instance ever in a known-good state?

  • If your hardening is in the image: yes, from boot.
  • If your hardening is applied post-deploy: it depends on whether everything went right on that specific instance’s first boot.

What comes next

The practical question this raises: how do you build these images in a repeatable, multi-platform way, with CIS scanning integrated into the build pipeline?

Packer covers most of the builder layer. OpenSCAP provides the scanning. Kickstart, cloud-init, and Nutanix AHV-specific tooling fill the gaps. But the orchestration between these — producing a consistent hardened image for three different target platforms from a single source of truth — is where most teams hit friction.

The next post in this series covers the platform-specific differences between AWS, KVM, and Nutanix in depth: what actually needs to change per target when your security baseline is shared.

Next in the series: Cloud vs KVM vs Nutanix — why one image doesn’t fit all →


Questions or corrections? Open an issue or reach me on LinkedIn. If this was useful, the series index has the full roadmap.

EKS 1.33 Upgrade Blocker: Fixing Dead Nodes & NetworkManager on Rocky Linux

The EKS 1.33+ NetworkManager Trap: A Complete systemd-networkd Migration Guide for Rocky & Alma Linux

TL;DR:

  • The Blocker: Upgrading to EKS 1.33+ is breaking worker nodes, especially on free community distributions like Rocky Linux and AlmaLinux. Boot times are spiking past 6 minutes, and nodes are failing to get IPs.
  • The Root Cause: AWS is deprecating NetworkManager in favor of systemd-networkd. However, ripping out NetworkManager can leave stale VPC IPs in /etc/resolv.conf. Combined with the systemd-resolved stub listener (127.0.0.53) and a few configuration missteps, it causes a total internal DNS collapse where CoreDNS pods crash and burn.
  • The Subtext: AWS is pushing this modern networking standard hard. Subtly, this acts as a major drawback for Rocky/Alma AMIs, silently steering frustrated engineers toward Amazon Linux 2023 (AL2023) as the “easy” way out.
  • The “Super Hack”: Automate the clean removal of NetworkManager, bypass the DNS stub listener by symlinking /etc/resolv.conf directly to the systemd uplink, and enforce strict state validation during the AMI build.

If you’ve been in the DevOps and SRE space long enough, you know that vendor upgrades rarely go exactly as planned. But lately, if you are running enterprise Linux distributions like Rocky Linux or AlmaLinux on AWS EKS, you might have noticed the ground silently shifting beneath your feet.

With the push to EKS 1.33+, AWS is mandating a shift toward modern, cloud-native networking standards. Specifically, they are phasing out the legacy NetworkManager in favor of systemd-networkd.

While this makes sense on paper, the transition for community distributions has been incredibly painful. AWS support couldn’t resolve our issues, and my SRE team had practically given up, officially halting our EKS upgrade process. It’s hard not to notice that this massive, undocumented friction in Rocky Linux and AlmaLinux conveniently positions AWS’s own Amazon Linux 2023 (AL2023) as the path of least resistance.

I’m hoping the incredible maintainers at free distributions like Rocky Linux and AlmaLinux take note of this architectural shift. But until the official AMIs catch up, we have to fix it ourselves. Here is the exact breakdown of the cascading failure that brought our clusters to their knees, and the “super hack” script we used to fix it.

The Investigation: A Cascading SRE Failure

When our EKS 1.33+ worker nodes started booting with 6+ minute latencies or outright failing to join the cluster, I pulled apart our Rocky Linux AMIs to monitor the network startup sequence. What I found was a classic cascading failure of services, stale data, and human error.

Step 1: The Race Condition

Initially, the problem was a violent tug-of-war. NetworkManager was not correctly disabled by default, and cloud-init was still trying to invoke it. This conflicted directly with systemd-networkd, paralyzing the network stack during boot. To fix this, we initially disabled the NetworkManager service and removed it from cloud-init.

Step 2: The Stale Data Landmine

Here is where the trap snapped shut. Because NetworkManager was historically the primary service responsible for dynamically generating and updating /etc/resolv.conf, completely disabling it stopped that file from being updated.

When we baked the new AMI via Packer, /etc/resolv.conf was orphaned and preserved the old configuration—specifically, a stale .2 VPC IP address from the temporary subnet where the AMI build ran.

Step 3: The Human Element

We’ve all been there: during a stressful outage, wires get crossed. While troubleshooting the dead nodes, one of our SREs mistakenly stopped the systemd-resolved service entirely, thinking it was conflicting with something else.

Step 4: Total DNS Collapse

When the new AMI booted up and joined the EKS node group, the environment was a disaster zone:

  1. NetworkManager was dead (intentional).
  2. systemd-resolved was stopped (accidental).
  3. /etc/resolv.conf contained a dead, stale IP address from a completely different subnet.

When kubelet started, it dutifully read the host’s broken /etc/resolv.conf and passed it up to CoreDNS. CoreDNS attempted to route traffic to the stale IP, failed, and started crash-looping. Internal DNS resolution (pod.namespace.svc.cluster.local) totally collapsed. The cluster was dead in the water.

[Flowchart: the cascading DNS failure on EKS worker nodes — how stale data and disabled services led to a total CoreDNS collapse.]

Linux Internals: How systemd Manages DNS (And Why CoreDNS Breaks)

To understand how to permanently fix this, we need to look at how systemd actually handles DNS under the hood. When using systemd-networkd, resolv.conf management is handled through a strict partnership with systemd-resolved.

[Diagram: systemd-networkd and systemd-resolved D-Bus communication — how systemd collects network data, and the critical /etc/resolv.conf symlink choice that dictates EKS DNS health.]

Here is how the flow works: systemd-networkd collects network and DNS information (from DHCP, Router Advertisements, or static configs) and pushes it to systemd-resolved via D-Bus. To manage your DNS resolution effectively, you must configure the /etc/resolv.conf symbolic link to match your desired mode of operation. You have three choices:

1. The “Recommended” Local DNS Stub (The EKS Killer)

By default, systemd recommends using systemd-resolved as a local DNS cache and manager, providing features like DNS-over-TLS and mDNS.

  • The Symlink: ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
  • Contents: Points to 127.0.0.53 as the only nameserver.
  • The Problem: This is a disaster for Kubernetes. If Kubelet passes 127.0.0.53 to CoreDNS, CoreDNS queries its own loopback interface inside the pod network namespace, blackholing all cluster DNS.

2. Direct Uplink DNS (The “Super Hack” Solution)

This mode bypasses the local stub entirely. The system lists the actual upstream DNS servers (e.g., your AWS VPC nameservers) discovered by systemd-networkd directly in the file.

  • The Symlink: ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
  • Contents: Lists all actual VPC DNS servers currently known to systemd-resolved.
  • The Benefit: CoreDNS gets the real AWS VPC nameservers, allowing it to route external queries correctly while managing internal cluster resolution perfectly.

3. Static Configuration (Manual)

If you want to manage DNS manually without systemd modifying the file, you break the symlink and create a regular file (rm /etc/resolv.conf). While systemd-networkd still receives DNS info from DHCP, it won’t touch this file. (Not ideal for dynamic cloud environments).


The Solution: A Surgical systemd Cutover

Knowing the internals, the path forward is clear. We needed to not only remove the legacy stack but explicitly rewire the DNS resolution to the Direct Uplink to prevent the stale data trap and bypass the notorious 127.0.0.53 stub listener.

Here is the exact state we achieved (condensed into a shell sketch after this list):

  1. Lock down cloud-init so it stops triggering legacy network services.
  2. Completely mask NetworkManager to ensure it never wakes up.
  3. Ensure systemd-resolved is enabled and running, but with the DNSStubListener explicitly disabled (DNSStubListener=no) so it doesn’t conflict with anything.
  4. Destroy the stale /etc/resolv.conf and create a symlink to the Direct Uplink (ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf).
  5. Reconfigure and restart systemd-networkd.
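
Condensed into shell, the cutover looks roughly like this (a minimal sketch of the five steps above, not the full production script referenced below):

# 1. (Lock down cloud-init separately; that stanza is distro-specific)
# 2. Make sure NetworkManager never wakes up again
sudo systemctl disable --now NetworkManager
sudo systemctl mask NetworkManager

# 3. Keep systemd-resolved running, but kill the 127.0.0.53 stub listener
sudo mkdir -p /etc/systemd/resolved.conf.d
printf '[Resolve]\nDNSStubListener=no\n' | \
    sudo tee /etc/systemd/resolved.conf.d/99-no-stub.conf
sudo systemctl enable --now systemd-resolved

# 4. Destroy the stale resolv.conf and wire up the Direct Uplink symlink
sudo rm -f /etc/resolv.conf
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# 5. Bring systemd-networkd up with the new state
sudo systemctl enable --now systemd-networkd
sudo systemctl restart systemd-networkd systemd-resolved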

Pro-Tip for Debugging: To ensure systemd-networkd is successfully pushing DNS info to the resolver, verify your .network files in /etc/systemd/network/. Ensure UseDNS=yes (which is the default) is set in the [DHCPv4] section. You can always run resolvectl status to see exactly which DNS servers are currently assigned to each interface over D-Bus!
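
For reference, a minimal .network file that produces that behaviour (the file name and match pattern here are illustrative):

# /etc/systemd/network/10-dhcp.network  (hypothetical file name)
[Match]
Name=eth*

[Network]
DHCP=yes

[DHCPv4]
# The default, shown explicitly: accept the VPC DNS servers from DHCP
UseDNS=yes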

The Automation: Production AMI Prep Script

Manual hacks are great for debugging, but SRE is about repeatable automation. We’ve open-sourced the eks-production-ami-prep.sh script to handle this cutover automatically during your Packer or Image Builder pipeline. It standardizes the cutover, wipes out the stale data, and includes a strict validation suite.


The Results

By actively taking control of the systemd stack and ensuring /etc/resolv.conf was dynamically linked rather than statically abandoned, we completely unblocked our EKS 1.33+ upgrade.

More impressively, our system bootup time dropped from a crippling 6+ minutes down to under 2 minutes. We shouldn’t have to abandon fantastic, free enterprise distributions just because a cloud provider shifts their networking paradigm. If your team is struggling with AWS EKS upgrades on Rocky Linux or AlmaLinux, integrate this automation into your pipeline and get your clusters back in the fast lane.

Signals in Linux; trap command – practical example

The SIGNALS in Linux

Signals are the kernel’s response to certain actions generated by the user, by a program or application, or by I/O devices.
The Linux trap command gives us a good view into SIGNALS and lets us take advantage of them.
The trap command can be used to respond to certain conditions and invoke various actions when a shell receives a signal.
Below are the various signals in Linux.

[vamshi@linuxcent ~]$ trap -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR
31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7
58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX

Let’s take a look at some important SIGNALS and their categorization:

Job control Signals: These signals are used to stop, queue, and resume processes
(18) SIGCONT, (19) SIGSTOP, (20) SIGTSTP

Termination Signals: These signals are used to interrupt or terminate a running process
(2) SIGINT, (3) SIGQUIT, (6) SIGABRT, (9) SIGKILL, (15) SIGTERM

Async I/O Signals: These signals are generated when data is available on an input/output device or when the kernel wishes to notify applications about resource availability
(23) SIGURG, (29) SIGIO (also delivered as SIGPOLL)

Timer Signals: These signals are generated when an application wishes to trigger timers or alarms
(14) SIGALRM, (27) SIGPROF, (26) SIGVTALRM

Error reporting Signals: These signals occur when a running process or application code runs into an exception or a fault
(1) SIGHUP, (4) SIGILL, (5) SIGTRAP, (7) SIGBUS, (8) SIGFPE, (13) SIGPIPE, (11) SIGSEGV, (24) SIGXCPU

Trap command Syntax:

trap [-lp] [[ARG] SIGNAL ...]

ARG is a command to be interpreted and executed when the shell receives the signal(s) SIGNAL.

If no arguments are supplied, trap prints the list of commands associated with each signal.
To unset a trap, a - is used followed by the SIGNAL, which we will demonstrate in the following section.

How to set a trap on linux through the command line?

[vamshi@linuxcent ~]$ trap 'echo -e "You Pressed Ctrl-C"' SIGINT

Now you have successfully set up a trap.

Whenever you press Ctrl-C on your keyboard, the message “You Pressed Ctrl-C” gets printed.

[vamshi@linuxcent ~]$ ^CYou Pressed Ctrl-C
[vamshi@linuxcent ~]$ ^CYou Pressed Ctrl-C
[vamshi@linuxcent ~]$ ^CYou Pressed Ctrl-C

Now type the trap command and you can see the currently set trap details.

[vamshi@node01 ~]$ trap
trap -- 'echo -e "You Pressed Ctrl-C"' SIGINT
trap -- '' SIGTSTP
trap -- '' SIGTTIN
trap -- '' SIGTTOU

To unset the trap, all you need to do is run the following command, passing - followed by the signal name:

[vamshi@node01 ~]$ trap - SIGINT

The same is evident from the output below:

[vamshi@node01 ~]$ trap
trap -- '' SIGTSTP
trap -- '' SIGTTIN
trap -- '' SIGTTOU
[vamshi@node01 ~]$ ^C
[vamshi@node01 ~]$ ^C

What is trap command in Linux?

`trap` is a built-in bash command used to execute a command when the shell receives a signal. When an event occurs, bash sends a notification via the corresponding signal. Many signals are available in bash; the most common is SIGINT (Signal Interrupt).

What is trap command in bash?

If you’ve written any amount of bash code, you’ve likely come across the trap command. Trap allows you to catch signals and execute code when they occur. Signals are asynchronous notifications that are sent to your script when certain events occur.

How do you Ctrl-C trap?

To trap Ctrl-C in a shell script, we will need to use the trap shell builtin command. When a user sends a Ctrl-C interrupt signal, the signal SIGINT (Signal number 2) is sent.
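
A minimal self-contained sketch of that pattern, with a cleanup action attached to the trap:

#!/bin/bash
# Remove a scratch file if the user interrupts the script with Ctrl-C
scratch=$(mktemp)
trap 'rm -f "$scratch"; echo "Interrupted - cleaned up"; exit 130' SIGINT

echo "Working... press Ctrl-C to interrupt (PID $$)"
for i in $(seq 1 30); do
    date >> "$scratch"    # simulate ongoing work
    sleep 1
done
rm -f "$scratch"          # normal-path cleanup

Exit code 130 follows the shell convention of 128 plus the signal number (SIGINT is 2).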

What is trap shell?

In the fish shell, trap is a wrapper around fish’s event delivery framework and exists for backwards compatibility with POSIX shells; for other uses, fish recommends defining an event handler directly. As in bash, ARG is the command to be executed on signal delivery.

What signals Cannot be caught?

There are two signals which cannot be intercepted and handled: SIGKILL and SIGSTOP.

How do I wait in Linux?

Approach (a runnable sketch follows this list):

  1. Creating a simple process.
  2. Using a special variable($!) to find the PID(process ID) for that particular process.
  3. Print the process ID.
  4. Using wait command with process ID as an argument to wait until the process finishes.
  5. After the process is finished printing process ID with its exit status.
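
Put together as a runnable sketch of those five steps:

#!/bin/bash
sleep 5 &                 # 1. create a simple background process
pid=$!                    # 2. capture its PID via the special variable $!
echo "Started PID $pid"   # 3. print the process ID
wait "$pid"               # 4. block until that process finishes
echo "PID $pid exited with status $?"   # 5. report its exit status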

How use stty command in Linux?

  1. stty --all: This option prints all current settings in human-readable form. …
  2. stty -g: This option will print all current settings in a stty-readable form. …
  3. stty -F <device>: This option will open and use the specified DEVICE instead of stdin. …
  4. stty --help: This option will display the help text and exit.

Can I trap Sigkill?

You can’t catch SIGKILL (and SIGSTOP), so enabling your custom handler for SIGKILL is moot. You can catch all other signals, so perhaps try to build a design around those. By default, pkill will send SIGTERM, not SIGKILL, which obviously can be caught.

What signal is Ctrl D?

Ctrl + D is not a signal, it’s EOF (End-Of-File). It closes the stdin pipe. If read(STDIN) returns 0, it means stdin closed, which means Ctrl + D was hit (assuming there is a keyboard at the other end of the pipe).

How to Shutdown or Reboot a remote Linux Host from commandline

The shutdown process in a Linux system is an orderly chain of events in which the system ensures that dependent processes have terminated successfully.

TL;DR:

Difference between Halt and Poweroff in Linux?
What is a cold shutdown and a warm shutdown?
Linux system Halt: The halt process instructs the hardware to stop the functioning of the CPU. Can be referred to as a warm shutdown.
Linux system Poweroff/Shutdown: The poweroff function sends an ACPI (Advanced Configuration and Power Interface) signal to power down the system. Can be referred to as a cold shutdown.

As you may be aware, the Linux runtime environment is a combination of processes running in user space and kernel space; all major system activities and resources are initiated, governed, and terminated by kernel space.
So we have the kernel space and the user space. The kernel space is where all the resource-related processes run, following finite, well-defined behaviour, while userspace processes depend on user actions; most userspace programs depend on the kernel space and make context switches to get CPU scheduling and similar services.
So, in the shutdown sequence on a Linux machine, the userspace processes are terminated first in a systematic fashion through scripts triggered by the core systemd processes, which ensures a clean exit and termination of all processes.

The Linux system provides us quite a few commands to enforce fast shutdown or a graceful shutdown of the operating system, each having their own consequences.

Firstly the init or the systemd which is the pid 1 process is what controls the runlevel of the system and it determines which processes are launched and running in that runlevel

The init is a powerful command which executes the runlevel it is told to.
Here the init 0 proceeds to Power-off the machine

$ sudo init 0

Here the init 6 proceeds to Reboot the machine

$ sudo init 6

These commands are very quick as they trigger the kernel-space shutdown invocation directly, often resulting in unclean termination of processes and forcing filesystem recovery and journal replay at the next boot.

The following commands shut down the machine within seconds of being issued, but follow the kill sequence and allow a clean exit of the processes.

$ sudo shutdown
$ sudo poweroff
$ sudo systemctl poweroff

These print a wall message to all users.
All processes are killed, and volumes are unmounted or remounted read-only while the system power-off is in progress.
The system is then put into complete poweroff mode, cutting the power supply to the machine.
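
For a graceful production shutdown you usually want a delay plus an explicit broadcast, for example:

# Reboot in 10 minutes and broadcast the reason to all logged-in users
sudo shutdown -r +10 "Kernel update - rebooting in 10 minutes"
# Changed your mind? Cancel the pending shutdown
sudo shutdown -c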

$ sudo halt
$ sudo systemctl halt

Prints a “System halted” message and puts the machine into halt mode.
If --force (-f) is specified twice, the operation is executed immediately without terminating any processes or unmounting any file systems, which can result in data loss.

The servers can only be brought back online through physical poweron or Remote Power manager console such as IPMI or ILOM.

The reboot or systemctl kexec commands restart the operating system — one full power cycle, equivalent to a shutdown followed by a startup.

$ sudo reboot

$ sudo systemctl kexec

$ sudo systemctl reboot

If --force (-f) is specified twice, the operation is executed immediately without terminating any processes or unmounting any file systems, which can result in data loss.

 

It is important to understand that these commands are all symlinks to systemctl, which ensures the proper shutdown sequence:

[vamshi@linuxcent cp-command]$ ls -l /usr/sbin/halt
lrwxrwxrwx. 1 root root 16 Jan 13 14:41 /usr/sbin/halt -> ../bin/systemctl
[vamshi@linuxcent cp-command]$ ls -l /usr/sbin/reboot
lrwxrwxrwx. 1 root root 16 Jan 13 14:41 /usr/sbin/reboot -> ../bin/systemctl
[vamshi@linuxcent cp-command]$ ls -l /usr/sbin/poweroff
lrwxrwxrwx. 1 root root 16 Jan 13 14:41 /usr/sbin/poweroff -> ../bin/systemctl

As the listing shows, all of these commands are symlinks to the systemctl binary, which takes over when a shutdown or reboot is issued.

The best practice is to power off the system in a way that broadcasts a notification message to all actively connected users, whether on pseudo-terminals (PTS) or TTY terminals, demonstrated as follows:

$ sudo systemctl poweroff

# this writes an entry into the journal, the wtmp and broadcasts the shutdown message to all the users connected through PTS and TTY terminals

What is the difference between systemctl poweroff and systemctl halt?

When the Linux system is put into a halt state, it stops all applications and ensures they exit safely, unmounts filesystems and volumes, and enters a halted state in which the power connection is still active. It can only be brought back online with a power reset.
The halt process instructs the hardware to stop the functioning of the CPU.
Commonly referred to as a warm shutdown.

[Screenshot: systemctl halt command in Linux]

The poweroff function sends an ACPI (Advanced Configuration and Power Interface) signal to power down the system.
When the Linux system is put into a poweroff state it goes completely offline after a systematic, clean termination of processes, and power input is cut off to the external peripherals. This is also sometimes called a cold shutdown, and the subsequent startup a cold start.
Commonly referred to as a cold shutdown.

If you found the article worth your time, please share your thoughts in the comments section along with your experiences with shutdown and reboot issues.

Can I reboot Linux remotely?

How to shut down the remote Linux server: you must pass the -t option to the ssh command to force pseudo-terminal allocation. The shutdown command accepts the -h option, i.e. Linux is powered off/halted at the specified time.
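
A concrete sketch (the user and host names are placeholders):

# -t forces a pseudo-terminal so sudo can prompt interactively
ssh -t admin@remote-host 'sudo reboot'
# Or schedule a halt for a specific time instead
ssh -t admin@remote-host 'sudo shutdown -h 23:30'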

Can you reboot a server remotely?

Open command prompt, and type “shutdown /m \\RemoteServerName /r /c “Comments”“. … Another command to restart or shutdown the Server remotely is Shutdown /i. Type Shutdown /i on the command prompt and it will open another dialogue box.

What is the Linux command to reboot?

To reboot Linux using the command line:

  1. To reboot the Linux system from a terminal session, sign in or “su”/”sudo” to the “root” account.
  2. Then type “ sudo reboot ” to reboot the box.
  3. Wait for some time and the Linux server will reboot itself.

How do I reboot from remote desktop?

Procedure. Use the Restart Desktop command. Select Options > Restart Desktop from the menu bar. Right-click the remote desktop icon and select Restart Desktop.

What does sudo reboot do?

sudo is short for “Super-user Do”. It has no effect on the command itself (this being reboot ), it merely causes it to run as the super-user rather than as you. It is used to do things that you might not otherwise have permission to do, but doesn’t change what gets done.

How do I remotely turn on a Linux server?

  1. Enter the BIOS of your server machine and enable the wake-on-LAN / wake-on-network feature. …
  2. Boot your Ubuntu and run “sudo ethtool -s eth0 wol g”, assuming eth0 is your network card. …
  3. Also run “sudo ifconfig” and note the MAC address of the network card, as it is required later to wake the PC.

How do I restart a terminal server remotely?

From the remote computer’s Start menu, select Run, and run a command line with optional switches to shut down the computer:
To shut down, enter: shutdown.
To reboot, enter: shutdown -r.
To log off, enter: shutdown -l

How do I send Ctrl Alt Del to remote desktop?

Press the “CTRL,” “ALT” and “END” keys at the same time while you are viewing the Remote Desktop window. This command executes the traditional CTRL+ALT+DEL command on the remote computer instead of on your local computer.

How do I remotely restart a server by IP address?

Type “shutdown -m \\[IP Address] -r -f” (without quotes) at the command prompt, where “[IP Address]” is the IP of the computer you want to restart. For example, if the computer you want to restart is located at 192.168.0.34, type “shutdown -m \\192.168.0.34 -r -f”.

How do I reboot from command prompt?

  1. From an open command prompt window:
  2. type shutdown, followed by the option you wish to execute.
  3. To shut down your computer, type shutdown /s.
  4. To restart your computer, type shutdown /r.
  5. To log off your computer type shutdown /l.
  6. For a complete list of options type shutdown /?
  7. After typing your chosen option, press Enter.

How does Linux reboot work?

The reboot command is used to restart a computer without turning the power off and then back on. If reboot is used when the system is not in runlevel 0 or 6 (i.e., the system is operating normally), then it invokes the shutdown command with its -r (i.e., reboot) option.

BASH “switch case” in Linux with practical example

The switch case in BASH is widely used among Linux admins and DevOps folks to leverage the power of control flow in shell scripts.

As we have seen with the if..elif..else..fi control structure (Bash if-then-else), conditionals can branch the flow; the switch case simplifies control flow further by running a specific block of bash code based on the user selection or the input parameters.

Let’s take a look at the simple Switch case as follows:

OPTION=$1
case $OPTION in
choice1)
Choice1 Statements
;;

choice2)
Choice2 Statements
;;

choiceN)
ChoiceN Statements
;;

*)
echo "User Selected Choice not present"
exit 1

esac

The OPTION is generally read from user input, and based on it the specific choice case block is invoked.

Explanation:
In the case construct, control flow moves to the case keyword, which checks for a suitable match and passes control to the relevant OPTION/CHOICE statement block. After the relevant CHOICE statements execute, the case is exited when control flow encounters the esac keyword at the end.

Using the pattern match
The control flow in bash identifies the matching case option and proceeds accordingly.
Cases can also be patterns: in the script below, observe that the user input is matched against glob patterns combined with the logical OR operator | in the case labels.

#! /bin/bash

echo -en "Enter your logins\nUsername: "
read user_name 
echo -en "Password: "
read user_pass 
while [ -n "$user_name" ] && [ -n "$user_pass" ]
do

case $user_name in
    ro*|admin)
        if [ "$user_pass" = "Root" ];
        then
            echo -e "Authentication succeeded \ n You Own this Machine"
	    break
        else
            echo -e "Authentication failure"
            exit
        fi
    ;;
    jenk*)
	if [ "$user_pass" = "Jenkins" ];
	then
		echo "Your home directory is /var/lib/jenkins"
	    	break
	else
        	echo -e "Authentication failure"
	fi
        break
    ;;
    *)
        echo -e "An unexpected error has occurred."
        exit
    ;;
esac

done

Note the patterns used for the cases: ro*|admin and jenk* are glob patterns with | alternation, not full regular expressions.

We now demonstrate by entering the username jenkins; it matches the jenk* case and the control flow enters the relevant block of code. Whether the password matches is secondary here, as we are only concerned with the case selection itself.
We have saved the switch case into a script named switch-case.sh and run it. Here are the results.

OUTPUT:

[vamshi@node02 switch-case]$ sh switch-case.sh
Enter your logins
Username: jenkins
Password: Jenkins
Your home directory is /var/lib/jenkins

We entered the correct password and the jenkins case block statements ran successfully.

We shall also see the ro*|admin case, demonstrated as follows.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: root
Password: Root
Authentication succeeded
You Own this Machine

We now test the admin username and see the results.

[vamshi@node02 switch-case]$ sh switch-case.sh 
Enter your logins
Username: admin
Password: Root
Authentication succeeded
You Own this Machine

Here is a more advanced script that deploys a Python application using the switch case.
Please refer to the Command line arguments section for user input.

A complete functional Bash switch case can be seen at https://github.com/rrskris/python-deployment-script/blob/master/deploy-python.sh

Please feel free to share your experiences in comments.

What is switch in bash?

The Bash case statement is similar in concept to the JavaScript or C switch statement. The main difference is that, unlike the C switch statement, the Bash case statement doesn't continue to search for a pattern match once it has found one and executed the statements associated with that pattern.

How do you write a switch case in shell?

In shell scripting, the switch case is represented using the keywords case and esac, which perform multilevel branching and checking in a better way than multiple if-else conditions. A switch case needs an expression to evaluate and performs different operations based on the outcome of that expression.

How does case work in bash?

5 Bash Case Statement Examples

  1. Case statement first expands the expression and tries to match it against each pattern.
  2. When a match is found all of the associated statements until the double semicolon (;;) are executed.
  3. After the first match, case terminates with the exit status of the last command that was executed.

What is the use of switch case in Unix shell scripting?

Switch case in shell scripts is an efficient alternative to the if-elif-else statement that we learned previously. The concept of the switch case statements is that we provide different cases (conditions) to the statement that, when fulfilled, will execute specific blocks of commands.

Which command is used for switch case in Linux script?

The case statement (closed by the esac keyword) is used to give an expression to evaluate and to execute several different statements based on the value of the expression. The interpreter checks each case pattern against the value of the expression until a match is found. If nothing matches, the default *) condition is used.

What is a switch or option in command line?

A command line switch (also known as an option, a parameter, or a flag) acts as a modifier to the command you are issuing in the Command Prompt window, in a batch file, or in other scripts. Usually, a switch is a single letter preceded by a forward slash.

What is ESAC bash?

The esac keyword is indeed a required delimiter to end a case statement in bash and most shells used on Unix/Linux, excluding the csh family. The original Bourne shell was created by Steve Bourne, who had previously worked on ALGOL 68; that language introduced this reversed-word technique (if/fi, case/esac) to delimit blocks.

What is Getopts in shell script?

getopts is a built-in Unix shell command for parsing command-line arguments. It is designed to process command line arguments that follow the POSIX Utility Syntax Guidelines, based on the C interface of getopt. The predecessor to getopts was the external program getopt by Unix System Laboratories.
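As a quick illustration, here is a minimal getopts sketch; the -u and -p option letters, the script name, and the variable names are our own, not from the original:

#!/bin/bash
# Parse -u <username> and -p <password> flags with getopts
while getopts "u:p:" opt; do
    case $opt in
        u) user_name=$OPTARG ;;
        p) user_pass=$OPTARG ;;
        *) echo "Usage: $0 -u username -p password"; exit 1 ;;
    esac
done
echo "User: $user_name"

Invoked as sh getopts-demo.sh -u vamshi -p secret, this prints User: vamshi; each option letter followed by a colon in the "u:p:" string expects an argument, which getopts places in OPTARG.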

 

date command formatting with practical examples in Linux / Unix

The date command in Linux is very extensive and dynamic; it provides very rich date formatting and is highly customizable, which makes it ideal for scripts that depend on time-based invocations.

The Linux date command can also be used to set the system date, which requires root permission.

Let's run the date command and examine the output.

[vamshi@node02 log]$ date
Wed Apr 1 13:52:21 UTC 2020

Now let's examine some of the most useful options that come with the date command.

First, the date command with the -s or --set option takes the following format to set a new system date and time.

How to set the system date in Linux using date command?

[vamshi@node02 log]$ sudo date -s 'Apr 01 2020 13:52:59 UTC'
Wed Apr 1 13:52:59 UTC 2020

The date can also be set in shorthand notation as follows, but it is more cryptic.

[vamshi@node02 log]$ sudo date 040113522020.50
Wed Apr 1 13:52:50 UTC 2020
$ sudo date MMDDhhmmYYYY.SS

The format is month of the year (MM), day of the month (DD), hour of the day (hh), minute of the hour (mm), the year (YYYY), and seconds of the minute (.SS).
Now, let's dive deep into the date options with practical demonstrations in this tutorial:

Another important option is -d or --date="STRING", which displays the time described by the string instead of the current time.
Let's see some examples as follows:

Running the date command alone gives an elaborate time and date format along with the timezone information.
To convert an Epoch time to a human-readable date, we can use the date command as follows:

[vamshi@node02 log]$ date -d"@1585749164"
Wed Apr 1 13:52:44 UTC 2020

If you want to get a future date then use:

[vamshi@linuxcent ~]$ date -d "+130 days"
Sun Aug 16 02:07:35 UTC 2020

The date command offers great flexibility to extract past and future dates, as we show below:

$ date "+%F" -d "+30 days"
$ date "+%F" --date "+30 days"

To get a date in history, i.e., go back to a date some days ago in Linux:

[vamshi@node02 log]$ date -d "17 days ago"
Sun Mar 15 13:52:45 UTC 2020

Here we present some of the more useful format options:

Date Format Command    Explanation                                            Result
date +"%a"             Prints the abbreviated day of the week (Sun-Sat)       Wed
date +"%A"             Prints the full day of the week (Sunday-Saturday)      Wednesday
date +"%b"             Prints the abbreviated month (Jan-Dec)                 Apr
date +"%B"             Prints the full month (January-December)               April
date +"%c"             Prints the full current date and time                  Wed Apr 1 13:52:43 UTC 2020
date +"%d"             Prints the day of the month (01-31)                    01
date +"%D"             Prints the date in MM/DD/YY                            04/01/20
date +"%e"             Prints the day of the month, space-padded (1-31)       1
date +"%F"             Prints the full date as YYYY-MM-DD                     2020-04-01
date +"%H"             Prints the hour (00-23)                                13
date +"%I"             Prints the hour (01-12)                                01
date +"%j"             Prints the Julian day of the year (001-366)            092
date +"%M"             Prints the minute of the hour (00-59)                  52
date +"%m"             Prints the month of the year (01-12)                   04
date +"%n"             Prints a newline character                             Newline/Empty line
date +"%N"             Prints the nanoseconds count                           036416306
date +"%P"             Prints am/pm (lowercase)                               pm
date +"%r"             Prints the time in 12-hour AM/PM notation              01:52:43 PM
date +"%S"             Prints the seconds count in the minute (00-60)         43
date +"%s"             Prints seconds since 1st January 1970 (Epoch time)     1585749164
date +"%T"             Prints the time in 24-hour format HH:MM:SS             13:52:43
date +"%u"             Prints the day of the week (1-7)                       3 (Wednesday)
date +"%U"             Prints the week of the year, Sunday as first day       13
date +"%V"             Prints the ISO week of the year, Monday as first day   14
date +"%y" or +"%g"    Prints the last two digits of the year                 20
date +"%Y"             Prints the year in YYYY format                         2020
date +"%z"             Prints the numeric timezone offset from UTC            +0000
date +"%Z"             Prints the alphabetic timezone abbreviation            UTC
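These specifiers can be combined into a single format string; for example, a timestamp suitable for log file names (our own example, not from the table above):

[vamshi@node02 log]$ date +"%Y-%m-%d_%H-%M-%S"
2020-04-01_13-52-43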

How to write the current system time to the machine's hardware clock?

The hwclock command can do that for us.

# sudo hwclock [OPTIONS]

Let's see a practical example where our hardware clock was roughly 1 hour and 8 minutes behind the actual system time.

[vamshi@node02 ~]$ sudo hwclock
Wed 01 Apr 2020 07:35:05 AM UTC -0.454139 seconds
[vamshi@node02 ~]$ date
Wed Apr 01 08:43:13 UTC 2020

Set the hardware clock to the current system time with the -w or --systohc option, as seen below.

[vamshi@node02 ~]$ sudo hwclock -w

Confirm it with hwclock command as follows:

[vamshi@node02 ~]$ sudo hwclock
Wed 01 Apr 2020 08:44:05 AM UTC -0.538163 seconds

Most of the time the hardware clock drifts out of sync with the system time; it is good practice to keep the hardware clock in sync, and it comes in really handy across system reboots.

What is the date format in Unix?

Below is a list of common date format options with example output. They work with the Linux date command as well as the mac/Unix date command.

Date Format Option Meaning Example Output
date +%m-%d-%Y MM-DD-YYYY date format 05-09-2020
date +%D MM/DD/YY date format 05/09/20

How do I format a date in Linux?

These are the most common formatting characters for the date command:
  1. %D – Display date as mm/dd/yy.
  2. %Y – Year (e.g., 2020)
  3. %m – Month (01-12)
  4. %B – Long month name (e.g., November)
  5. %b – Short month name (e.g., Nov)
  6. %d – Day of month (e.g., 01)
  7. %j – Day of year (001-366)
  8. %u – Day of week (1-7)
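A few of these can be combined in one format string; for instance (our own example):

[vamshi@node02 ~]$ date +"Day %j of %Y"
Day 092 of 2020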

What does the %D format in the date command do?

%D: Display date as mm/dd/yy.

%d: Display the day of the month (01 to 31). %a: Displays the abbreviated name for weekdays (Sun to Sat). %A: Displays full weekdays (Sunday to Saturday).

How do you change the date in Unix?

The basic way to alter the system's date in Unix/Linux through the command-line environment is by using the date command. Running date with no options just displays the current date and time. With additional options, you can set the date and time.

What is the date format?

Date Format Types

Format   Date order     Description
1        MM/DD/YY       Month-Day-Year with leading zeros (02/17/2009)
2        DD/MM/YY       Day-Month-Year with leading zeros (17/02/2009)
3        YYYY/MM/DD     Year-Month-Day with leading zeros (2009/02/17)
4        Month D, Yr    Month name-Day-Year with no leading zeros (February 17, 2009)

How can I get yesterday date in Unix?

  1. Use perl: perl -e '@T=localtime(time-86400);printf("%02d/%02d/%02d",$T[4]+1,$T[3],$T[5]+1900)'
  2. Install GNU date (it's in the sh_utils package if I remember correctly), then: dt=$(date --date=yesterday "+%a %d/%m/%Y"); echo ${dt}
  3. Not sure if this works, but you might be able to use a negative timezone.

How do I display yesterday’s date in Linux?

Yesterday's date: YES_DAT=$(date --date='1 day ago' '+%Y%d%m')
Day before yesterday's date: DAY_YES_DAT=$(date --date='2 days ago' '+%Y%d%m')

Which command is used for displaying date and calendar in Unix?

Explanation: the date command is used for displaying the current system date and time, while the cal command is used to see the calendar of any specific month/year.

Which command is used for displaying date in the format dd mm yyyy?

To get the date in DD-MM-YYYY format, use the command date +%d-%m-%Y. To use the date in MM-YYYY format, we can use date +%m-%Y; for the Weekday DD-Month, YYYY format, use date +"%A %d-%B, %Y".

What does df command do in Linux?

The df command reports file system disk space usage. To see the information for only the /home file system in human-readable format, use the command shown below.
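A minimal sketch of that command; the -h flag prints sizes in human-readable units, and /home is the file system from the question:

$ df -h /home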

sed – The Stream editor in Linux

The Stream Editor (sed) is a text manipulation program that takes its input from stdin or from text files, writes to stdout, and can modify the input files accordingly. Text manipulation here means deleting characters and words, and inserting text into the source file on the fly.
This is a transformation operation and quite a handy skill to have for anyone working in the Linux shell.

sed comprises two operations: the first is a regex search-and-match operation, and the second is the corresponding replace operation. This combines the great power of searching and replacing text from stdin and from flat files.
Here is the general syntax of the sed command:

# sed [-n] -e 'options/commands' files
# sed [-n] -f sed-scriptfile
# sed -i filename -e 'options/commands'

-e is the edit option used on the CLI.
-f takes the sed commands from a script file.
-n or --quiet suppresses automatic output; lines are printed only when the p command (or the p flag of s) is used.

We will look at some of the notable options sed offers.

Here are some practical use cases; but before that, let's take a look at our sample README.txt.
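For reference, here is the sample file as inferred from the outputs that follow, one distribution name per line:

[vamshi@node02 ~]$ cat README.txt
centos
debian
redhat
ubuntu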

Substitute and Replace with sed:

sed offers the s command, which performs the search and replace operation, also known as search and substitution.

[vamshi@node02 sed]$ echo Welcome to LinuxCent | sed -e 's/e/E/'
WElcome to LinuxCent

This replaces the first e with E in the received input and prints the result to stdout.
We can apply the same to a text file and achieve the same results.

[vamshi@node02 ~]$ sed -e 's|u|U|' README.txt
centos 	
debian 	
redhat 	
Ubuntu

The important thing to note is that only the first pattern match per line is replaced. In our case only one letter per line changes: the first u in ubuntu is replaced by U.

Substitute and replace globally using the g flag.

We run the below command on the stdin input stream, as shown below:

[vamshi@node02 sed]$ echo Welcome to LinuxCent | sed -e 's/e/E/g'
WElcomE to LinuxCEnt

Running with the global flag g on the file input, as shown below.

[vamshi@node02 ~]$ sed -e 's/u/U/g' README.txt
centos 
debian 	
redhat 	
UbUntU 	

Substituting the later occurrences using sed: here we match the letter u from its 3rd occurrence onward and replace each match with U.

[vamshi@node02 ~]$ sed -e 's/u/U/3g' README.txt
centos 	
debian 	
redhat 	
ubuntU

In the above case the lowercase u has been replaced with an uppercase U from the third occurrence onward.
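To replace only the Nth occurrence, without touching the later ones, give the number without the g flag; a small sketch on the same file:

[vamshi@node02 ~]$ sed -e 's/u/U/2' README.txt
centos
debian
redhat
ubUntu

Only the second u on each line is replaced, so ubuntu becomes ubUntu.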
Now let us append a word to the end of each line using the below syntax:

[vamshi@node02 ~]$ sed -e 's/$/ Linux/' README.txt
centos Linux
debian Linux
redhat Linux
ubuntu Linux

Adding text at the beginning of each line of the file data and writing to stdout.

[vamshi@node02 Linux-blog]$ sed -e 's/^/Distro name: /' Distronames.txt 
Distro name: centos Linux
Distro name: debian Linux
Distro name: redhat Linux
Distro name: ubuntu Linux

sed in-place editing: How to write the modified sed data back into the same text file?

We can use the -i (in-place edit) option in combination with most other sed options; the input file content is modified directly according to the command pattern.
Example Given.

[vamshi@node02 sed]$ sed -e 's/e/E/g' -i intro.txt
[vamshi@node02 sed]$ cat intro.txt
WElcomE to LinuxCEnt

We can also use the -i option to append some text to each line of a file, demonstrated as follows:

[vamshi@node02 Linux-blog]$ sed -i 's/$/ Linux/' README.txt
[vamshi@node02 Linux-blog]$ cat README.txt 
centos Linux
debian Linux
redhat Linux
ubuntu Linux

Here we append the word Linux to the end of each line.
As an alternative to -i, you can use output redirection to write to a new file, as shown below.

[vamshi@node02 ~]$ sed -e 's/$/ Linux/' README.txt > OSnames.txt

Delete Operations with sed

Delete all the lines containing the pattern:

[vamshi@node02 ~]$ sed -e /ubu/d README.txt
centos Linux 
debian Linux 
redhat Linux

Here we matched the pattern ubu, and hence the ubuntu line is deleted from the output.

We can use the ! inverse operator with the delete, demonstrated as follows:

[vamshi@node02 Linux-blog]$ sed -e '/ubu/!d' Distronames.txt
ubuntu Linux

Using the Ranges in sed

Extracting only the lines between a specific BEGIN and END pattern using sed:

[vamshi@node02 Linux-blog]$ cat Distronames.txt | sed -n -e '/^centos/,/^debian/p'
centos Linux	
debian Linux

Substitution of Range of lines

[vamshi@node02 Linux-blog]$ sed -e  '1,3s/u/U/' Distronames.txt
centos LinUx.	
debian LinUx.	
redhat LinUx.	
ubuntu Linux.

Delete the . at the end of each line

[vamshi@node02 Linux-blog]$ sed -e 's/.$//' Distronames.txt
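Note that an unescaped . in the pattern matches any character, so s/.$// strips whichever character happens to be last on the line; to delete only a literal trailing dot, escape it. A sketch, assuming the file's lines end in a dot as in the range-substitution example above:

[vamshi@node02 Linux-blog]$ sed -e 's/\.$//' Distronames.txt
centos Linux
debian Linux
redhat Linux
ubuntu Linux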

Print only the lines containing the word “hat”

[vamshi@node02 Linux-blog]$ sed -n -e '/hat/p' Distronames.txt 
redhat Linux

Use sed to match a pattern and insert text.
Insert lines before the matched pattern in a file:

[vamshi@node02 Linux-blog]$ cat README.txt | sed -e '/centos/i\Distro Names '
Distro Names 
centos
debian
redhat
ubuntu

In the above scenario we have inserted the sentence “Distro Names” before the occurrence of the word centos.

[vamshi@node02 Linux-blog]$ cat Distronames.txt | sed -e '1a\------------'
Distro Names 
------------
centos
debian
redhat
ubuntu

The dashes (------------) are appended to the text after the 1st line.

How to create user account in Linux

The Linux system provides a couple of command-line utilities to create new users on the system.

As we are aware, a Linux login has the essential fields listed as follows:

  • A unique system wide username,
  • A Strong password,
  • The home directory and
  • A login shell.

These are the mandatory fields to enable account creation.

The other fields are the UID and GID numbers, the numerical IDs associated with the user and group names, which are allocated sequentially by the system.

We can broadly categorize login accounts into 2 types: the privileged and the normal user.

The absolute privileged account is root, which comes by default on all Linux machines.

A normal account can be given root privileges by assigning the user to certain groups, providing elevated access within that scope.

What is the Process to create a User account in Linux?

User creation has to be done with root privileges using the useradd command.

$ sudo useradd newuser

Now it's time to set the password:

$ sudo passwd newuser

How to check if the userid is present and active on the system?

The new user's details will be written to the /etc/passwd file and the login (password) information to /etc/shadow.

Now let's check if the user account is created and has a valid shell:

vamshi@node03:/$ grep vamshi /etc/passwd

vamshi:x:1001:1001::/home/vamshi:/bin/bash

How to Add the user to new groups in Linux?

The usermod command-line utility lets you add a user to groups; it can add an existing user to new groups or overwrite the group membership.

$ sudo usermod -aG dockerroot,wheel vamshi

The -a option appends the user to the two new groups dockerroot and wheel without overwriting the user's existing group assignments; omitting this option would leave the user a member of only the groups mentioned in the command.

How to check and verify if the user is a member of group in Linux?

[vamshi@node02 Linux-blog]$ id vamshi
uid=1001(vamshi) gid=1001(vamshi) groups=1001(vamshi),0(root),10(wheel),992(dockerroot)

How to Verify the Login Confirmation in Linux?

From the root user account, run the command su - newuser to check the new login account environment.

How to find the group names assigned to the user

The user can run the groups command to list the groups where they hold active membership.

[vamshi@linuxcent ~]$ groups
vamshi root wheel dockerroot

Login to the server remotely using SSH

You may now use the ssh command to log in with the new username, entering your password at the login prompt.

$ ssh newuser@<server-ip>

How to connect to a server with SSH running on a non-standard port like 2202?

[vamshi@linuxcent ~]$ ssh localhost -p 2202
Last login: Mon Mar 13 17:57:56 2020 from 10.100.0.1

How to create a user account in Linux using the useradd command?

User creation can also be done with a parametrized command as demonstrated below:

$ sudo useradd vamshi -b /home/ -m -s /bin/bash

Alternatively you can be more elaborate as mentioned below:

$ sudo useradd vamshi -c "Vamshi's user account" -d /home/vamshi -m -s /bin/bash -G dockerroot

The useradd command options are described as follows:

-b or --base-dir : the base directory in which the new user's home directory is created.

-c or --comment : a description of the user; as a standard practice, used for the user's full name.

-d or --home-dir : the path of the user's home directory.

-m or --create-home : create the user's home directory (at the path given by -d).

-s or --shell : the type of login shell.

-u or --uid : the unique UID on the Linux machine.

-G or --groups : the list of secondary groups to be assigned.

-k or --skel : the skeleton directory (by default /etc/skel) whose contents are copied into the new home directory; the defaults applied when no options are passed live in /etc/default/useradd.

With the skel properties finely tuned, you can proceed to use the adduser command, which relies on this default behavior, as shown below:

$ sudo adduser vamshi

How to use an SSH key pair to log in:
Use -i followed by the /path/to/id_rsa private key file.

$ ssh -i ~/.ssh/id_rsa newuser@<server-ip>
$ ssh -i ~/.ssh/id_rsa -l newuser linuxcent.com

-l : the login name to use.

-i : the identity file, i.e., the RSA private key file.

 

Troubleshooting the SSH connection in verbose mode, printing debug information

Using the -v option with the ssh command will print debug information while logging in.

The verbosity can be increased by repeating the flag up to three times, e.g. -v, -vv, or -vvv.

$ ssh -i ~/.ssh/id_rsa newuser@<server-ip> -vvv

Linux Copy File Command for Files and Directories – cp Command Examples

The Linux copy files command cp is generally used for organizing data on the Linux operating system; it copies files and directories.

We shall take a deeper look at the Linux cp command utility in this section.

In order to copy files and directories, you must have read permissions on the source file(s) and write permissions on the destination directory

How do I copy files under Linux operating systems?

How do I make a second copy of a file in a Linux bash shell?

How can I copy files and directories on Linux?

Linux Copy File command Syntax

cp sourcefile destinationfile
cp sourcefile DESTDIR
cp sourcefile1 sourcefile2 DESTDIR
cp [OPTION] SOURCE DESTFILE
cp [OPTION] SOURCE DESTDIR

How to Copy a Directory if the destination does not exist?

To achieve this we can make use of the following cp command options -R or -r: Copy directories recursively.

Linux cp command Syntax with -R option:

cp -R SOURCE DESTINATION

If the destination doesn’t exist, it will be created.

It can also be used to Copy the contents Recursively

Lets see the demonstration as follows:

[vamshi@linuxcent ]$ cp -R dir1/ dir1-copy
[vamshi@linuxcent ]$ ls -l 
total 0
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:37 dir1-copy

Use the verbose option -v to print the copy activity information onto the screen.
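A quick sketch of what the verbose flag prints; the file names here are our own:

[vamshi@linuxcent ]$ cp -v file1.txt file1-copy.txt
‘file1.txt’ -> ‘file1-copy.txt’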

How to Preserve the Source File and Directory Permissions?

Linux Copy command Syntax with -p option:

-p option preserves the mode, ownership and timestamps from the source to the destination

cp -p file1 file1-copy

Let us see the demonstration below.

[vamshi@node02 cp-command]$ cp -Rp dir1/ dir1-copy
[vamshi@node02 cp-command]$ ls -ld dir1*
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1
drwxrwxr-x. 2 vamshi vamshi 6 Apr 11 06:35 dir1-copy

From the output we can conclude that the Linux copy command with the -p option preserves the original timestamp information and copies it to the destination.

The Linux cp command with the force option -f forcefully overwrites the destination content.
Sample syntax:

cp -f file1 file1-copy

How to Copy Multiple files at once ?

The asterisk/wildcard (*) character is used to copy multiple files matching the same pattern.

[vamshi@linuxcent ]$ cp -varpf file* DEST/
‘file10.txt’ -> ‘DEST/file10.txt’
‘file1.txt’ -> ‘DEST/file1.txt’
‘file2.txt’ -> ‘DEST/file2.txt’
‘file3.txt’ -> ‘DEST/file3.txt’
‘file4.txt’ -> ‘DEST/file4.txt’
‘file5.txt’ -> ‘DEST/file5.txt’
‘file6.txt’ -> ‘DEST/file6.txt’
‘file7.txt’ -> ‘DEST/file7.txt’
‘file8.txt’ -> ‘DEST/file8.txt’
‘file9.txt’ -> ‘DEST/file9.txt’

The -d option preserves links and -p preserves mode, ownership, and timestamps; both can be used in conjunction with the -R option to copy contents recursively from the source directory.

How to Copy Files and Folders on Linux Using the cp Command recursively to Destination Directory

How to preserve the links with the cp command?

Using the option -d preserves the links, the -r option copies the content recursively (same as the -R option), and -v prints the verbose information.

[vamshi@node02 Linux-blog]$ cp -varpf Redhat-Distro/ /tmp/DEST
‘Redhat-Distro/’ -> ‘/tmp/DEST’
‘Redhat-Distro/Fedora’ -> ‘/tmp/DEST/Fedora’
‘Redhat-Distro/Fedora/fedora.txt’ -> ‘/tmp/DEST/Fedora/fedora.txt’
‘Redhat-Distro/Centos’ -> ‘/tmp/DEST/Centos’
‘Redhat-Distro/Centos/centos.txt’ -> ‘/tmp/DEST/Centos/centos.txt’
‘Redhat-Distro/Centos/CentOS-versions’ -> ‘/tmp/DEST/Centos/CentOS-versions’
‘Redhat-Distro/Centos/CentOS-versions/centos7.txt’ -> ‘/tmp/DEST/Centos/CentOS-versions/centos7.txt’
‘Redhat-Distro/Centos/CentOS-versions/centos6.1.txt’ -> ‘/tmp/DEST/Centos/CentOS-versions/centos6.1.txt’
‘Redhat-Distro/Centos/README-CentOS’ -> ‘/tmp/DEST/Centos/README-CentOS’
‘Redhat-Distro/README-Redhat-Distro’ -> ‘/tmp/DEST/README-Redhat-Distro’
‘Redhat-Distro/RHEL-Versions’ -> ‘/tmp/DEST/RHEL-Versions’
‘Redhat-Distro/RHEL-Versions/redhat5.txt’ -> ‘/tmp/DEST/RHEL-Versions/redhat5.txt’
‘Redhat-Distro/RHEL-Versions/redhat8.txt’ -> ‘/tmp/DEST/RHEL-Versions/redhat8.txt’
‘Redhat-Distro/redhat.txt’ -> ‘/tmp/DEST/redhat.txt’

How to make a symbolic link with Linux cp command to files ?

As we know, the ln command is useful to create symbolic links, but the Linux copy command can do that for files too, with the -s option, which creates symbolic links:

cp -s SOURCE DESTINATION

Linux copy command syntax with a soft link, with demonstration:

[vamshi@linuxcent ~]$ ls -l
total 0
-rw-rw-r--. 1 vamshi vamshi 0 Apr 11 06:39 file1.txt
lrwxrwxrwx. 1 vamshi vamshi 9 Apr 11 06:39 file2.txt -> file1.txt

Linux cp command with interactive prompt using -i option

Sample Syntax:

cp -i file1 file1-copy
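When the destination already exists, -i prompts before overwriting, and answering anything other than y leaves the destination untouched. A typical interaction (the file names are ours):

[vamshi@linuxcent ]$ cp -i file1 file1-copy
cp: overwrite ‘file1-copy’? y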

You can also make it a best practice to set up an alias for the cp command.
A good practice is to enable the -av options:

cp -av SOURCE DESTINATION
alias cp="cp -av"

How can I copy the hidden files?

To copy the hidden files we can use the cp command with the -a option; let us see a practical example.

$ cp -av source/ destination/
‘source/.config1’ -> ‘destination/source/.config1’
‘source/.config2’ -> ‘destination/source/.config2’
‘source/.config3’ -> ‘destination/source/.config3’

Generally, hidden files in Linux are prefixed with a dot (.), so we can also use the wildcard character * to copy them; below is another practical example.

[vamshi@linuxcent cp-command]$ cp -av source/.conf* destination/
‘source/.config1’ -> ‘destination/.config1’
‘source/.config2’ -> ‘destination/.config2’
‘source/.config3’ -> ‘destination/.config3’

How to Copy a File from One Location to Another With a Different Name on Linux Using the cp Command

Assuming we have a couple of users on our Linux server called Alice and Bob:

[alice@linuxcent ~]$ sudo cp -avrpf /home/alice/djangoproject1/ /home/bob/
‘djangoproject1/’ -> ‘/home/bob/djangoproject1’
‘djangoproject1/__init__.py’ -> ‘/home/bob/djangoproject1/__init__.py’
‘djangoproject1/asgi.py’ -> ‘/home/bob/djangoproject1/asgi.py’
‘djangoproject1/settings.py’ -> ‘/home/bob/djangoproject1/settings.py’
‘djangoproject1/urls.py’ -> ‘/home/bob/djangoproject1/urls.py’
‘djangoproject1/wsgi.py’ -> ‘/home/bob/djangoproject1/wsgi.py’
‘djangoproject1/__pycache__’ -> ‘/home/bob/djangoproject1/__pycache__’
‘djangoproject1/__pycache__/__init__.cpython-36.pyc’ -> ‘/home/bob/djangoproject1/__pycache__/__init__.cpython-36.pyc’
‘djangoproject1/__pycache__/settings.cpython-36.pyc’ -> ‘/home/bob/djangoproject1/__pycache__/settings.cpython-36.pyc’

How to backup files using cp command?

The Linux cp command offers the --backup option to back up existing destination files; below is the command.

cp --backup source destination
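A short sketch of the behaviour: if the destination already exists, cp first renames it with a ~ suffix before overwriting (the file names are ours):

$ cp --backup file1.txt file1-copy.txt
$ ls
file1.txt  file1-copy.txt  file1-copy.txt~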

Best Linux Text Editors

You can choose between several text editors in Linux. Each editor has its advantages and disadvantages.

1. Vi/Vim
Vi is a powerful and the most popular command-line-based editor, commonly used for writing code and editing configuration files. Its first advantage is availability: Vi is installed on virtually every distribution. The second advantage is low consumption of system resources. One of the cons is its non-intuitive, albeit short, commands.

Vi has 3 modes: command, input, and last line mode. Command mode is the default.

2. Nano
Nano is a WYSIWYG (what you see is what you get) editor and is installed by default in Ubuntu and many other Linux distributions. Actions/commands are done in a CTRL + key manner; for example, CTRL + O writes out (saves) the file and CTRL + X exits. Features: Autoconf support, case-sensitive search function, auto-indent ability, and regular-expression search and replace.

3. Gedit
Gedit is the default text editor for the GNOME desktop environment. Gedit aims to be simple and easy to use for beginner Linux users. Useful features are syntax highlighting, clipboard support, bracket matching, and search and replace with support for regular expressions.

4. GNU Emacs
Emacs is the extensible, self-documenting editor. It provides an interpreter for Emacs Lisp. Its main function is text editing, but it also includes a project planner, a mail and news reader, a debugger interface, and a calendar.

5. Leafpad
Leafpad is a GTK+-based editor popular among new Linux users because it is easy to use. It supports the codeset option, auto codeset detection, and drag & drop. It does not provide syntax coloring.

Which text editor is best Linux?

12 Best Text Editors For Linux Distros

  • Sublime Text. Sublime Text is a feature-packed text editor built for “code, markup, and prose.” It natively supports tons of programming languages and markup languages. …
  • Atom. …
  • Vim. …
  • Gedit. …
  • GNU Emacs. …
  • Visual Studio Code. …
  • nano. …
  • KWrite.

What are the most common text editors in Linux?

Top 10 Text Editors for Linux Desktop

  • VIM. If you are bored of using the default “vi” editor in Linux and want to edit your text in an advanced text editor that is packed with powerful performance and lots of options, then vim is your best choice. …
  • Geany. …
  • Sublime Text Editor. …
  • Brackets. …
  • Gedit. …
  • Kate. …
  • Eclipse. …
  • Kwrite.

What text editor comes with Linux?

Almost all Linux distributions, even older versions, come with the Vim editor installed.

What is the best text editor 2020?

10 best code editors for 2020

  • Visual Studio Code. Visual Studio Code, commonly referred to as VS Code, is one of the best code editors in the market. …
  • Sublime Text. If you are looking for a very lightweight yet robust code editor, Sublime Text is your option. …
  • Atom Editor. …
  • Notepad++ …
  • Bluefish. …
  • Brackets. …
  • Phpstorm. …
  • GNU Emacs.

What text editor should I use for Linux?

There are two command-line text editors in Linux®: vim and nano. You can use one of these two available options should you ever need to write a script, edit a configuration file, create a virtual host, or jot down a quick note for yourself. These are but a few examples of what you can do with these tools.

What is the best text editor to use?

Best text editors in 2021: for Linux, Mac, and Windows coders and programmers

  • Sublime Text.
  • Atom.
  • Visual Studio Code.
  • Espresso.
  • Brackets.
  • Notepad++
  • Vim.
  • BBedit.

What is the best IDE for Linux in 2020?

10 Best IDEs For Linux In 2020!

  • NetBeans.
  • zend Studio.
  • Komodo IDE.
  • Anjuta.
  • MonoDevelop.
  • CodeLite.
  • KDevelop.
  • Geany.

Is VI the best text editor?

Vim is the best text editor/IDE out there. It is the “editor of choice of old-time Unix hackers”. Vim is one of the most popular programming editors out there. It’s loved by geeks for its speed, extensive feature set, and flexibility.

Which text editor is used in Linux?

A Linux system supports multiple text editors. There are two types of text editors in Linux, which are given below: Command-line text editors such as Vi, nano, pico, and more. GUI text editors such as gedit (for Gnome), Kwrite, and more.

Which is the most common text editor?

The 15 Most Popular Text Editors for Developers

  • UltraEdit.
  • Dreamweaver.
  • Komodo Edit / Komodo IDE.
  • Aptana.
  • PSPad.
  • Vim.
  • TextMate.
  • Notepad++