<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>eBPF Series Archives - Linux Cent</title>
	<atom:link href="https://linuxcent.com/tag/ebpf-series/feed/" rel="self" type="application/rss+xml" />
	<link>https://linuxcent.com/tag/ebpf-series/</link>
	<description>Find posts on Linux, Scripting, Automation, Devops, Site Reliability Engineering, and Cloud Technologies</description>
	<lastBuildDate>Sat, 04 Apr 2026 04:06:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">211632295</site>	<item>
		<title>eBPF vs Kernel Modules: An Honest Comparison for K8s Engineers</title>
		<link>https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/</link>
					<comments>https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/#respond</comments>
		
		<dc:creator><![CDATA[Vamshi Krishna Santhapuri]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 04:06:59 +0000</pubDate>
				<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[eBPF]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Linux Tutorials]]></category>
		<category><![CDATA[BPF Verifier]]></category>
		<category><![CDATA[BPF verifier explained]]></category>
		<category><![CDATA[Cilium]]></category>
		<category><![CDATA[Cloud Native]]></category>
		<category><![CDATA[Container Security]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[eBPF From Kernel to Cloud]]></category>
		<category><![CDATA[eBPF Kubernetes]]></category>
		<category><![CDATA[eBPF Safety]]></category>
		<category><![CDATA[eBPF Series]]></category>
		<category><![CDATA[Falco]]></category>
		<category><![CDATA[K8s Security]]></category>
		<category><![CDATA[Linux Administration]]></category>
		<category><![CDATA[Linux Observability]]></category>
		<category><![CDATA[SRE]]></category>
		<category><![CDATA[Tetragon]]></category>
		<guid isPermaLink="false">https://linuxcent.com/?p=1440</guid>

					<description><![CDATA[<p>~2,100 words &#183; Reading time: 8 min &#183; Series: eBPF: From Kernel to Cloud, Episode 3 of 18 In Episode 1 we covered what eBPF is. In Episode 2 we covered why it is safe. The question that comes next is the one most tutorials skip entirely: If eBPF can do everything a kernel module ... <a title="eBPF vs Kernel Modules: An Honest Comparison for K8s Engineers" class="read-more" href="https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/" aria-label="Read more about eBPF vs Kernel Modules: An Honest Comparison for K8s Engineers">Read more</a></p>
<p>The post <a href="https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/">eBPF vs Kernel Modules: An Honest Comparison for K8s Engineers</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><em>~2,100 words &middot; Reading time: 8 min &middot; Series: eBPF: From Kernel to Cloud, Episode 3 of 18</em></p>
<p>In <a href="https://linuxcent.com/what-is-ebpf-linux-kubernetes/">Episode 1</a> we covered what eBPF is. In <a href="https://linuxcent.com/bpf-verifier-kubernetes-safety/">Episode 2</a> we covered why it is safe. The question that comes next is the one most tutorials skip entirely:</p>
<p><strong>If eBPF can do everything a kernel module does for observability, why do kernel modules still exist? And when should you still reach for one?</strong></p>
<p>Most comparisons on this topic are written by people who have used one or the other. I have used both &mdash; device driver work from 2012 to 2014 and eBPF in production Kubernetes clusters for the last several years. This is the honest version of that comparison, including the cases where kernel modules are still the right answer.</p>
<hr />
<h2>What Kernel Modules Actually Are</h2>
<p>A kernel module is a piece of compiled code that loads directly into the running Linux kernel. Once loaded, it operates with full kernel privileges &mdash; the same level of access as the kernel itself. There is no sandbox. There is no safety check. There is no verifier.</p>
<p>This is both the power and the problem.</p>
<p>Kernel modules can do things that nothing else in the Linux ecosystem can do: implement new filesystems, add hardware drivers, intercept and modify kernel data structures, hook into scheduler internals. They are how the kernel extends itself without requiring a recompile or a reboot.</p>
<p>But the operating model is unforgiving:</p>
<ul>
<li>A serious bug in a kernel module can panic the kernel instantly &mdash; no isolation, no recovery</li>
<li>Modules must be compiled against the exact kernel headers of the running kernel</li>
<li>A module that works on RHEL 8 may refuse to load on RHEL 9 without recompilation</li>
<li>Loading a module requires root privileges and deliberate coordination in production</li>
<li>Debugging a module failure means kernel crash dumps, kdump analysis, and time</li>
</ul>
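<p>The version coupling above is concrete: every module records a <code>vermagic</code> string at build time, and the kernel refuses to load the module if that string does not match the running release. A minimal sketch of that check &mdash; <code>vermagic_matches</code> is an illustrative helper, not a real utility; on a live system <code>modinfo -F vermagic &lt;module&gt;</code> prints the actual string:</p>

```shell
# Illustrative sketch: the kernel compares a module's vermagic string
# against the running kernel release and refuses mismatches.
# vermagic_matches is a hypothetical helper for demonstration only.
vermagic_matches() {
  mod_vermagic="$1"      # e.g. output of: modinfo -F vermagic mymod.ko
  kernel_release="$2"    # e.g. output of: uname -r
  case "$mod_vermagic" in
    "$kernel_release"|"$kernel_release "*) echo "loads" ;;
    *) echo "refuses to load" ;;
  esac
}
```

<p>This is why the RHEL 8 module in the next section cannot simply be copied to a RHEL 9 node: the release string embedded at build time no longer matches.</p>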
<p>I experienced all of these during device driver work. The discipline that environment instils is real &mdash; you think very carefully before touching anything, because mistakes are instantaneous and complete.</p>
<hr />
<h2>What eBPF Does Differently</h2>
<p>eBPF was not designed to replace kernel modules. It was designed to provide a safe, programmable interface to kernel internals for the specific use cases where modules had always been used but were too dangerous: observability, networking, and security monitoring.</p>
<p>The fundamental difference is the verifier, covered in depth in <a href="https://linuxcent.com/bpf-verifier-kubernetes-safety/">Episode 2</a>. Before any eBPF program runs, the kernel proves it is safe. Before any kernel module runs, nothing checks anything.</p>
<p>That single architectural decision produces a completely different operational profile:</p>
<table>
<thead>
<tr>
<th>Property</th>
<th>Kernel module</th>
<th>eBPF program</th>
</tr>
</thead>
<tbody>
<tr>
<td>Safety check before load</td>
<td>None</td>
<td>BPF verifier &mdash; mathematical proof of safety</td>
</tr>
<tr>
<td>A bug causes</td>
<td>Kernel panic, immediate</td>
<td>Program rejected at load time</td>
</tr>
<tr>
<td>Kernel version coupling</td>
<td>Compiled per kernel version</td>
<td>CO-RE: compile once, run on BTF-enabled kernels (5.4+)</td>
</tr>
<tr>
<td>Hot load / unload</td>
<td>Risky, requires coordination</td>
<td>Safe, zero downtime, zero pod restarts</td>
</tr>
<tr>
<td>Access scope</td>
<td>Full kernel, unrestricted</td>
<td>Restricted, granted per program type</td>
</tr>
<tr>
<td>Debugging</td>
<td>Kernel crash dumps, kdump</td>
<td>bpftool, bpftrace, readable error messages</td>
</tr>
<tr>
<td>Portability</td>
<td>Recompile per distro per version</td>
<td>Single binary runs across distros and versions</td>
</tr>
<tr>
<td>Production risk</td>
<td>High &mdash; no safety net</td>
<td>Low &mdash; verifier enforced before execution</td>
</tr>
</tbody>
</table>
<hr />
<h2>CO-RE: Why Portability Matters More Than Most Engineers Realise</h2>
<p>The portability column in that table deserves more than a one-line entry, because it is the operational advantage that compounds over time.</p>
<p>A kernel module written for RHEL 8 ships compiled against <code>4.18.0-xxx.el8.x86_64</code> kernel headers. When RHEL 8 moves to a new minor version, the module may need recompilation. When you migrate to RHEL 9 &mdash; kernel 5.14 with a completely different ABI in places &mdash; the module almost certainly needs a full rewrite of any code that touches kernel internals that changed between versions.</p>
<p>If you are running Falco with its kernel module driver and you upgrade a node from Ubuntu 20.04 to 22.04, Falco needs a pre-built module for your exact new kernel or it needs to compile one. If a pre-built module is not available and compilation fails, you have no runtime security monitoring until the problem is resolved.</p>
<p>eBPF with CO-RE works differently. CO-RE (Compile Once, Run Everywhere) uses the kernel&rsquo;s embedded BTF (BPF Type Format) information to patch field offsets and data structure layouts at load time to match the running kernel. The eBPF program was compiled once, against a reference kernel. When it loads on a different kernel, libbpf reads the BTF data from <code>/sys/kernel/btf/vmlinux</code> and fixes up the relocations automatically.</p>
<p>The practical result: a Cilium or Falco binary built six months ago loads and runs correctly on a node you just upgraded to a newer kernel version &mdash; without any module rebuilding, without any intervention, without any downtime.</p>
<p>In a Kubernetes environment where node images update regularly &mdash; especially on managed services like EKS, GKE, and AKS &mdash; this is not a minor convenience. It is the difference between eBPF tooling that survives an upgrade cycle and kernel module tooling that breaks one.</p>
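<p>Whether a given node can support CO-RE comes down to the BTF check described above. A hedged sketch &mdash; the <code>check_btf</code> function name is made up for illustration; the default path it probes is the real location where kernels built with <code>CONFIG_DEBUG_INFO_BTF</code> expose their type information:</p>

```shell
# Sketch: probe for kernel BTF, the prerequisite for CO-RE loading.
# check_btf is an illustrative helper; the default path is where
# BTF-enabled kernels expose their type information.
check_btf() {
  path="${1:-/sys/kernel/btf/vmlinux}"
  if [ -r "$path" ]; then
    echo "BTF present: CO-RE tooling can load"
  else
    echo "no BTF: eBPF tools need per-kernel builds"
  fi
}
check_btf   # checks the running kernel
```

<p>On nodes without BTF you are operationally back in kernel-module territory: every eBPF binary must be built against the exact running kernel.</p>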
<hr />
<h2>Security Implications: Container Escape and Privilege Escalation</h2>
<p>The security difference between the two approaches matters specifically for container environments, and it goes beyond the verifier&rsquo;s protection of your own nodes.</p>
<h3>Kernel modules as an attack surface</h3>
<p>Historically, kernel module vulnerabilities have been a primary vector for container escape. The attack pattern is straightforward: exploit a vulnerability in a loaded kernel module to gain kernel-level code execution, then use that access to break out of the container namespace into the host. Several high-profile CVEs over the past decade have followed this pattern.</p>
<p>The risk is compounded in environments that load third-party kernel modules &mdash; hardware drivers, filesystem modules, observability agents using the kernel module approach &mdash; because each additional module is an additional attack surface at the highest privilege level on the system.</p>
<h3>eBPF&rsquo;s security boundaries</h3>
<p>eBPF does not eliminate the attack surface entirely, but it constrains it in important ways.</p>
<p>First, eBPF programs cannot leak kernel memory addresses to userspace. This is verifier-enforced and closes the class of KASLR bypass attacks that kernel module vulnerabilities have historically enabled.</p>
<p>Second, eBPF programs are sandboxed by design. They cannot access arbitrary kernel memory, cannot call arbitrary kernel functions, and cannot modify kernel data structures they were not explicitly granted access to. A vulnerability in an eBPF program is contained within that sandbox.</p>
<p>Third, the program type system controls what each eBPF program can see and do. A <code>kprobe</code> program watching syscalls cannot suddenly start modifying network packets. The scope is fixed at load time by the program type and verified by the kernel.</p>
<p>For EKS specifically: Falco running in eBPF mode on your nodes is not a kernel module that could be exploited for container escape. It is a verifier-checked program with a constrained access scope. The tool designed to detect container escapes is not itself a container escape vector &mdash; which is the correct security architecture.</p>
<h3>Audit and visibility</h3>
<p>eBPF programs are auditable in ways that kernel modules are not. You can list every eBPF program currently loaded on a node:</p>
<pre><code>$ bpftool prog list
14: kprobe  name sys_enter_execve  tag abc123...  gpl
    loaded_at 2025-03-01T07:30:00+0000  uid 0
    xlated 240B  jited 172B  memlock 4096B  map_ids 3,4

27: cgroup_skb  name egress_filter  tag def456...  gpl
    loaded_at 2025-03-01T07:30:01+0000  uid 0</code></pre>
<p>Every program is listed with its load time, its type, its tag (a hash of the program), and the maps it accesses. You can audit exactly what is running in your kernel at any point. Kernel modules offer no equivalent &mdash; <code>lsmod</code> tells you what is loaded but nothing about what it is actually doing.</p>
<hr />
<h2>EKS and Managed Kubernetes: Where the Difference Is Most Visible</h2>
<p>The eBPF vs kernel module distinction plays out most clearly in managed Kubernetes environments, because you do not control when nodes upgrade.</p>
<p>On EKS, when AWS releases a new optimised AMI for a node group and you update it, your nodes are replaced. Any kernel module-based tooling on those nodes needs pre-built modules for the new kernel, or it needs to compile them at node startup, or it fails. AWS does not provide the kernel source for EKS-optimised AMIs in the same way a standard distribution does, which makes module compilation at runtime unreliable.</p>
<p>This is precisely why the EKS 1.33 migration covered in the <a href="https://linuxcent.com/eks-1-33-networkmanager-systemd-networkd-migration-fix/">EKS 1.33 post</a> was painful for Rocky Linux: it involved kernel-level networking behaviour that had been assumed stable. When the kernel networking stack changed, everything built on top of those assumptions broke.</p>
<p>eBPF-based tooling on EKS does not have this problem, provided the node OS ships with BTF enabled &mdash; which Amazon Linux 2023 and Ubuntu 22.04 EKS-optimised AMIs do. Cilium and Falco survive node replacements without any module rebuilding because CO-RE handles the kernel version differences automatically.</p>
<p>For GKE and AKS the story is similar. Both use node images with BTF enabled on current versions, and both upgrade nodes on a managed schedule that is difficult to predict precisely. eBPF tooling survives this. Kernel module tooling fights it.</p>
<hr />
<h2>When You Should Still Use Kernel Modules</h2>
<p>eBPF is not the right answer for every use case. Kernel modules remain the correct tool when:</p>
<p><strong>You are implementing hardware support.</strong> Device drivers for new hardware still require kernel modules. eBPF cannot provide the low-level hardware interrupt handling, DMA operations, or hardware register access that a device driver needs. If you are bringing up a new network interface card, storage controller, or GPU, you are writing a kernel module.</p>
<p><strong>You need to modify kernel behaviour, not just observe it.</strong> eBPF can observe and filter. It can drop packets, block syscalls via LSM hooks, and redirect traffic. But it cannot fundamentally change how the kernel handles a syscall, implement a new scheduling algorithm from scratch, or add a new filesystem type. Those changes require kernel modules or upstream kernel patches.</p>
<p><strong>You are on a kernel older than 5.4.</strong> Without BTF and CO-RE, eBPF programs must be compiled per kernel version &mdash; which largely eliminates the portability advantage. On RHEL 7 or very old Ubuntu LTS versions still in production, kernel modules may be the more practical path for instrumentation work, though migrating the underlying OS is a better long-term answer.</p>
<p><strong>You need capabilities the eBPF verifier rejects.</strong> The verifier&rsquo;s safety constraints occasionally reject programs that are logically safe but that the verifier cannot prove safe statically. Complex loops, large stack allocations, and certain pointer arithmetic patterns hit verifier limits. In these edge cases, a kernel module can do what the verifier would not allow. These situations are rare and becoming rarer as the verifier improves across kernel versions.</p>
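<p>The 5.4 cutoff above is easy to script against when auditing a mixed fleet. A sketch that classifies a kernel release string against the CO-RE baseline &mdash; <code>core_capable</code> is an illustrative helper, not a standard tool:</p>

```shell
# Sketch: classify a kernel release string against the 5.4 CO-RE
# baseline discussed above. core_capable is an illustrative helper.
core_capable() {
  release="$1"                # e.g. from: uname -r
  major="${release%%.*}"
  rest="${release#*.}"
  minor="${rest%%.*}"
  minor="${minor%%[!0-9]*}"   # strip any non-numeric suffix
  if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 4 ]; }; then
    echo "yes: CO-RE baseline met"
  else
    echo "no: pre-5.4 kernel"
  fi
}
core_capable "$(uname -r)"
```

<p>Note that a passing version check is necessary but not sufficient &mdash; the kernel must also ship BTF, which some early 5.4 builds did not.</p>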
<hr />
<h2>The Practical Decision Framework</h2>
<p>For most engineers reading this &mdash; Linux admins, DevOps engineers, SREs managing Kubernetes clusters &mdash; the decision is straightforward:</p>
<ul>
<li>Observability, security monitoring, network policy, performance profiling on Linux 5.4+ &rarr; eBPF</li>
<li>Hardware drivers, new kernel subsystems, or kernels older than 5.4 &rarr; kernel modules</li>
<li>Production Kubernetes on EKS, GKE, or AKS &rarr; eBPF, always, because CO-RE survives managed upgrades and kernel modules do not</li>
</ul>
<p>The overlap between the two technologies &mdash; the use cases where both could work &mdash; has been shrinking for five years and continues to shrink as the verifier becomes more capable and CO-RE becomes more widely supported. The direction of travel is clear.</p>
<blockquote>
<p>Kernel modules are a precision instrument for modifying kernel behaviour. eBPF is a safe, portable interface for observing and influencing it. In 2025, if you are reaching for a kernel module to instrument a production system, there is almost certainly a better path.</p>
</blockquote>
<hr />
<h2>Up Next</h2>
<p>Episode 4 covers the five things eBPF can observe that no other tool can &mdash; without agents, without sidecars, and without any changes to your application code. If you are running production Kubernetes and want to understand what true zero-instrumentation observability looks like, that is the post.</p>
<p>The full series is on LinkedIn &mdash; search <strong>#eBPFSeries</strong> &mdash; and all episodes are indexed on <a href="https://linuxcent.com">linuxcent.com</a> under the eBPF Series tag.</p>
<hr />
<h2>Further Reading</h2>
<ul>
<li><a href="https://ebpf.io/what-is-ebpf/" target="_blank" rel="noopener noreferrer">ebpf.io &mdash; eBPF official documentation</a></li>
<li><a href="https://nakryiko.com/posts/bpf-core-reference-guide/" target="_blank" rel="noopener noreferrer">Andrii Nakryiko &mdash; BPF CO-RE reference guide</a></li>
<li><a href="https://docs.cilium.io/en/stable/concepts/ebpf/" target="_blank" rel="noopener noreferrer">Cilium: eBPF dataplane architecture</a></li>
<li><a href="https://falco.org/docs/event-sources/kernel/" target="_blank" rel="noopener noreferrer">Falco: kernel driver vs eBPF probe comparison</a></li>
</ul>
<hr />
<p><em>Questions or corrections? Reach me on <a href="https://www.linkedin.com/in/vamshikrishnasanthapuri/" target="_blank" rel="noopener noreferrer">LinkedIn</a>. If this was useful, the full series index is on <a href="https://linuxcent.com">linuxcent.com</a> — search the <strong>eBPF Series</strong> tag for all episodes.</em></p>
<p>The post <a href="https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/">eBPF vs Kernel Modules: An Honest Comparison for K8s Engineers</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://linuxcent.com/ebpf-vs-kernel-modules-kubernetes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1440</post-id>	</item>
		<item>
		<title>BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes</title>
		<link>https://linuxcent.com/bpf-verifier-kubernetes-safety/</link>
					<comments>https://linuxcent.com/bpf-verifier-kubernetes-safety/#respond</comments>
		
		<dc:creator><![CDATA[Vamshi Krishna Santhapuri]]></dc:creator>
		<pubDate>Sun, 22 Mar 2026 18:12:34 +0000</pubDate>
				<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[Linux Tutorials]]></category>
		<category><![CDATA[BPF Verifier]]></category>
		<category><![CDATA[BPF verifier explained]]></category>
		<category><![CDATA[Cilium]]></category>
		<category><![CDATA[Cloud Native]]></category>
		<category><![CDATA[Container Security]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[eBPF]]></category>
		<category><![CDATA[eBPF From Kernel to Cloud]]></category>
		<category><![CDATA[eBPF Kubernetes]]></category>
		<category><![CDATA[eBPF Safety]]></category>
		<category><![CDATA[eBPF Series]]></category>
		<category><![CDATA[Falco]]></category>
		<category><![CDATA[K8s Security]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Linux Administration]]></category>
		<category><![CDATA[Linux Observability]]></category>
		<category><![CDATA[SRE]]></category>
		<category><![CDATA[Tetragon]]></category>
		<guid isPermaLink="false">https://linuxcent.com/?p=1424</guid>

					<description><![CDATA[<p>~2,400 words &#183; Reading time: 9 min &#183; Series: eBPF: From Kernel to Cloud, Episode 2 of 18 In Episode 1, we established what eBPF is and why it gives Linux admins and DevOps engineers kernel-level visibility without sidecars or code changes. The obvious follow-up question is the one every experienced engineer should ask before ... <a title="BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes" class="read-more" href="https://linuxcent.com/bpf-verifier-kubernetes-safety/" aria-label="Read more about BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes">Read more</a></p>
<p>The post <a href="https://linuxcent.com/bpf-verifier-kubernetes-safety/">BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><em>~2,400 words &middot; Reading time: 9 min &middot; Series: eBPF: From Kernel to Cloud, Episode 2 of 18</em></p>
<p>In <a href="https://linuxcent.com/what-is-ebpf-linux-kubernetes/">Episode 1</a>, we established what eBPF is and why it gives Linux admins and DevOps engineers kernel-level visibility without sidecars or code changes. The obvious follow-up question is the one every experienced engineer should ask before running anything in kernel space:</p>
<p><strong>Is it actually safe to run on production nodes?</strong></p>
<p>The answer is yes &mdash; and the reason is one specific component of the Linux kernel called the BPF verifier. This post explains what the verifier is, what it protects your cluster from, and why it changes the risk calculus for eBPF-based tools entirely.</p>
<hr />
<h2>The Fear That Holds Most Teams Back</h2>
<p>When I first explain eBPF to Linux admins and DevOps engineers, the reaction is almost always the same:</p>
<blockquote>
<p>&ldquo;So it runs code inside the kernel? On our production nodes? That sounds like a disaster waiting to happen.&rdquo;</p>
</blockquote>
<p>It is a completely reasonable concern. The Linux kernel is not a place where mistakes are tolerated. A buggy kernel module can take down a server instantly &mdash; no warning, no graceful shutdown, just a hard panic and a 3 AM phone call.</p>
<p>I know this from personal experience. During 2012&ndash;2014, I worked briefly with Linux device driver code. That period taught me one thing clearly: kernel space does not forgive careless code.</p>
<p>So when people started talking about running programs inside the kernel via eBPF, my instinct was scepticism too. Then I understood the BPF verifier. And everything changed.</p>
<hr />
<h2>What the Verifier Actually Is</h2>
<p>Think of the BPF verifier as a strict safety gate that sits between your eBPF program and the kernel. Before your eBPF program is allowed to run &mdash; before it touches a single system call, network packet, or container event &mdash; the verifier reads through every line of it and asks one question:</p>
<blockquote>
<p><strong>&ldquo;Could this program crash or compromise the kernel?&rdquo;</strong></p>
</blockquote>
<p>If the answer is yes, or even <em>maybe</em>, the program is rejected. It does not load. Your cluster stays safe. If the answer is a provable no, the program loads and runs.</p>
<p>This is not a runtime check that catches problems after the fact. It is a <strong>load-time guarantee</strong> &mdash; the kernel proves the program is safe before it ever executes. Here is what that looks like when you deploy Cilium:</p>
<pre><code>You run: kubectl apply -f cilium-daemonset.yaml
         └─► Cilium loads its eBPF programs onto each node
                   └─► Kernel verifier checks every program
                             ├─► SAFE   → program loads, starts observing
                             └─► UNSAFE → rejected, cluster untouched</code></pre>
<p>This is why Cilium can replace kube-proxy on your nodes, why Falco can watch every syscall in every container, and why Tetragon can enforce security policy at the kernel level &mdash; all without putting your cluster at risk.</p>
<hr />
<h2>What the Verifier Protects You From</h2>
<p>You do not need to know how the verifier works internally. What matters is what it prevents &mdash; and why each protection matters specifically in Kubernetes environments.</p>
<h3>Infinite loops</h3>
<p>An eBPF program that never terminates would hang the kernel code path it is attached to &mdash; potentially stalling every container on that node. The verifier rejects any program it cannot prove will finish within a bounded number of instructions.</p>
<p><strong>Why this matters:</strong> Every eBPF-based tool on your K8s nodes &mdash; Cilium, Falco, Tetragon, Hubble &mdash; was verified to terminate correctly on every code path before it shipped. You are not trusting the vendor&rsquo;s claim. The kernel enforced it.</p>
<h3>Memory safety violations</h3>
<p>An eBPF program cannot read or write memory outside the boundaries it is explicitly granted. No reaching into another container&rsquo;s memory space. No accessing kernel data structures it was not given permission to touch.</p>
<p><strong>Why this matters:</strong> This is the property that makes eBPF safe for multi-tenant clusters. A Falco rule monitoring one namespace cannot accidentally read data from another namespace&rsquo;s containers. The verifier makes this impossible at the program level, not just at the policy level.</p>
<h3>Kernel crashes</h3>
<p>The verifier checks that every pointer is valid before it is dereferenced, that every function call uses correct arguments, and that the program cannot corrupt kernel data structures. Programs that could cause a kernel panic are rejected before they load.</p>
<p><strong>Why this matters:</strong> Running Cilium or Tetragon on a production node is not the same risk as loading an untested kernel module. The verifier has already proven these programs cannot crash your nodes &mdash; before they ever ran on your infrastructure.</p>
<h3>Privilege escalation and kernel pointer leaks</h3>
<p>eBPF programs cannot leak kernel memory addresses to userspace. This closes a class of container escape and privilege escalation attacks that have historically been possible through kernel module vulnerabilities.</p>
<p><strong>Why this matters:</strong> Security tools built on eBPF &mdash; like Tetragon, which detects and blocks container escape attempts in real time &mdash; are not themselves a vector for the attacks they protect against.</p>
<hr />
<h2>eBPF vs Traditional Observability Agents</h2>
<p>To appreciate what the verifier gives you operationally, compare the two main approaches to K8s observability.</p>
<h3>Traditional agent &mdash; DaemonSet sidecar approach</h3>
<pre><code>Your K8s cluster
└─► Node
    ├─► App Pod (your service)
    ├─► Sidecar container (injected into every pod)
    │   └─► Reads /proc, intercepts syscalls via ptrace
    │       └─► 15–30% CPU/memory overhead per pod
    └─► Agent DaemonSet Pod
        └─► Aggregates data from all sidecars</code></pre>
<p>Problems with this model:</p>
<ul>
<li>Sidecar injection requires modifying every pod spec and typically an admission webhook</li>
<li>ptrace-based interception adds 50&ndash;100% overhead to the traced process and is blocked in hardened containers</li>
<li>The agent runs in userspace with elevated privileges &mdash; a larger attack surface</li>
<li>Updating the agent requires pod restarts across your fleet</li>
</ul>
<h3>eBPF-based tool &mdash; Cilium / Falco / Tetragon</h3>
<pre><code>Your K8s cluster
└─► Node
    ├─► App Pod (your service — completely unmodified)
    ├─► App Pod (another service — also unmodified)
    └─► eBPF programs (inside the kernel, verifier-checked)
        └─► See every syscall, network packet, file access
            └─► Forward events to userspace agent via ring buffer</code></pre>
<p>Benefits:</p>
<ul>
<li>No sidecar injection &mdash; pod specs stay clean, no admission webhook required</li>
<li>Kernel-level visibility with near-zero overhead (typically 1&ndash;3%)</li>
<li>The verifier guarantees the eBPF programs cannot harm your nodes</li>
<li>Works identically with Docker, containerd, and CRI-O</li>
</ul>
<hr />
<h2>Tools You Are Probably Already Running &mdash; All Verifier-Protected</h2>
<p>You may already be running eBPF on your nodes without thinking about it explicitly. In each case below, the verifier ran before the tool ever touched your cluster.</p>
<table>
<thead>
<tr>
<th>Tool</th>
<th>How the verifier is involved</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Cilium</strong></td>
<td>Every network policy decision, service load-balancing operation, and Hubble flow log is handled by eBPF programs that passed the verifier at node startup.</td>
</tr>
<tr>
<td><strong>Falco</strong></td>
<td>Every Falco rule is enforced by a verifier-checked eBPF program attached to syscall hooks. Sub-millisecond detection is only possible because the program runs in kernel space.</td>
</tr>
<tr>
<td><strong>AWS VPC CNI</strong></td>
<td>On EKS, networking operations have progressively moved to eBPF for performance at scale. If you are on a recent EKS AMI, eBPF is already doing work on your nodes.</td>
</tr>
<tr>
<td><strong>systemd</strong></td>
<td>Modern systemd uses eBPF for cgroup-based resource accounting and network traffic control. Active on most current Ubuntu, RHEL, and Amazon Linux 2023 installations.</td>
</tr>
</tbody>
</table>
<hr />
<h2>Questions to Ask When Evaluating eBPF Tools</h2>
<p>When a vendor tells you their tool uses eBPF, these three questions will quickly tell you how mature their implementation is.</p>
<h3>1. What kernel version do you require?</h3>
<p>The verifier&rsquo;s capabilities have expanded significantly across kernel versions. Tools targeting kernel 5.8+ can use more powerful features safely. Tools claiming to work on kernel 4.x are constrained by an older, more limited verifier. The table below shows exactly where each major distribution stands.</p>
<table>
<thead>
<tr>
<th>Distribution</th>
<th>Default kernel</th>
<th>eBPF support level</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Ubuntu 16.04 LTS</strong></td>
<td>4.4</td>
<td>Basic eBPF only</td>
<td>No BTF. kprobes and socket filters work, but modern tooling such as Cilium and the Falco eBPF driver will not run. EOL &mdash; do not use for new deployments.</td>
</tr>
<tr>
<td><strong>Ubuntu 18.04 LTS</strong></td>
<td>4.15</td>
<td>eBPF, no BTF</td>
<td>No CO-RE. Tools must be compiled against the exact running kernel headers. The HWE kernel (5.4) improves this but BTF still varies by build.</td>
</tr>
<tr>
<td><strong>Ubuntu 20.04 LTS</strong></td>
<td>5.4</td>
<td>BTF available, verify before use</td>
<td>CO-RE capable on most deployments. <code>CONFIG_DEBUG_INFO_BTF</code> was absent on some early builds. Verify with <code>ls /sys/kernel/btf/vmlinux</code> before deploying eBPF tooling. Cloud images generally have it enabled.</td>
</tr>
<tr>
<td><strong>Ubuntu 20.10+</strong></td>
<td>5.8</td>
<td>Full BTF + CO-RE</td>
<td>First Ubuntu release where BTF was consistently enabled by default. Ring buffers available. Not an LTS release &mdash; use 22.04 for production.</td>
</tr>
<tr>
<td><strong>Ubuntu 22.04 LTS</strong></td>
<td>5.15</td>
<td>Full modern eBPF &mdash; production ready</td>
<td>BTF embedded. Ring buffers, global variables, LSM hooks. Default baseline for EKS-optimised Ubuntu AMIs. Recommended for new deployments.</td>
</tr>
<tr>
<td><strong>Ubuntu 24.04 LTS</strong></td>
<td>6.8</td>
<td>Full modern eBPF + latest features</td>
<td>Open-coded iterators, improved verifier precision, enhanced LSM support. Best Ubuntu option for cutting-edge eBPF tooling today.</td>
</tr>
<tr>
<td><strong>Debian 10 (Buster)</strong></td>
<td>4.19</td>
<td>Basic eBPF, no BTF</td>
<td>eBPF programs load but CO-RE is unavailable. Must compile against exact kernel headers. EOL &mdash; migrate to Debian 11 or 12.</td>
</tr>
<tr>
<td><strong>Debian 11 (Bullseye)</strong></td>
<td>5.10 LTS</td>
<td>Full BTF + CO-RE</td>
<td>BTF enabled. CO-RE works. Cilium, Falco, and Tetragon all fully supported. Solid production baseline for Debian environments through 2026.</td>
</tr>
<tr>
<td><strong>Debian 12 (Bookworm)</strong></td>
<td>6.1 LTS</td>
<td>Full modern eBPF &mdash; production ready</td>
<td>Same kernel generation as Amazon Linux 2023. LSM hooks, ring buffers, full CO-RE. Recommended Debian version for eBPF workloads today.</td>
</tr>
<tr>
<td><strong>Debian 13 (Trixie)</strong></td>
<td>6.12 LTS</td>
<td>Full modern eBPF + latest features</td>
<td>Released August 2025. Same kernel generation as RHEL 10 / Rocky 10 / AlmaLinux 10. Maximum eBPF feature availability across all program types.</td>
</tr>
<tr>
<td><strong>RHEL 7.6</strong></td>
<td>3.10 (backported)</td>
<td>Tech Preview only &mdash; not production safe</td>
<td>First RHEL release to enable eBPF but explicitly marked as Tech Preview. Limited to kprobes and tracepoints. No XDP, no socket filters, no BTF. Do not use for eBPF in production.</td>
</tr>
<tr>
<td><strong>RHEL 8 / Rocky 8 / AlmaLinux 8</strong></td>
<td>4.18 (heavily backported)</td>
<td>Full BPF + BTF &mdash; functionally 5.4-equivalent</td>
<td>Red Hat backports make RHEL 8 kernels functionally comparable to upstream 5.4 for most eBPF use cases. BTF enabled across all releases. CO-RE works. Cilium treats RHEL 8.6+ as its minimum supported RHEL-family version.</td>
</tr>
<tr>
<td><strong>RHEL 9 / Rocky 9 / AlmaLinux 9</strong></td>
<td>5.14 (heavily backported)</td>
<td>Full modern eBPF &mdash; production ready</td>
<td>BTF embedded. XDP, tc, kprobe, tracepoint, and LSM hooks all supported. Falco, Cilium, and Tetragon fully supported. <strong>Recommended RHEL-family version for eBPF deployments today.</strong> Supported until 2032.</td>
</tr>
<tr>
<td><strong>RHEL 10 / Rocky 10 / AlmaLinux 10</strong></td>
<td>6.12</td>
<td>Full modern eBPF + latest features</td>
<td>Same kernel generation as Debian 13 and upstream 6.12 LTS. Rocky 10 released June 2025, AlmaLinux 10 released May 2025. Enhanced eBPF functionality throughout.</td>
</tr>
<tr>
<td><strong>Amazon Linux 2023</strong></td>
<td>6.1+</td>
<td>Full modern eBPF &mdash; production ready</td>
<td>BTF embedded. Full CO-RE. Recommended for EKS. Also resolves the NetworkManager deprecation issues in EKS 1.33+ &mdash; see the <a href="https://linuxcent.com/eks-1-33-networkmanager-systemd-networkd-migration-fix/">EKS 1.33 post</a>.</td>
</tr>
</tbody>
</table>
<blockquote>
<p><strong>Quick check for any distro:</strong> Run <code>ls /sys/kernel/btf/vmlinux</code> on your node. If the file exists, your kernel has BTF enabled and CO-RE-based eBPF tools will work correctly. If it does not exist, you are limited to tools that compile against your specific kernel headers. Run <code>uname -r</code> to confirm the exact kernel version.</p>
</blockquote>
<blockquote>
<p><strong>Rocky Linux and AlmaLinux note:</strong> Both distros rebuild directly from RHEL sources. Their kernel versions and eBPF capabilities are effectively identical to the corresponding RHEL release. When Cilium or Falco document &ldquo;RHEL 9 support&rdquo;, that applies equally to Rocky 9 and AlmaLinux 9 without any additional configuration.</p>
</blockquote>
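<p>The quick check above can be wrapped into a small script. A minimal sketch, assuming only a POSIX shell on the node; this is an illustrative helper, not part of any official tooling:</p>
<pre><code>#!/bin/sh
# Hypothetical readiness check: combines the uname and BTF checks described above
echo "kernel: $(uname -r)"
if [ -e /sys/kernel/btf/vmlinux ]; then
    echo "btf: present (CO-RE-based eBPF tools should work)"
else
    echo "btf: absent (tools must be built against exact kernel headers)"
fi</code></pre>
<p>On a Kubernetes cluster, run it on each node, for example from a node debug pod started with <code>kubectl debug</code>.</p>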
<h3>2. Do you use CO-RE?</h3>
<p>CO-RE (Compile Once, Run Everywhere) means the tool&rsquo;s eBPF programs work correctly across different kernel versions without recompilation. Tools using CO-RE are more portable and significantly less likely to break after a routine node OS update. This is a reliable signal of engineering maturity in the vendor&rsquo;s eBPF implementation.</p>
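<p>You can inspect the BTF type data that CO-RE relocations resolve against directly on a node. A sketch, assuming <code>bpftool</code> is installed (it usually ships in the distro&rsquo;s kernel tools package); the fallback branches run where it is not:</p>
<pre><code># CO-RE matches struct layouts in a tool's eBPF programs against the
# kernel's own BTF type descriptions exported at /sys/kernel/btf/vmlinux
if [ -r /sys/kernel/btf/vmlinux ]; then
    if command -v bpftool >/dev/null 2>/dev/null; then
        # Show the first few kernel type definitions CO-RE can relocate against
        bpftool btf dump file /sys/kernel/btf/vmlinux 2>/dev/null | head -n 5
    else
        echo "BTF present, bpftool not installed"
    fi
else
    echo "no BTF: CO-RE tools cannot relocate on this host"
fi</code></pre>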
<h3>3. What eBPF program types do you use?</h3>
<p>Different program types have different privilege levels and access scopes. A tool that only needs <code>kprobe</code> access is asking for considerably less privilege than one requiring <code>lsm</code> hooks.</p>
<ul>
<li><code>kprobe</code> / <code>tracepoint</code> &mdash; observability and debugging</li>
<li><code>tc</code> (traffic control) &mdash; network policy enforcement</li>
<li><code>xdp</code> (eXpress Data Path) &mdash; high-performance packet processing</li>
<li><code>lsm</code> (Linux Security Module) &mdash; security policy enforcement (used by Tetragon)</li>
</ul>
<p>Understanding the program type tells you what the tool can and cannot see on your nodes, and how much kernel access you are granting it.</p>
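<p>On a node you control, you can see which program types are actually loaded. A hedged sketch: <code>bpftool prog show</code> is a real command but needs root, so the fallback branches are the expected result on an ordinary workstation:</p>
<pre><code># List loaded eBPF programs; the "type" column maps to the list above
# (kprobe, tracepoint, sched_cls for tc, xdp, lsm, ...)
if command -v bpftool >/dev/null 2>/dev/null; then
    bpftool prog show 2>/dev/null || echo "bpftool present, listing needs root"
else
    echo "bpftool not installed on this host"
fi</code></pre>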
<hr />
<h2>How Falco Uses the Verifier &mdash; A Step-by-Step Walkthrough</h2>
<p>Here is exactly what happens when Falco starts on one of your K8s nodes, and where the verifier fits in:</p>
<pre><code>1. Falco pod starts on the node (via DaemonSet)

2. Falco loads its eBPF programs into the kernel:
   └─► BPF verifier checks each program
       ├─► Can it crash the kernel?            No → continue
       ├─► Can it loop forever?                No → continue
       ├─► Can it access out-of-bounds memory? No → continue
       └─► PASS → program loads

3. Falco's eBPF programs attach to syscall hooks:
   └─► sys_enter_execve   (every process execution in every container)
   └─► sys_enter_openat   (every file open)
   └─► sys_enter_connect  (every outbound network connection)

4. A container runs an unexpected shell (potential attack):
   └─► execve() called inside the container
   └─► Falco's eBPF hook fires in kernel space
   └─► Event forwarded to Falco userspace via ring buffer
   └─► Falco rule matches: "shell spawned in container"
   └─► Alert fired in under 1 millisecond

5. Your container, your other pods, your node: completely unaffected</code></pre>
<blockquote>
<p>Step 2 is what the verifier makes safe. Without it, attaching eBPF hooks to every syscall on your production node would be an unacceptable risk. With it, Falco can offer this level of visibility with a mathematical safety guarantee.</p>
</blockquote>
<hr />
<h2>The Bottom Line</h2>
<p>You do not need to understand BPF bytecode, register states, or static analysis to use eBPF tools safely in production. What you do need to understand is this:</p>
<blockquote>
<p>The BPF verifier is the reason eBPF is fundamentally different from kernel modules. It does not just make eBPF &ldquo;safer&rdquo; in a vague sense &mdash; it provides a mathematical proof that each program cannot crash your kernel before that program ever runs.</p>
</blockquote>
<p>This is why eBPF-based tools can deliver deep kernel-level visibility into every container, every syscall, and every network flow &mdash; with near-zero overhead, no sidecar injection, and production safety that kernel modules could never guarantee.</p>
<p>The next time someone on your team hesitates about running Cilium, Falco, or Tetragon on production nodes because <em>&ldquo;it runs code in the kernel&rdquo;</em> &mdash; you now know what to tell them. The verifier already checked it. Before it ever touched your cluster.</p>
<hr />
<h2>Further Reading</h2>
<ul>
<li><a href="https://docs.cilium.io/en/stable/concepts/ebpf/" target="_blank" rel="noopener noreferrer">Cilium documentation: eBPF and the Linux kernel</a></li>
<li><a href="https://falco.org/docs/event-sources/kernel/ebpf/" target="_blank" rel="noopener noreferrer">Falco: eBPF probe documentation</a></li>
<li><a href="https://tetragon.io/docs/" target="_blank" rel="noopener noreferrer">Tetragon: eBPF-based security observability</a></li>
<li><a href="https://ebpf.io" target="_blank" rel="noopener noreferrer">The official eBPF website: ebpf.io</a></li>
</ul>
<hr />
<p><em>Questions or corrections? Reach me on <a href="https://www.linkedin.com/in/vamshikrishnasanthapuri/" target="_blank" rel="noopener noreferrer">LinkedIn</a>. If this was useful, the full series index is on <a href="https://linuxcent.com">linuxcent.com</a> &mdash; search the <strong>eBPF Series</strong> tag for all episodes.</em></p>
<p>The post <a href="https://linuxcent.com/bpf-verifier-kubernetes-safety/">BPF Verifier Explained: Why eBPF Is Safe for Production Kubernetes</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://linuxcent.com/bpf-verifier-kubernetes-safety/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1424</post-id>	</item>
		<item>
		<title>What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers</title>
		<link>https://linuxcent.com/what-is-ebpf-linux-kubernetes/</link>
					<comments>https://linuxcent.com/what-is-ebpf-linux-kubernetes/#respond</comments>
		
		<dc:creator><![CDATA[Vamshi Krishna Santhapuri]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 17:53:41 +0000</pubDate>
				<category><![CDATA[Bash]]></category>
		<category><![CDATA[Devops]]></category>
		<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[eBPF]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[BPF Verifier]]></category>
		<category><![CDATA[Cilium]]></category>
		<category><![CDATA[Cloud Native]]></category>
		<category><![CDATA[Container Security]]></category>
		<category><![CDATA[Datadog]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[eBPF explained]]></category>
		<category><![CDATA[eBPF From Kernel to Cloud]]></category>
		<category><![CDATA[eBPF Kubernetes]]></category>
		<category><![CDATA[eBPF Series]]></category>
		<category><![CDATA[Falco]]></category>
		<category><![CDATA[Linux Administration]]></category>
		<category><![CDATA[Linux kernel observability]]></category>
		<category><![CDATA[Observability]]></category>
		<category><![CDATA[SRE]]></category>
		<category><![CDATA[Tetragon]]></category>
		<guid isPermaLink="false">https://linuxcent.com/?p=1429</guid>

					<description><![CDATA[<p>~1,900 words &#183; Reading time: 7 min &#183; Series: eBPF: From Kernel to Cloud, Episode 1 of 18 Your Linux kernel has had a technology built into it since 2014 that most engineers working with Linux every day have never looked at directly. You have almost certainly been using it &#8212; through Cilium, Falco, Datadog, ... <a title="What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers" class="read-more" href="https://linuxcent.com/what-is-ebpf-linux-kubernetes/" aria-label="Read more about What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers">Read more</a></p>
<p>The post <a href="https://linuxcent.com/what-is-ebpf-linux-kubernetes/">What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><!-- ============================================================
     LINUXCENT.COM — WordPress HTML Post
     Title : What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers
     Series: eBPF: From Kernel to Cloud — Episode 1
     Author: Vamshi Krishna Santhapuri
     Paste into: Gutenberg → Custom HTML block
                 OR Classic editor → Text tab
     Font  : WordPress system font stack — no external imports needed
     ============================================================ --></p>
<p><em>~1,900 words &middot; Reading time: 7 min &middot; Series: eBPF: From Kernel to Cloud, Episode 1 of 18</em></p>
<p>Your Linux kernel has had a technology built into it since 2014 that most engineers working with Linux every day have never looked at directly. You have almost certainly been using it &mdash; through Cilium, Falco, Datadog, or even systemd &mdash; without knowing it was there.</p>
<p>This post is the plain-English introduction to eBPF that I wished existed when I first encountered it. No kernel engineering background required. No bytecode, no BPF maps, no JIT compilation. Just a clear answer to the question every Linux admin and DevOps engineer eventually asks: <strong>what actually is eBPF, and why does it matter for the infrastructure I run every day?</strong></p>
<hr />
<h2>First: Forget the Name</h2>
<p>eBPF stands for <em>extended Berkeley Packet Filter</em>. The name is one of the most misleading in computing: it describes almost nothing about what the technology does today.</p>
<p>The original BPF was a 1992 mechanism for filtering network packets &mdash; the engine behind <code>tcpdump</code>. The extended version, introduced in Linux 3.18 (2014) and significantly matured through Linux 5.x, is a completely different technology. It is no longer just about packets. It is no longer just about filtering.</p>
<p>Forget the name. Here is what eBPF actually is:</p>
<blockquote>
<p>eBPF lets you run small, safe programs directly inside the Linux kernel &mdash; without writing a kernel module, without rebooting, and without modifying your applications.</p>
</blockquote>
<p>That is the complete definition. Everything else is implementation detail. The one-liner above is what matters for how you use it day to day.</p>
<hr />
<h2>What the Linux Kernel Can See That Nothing Else Can</h2>
<p>To understand why eBPF is significant, you need to understand what the Linux kernel already sees on every server and every Kubernetes node you run.</p>
<p>The kernel is the lowest layer of software on your machine. Every action that happens &mdash; every file opened, every process started, every network packet sent &mdash; passes through the kernel. That means it has a complete, real-time view of everything:</p>
<ul>
<li><strong>Every syscall</strong> &mdash; every <code>open()</code>, <code>execve()</code>, <code>connect()</code>, <code>write()</code> from every process in every container on the node, in real time</li>
<li><strong>Every network packet</strong> &mdash; source, destination, port, protocol, bytes, and latency for every pod-to-pod and pod-to-external connection</li>
<li><strong>Every process event</strong> &mdash; every fork, exec, and exit, including processes spawned inside containers that your container runtime never reports</li>
<li><strong>Every file access</strong> &mdash; which process opened which file, when, and with what permissions, across all workloads on the node simultaneously</li>
<li><strong>CPU and memory usage</strong> &mdash; per-process CPU time, function-level latency, and memory allocation patterns without profiling agents</li>
</ul>
<p>The kernel has always had this visibility. The problem was that there was no safe, practical way to access it without writing kernel modules &mdash; which are complex, kernel version-specific, and genuinely dangerous to run in production. eBPF is the safe, practical way to access it.</p>
<hr />
<h2>The Problem eBPF Solves &mdash; A Real Kubernetes Scenario</h2>
<p>Here is a situation every Kubernetes engineer has faced. A production pod starts behaving strangely &mdash; elevated CPU, slow responses, occasional connection failures. You want to understand what is happening at a low level: what syscalls is it making, what network connections is it opening, is something spawning unexpected processes?</p>
<h3>The old approaches and their problems</h3>
<p><strong>Restart the pod with a debug sidecar.</strong> You lose the current state immediately. The issue may not reproduce. You have modified the workload.</p>
<p><strong>Run strace inside the container via <code>kubectl exec</code>.</strong> strace uses ptrace, which adds 50&ndash;100% CPU overhead to the traced process and is unavailable in hardened containers. You are tracing one process at a time with no cluster-wide view.</p>
<p><strong>Poll <code>/proc</code> with a monitoring agent.</strong> Snapshot-based. Any event that happens between polls is invisible. A process that starts, does something, and exits between intervals is completely missed.</p>
<h3>The eBPF approach</h3>
<pre><code># Use a debug pod on the node — no changes to your workload
$ kubectl debug node/your-node -it --image=cilium/hubble-cli

# Real-time kernel events from every container on this node:
sys_enter_execve  pid=8821  comm=sh    args=["/bin/sh","-c","curl http://..."]
sys_enter_connect pid=8821  comm=curl  dst=203.0.113.42:443
sys_enter_openat  pid=8821  comm=curl  path=/etc/passwd

# Something inside the pod spawned a shell, made an outbound connection,
# and read /etc/passwd — all visible without touching the pod.</code></pre>
<p>Real-time visibility. No overhead on your workload. Nothing restarted. Nothing modified. That is what eBPF makes possible.</p>
<hr />
<h2>Tools You Are Probably Already Running on eBPF</h2>
<p>eBPF is not a standalone product &mdash; it is the foundation that many tools in the cloud-native ecosystem are built on. You may already be running eBPF on your nodes without thinking about it explicitly.</p>
<table>
<thead>
<tr>
<th>Tool</th>
<th>What eBPF does for it</th>
<th>Without eBPF</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Cilium</strong></td>
<td>Replaces kube-proxy and iptables with kernel-level packet routing. 2&ndash;3&times; faster at scale.</td>
<td>iptables rules &mdash; linear lookup, degrades with service count</td>
</tr>
<tr>
<td><strong>Falco</strong></td>
<td>Watches every syscall in every container for security rule violations. Sub-millisecond detection.</td>
<td>Kernel module (risky) or ptrace (high overhead)</td>
</tr>
<tr>
<td><strong>Tetragon</strong></td>
<td>Runtime security enforcement &mdash; can kill a process or drop a network packet at the kernel level.</td>
<td>No practical alternative at this detection speed</td>
</tr>
<tr>
<td><strong>Datadog Agent</strong></td>
<td>Network performance monitoring and universal service monitoring without application code changes.</td>
<td>Language-specific agents injected into application code</td>
</tr>
<tr>
<td><strong>systemd</strong></td>
<td>cgroup resource accounting and network traffic control on your Linux nodes.</td>
<td>Legacy cgroup v1 interfaces with limited visibility</td>
</tr>
</tbody>
</table>
<hr />
<h2>eBPF vs the Old Ways</h2>
<p>Before eBPF, getting deep visibility into a running Linux system meant choosing between three approaches, each with a significant trade-off:</p>
<table>
<thead>
<tr>
<th>Approach</th>
<th>Visibility</th>
<th>Cost</th>
<th>Production safe?</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Kernel modules</strong></td>
<td>Full kernel access</td>
<td>One bug = kernel panic. Version-specific, must recompile per kernel update.</td>
<td>No</td>
</tr>
<tr>
<td><strong>ptrace / strace</strong></td>
<td>One process at a time</td>
<td>50&ndash;100% CPU overhead on the traced process. Unusable in production.</td>
<td>No</td>
</tr>
<tr>
<td><strong>Polling /proc</strong></td>
<td>Snapshots only</td>
<td>Events between polls are invisible. Short-lived processes are missed entirely.</td>
<td>Partial</td>
</tr>
<tr>
<td><strong>eBPF</strong></td>
<td>Full kernel visibility</td>
<td>1&ndash;3% overhead. Verifier-guaranteed safety. Real-time stream, not polling.</td>
<td>Yes</td>
</tr>
</tbody>
</table>
<hr />
<h2>Is It Safe to Run in Production?</h2>
<p>This is always the first question from any experienced Linux admin, and it is exactly the right question to ask. The answer is yes &mdash; and the reason is the <strong>BPF verifier</strong>.</p>
<p>Before any eBPF program is allowed to run on your node, the Linux kernel runs it through a built-in static safety analyser. This analyser examines every possible execution path and asks: could this program crash the kernel, loop forever, or access memory it should not?</p>
<p>If the answer is yes &mdash; or even <em>maybe</em> &mdash; the program is rejected at load time. It never runs.</p>
<blockquote>
<p><strong>This is fundamentally different from kernel modules.</strong> A kernel module loads immediately with no safety check. If it has a bug, you find out at runtime &mdash; usually as a kernel panic. An eBPF program that would cause a panic is rejected before it ever loads. The safety guarantee is mathematical, not hopeful.</p>
</blockquote>
<p>Episode 2 of this series covers the BPF verifier in full: what it checks, how it makes Cilium and Falco safe on your production nodes, and what questions to ask eBPF tool vendors about their implementation.</p>
<hr />
<h2>Common Misconceptions</h2>
<p><strong>eBPF is not a specific tool or product.</strong> It is a kernel technology &mdash; a platform. Cilium, Falco, Tetragon, and Pixie are tools built on top of it. When a vendor says &ldquo;we use eBPF&rdquo;, they mean they build on this kernel capability, not that they share a single implementation.</p>
<p><strong>eBPF is not only for networking.</strong> The Berkeley Packet Filter name suggests networking, but modern eBPF covers security, observability, performance profiling, and tracing. The networking origin is historical, not a limitation.</p>
<p><strong>eBPF is not only for Kubernetes.</strong> It works on any Linux system running kernel 4.9+, including bare metal servers, Docker hosts, and VMs. K8s is the most popular deployment target because of the observability challenges at scale, but it is not a requirement.</p>
<p><strong>You do not need to write eBPF programs to benefit from eBPF.</strong> Most Linux admins and DevOps engineers will use eBPF through tools like Cilium, Falco, and Datadog &mdash; never writing a line of BPF code themselves. This series covers the writing side later. Understanding what eBPF is makes you a significantly better user of these tools today.</p>
<hr />
<h2>Kernel Version Requirements</h2>
<p>eBPF is a Linux kernel feature. The capabilities available depend directly on the kernel version running on your nodes. Run <code>uname -r</code> on any node to check.</p>
<table>
<thead>
<tr>
<th>Kernel</th>
<th>What becomes available</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>4.9+</code></td>
<td>Basic eBPF support. Tracing, socket filtering. Most production systems today meet this minimum.</td>
</tr>
<tr>
<td><code>5.4+</code></td>
<td>BTF (BPF Type Format) and CO-RE &mdash; programs that adapt to different kernel versions without recompile. Recommended minimum for production tooling.</td>
</tr>
<tr>
<td><code>5.8+</code></td>
<td>Ring buffers for high-performance event streaming. Global variables. The target kernel for Cilium, Falco, and Tetragon full feature support.</td>
</tr>
<tr>
<td><code>6.x</code></td>
<td>Open-coded iterators, improved verifier precision, LSM security enforcement hooks. Ubuntu 22.04 ships 5.15 and Amazon Linux 2023 ships 6.1+, so both are fully eBPF-ready.</td>
</tr>
</tbody>
</table>
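<p>Checking a node against these baselines with plain string comparison goes wrong quickly (&ldquo;5.10&rdquo; sorts before &ldquo;5.4&rdquo; lexically). A small sketch using <code>sort -V</code>, which compares version numbers correctly; the 5.8 baseline here is just an example taken from the table:</p>
<pre><code># Does the running kernel meet a given eBPF feature baseline?
min="5.8"                        # ring buffers / global variables row above
cur=$(uname -r | cut -d- -f1)    # strip the distro suffix, e.g. "-generic"
lowest=$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n 1)
if [ "$lowest" = "$min" ]; then
    echo "kernel $cur meets the $min baseline"
else
    echo "kernel $cur is below the $min baseline"
fi</code></pre>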
<blockquote>
<p><strong>EKS users:</strong> Amazon Linux 2023 AMIs ship with kernel 6.1+ and support the full modern eBPF feature set out of the box. If you are still on AL2, the migration also resolves the NetworkManager deprecation issues covered in the <a href="https://linuxcent.com/eks-1-33-networkmanager-systemd-networkd-migration-fix/">EKS 1.33 post</a>.</p>
</blockquote>
<hr />
<h2>The Bottom Line</h2>
<p>eBPF is the answer to a question Linux engineers have been asking for years: how do I get deep visibility into what is happening on my servers and Kubernetes nodes &mdash; without adding massive overhead, injecting sidecars, or risking a kernel panic?</p>
<p>The answer is: run small, safe programs at the kernel level, where everything is already visible. Let the BPF verifier guarantee those programs are safe before they run. Stream the results to your observability tools through shared memory maps.</p>
<p>The tools you already use &mdash; Cilium for networking, Falco for security, Datadog for APM &mdash; are built on this foundation. Understanding eBPF means understanding <em>why</em> those tools work the way they do, <em>what</em> they can and cannot see, and <em>how</em> to evaluate new tools that claim to use it.</p>
<blockquote>
<p>Every eBPF-based tool you run on your nodes passed through the BPF verifier before it touched your cluster. Episode 2 covers exactly what that means &mdash; and why it matters for your infrastructure decisions.</p>
</blockquote>
<hr />
<h2>Further Reading</h2>
<ul>
<li><a href="https://ebpf.io/what-is-ebpf/" target="_blank" rel="noopener noreferrer">ebpf.io &mdash; What is eBPF? (official introduction)</a></li>
<li><a href="https://docs.cilium.io/en/stable/concepts/ebpf/" target="_blank" rel="noopener noreferrer">Cilium documentation: eBPF dataplane explained</a></li>
<li><a href="https://falco.org/docs/event-sources/kernel/" target="_blank" rel="noopener noreferrer">Falco: kernel event sources and eBPF driver</a></li>
<li><a href="https://isovalent.com/blog/post/ebpf-documentary/" target="_blank" rel="noopener noreferrer">Isovalent: the story behind eBPF</a></li>
<li><a href="https://www.brendangregg.com/ebpf.html" target="_blank" rel="noopener noreferrer">Brendan Gregg&rsquo;s eBPF reference page</a></li>
</ul>
<hr />
<p><em>Questions or corrections? Reach me on <a href="https://www.linkedin.com/in/vamshikrishnasanthapuri/" target="_blank" rel="noopener noreferrer">LinkedIn</a>. If this was useful, the full series index is on <a href="https://linuxcent.com">linuxcent.com</a> &mdash; search the <strong>eBPF Series</strong> tag for all episodes.</em></p>
<p>The post <a href="https://linuxcent.com/what-is-ebpf-linux-kubernetes/">What Is eBPF? A Plain-English Guide for Linux and Kubernetes Engineers</a> appeared first on <a href="https://linuxcent.com">Linux Cent</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://linuxcent.com/what-is-ebpf-linux-kubernetes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1429</post-id>	</item>
	</channel>
</rss>
