Kubernetes Today: v1.33 to v1.35, In-Place Resize GA, and What Comes Next

Reading Time: 6 minutes


CISSP Domain Mapping

Domain | Relevance
Domain 3 — Security Architecture | Fine-grained node authorization; AppArmor stable; seccomp by default
Domain 4 — Communication Security | Gateway API v1.4: BackendTLSPolicy, mutual TLS between gateway and upstream
Domain 7 — Security Operations | Structured logging stable; audit pipeline maturity

Introduction

Ten years after the first commit, Kubernetes is not exciting in the way it was in 2015. That’s a compliment. The system is stable. The APIs are mature. The migrations — dockershim, PSP, cloud provider code — are behind us.

What the 1.33–1.35 cycle shows is a project focused on precision: removing edge cases, promoting long-running alpha features to stable, and making the scheduler, storage, and security model more correct rather than more powerful. That’s what a mature infrastructure platform looks like.

Here’s what happened and where the project is headed.


Kubernetes 1.33 — Sidecar Resize, In-Place Resize Beta (April 2025)

Code name: Octarine

In-Place Pod Vertical Scaling reaches Beta

After landing as alpha in 1.27, in-place pod resource resizing became beta in 1.33 — enabled by default via the InPlacePodVerticalScaling feature gate.

The capability: change CPU and memory requests/limits on a running container without terminating and restarting the pod.

# Resize a running container's CPU limit without restart
kubectl patch pod api-pod-xyz --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/requests/cpu",
    "value": "2"
  },
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/limits/cpu",
    "value": "4"
  }
]'

# Verify the resize was applied
kubectl get pod api-pod-xyz -o jsonpath='{.status.containerStatuses[0].resources}'

Why this matters operationally: Before in-place resize, vertical scaling meant terminating the pod, losing in-memory state, and waiting for a new pod to become ready. For databases with warm buffer pools, JVM applications with loaded heap caches, or any workload where startup cost is significant, this was a serious limitation. Vertical Pod Autoscaler (VPA) worked around it by restarting pods — acceptable for stateless workloads, problematic for stateful ones.

In 1.33, resizing also works for sidecar containers, combining in-place resize (beta in 1.33) with native sidecars (stable since 1.32).

Sidecar Containers — Full Maturity

This is the first release to formally combine the two: you can now vertically scale a service mesh proxy (an Envoy sidecar) without restarting the application pod. For high-traffic services where the proxy itself becomes the CPU bottleneck, this is directly actionable.
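Containers can also declare how they tolerate each resize through the resizePolicy field. A minimal sketch with illustrative names; CPU changes apply live while memory changes restart only that container:

```yaml
spec:
  initContainers:
  - name: envoy-proxy            # illustrative name
    image: envoyproxy/envoy:v1.30.0
    restartPolicy: Always        # native sidecar
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # apply CPU changes in place
    - resourceName: memory
      restartPolicy: RestartContainer # restart this container on memory change
    resources:
      requests: {cpu: "500m", memory: "256Mi"}
  containers:
  - name: application
    image: myapp:latest          # illustrative image
```

Note the two unrelated restartPolicy fields: the container-level one marks the init container as a sidecar, while the entries under resizePolicy control per-resource resize behavior.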


Gateway API v1.4 (October 2025)

Gateway API continued its rapid iteration with v1.4:

BackendTLSPolicy (Standard channel): Configure TLS between the gateway and the backend service — not just TLS termination at the gateway, but end-to-end encryption:

apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: api-backend-tls
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: api-service
  validation:
    caCertificateRefs:
    - name: internal-ca
      group: ""
      kind: ConfigMap
    hostname: api.internal.corp

Gateway Client Certificate Validation: The gateway can now validate client certificates — mutual TLS for ingress traffic, not just between services.

TLSRoute to Standard: TLS routing (based on SNI, not HTTP host headers) graduated to the standard channel — enabling TCP workloads with TLS passthrough through the Gateway API model.
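A TLSRoute sketch for SNI-based passthrough; the names are illustrative, and the apiVersion should match whichever version your installed Gateway API bundle serves:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: db-passthrough           # illustrative name
spec:
  parentRefs:
  - name: shared-gateway         # a Gateway with a TLS passthrough listener
  hostnames:
  - db.internal.corp             # matched against the client's SNI
  rules:
  - backendRefs:
    - name: postgres             # TLS terminates at the backend, not the gateway
      port: 5432
```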

ListenerSet: Group multiple Gateway listeners — useful for shared infrastructure where multiple teams need to attach routes to the same gateway without managing separate Gateway resources.


Kubernetes 1.34 — Scheduler Improvements, DRA Continues (August 2025)

The 1.34 release focused on the scheduler and Dynamic Resource Allocation:

DRA structured parameters stabilization: The Dynamic Resource Allocation API matured its parameter model — resource drivers can expose structured claims that the scheduler understands, enabling topology-aware placement of GPU workloads:

apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
      selectors:
      - cel:
          expression: device.attributes["nvidia.com/gpu-product"].string() == "A100-SXM4-80GB"
      count: 2

Scheduler QueueingHint stable: Plugins can now tell the scheduler when to re-queue a pod for scheduling — instead of the scheduler periodically retrying all unschedulable pods, plugins signal when relevant cluster state has changed. This significantly reduces scheduler CPU consumption in large clusters with many unschedulable pods.

Fine-grained node authorization improvements: Kubelets can now be restricted from accessing Service resources they don’t need — further reducing the blast radius of a compromised kubelet.


Kubernetes 1.35 — In-Place Resize GA, Memory Limits Unlocked (December 2025)

In-Place Pod Vertical Scaling Graduates to Stable

After alpha in 1.27 and beta in 1.33, in-place resize graduated to GA in 1.35. Two significant improvements accompanied GA:

Memory limit decreases now permitted: Previously, you could increase memory limits in-place but not decrease them. The restriction existed because the kernel does not immediately reclaim memory when a cgroup limit is lowered; dropping the limit below current usage would invite the OOM killer. 1.35 lifts this restriction with proper handling: the kernel is instructed to reclaim memory, and the pod status reflects the resize progress.

Pod-Level Resources (alpha in 1.35): Specify resource requests and limits at the pod level rather than per-container — with in-place resize support. Useful for init containers and sidecar patterns where total pod resources matter more than per-container allocation.

spec:
  # Pod-level resources (alpha) — total budget for all containers
  resources:
    requests:
      cpu: "4"
      memory: "8Gi"
  initContainers:
  - name: log-collector
    image: fluentbit:latest
    restartPolicy: Always  # native sidecar (restartPolicy: Always is only valid on init containers)
  containers:
  - name: application
    image: myapp:latest
    # No per-container resources; the pod-level budget applies

Other 1.35 Highlights

Topology Spread Constraints improvements: Better handling of unschedulable scenarios — whenUnsatisfiable: ScheduleAnyway now has smarter fallback behavior.
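As a refresher, a spread constraint with the ScheduleAnyway fallback looks like this (labels and keys are illustrative):

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway   # soft constraint: place the pod anyway rather than leave it Pending
    labelSelector:
      matchLabels:
        app: api
```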

VolumeAttributesClass stable: Change storage performance characteristics (IOPS, throughput) of a PersistentVolume without re-provisioning — the storage equivalent of in-place pod resize.

# Change volume IOPS without re-provisioning
kubectl patch pvc database-pvc --type='merge' -p='
  {"spec": {"volumeAttributesClassName": "high-performance"}}'
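The class itself is defined by the storage admin. A sketch, assuming the GA API group and an EBS-style CSI driver; the parameter keys are driver-specific:

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: high-performance
driverName: ebs.csi.aws.com      # illustrative driver
parameters:                      # interpreted by the CSI driver, not by Kubernetes
  iops: "16000"
  throughput: "1000"
```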

Job success policy improvements: Declare a Job successful when a subset of pods complete successfully — for distributed training jobs where not all workers need to finish.
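For an Indexed Job, the success policy is expressed as rules. A sketch with illustrative names; here the Job succeeds once the leader (index 0) completes, regardless of the other workers:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-run             # illustrative name
spec:
  completionMode: Indexed        # success policies require indexed completion
  completions: 8
  parallelism: 8
  successPolicy:
    rules:
    - succeededIndexes: "0"      # the leader finishing marks the Job successful
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: trainer:latest    # illustrative image
```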


What’s in Kubernetes 1.36 (April 22, 2026)

Kubernetes 1.36 is scheduled for release on April 22, 2026. Based on the enhancement tracking and KEP (Kubernetes Enhancement Proposal) pipeline, expected highlights include:

  • DRA continuing toward stable
  • Pod-level resources moving to beta
  • Scheduler improvements for AI/ML workload placement
  • Further Gateway API integration as core networking model

The project has reached a rhythm: three releases per year, each focused on advancing a predictable set of features through alpha → beta → stable. The drama of the 2019–2022 period (PSP, dockershim, API removals) is behind it.


The State of the Ecosystem in 2026

Control Plane Deployment Models

Model | Examples | Best For
Managed (cloud provider) | GKE, EKS, AKS | Most organizations; no control plane ops
Self-managed | kubeadm, k3s, Talos | Air-gapped, on-prem, specific compliance requirements
Managed (platform) | Rancher, OpenShift | Enterprises that need multi-cluster management + vendor support

CNI Landscape

CNI | Model | Notable Feature
Cilium | eBPF | kube-proxy replacement, network policy in kernel, Hubble observability
Calico | eBPF or iptables | BGP-based networking, hybrid cloud routing
Flannel | VXLAN/host-gw | Simple, low overhead, no network policy
Weave | Mesh overlay | Easy multi-host setup

eBPF-based CNIs (Cilium, Calico in eBPF mode) are now the default recommendation for production clusters. The iptables era of Kubernetes networking is ending.

Security Stack in 2026

A hardened Kubernetes cluster in 2026 runs:

Cluster provisioning:    Cluster API + GitOps (Flux/ArgoCD)
Admission control:       Pod Security Admission (restricted) + Kyverno or OPA/Gatekeeper
Runtime security:        Falco (eBPF-based syscall monitoring)
Network security:        Cilium NetworkPolicy + Cilium Cluster Mesh for multi-cluster
Image security:          Cosign signing in CI + admission webhook for signature verification
Secret management:       External Secrets Operator → HashiCorp Vault or cloud KMS
Observability:           Prometheus + Grafana + Hubble (network flows) + OpenTelemetry

The Permanent Principles That Haven’t Changed

Looking across twelve years and 35 minor versions, some things have not changed:

The API as the universal interface: Everything in Kubernetes is a resource. This remains the most important architectural decision — it makes every tool, every controller, every GitOps system work with the same model.

Reconciliation loops: Every Kubernetes controller watches actual state and drives it toward desired state. The controller pattern from 2014 is unchanged. CRDs and Operators are just more instances of it.

Labels and selectors: The flexible grouping mechanism from 1.0 is still the primary way Kubernetes components find each other. Services find pods. HPA finds Deployments. Operators find their managed resources.

Declarative, not imperative: You describe what you want. Kubernetes figures out how to achieve and maintain it. This principle, inherited from Borg’s BCL configuration, underlies everything from Deployments to Crossplane’s cloud resource management.


What’s Coming: The Next Five Years

WebAssembly on Kubernetes: The Wasm ecosystem (wasmCloud, SpinKube) is building toward running WebAssembly workloads as first-class Kubernetes pods — near-native performance, smaller images, stronger isolation than containers. Still early, but gaining real adoption.

AI inference as infrastructure: LLM serving is becoming a cluster primitive. Tools like KServe and vLLM on Kubernetes are moving from research to production. The scheduler, resource model, and networking will continue adapting to inference workload patterns.

Confidential computing: AMD SEV, Intel TDX, and ARM CCA provide hardware-level memory encryption for pods. The RuntimeClass mechanism and ongoing kernel work are making confidential Kubernetes workloads operational rather than experimental.

Leaner distributions: k3s, k0s, Talos, and Flatcar-based minimal Kubernetes distributions are growing in adoption for edge, IoT, and resource-constrained environments. The pressure is toward smaller, more auditable control planes.


Key Takeaways

  • In-place pod vertical scaling went from alpha (1.27) to stable (1.35) — live CPU and memory resize without pod restart changes the economics of stateful workload management
  • Gateway API v1.4 completes the ingress replacement story: BackendTLSPolicy, client certificate validation, and TLSRoute in standard channel
  • VolumeAttributesClass stable (1.35): Change storage performance in-place — the storage parallel to pod resource resize
  • The eBPF era of Kubernetes networking is established: Cilium as default CNI in GKE, growing in EKS/AKS, replacing iptables-based kube-proxy
  • The Kubernetes project in 2026 is focused on precision — promoting mature features to stable, reducing edge cases, improving scheduler efficiency — not adding new abstractions
  • WebAssembly, confidential computing, and AI inference scheduling are the frontiers to watch

Series Wrap-Up

Era | Defining Change
2003–2014 | Borg and Omega build the playbook internally at Google
2014–2016 | Kubernetes 1.0, CNCF, and winning the container orchestration wars
2016–2018 | RBAC stable, CRDs, cloud providers all-in on managed K8s
2018–2020 | Operators, service mesh, OPA/Gatekeeper — the extensibility era
2020–2022 | Supply chain crisis, PSP deprecated, API removals, dockershim exit
2022–2023 | Dockershim and PSP removed, eBPF networking takes over
2023–2025 | GitOps standard, sidecar stable, DRA, AI/ML workloads
2025–2026 | In-place resize GA, VolumeAttributesClass, Gateway API complete

From 47,501 lines of Go in a 250-file GitHub commit to the operating system of the cloud — and still reconciling.



Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Platform Engineering Era: GitOps, AI Workloads, and Leaner Kubernetes (2023–2025)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain | Relevance
Domain 3 — Security Architecture | Cluster API for declarative cluster lifecycle; Crossplane for cloud resource governance
Domain 7 — Security Operations | GitOps audit trail as compliance evidence; ArgoCD + Flux for policy-driven deployment
Domain 8 — Software Security | AI/ML workload supply chain; GPU operator image provenance

Introduction

By 2023, the question had shifted from “how do we run Kubernetes?” to “how do we let other engineers run their workloads on Kubernetes without becoming a bottleneck?”

This is the platform engineering problem. And it drove the tooling that defined 2023–2025: GitOps as the deployment standard, Cluster API for Kubernetes-on-Kubernetes provisioning, AI/ML workloads forcing new scheduling capabilities, and the Kubernetes project itself shedding more weight to become faster to release and operate.


GitOps: Principle Becomes Practice

GitOps as a term was coined by Weaveworks in 2017. By 2023, it was no longer a debate — it was the default deployment model for organizations running Kubernetes at scale.

The principle: the desired state of your cluster lives in Git. A controller watches the repository and reconciles the cluster state to match. Every deployment is a PR merge. The audit trail is the Git history.

Flux v2 and ArgoCD (both CNCF graduated) became the two dominant implementations:

# Flux: GitRepository + Kustomization
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: production-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/org/k8s-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production-apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/production
  prune: true          # Remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: production-config
  healthChecks:
  - apiVersion: apps/v1
    kind: Deployment
    name: api
    namespace: production

The prune: true behavior is critical: resources that Flux once applied from Git and that have since been deleted from Git are removed from the cluster. This is what makes GitOps a security control — declared state is the only state that persists. No more accumulation of forgotten test deployments or configuration changes that outlive the engineer who made them.

ArgoCD’s Application model added a UI, synchronization policies, and multi-cluster management:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-api
  namespace: argocd
spec:
  project: production
  source:
    repoURL: https://github.com/org/apps
    targetRevision: HEAD
    path: api/production
  destination:
    server: https://kubernetes.default.svc
    namespace: api
  syncPolicy:
    automated:
      prune: true
      selfHeal: true    # Revert manual kubectl changes
    syncOptions:
    - CreateNamespace=true

The selfHeal: true option is where GitOps becomes enforceable: any manual change made with kubectl is automatically reverted within the sync interval. For compliance-sensitive environments, this is a configuration drift prevention control.


Cluster API: Kubernetes Managing Kubernetes

Cluster API (kubernetes-sigs/cluster-api) flipped the usual model: instead of using tools like Terraform or Ansible to provision Kubernetes clusters, Cluster API lets you manage Kubernetes clusters as Kubernetes resources — using a management cluster to provision and manage workload clusters.

# Create a new Kubernetes cluster as a Kubernetes resource
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-cluster-prod
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: workload-cluster-prod
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: workload-cluster-prod-control-plane

Cluster API reconciliation handles cluster provisioning, scaling, upgrades, and deletion — all through the Kubernetes API, with all the tooling (RBAC, audit logging, GitOps integration) that entails. Multi-cluster platform teams could now manage hundreds of workload clusters from a single management cluster.


Kubernetes 1.28 — Sidecar Containers Alpha (August 2023)

Sidecar containers had been a Kubernetes pattern since 2015 — a helper container in the same pod as the main application. But there was no native sidecar lifecycle management. Sidecars were just regular init containers or additional containers, which meant:
– Init container sidecars ran before the application and had to block until they succeeded
– Regular container sidecars had no ordering guarantees at startup
– At pod termination, sidecars could die before the application finished draining

1.28 introduced native sidecar support: a restartPolicy field on individual init containers:

spec:
  initContainers:
  - name: log-collector
    image: fluentbit:latest
    restartPolicy: Always    # This makes it a sidecar
    # Starts before main containers, stays running, stops after main containers exit
  containers:
  - name: application
    image: myapp:latest

A sidecar container (init container with restartPolicy: Always):
– Starts before application containers
– Stays running throughout the pod lifecycle
– Terminates automatically after all main containers exit
– Restarts if it crashes (unlike regular init containers)

This solved the service mesh sidecar problem: Istio and Linkerd injected Envoy proxies as regular containers, leading to race conditions where the proxy hadn’t started when the application tried to make outbound connections. Native sidecar lifecycle guarantees the proxy is ready before the application starts.

Also in 1.28:
Retroactive default StorageClass assignment: Existing PVCs without a StorageClass assignment get the default applied retroactively — useful for migrations
Non-graceful node shutdown stable: Handle node power failures without manual pod cleanup
Recovery from volume expansion failure: Previously, a failed volume expansion left the PVC in a broken state; 1.28 introduced a mechanism to recover


AI/ML Workloads Force New Kubernetes Capabilities

The LLM wave of 2023 drove GPU workloads onto Kubernetes at a scale and urgency the project hadn’t anticipated. Running LLM inference on Kubernetes required solving problems that CPU-centric cluster scheduling hadn’t encountered:

GPU topology awareness: Inference across multiple GPUs requires GPUs connected by NVLink or on the same PCIe switch, not arbitrary GPUs from different nodes or different PCIe buses. The Dynamic Resource Allocation API (1.26 alpha) was designed exactly for this.

Fractional GPU allocation: NVIDIA’s time-slicing and MIG (Multi-Instance GPU) allow multiple pods to share a single GPU. The GPU operator (NVIDIA) manages this at the node level:

# Check GPU resources visible to Kubernetes
kubectl get nodes -o custom-columns=\
  "NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
# NODE       GPU
# gpu-node-1   8
# gpu-node-2   8

Batch scheduling for training jobs: Training runs require all workers to start simultaneously — a single missing GPU makes the entire job stall. The Kubernetes Job API doesn’t guarantee this. Projects like Volcano (CNCF incubating) and Kueue (Kubernetes SIG Scheduling) added gang scheduling: a job only starts when all requested resources are available.

# Kueue: queue AI training jobs with resource quotas
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: gpu-queue
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["nvidia.com/gpu", "cpu", "memory"]
    flavors:
    - name: a100-80gb
      resources:
      - name: nvidia.com/gpu
        nominalQuota: 16

Kubernetes 1.29 — Sidecar to Beta, Load Balancer IP Mode (December 2023)

  • Sidecar containers beta: The lifecycle semantics were refined based on 1.28 alpha feedback
  • Load balancer IP mode alpha: Distinguish between load balancers that use virtual IPs (kube-proxy handles the traffic) vs. those that handle traffic directly (no need for kube-proxy rules) — important for eBPF-based load balancers
  • ReadWriteOncePod volume access stable

Kubernetes 1.30 — Structured Authorization Config (April 2024)

  • Structured authorization configuration beta: Define multiple authorization webhooks with explicit ordering, failure modes, and connection settings — replacing the flat --authorization-mode flag
  • Sidecar containers beta continues
  • Node memory swap support beta: Allow pods to use swap memory — controversial but necessary for workloads with bursty memory patterns that prefer using swap over OOM kill
# Node with swap enabled — kubelet config
kind: KubeletConfiguration
memorySwap:
  swapBehavior: LimitedSwap

The swap support feature reversed a long-standing Kubernetes hard stance: swap had been disabled since 1.0 because its interaction with Kubernetes memory accounting was unpredictable. The 1.30 approach adds proper accounting and policies.


Kubernetes 1.31 — Cloud Provider Code Removal Complete (August 2024)

1.31 marked the completion of the cloud provider code removal — the 1.5 million line migration that had been running since 1.26. Core binaries are 40% smaller. The API server, controller manager, and scheduler no longer contain vendor-specific code.

Also in 1.31:
Persistent Volume health monitor stable
AppArmor support stable: AppArmor profiles for pods using the native Kubernetes field (not annotations)
Traffic distribution for Services beta: Express topology preferences for Service routing (prefer local node, prefer same zone)

# Traffic distribution: prefer endpoints in the same zone
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  trafficDistribution: PreferClose
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080

Kubernetes 1.32 — Sidecar Stable, DRA Beta (December 2024)

  • Sidecar containers stable: After nearly a decade of workarounds, the sidecar pattern is a first-class Kubernetes primitive
  • Dynamic Resource Allocation beta: GPU and specialized hardware scheduling ready for production evaluation
  • Job API improvements: Success and failure policies for indexed jobs — granular control over batch workload behavior
  • Custom Resource field selectors: Filter custom resources on fields the CRD declares as selectable — making large CRD-based systems more efficient to query

Crossplane: Kubernetes as the Control Plane for Everything

Crossplane (CNCF graduated) extended the Kubernetes API model beyond the cluster itself. Using CRDs and controllers, Crossplane lets you manage cloud resources (RDS databases, S3 buckets, VPCs, IAM roles) as Kubernetes resources — provisioned, updated, and deleted through the Kubernetes API.

# Crossplane: provision an RDS PostgreSQL instance as a Kubernetes resource
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: production-db
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.r6g.xlarge
    masterUsername: admin
    engine: postgres
    engineVersion: "15"
    allocatedStorage: 100
    multiAZ: true
  writeConnectionSecretToRef:
    name: production-db-credentials
    namespace: production

For platform teams, Crossplane means a single control plane — the Kubernetes API — for both compute workloads and cloud infrastructure. GitOps tools (Flux, ArgoCD) manage both.


Key Takeaways

  • GitOps (Flux, ArgoCD) became the production deployment standard — not for ideological reasons, but because the audit trail, drift detection, and self-healing properties solve real operational and compliance problems
  • Cluster API made Kubernetes cluster lifecycle (provisioning, upgrades, deletion) a Kubernetes-native operation — the same API, tooling, and audit trail
  • Native sidecar containers (1.28 alpha → 1.32 stable) finally resolved the lifecycle ordering problem that service meshes and log collectors had worked around for years
  • AI/ML workloads drove new scheduling capabilities (DRA, gang scheduling via Kueue/Volcano) and made GPU topology awareness a first-class concern
  • Crossplane generalized the Kubernetes API model to cloud infrastructure — the cluster is now a control plane for everything, not just containers


Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Runtime Reckoning: Dockershim Out, eBPF In, and PSP Finally Dies (2022–2023)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain | Relevance
Domain 3 — Security Architecture | PSP removal; Pod Security Admission GA; confidential computing via RuntimeClass
Domain 4 — Communication Security | eBPF-based CNI (Cilium) replacing iptables; kernel-level network policy enforcement
Domain 7 — Security Operations | eBPF for runtime observability without privileged sidecars

Introduction

2022 was the year Kubernetes dealt with its legacy. The Docker shim that everyone had been warned about for two years was actually removed. PodSecurityPolicy — the broken security primitive that clusters had depended on since 1.3 — was deleted. And eBPF started displacing iptables as the networking substrate.

These weren’t additions to Kubernetes. They were the removal of technical debt accumulated over eight years. And the migrations they forced were the most operationally significant events since RBAC went stable.


Kubernetes 1.24 — Dockershim Removed (May 2022)

The dockershim was removed in 1.24. The deprecation had been announced in 1.20 (December 2020) — 18 months of warning. It didn’t matter. Operators who hadn’t migrated still scrambled.

The actual migration was straightforward for most environments:

# On each node, before upgrading to 1.24:
# 1. Install containerd
apt-get install -y containerd.io

# 2. Configure containerd
containerd config default | tee /etc/containerd/config.toml
# Edit: set SystemdCgroup = true in runc options

# 3. Update kubelet to use containerd socket
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Add: --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# 4. Restart
systemctl daemon-reload && systemctl restart kubelet

What the migration revealed was how many teams were depending on the Docker socket being present on nodes. Tools that mounted /var/run/docker.sock to talk to the Docker daemon — build tools, CI agents, some monitoring agents — broke. The ecosystem had to adapt to nerdctl (containerd’s Docker-compatible CLI), Kaniko, Buildah, or mounting the containerd socket instead.

Other 1.24 highlights:
Beta APIs disabled by default: New beta features would no longer be enabled automatically. This reversed a long-standing policy that had caused too many production clusters to accidentally pick up unstable features
gRPC probes stable: Liveness and readiness probes could now use gRPC health checks natively — no more writing HTTP wrapper endpoints for gRPC services
Non-graceful node shutdown alpha: Handle the case where the node disappears without the kubelet getting to gracefully terminate pods — stateful workloads on node failure
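The probe is declared directly on the container, assuming the server implements the standard grpc.health.v1 Health service (names illustrative):

```yaml
spec:
  containers:
  - name: grpc-api               # illustrative name
    image: myservice:latest      # illustrative image
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090               # kubelet speaks the gRPC health-check protocol directly
      initialDelaySeconds: 5
      periodSeconds: 10
```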


Kubernetes 1.25 — PSP Removed (August 2022)

PodSecurityPolicy was deleted in 1.25. Every cluster that was still using PSP had to migrate to Pod Security Admission (or OPA/Gatekeeper or Kyverno) before upgrading.

Pod Security Admission was GA in 1.25, ready to take over:

# Enforce restricted policy on a namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.25

# Test a pod against the policy without enforcing
kubectl label namespace staging \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted

The dry-run modes (warn, audit) were critical for migration: you could enable them on namespaces and watch what would have been rejected before switching to enforce mode.

The real migration challenge was existing workloads running as root, with privileged security contexts, or with hostPath mounts. The restricted policy rejected all of these. Production applications that had been running for years under permissive PSP policies now failed validation.

Also in 1.25:
Ephemeral containers stable: Attach a debug container to a running pod without restarting it

# Debug a running pod with no shell
kubectl debug -it nginx-pod --image=busybox:latest --target=nginx
  • CSI ephemeral volumes stable
  • cgroups v2 (unified hierarchy) support stable: Enables memory QoS, improved resource accounting

Kubernetes 1.26 — Structured Parameter Scheduling, Storage (December 2022)

1.26 focused on the scheduler and storage:
Dynamic Resource Allocation alpha: A generalization of the device plugin API — allows requesting complex resources (GPUs, FPGAs, network adapters) with scheduling constraints. The foundation for AI/ML workload scheduling on heterogeneous hardware
CrossNamespacePVCDataSource beta: Clone a PVC across namespaces — enables namespace-based data isolation while sharing data sets
Pod scheduling readiness alpha: A pod can declare that it’s not ready to be scheduled until external conditions are met (data pre-loading complete, license validated, etc.)
Removal of in-tree cloud provider code (beta, continued): A long-running effort to move cloud-provider-specific code out of the core Kubernetes binary
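Scheduling readiness is expressed as scheduling gates on the pod spec; the gate name below is hypothetical, and an external controller removes it once the condition is satisfied:

```yaml
spec:
  schedulingGates:
  - name: example.com/data-preload   # hypothetical gate; pod stays SchedulingGated until it is removed
  containers:
  - name: trainer
    image: trainer:latest            # illustrative image
```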

The Dynamic Resource Allocation feature deserves emphasis: it’s the mechanism that makes Kubernetes a serious platform for GPU scheduling in AI/ML workloads. Device plugins (the prior mechanism) had limitations — a pod either got a GPU or it didn’t. DRA allows richer resource semantics: this pod needs two GPUs on the same PCIe bus, or this pod needs a specific GPU model.


eBPF Reshapes Kubernetes Networking

The most significant architectural shift in Kubernetes networking during 2022–2023 wasn’t a Kubernetes release feature. It was the adoption of eBPF-based CNI solutions — primarily Cilium — as the default networking layer in major managed Kubernetes offerings.

The iptables problem: kube-proxy has been using iptables rules to implement Service routing since Kubernetes 1.0. Every Service adds iptables rules to every node. At 10,000 services, the iptables rule table on each node has hundreds of thousands of rules. Traversing these rules on every packet is O(n). Updating them requires locking and flushing. At scale, iptables becomes a bottleneck.

The eBPF solution: Cilium replaces kube-proxy entirely, implementing Service routing using eBPF maps — hash tables in kernel memory. Service lookup is O(1). Rule updates don’t require locking. Network policy enforcement happens in the kernel, before packets even reach the application.

# Check if Cilium is running in kube-proxy replacement mode
cilium status | grep "KubeProxy replacement"
# KubeProxy replacement:    True

# eBPF-based service map — inspect directly
cilium service list
# ID   Frontend          Service Type   Backend
# 1    10.96.0.1:443     ClusterIP      10.0.0.5:6443
# 2    10.96.0.10:53     ClusterIP      10.0.1.2:53, 10.0.1.3:53

Network policy enforcement: Cilium’s NetworkPolicy implementation enforces rules at the eBPF layer — packets that would be dropped by policy are dropped before they ever leave the kernel, before they touch the pod’s network stack. This is both faster and more secure than userspace enforcement.
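Cilium also extends the policy model beyond the core NetworkPolicy resource with its own CRD. As a sketch — the policy name, apps, and port below are illustrative — a CiliumNetworkPolicy restricting ingress to an API workload might look like:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  # Pods this policy applies to
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  # Only pods labeled app=frontend may connect
  - fromEndpoints:
    - matchLabels:
        app: frontend
    # And only on TCP 8080 — everything else is dropped in-kernel
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```

Because enforcement happens in eBPF, denied packets never reach the pod's network stack at all.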

Hubble: Cilium’s observability layer — built on the same eBPF probes — provides real-time network flow visibility, HTTP layer observability (which service called which endpoint, response codes), and DNS query logging without any application changes.

Major adoption milestones:
– GKE’s default CNI became Cilium (Dataplane V2) in 2021
– Amazon EKS added Cilium support
– Azure AKS enabled Cilium-based networking
– Google’s Autopilot clusters use Cilium exclusively


Kubernetes 1.27 — Graceful Failure, In-Place Resize Alpha (April 2023)

  • In-Place Pod Vertical Scaling alpha: Change the CPU and memory resources of a running container without restarting the pod. For databases, JVM-based applications, and anything with warm caches, live resizing is a significant operational improvement
# Resize a container's CPU without restart
kubectl patch pod database-pod --type='json' \
  -p='[{"op": "replace", "path": "/spec/containers/0/resources/requests/cpu", "value": "2"}]'
  • SeccompDefault stable: Enable the default seccomp profile (RuntimeDefault) cluster-wide — a meaningful reduction in the default syscall attack surface for all pods
  • Mutable scheduling directives for Jobs stable: Change node affinity and tolerations of pending (not yet running) Job pods
  • ReadWriteOncePod PersistentVolume access mode beta: A volume can only be mounted by a single pod at a time — the correct semantic for databases with file-level locking requirements (the mode reached GA in 1.29)
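The ReadWriteOncePod mode is requested on the PVC like any other access mode. A minimal sketch — the claim name, size, and storage class are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  # Unlike ReadWriteOnce (one *node*), this guarantees one *pod*
  - ReadWriteOncePod
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast-ssd   # illustrative class name
```

A second pod referencing this claim fails scheduling while the first pod has it mounted.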

The 1.5 Million Lines Removed: Cloud Provider Code Migration

One of the largest ongoing engineering efforts in Kubernetes 1.26–1.31 was the removal of in-tree cloud provider code. Every major cloud provider (AWS, Azure, GCP, OpenStack, vSphere) had code compiled directly into the Kubernetes control plane binaries.

The result: the Kubernetes API server and controller manager binaries contained code for AWS EBS volumes, GCE persistent disks, Azure managed disks, OpenStack Cinder — regardless of which cloud you were running on.

The migration moved this code to external Cloud Controller Managers (CCM) — separate processes that communicate with the API server like any other controller:

Before: kube-controller-manager (monolithic, includes all cloud providers)
After:  kube-controller-manager (generic) + cloud-controller-manager (cloud-specific, external)

By 1.31, approximately 1.5 million lines of code had been removed from the core binaries, shrinking their size by roughly 40%. This is the largest refactor in Kubernetes history.


Gateway API: Replacing Ingress (2022–2023)

The Ingress API, which graduated to stable in 1.19, has fundamental limitations:
– No support for TCP/UDP routing (HTTP only)
– No traffic splitting between multiple backends
– No header-based routing
– Vendor-specific features implemented via annotations (not portable)
– No RBAC granularity within a single Ingress resource

Gateway API (kubernetes-sigs/gateway-api) was designed as the successor, with a role-based model:

GatewayClass  → Managed by infrastructure provider (cluster admin)
Gateway       → Managed by cluster operators
HTTPRoute     → Managed by application developers
# Gateway — cluster operator configures the load balancer
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: tls-cert

---
# HTTPRoute — application team configures routing
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: production-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/v2
    backendRefs:
    - name: api-v2-service
      port: 8080
      weight: 90
    - name: api-v3-canary
      port: 8080
      weight: 10

Gateway API reached GA (v1.0) in October 2023, with the core HTTPRoute, Gateway, and GatewayClass resources graduating to stable.


Key Takeaways

  • Dockershim removal in 1.24 completed the CRI migration that started in 1.5 — the Kubernetes runtime interface is now clean, with containerd and CRI-O as the standard runtimes
  • PSP removal in 1.25 forced a migration that should have happened years earlier; Pod Security Admission’s simplicity is a feature, not a limitation
  • eBPF-based networking (Cilium, Dataplane V2) is now the default in GKE and increasingly in EKS and AKS — O(1) service routing and kernel-level policy enforcement replace the iptables approach that dated to Kubernetes 1.0
  • Dynamic Resource Allocation (1.26 alpha) is the foundation for AI/ML GPU scheduling — more capable than device plugins and designed for heterogeneous hardware requests
  • Gateway API reaching GA replaced the annotation-driven, non-portable Ingress API with a role-oriented, extensible routing API
  • The cloud provider code removal (1.5M lines) is the largest refactor in Kubernetes history, a prerequisite for a maintainable, leaner core

What’s Next

← EP05: Security Hardens | EP07: Platform Engineering Era →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Security Hardens: Supply Chain, Pod Security, and the API Cleanup (2020–2022)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain Relevance
Domain 3 — Security Architecture Pod Security Standards replace PSP; namespace-level security profiles
Domain 7 — Security Operations Supply chain security: Sigstore, image signing, SBOM; SolarWinds context
Domain 8 — Software Security Container image provenance; admission-time image verification

Introduction

The 2020–2022 period redefined what “secure Kubernetes” meant. A global pandemic moved workloads to cloud-native infrastructure faster than security practices could follow. SolarWinds happened. Log4Shell happened. The software supply chain became a crisis.

At the same time, the Kubernetes project was doing something it had been reluctant to do: removing APIs and features, including PodSecurityPolicy — the primary security primitive that most enterprise clusters depended on. The replacement was simpler, but the migration was not.


Kubernetes 1.19 — LTS Behavior, Ingress Stable (August 2020)

1.19 extended the support window to one year (from nine months). This was an acknowledgment that enterprise organizations couldn’t upgrade four times per year — a common complaint from operations teams.

  • Ingress graduated to stable: networking.k8s.io/v1 — after years as a beta resource, Ingress finally had a stable API
  • Immutable ConfigMaps and Secrets to beta: Configuration protection becomes broadly available
  • EndpointSlices become the default: The replacement for Endpoints — shards pod-to-service mappings to avoid the single large Endpoints object that caused control plane stress at scale (10,000+ endpoints for a single service); the API itself reached GA in 1.21
  • Structured logging (alpha): Machine-parseable log output from Kubernetes control plane components — a prerequisite for reliable SIEM integration
# EndpointSlice: distributed representation of service endpoints
kubectl get endpointslices -n production -l kubernetes.io/service-name=api-service
NAME                  ADDRESSTYPE   PORTS   ENDPOINTS                                   AGE
api-service-abc12     IPv4          8080    10.0.1.5,10.0.1.6,10.0.1.7 + 47 more...   2d
api-service-def34     IPv4          8080    10.0.2.1,10.0.2.2,10.0.2.3 + 47 more...   2d

Kubernetes 1.20 — Dockershim Deprecated (December 2020)

The announcement in 1.20 that the Docker shim was deprecated caused more panic than any previous Kubernetes deprecation. The message was widely misread as “Kubernetes is dropping Docker support” — the public-relations fallout required the Kubernetes blog to publish a dedicated clarification post (“Don’t Panic: Kubernetes and Docker”).

The reality: Docker-built images continued to work on Kubernetes. What was being removed was the code in the kubelet that talked directly to Docker’s daemon using a non-standard interface, rather than through the Container Runtime Interface (CRI). Docker images conform to the OCI (Open Container Initiative) image specification — they run on any CRI-compliant runtime.

The migration path:
containerd: The runtime that Docker itself used internally. Moving to containerd meant removing the Docker layer entirely — the kubelet talks directly to containerd via CRI
CRI-O: An OCI-focused runtime designed specifically for Kubernetes, minimal and purpose-built

# Before (Docker socket): kubelet → dockershim → Docker daemon → containerd → runc
# After (direct CRI):     kubelet → containerd → runc
#                    or:  kubelet → CRI-O → runc

# Check runtime in use on a node
kubectl get node worker-1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# containerd://1.6.4

Also in 1.20:
API Priority and Fairness beta: Rate-limit API server requests by priority — prevents a runaway controller from starving other API clients
Volume snapshots stable: Point-in-time snapshot operations for CSI volumes graduate to GA
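API Priority and Fairness is configured through FlowSchema and PriorityLevelConfiguration objects. A rough sketch, using the API version from the 1.20 beta — the schema name and service account are hypothetical, while workload-low is one of the built-in priority levels:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: limit-noisy-controller
spec:
  # Requests matching this schema share the workload-low queue
  priorityLevelConfiguration:
    name: workload-low
  # Lower numbers are matched first
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: noisy-controller   # illustrative
        namespace: default
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```

With this in place, a misbehaving controller queues behind higher-priority traffic instead of starving it.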


The SolarWinds Context (December 2020)

The SolarWinds supply chain attack, disclosed in December 2020, didn’t directly target Kubernetes. But it accelerated an existing conversation in the cloud-native community: if the build pipeline is compromised, signed binaries mean nothing. If the image registry is compromised, admission control on image names means nothing.

The attack catalyzed work on several fronts:
Sigstore: An open-source project (Google, Red Hat, Purdue University) for signing and verifying software artifacts including container images
SLSA (Supply chain Levels for Software Artifacts): A framework for incrementally improving supply chain security, from basic build provenance to hermetic builds with verified dependencies
SBOM (Software Bill of Materials): A machine-readable inventory of software components in an image — required by US Executive Order 14028 (May 2021) for software sold to the federal government


Kubernetes 1.21 — PodSecurityPolicy Deprecation (April 2021)

PodSecurityPolicy was deprecated in 1.21, with removal scheduled for 1.25. The deprecation was contentious — PSP was the only built-in mechanism for enforcing pod security constraints, and every security-conscious cluster depended on it, despite its many flaws.

The replacement approach: Pod Security Standards — three predefined security profiles:

Profile Description Use Case
Privileged No restrictions System-level workloads, trusted components
Baseline Prevents known privilege escalations General application workloads
Restricted Hardened; follows current best practices High-security workloads

Other 1.21 highlights:
CronJobs stable
Immutable ConfigMaps and Secrets stable
Graceful node shutdown beta: The kubelet gracefully terminates pods when a node shuts down (not just when the kubelet stops)
PodDisruptionBudget stable


Kubernetes 1.22 — The Great API Removal (August 2021)

1.22 was the most disruptive Kubernetes release for operations teams since 1.0. Several long-lived beta APIs were removed:

Removed API Replacement Used By
networking.k8s.io/v1beta1 Ingress networking.k8s.io/v1 Every ingress resource
batch/v1beta1 CronJob batch/v1 Every scheduled job
apiextensions.k8s.io/v1beta1 CRD apiextensions.k8s.io/v1 Every CRD definition
rbac.authorization.k8s.io/v1beta1 rbac.authorization.k8s.io/v1 RBAC resources

Teams with Helm charts, Terraform modules, and CI/CD pipelines built against beta API versions had to update their manifests. This was the moment that finally drove home the message: beta APIs in Kubernetes are not stable — they will be removed.
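In practice, most of the 1.22 migration work was mechanical: bumping apiVersion values and adjusting any renamed fields. Taking the CronJob removal as an example (manifest contents illustrative):

```yaml
# Before 1.22 this read: apiVersion: batch/v1beta1 — removed in 1.22
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: registry.example.com/report:1.0   # illustrative image
```

The painful part was not the edit itself but finding every chart, module, and pipeline that still emitted the beta version.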

Also in 1.22:
Server-Side Apply stable: Apply semantics moved server-side — field ownership tracking, conflict detection, and merge strategies are handled by the API server rather than client-side kubectl
Memory manager beta: Better NUMA-aware memory allocation for latency-sensitive workloads
Bound Service Account Token Volumes stable: Time-limited, audience-bound tokens for pods — replacing the long-lived, cluster-wide service account tokens that were a persistent security concern

# Bound service account token — expires, audience-restricted
# Projected volume mounts a time-limited token (default 1h expiry)
volumes:
- name: token
  projected:
    sources:
    - serviceAccountToken:
        audience: api
        expirationSeconds: 3600
        path: token

The bound token change was significant from a security perspective: previously, a service account token extracted from a pod would be valid indefinitely, for any audience. Projected tokens expire and are tied to a specific audience.


Pod Security Admission (Kubernetes 1.22, GA in 1.25)

The replacement for PodSecurityPolicy was Pod Security Admission — an admission controller built into the API server (no webhook required) that enforces the three Pod Security Standards at the namespace level:

# Namespace-level security enforcement
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.25

The three modes:
enforce: Reject pods that violate the policy
audit: Allow the pod but add an audit annotation
warn: Allow the pod and send a warning to the client

Pod Security Admission is deliberately simpler than PSP. It does less — it enforces three fixed profiles, not arbitrary rules. For arbitrary policy, you still need OPA/Gatekeeper or Kyverno. But the simplicity means it works reliably, with no authorization edge cases.


Kubernetes 1.23 — Dual-Stack Stable, HPA v2 Stable (December 2021)

  • IPv4/IPv6 dual-stack stable: Pods and Services can have both IPv4 and IPv6 addresses — critical for organizations running mixed-stack networks or migrating from IPv4 to IPv6
  • HPA v2 stable: Horizontal Pod Autoscaler with support for multiple metrics (CPU, memory, custom metrics from Prometheus, external metrics). Scale on Prometheus metrics, not just CPU:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 1000m
  • FlexVolume deprecated (in favor of CSI): Another step in the driver out-of-tree migration
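The dual-stack support listed above is opted into per Service via two fields. A sketch — selector, name, and port are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  # PreferDualStack: both families if the cluster supports them;
  # alternatives are SingleStack and RequireDualStack
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: api
  ports:
  - port: 8080
```

The Service is then assigned one ClusterIP per requested family, visible in .spec.clusterIPs.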

The Log4Shell Moment (December 2021)

Log4Shell (CVE-2021-44228) hit on December 9, 2021. The vulnerability allowed unauthenticated remote code execution in any Java application using Log4j 2.x. The blast radius was enormous — Log4j was in everything.

For Kubernetes operators, Log4Shell crystallized several operational realities:

Inventory problem: Do you know which of your pods is running a Java application? Do you know which version of Log4j it includes? Without an SBOM pipeline and admission-time image scanning, you probably don’t have a reliable answer.

Patch velocity problem: Once you know which images are vulnerable, how quickly can you rebuild and redeploy? Organizations with GitOps pipelines and image update automation (Flux’s image reflector, ArgoCD Image Updater) could respond in hours. Organizations without this infrastructure measured response time in days.

Runtime detection problem: Can you detect exploitation attempts in real time? Falco rules for Log4Shell JNDI lookup patterns were available within hours of disclosure — but only organizations already running Falco could use them.

Log4Shell made the case for supply chain security, image scanning, SBOM generation, and runtime detection tooling more effectively than any conference talk.


Sigstore and the Supply Chain Response

In 2021, Sigstore reached a point where its tooling — cosign (image signing), rekor (transparency log), fulcio (keyless signing via OIDC) — was production-ready.

The keyless signing model was significant: instead of managing long-lived signing keys (which themselves become a supply chain risk), fulcio issues short-lived certificates tied to an OIDC identity (a GitHub Actions workflow, a GitLab CI job). The signature proves that a specific workflow built the image.

# Sign an image as part of CI (keyless, OIDC-based)
cosign sign --yes ghcr.io/org/app:v1.0.0

# Verify before deploying
cosign verify \
  --certificate-identity-regexp "https://github.com/org/app/.github/workflows/build.yml" \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/org/app:v1.0.0

Policy engines (OPA/Gatekeeper, Kyverno) could be configured to reject pods using unsigned or unverified images at admission time — closing the loop from build provenance to runtime enforcement.
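As a sketch of that enforcement loop, a Kyverno ClusterPolicy can require a keyless cosign signature at admission time — the policy name, registry pattern, and identity values below are illustrative and mirror the cosign verify example above:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  # Reject (rather than merely warn on) violations
  validationFailureAction: Enforce
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences: ["ghcr.io/org/*"]   # illustrative registry scope
      attestors:
      - entries:
        - keyless:
            issuer: https://token.actions.githubusercontent.com
            subject: https://github.com/org/app/*
```

Pods whose images lack a matching signature are rejected before they ever schedule.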


Key Takeaways

  • Dockershim deprecation in 1.20 was about removing the non-standard interface, not about dropping Docker image compatibility — containers built with Docker run on containerd or CRI-O without changes
  • The API removals in 1.22 were operationally painful but necessary — beta APIs in Kubernetes are not production-stable commitments
  • Pod Security Admission (PSP’s replacement) trades power for reliability — three fixed profiles enforced at the namespace level, built into the API server, no authorization edge cases
  • SolarWinds and Log4Shell made supply chain security a board-level concern; Sigstore, SBOM, and admission-time image verification moved from “nice to have” to operational requirements
  • Bound service account tokens (1.22 stable) addressed a persistent security gap: pod tokens that expire and are audience-restricted rather than long-lived cluster-wide credentials

What’s Next

← EP04: The Operator Era | EP06: The Runtime Reckoning →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Operator Era: Stateful Workloads, Service Mesh, and the Cloud-Native Stack (2018–2020)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain Relevance
Domain 3 — Security Architecture PodSecurityPolicy enters wide adoption; OPA/Gatekeeper as policy engine
Domain 4 — Communication Security Service mesh mTLS (Istio, Linkerd) for zero-trust pod-to-pod communication
Domain 7 — Security Operations Falco for runtime anomaly detection; Prometheus-based alerting on cluster health

Introduction

By 2018, Kubernetes had won the orchestration market. The question was no longer “which orchestrator?” — it was “how do we run complex workloads on it, and how do we do it safely?”

The 2018–2020 period is defined by three parallel tracks: the Operator pattern maturing into a serious engineering discipline, the service mesh debate consuming enormous community energy, and the security model evolving from “trust everything in the cluster” toward something resembling defense-in-depth.


The OperatorHub Era

The Operator pattern, introduced by CoreOS engineers in 2016, reached critical mass in 2018–2019. In early 2019, Red Hat — together with AWS, Google Cloud, and Microsoft — launched OperatorHub.io, a registry for Kubernetes Operators covering databases (PostgreSQL, MongoDB, CockroachDB), messaging (Kafka, RabbitMQ), monitoring (Prometheus), and more.

The Operator SDK (Red Hat, 2018) gave teams a framework for building Operators in Go, Ansible, or Helm — lowering the barrier from “you need to write a Kubernetes controller from scratch” to “fill in the reconciliation logic.”

The maturity model for Operators was codified into five levels:

Level Capability
1 Basic Install — automated deployment
2 Seamless Upgrades — patch and minor version upgrades
3 Full Lifecycle — backup, failure recovery
4 Deep Insights — metrics, alerts, log processing
5 Auto Pilot — horizontal/vertical scaling, auto-config tuning

Most production Operators in 2019 were at Level 1–2. Getting to Level 3+ required encoding significant domain knowledge — the kind that previously lived in a senior database administrator’s head.


Kubernetes 1.11 — CoreDNS GA, IPVS Load Balancing Stable (June 2018)

  • CoreDNS graduated to GA as the recommended replacement for kube-dns (it became the default DNS server in 1.13). CoreDNS is plugin-based — you can extend it for custom DNS resolution logic (split DNS, external name resolution, DNS-based service discovery for non-Kubernetes services)
  • IPVS-based kube-proxy stable: A load balancing mode for Services using IPVS (IP Virtual Server) instead of iptables, enabling O(1) service routing instead of O(n) iptables rule traversal — critical at scale
  • TLS bootstrapping: Automatic kubelet certificate provisioning matured (reaching GA in 1.12) — kubelets no longer needed manual certificate management

The IPVS kube-proxy mode is a good example of a performance improvement that also has security implications. iptables rules degrade linearly with rule count; at 10,000+ services, iptables becomes a performance and debuggability problem. IPVS uses a hash table — O(1) lookups regardless of service count.
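Selecting IPVS is a kube-proxy configuration choice rather than a cluster-wide default. A minimal sketch of the relevant KubeProxyConfiguration fields:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "iptables" is the historical mode; "ipvs" uses kernel hash tables
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; other IPVS schedulers (lc, sh, ...) are available
```

The node must have the IPVS kernel modules loaded, or kube-proxy falls back to iptables mode.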


Kubernetes 1.12 — 1.13: Amazon EKS, Runtime Security (September–December 2018)

Amazon EKS Goes GA (June 2018)

Amazon EKS became generally available in June 2018. This was significant not just for AWS customers but for the entire ecosystem: EKS’s launch meant every major cloud provider now had a production-grade managed Kubernetes offering.

EKS’s initial release was deliberately limited — managed control plane, self-managed worker nodes. This contrasted with GKE’s more automated approach, and the community noticed. GKE had been running managed Kubernetes longer, and it showed in feature completeness.

1.12 (September 2018)

  • RuntimeClass alpha: A mechanism to specify which container runtime to use for a pod — containerd, gVisor, Kata Containers. The foundation for confidential computing workloads where you want hardware-isolated containers
  • RBAC delegation: Service accounts could now grant RBAC permissions they themselves held — enabling Operators to manage RBAC for the applications they deploy
  • Volume snapshot alpha: Create point-in-time snapshots of PersistentVolumes — the beginning of Kubernetes-native backup primitives
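RuntimeClass, in its eventual GA form (node.k8s.io/v1, from 1.20), pairs a named class with a runtime handler, and pods opt in by name. A sketch assuming a gVisor (runsc) handler has been configured in the node's CRI runtime:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
# Must match a handler name configured in containerd/CRI-O
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  # Pods without runtimeClassName use the default runtime (runc)
  runtimeClassName: gvisor
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
```

The same mechanism later carries Kata Containers for hardware-isolated, confidential workloads.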

1.13 (December 2018)

  • kubeadm graduates to GA: The cluster bootstrapping tool was now stable and recommended for production
  • CoreDNS becomes the default DNS server, replacing kube-dns across deployment tools
  • CSI stable: Storage drivers could be shipped entirely out of tree

Kubernetes 1.14 — Windows Containers Go Stable (March 2019)

Windows Server container support graduated to stable in 1.14. For the first time, Kubernetes clusters could run Windows workloads as first-class citizens — .NET Framework applications, IIS, SQL Server containers alongside Linux-based microservices.

The implementation required significant work: Windows containers have different networking models, different filesystem semantics, and different process models than Linux containers. Making them a first-class Kubernetes citizen meant handling all of those differences in the node components.

Also in 1.14:
PersistentVolume and StorageClass improvements
kubectl improvements: kubectl diff — show what would change before applying a manifest


The PodSecurityPolicy Problem

PodSecurityPolicy (PSP) was alpha in Kubernetes 1.3, beta in 1.8, and would remain in beta until it was deprecated in 1.21. It was simultaneously the most important security primitive in Kubernetes and the most broken.

PSP let administrators define what a pod was allowed to do:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

The problem: the admission mechanism was confusing, the UX was hostile, and the authorization model (who could use which PSP) led to privilege escalation paths that were non-obvious. Many teams either disabled PSP entirely or created a permissive policy that made it functionally useless.

The community would spend years working toward a replacement. In 2021 it was deprecated; in 1.25 (2022) it was removed. The replacement — Pod Security Admission — is discussed in EP05.


Kubernetes 1.15 — 1.17: Custom Resource Maturity (2019)

1.15 (June 2019)

  • CRDs continue maturing: Structural schemas, pruning of unknown fields — making CRDs behave more like first-class API types
  • Kustomize integrated into kubectl (the integration first shipped in kubectl 1.14): Template-free Kubernetes configuration customization. Where Helm uses Go templates, Kustomize uses overlays — a base configuration plus environment-specific patches
# kustomization.yaml — base + production overlay
bases:
  - ../../base
patches:
  - deployment-replicas.yaml
  - resource-limits.yaml
configMapGenerator:
  - name: app-config
    literals:
      - ENV=production

1.16 (September 2019)

  • CRDs graduate to GA: apiextensions.k8s.io/v1 replaces apiextensions.k8s.io/v1beta1
  • Admission webhooks stable: Validating and mutating webhooks that intercept every API request. This is the foundation for OPA/Gatekeeper, Kyverno, and all policy-as-code enforcement in Kubernetes

The admission webhook framework’s graduation to stable in 1.16 was more significant than it appeared. It meant that any security policy engine — OPA/Gatekeeper, Kyverno, Styra, etc. — could now enforce policies on any Kubernetes resource creation or modification, using a stable, documented API.

  • Removal of several deprecated beta APIs: extensions/v1beta1 Deployments, DaemonSets, ReplicaSets — a preview of the more aggressive API cleanup that would come in 1.22
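Registering a policy engine against the stable admission API looks roughly like this — the webhook name, service, and namespace are illustrative, and the caBundle (the CA that signed the webhook's serving certificate) is omitted:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-check
webhooks:
- name: check.policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  # Fail closed: if the webhook is unreachable, reject the request
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: policy-system
      name: policy-webhook
      path: /validate
    # caBundle: <base64-encoded CA certificate>
```

Every matching API request is serialized into an AdmissionReview and sent to the service, which answers allow or deny.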

1.17 (December 2019)

  • Volume snapshots beta
  • Cloud Provider labels stable

OPA/Gatekeeper: Policy as Code Enters the Mainstream

Open Policy Agent (OPA) + Gatekeeper emerged as the policy engine of choice for Kubernetes in 2019. Gatekeeper uses the admission webhook framework to intercept API requests and evaluate them against Rego policies:

# Deny containers running as root
package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  container.securityContext.runAsUser == 0
  msg := sprintf("Container %v must not run as root", [container.name])
}

The OPA/Gatekeeper model represented a shift in security thinking: instead of configuring security at the cluster level, you codify security policy in a language (Rego) and enforce it uniformly across all admission requests. Policies can be tested, versioned, and reviewed like code.


Kubernetes 1.18 — Topology-Aware Routing, Immutability (March 2020)

  • Topology-aware service routing alpha: Route service traffic to endpoints in the same zone/node as the caller — reducing cross-zone data transfer costs and latency
  • Immutable ConfigMaps and Secrets alpha: Mark a ConfigMap or Secret as immutable — the API server rejects updates, preventing accidental mutation of configuration that applications have already loaded
  • IngressClass: A mechanism to specify which Ingress controller should handle an Ingress resource — enabling multiple ingress controllers in the same cluster
# Immutable secret — once set, cannot be changed
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
immutable: true
data:
  password: dGhpcyBpcyBhIHRlc3Q=

The Falco Adoption Wave

Falco — created by Sysdig and donated to the CNCF in 2018 — became the standard tool for Kubernetes runtime security in this period. Falco uses eBPF probes or a kernel module to monitor syscalls and generate alerts based on rules:

# Falco rule: detect shell spawned in a container
- rule: Terminal shell in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and container and
    shell_procs and proc.tty != 0
  output: >
    A shell was spawned in a container
    (user=%user.name container=%container.name
     shell=%proc.name parent=%proc.pname)
  priority: WARNING

Falco addressed the gap that PodSecurityPolicy couldn’t: admission-time policy prevents known-bad configurations from running, but it can’t detect a compromise that happens at runtime — a shell spawned by an exploited web application, for example.


The Service Mesh Exhaustion

By 2019, the service mesh landscape was producing more overhead than value for many teams. Istio’s operational complexity — its multiple control plane components (Pilot, Mixer, Citadel, Galley — later consolidated into istiod), its sidecar injection model, its frequent breaking changes between versions — burned teams that adopted it early.

The community questions were real: do you actually need mTLS between every service in your cluster? Is the operational cost of a service mesh worth the security benefit for every organization?

Linkerd 2.x (Buoyant) positioned itself as the lightweight alternative — simpler to operate, less configuration surface, Rust-based proxy instead of Envoy. For teams that wanted the security benefit (mTLS) without the complexity cost, Linkerd 2.x was often the better choice.

The honest answer in 2019-2020: service meshes were the right architecture for organizations with hundreds of services and dedicated platform teams. For most organizations, they were complexity that outpaced the threat model.


Key Takeaways

  • The Operator pattern matured from a pattern into an engineering discipline with tooling (Operator SDK), a registry (OperatorHub), and a capability maturity model
  • EKS going GA completed the managed Kubernetes trifecta — every major cloud provider was now committed
  • CRDs graduating to stable in 1.16 was the foundation for everything built on Kubernetes extensibility — Operators, policy engines, GitOps tools
  • Admission webhooks graduating to stable enabled the policy-as-code ecosystem (OPA/Gatekeeper, Kyverno) — the only viable alternative to PSP’s broken model
  • Falco established runtime security as a distinct discipline from admission-time policy enforcement
  • Service mesh adoption was real but the complexity cost was frequently underestimated; many teams that adopted Istio in 2018-2019 spent 2019-2020 managing it

What’s Next

← EP03: Enterprise Awakening | EP05: Security Hardens →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Enterprise Awakening: RBAC, CRDs, Cloud Providers, and Helm Goes Mainstream (2016–2018)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain Relevance
Domain 3 — Security Architecture RBAC graduation to stable; secrets encryption at rest introduced
Domain 4 — Communication Security TLS enforcement on API server; etcd peer encryption
Domain 7 — Security Operations Audit logging framework; API server request auditing

Introduction

By the end of 2016, engineers were running Kubernetes in production. Not as an experiment — in production, handling real traffic. And that’s where the real gaps became visible.

The 2016–2018 period is the era when Kubernetes grew up. RBAC went stable. CRDs replaced the fragile ThirdPartyResource hack. The major cloud providers launched managed services. Helm became the standard for packaging. And the security posture, which had been an afterthought in the Borg-derived model, started getting serious attention.


Kubernetes 1.6 — The RBAC Milestone (March 2017)

Kubernetes 1.6 is the release that made enterprise Kubernetes possible. The headline feature: RBAC (Role-Based Access Control) promoted to beta, enabled by default.

Before RBAC, Kubernetes had attribute-based access control (ABAC) — a flat policy file on the API server that required a restart to change. It worked, but it was operationally painful and offered no granularity at the namespace level.

RBAC introduced four objects:
Role: A set of permissions scoped to a namespace
ClusterRole: A set of permissions cluster-wide or reusable across namespaces
RoleBinding: Assigns a Role to a user/group/service account in a namespace
ClusterRoleBinding: Assigns a ClusterRole cluster-wide

# Example: read-only access to pods in the dev namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Also in 1.6:
etcd v3 as default: Better performance, watch semantics, and transaction support
Node Authorization mode: Kubelets can now only access secrets and pods bound to their own node — a critical lateral movement restriction
Audit logging (alpha): API server logs every request — who did what, to which resource, at what time
Scale: Tested to 5,000 nodes per cluster

The node authorization mode deserves more attention than it typically gets. Before 1.6, a compromised kubelet could read all secrets in the cluster. Node authorization restricted the kubelet to only the secrets it needed for pods scheduled on that node. This single change dramatically reduced the blast radius of a node compromise.
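Enabling it is an API server configuration choice. The flag names below are real; everything else about the invocation is omitted for brevity — the Node authorizer is typically paired with the NodeRestriction admission plugin:

```shell
# Illustrative kube-apiserver flags (surrounding configuration omitted)
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
```

The Node authorizer limits what a kubelet can read; NodeRestriction limits what it can write (e.g., it can only modify its own Node object and pods bound to it).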


Kubernetes 1.7 — Custom Resource Definitions (June 2017)

The most significant architectural decision in Kubernetes history after the initial design: ThirdPartyResources (TPRs) were replaced with CustomResourceDefinitions (CRDs).

TPRs were a fragile mechanism introduced in 1.2 that let users define custom API types, but they had serious limitations: no schema validation, no versioning, data-loss bugs, and poor upgrade behavior.

CRDs are what make the Kubernetes API extension model work. They let you define new resource types that the API server stores and serves, with optional schema validation via OpenAPI v3 schemas, version conversion, and admission webhook integration.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: string
              version:
                type: string
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database

CRDs enabled the entire Operator ecosystem that would define the next phase of Kubernetes. Without stable, schema-validated custom resources, you can’t build reliable controllers on top of them.

Also in 1.7:
Secrets encryption at rest (alpha): Finally, secrets stored in etcd could be encrypted with AES-CBC or AES-GCM
Network Policy promoted to stable: CNI plugins implementing NetworkPolicy could now enforce pod-level ingress/egress rules
API aggregation layer: Extend the Kubernetes API with custom API servers — the foundation for metrics-server and other API extensions
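Encryption at rest is driven by a configuration file passed to the API server (today via --encryption-provider-config). A minimal sketch in the modern v1 form — the 1.7 alpha used an earlier schema and flag name:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder, not a real key
      - identity: {}   # fallback: allows reading data written before encryption was enabled
```

Provider order matters: the first provider encrypts new writes, while all listed providers are tried for reads.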


Kubernetes 1.8 — RBAC Goes Stable (September 2017)

RBAC graduated to stable in 1.8. This was the point of no return for enterprise adoption. Security teams could now enforce least-privilege on Kubernetes API access with a documented, stable API.

Key additions:
Storage Classes stable: Dynamic volume provisioning — request a PersistentVolume and have the underlying storage (EBS, GCE PD, NFS) automatically provisioned
Workloads API (apps/v1beta2): Deployments, ReplicaSets, DaemonSets, and StatefulSets all moved under a unified API group, signaling they were heading toward stable
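A StorageClass names a provisioner and its parameters; any PersistentVolumeClaim referencing the class gets a volume created on demand. An illustrative example using the in-tree AWS EBS provisioner of the era (CSI drivers later replaced these):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs   # legacy in-tree provisioner; modern clusters use ebs.csi.aws.com
parameters:
  type: gp2
```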

The admission webhook framework — which would become the foundation for policy enforcement tools like OPA/Gatekeeper — was also being refined in this period.


The Cloud Provider Moment (2017–2018)

October 2017: Docker Surrenders

At DockerCon Europe in October 2017, Docker Inc. announced that Docker Enterprise Edition would ship with Kubernetes support alongside Docker Swarm. This was, effectively, Docker Inc. conceding the orchestration market to Kubernetes. Swarm remained available, but the message was clear: Kubernetes was the production standard.

October 2017: Microsoft Previews AKS

Microsoft previewed Azure Kubernetes Service in the same month. The managed Kubernetes race was on.

November 2017: Amazon Announces EKS

At AWS re:Invent 2017, Amazon announced Elastic Kubernetes Service. The three major cloud providers — Google (GKE, running since 2014), Microsoft (AKS), and Amazon (EKS) — were all committed to managed Kubernetes.

For enterprise buyers, this was the signal they needed. Kubernetes was no longer a bet on an experimental technology — it was the supported, managed offering from every major cloud provider.


Kubernetes 1.9 — Workloads API Stable (December 2017)

The Workloads API (apps/v1) went stable in 1.9. This matters because it locked in the API contract for Deployments, ReplicaSets, DaemonSets, and StatefulSets. Infrastructure built on these APIs would not break on upgrades.

# apps/v1 Deployment — the stable form that operators rely on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

Also in 1.9:
Windows container support moved to beta — actual Windows Server 2016 nodes in a cluster
CoreDNS available as an alternative to kube-dns: A more extensible, plugin-based DNS server that would replace kube-dns as the default in 1.11


Kubernetes 1.10 — Storage, Auth, and Scale (March 2018)

1.10 continued the enterprise hardening:
CSI (Container Storage Interface) beta: A standardized interface between Kubernetes and storage providers. Before CSI, storage drivers were compiled into the kubelet binary. CSI moved them out-of-tree, allowing storage vendors to ship their own drivers without waiting for a Kubernetes release
External credential providers (alpha): Authenticate against external systems (cloud IAM, HashiCorp Vault) for kubeconfig credentials
Node problem detector stable: Detect and report node-level problems (kernel deadlocks, corrupted file systems) as Kubernetes events and node conditions

The CSI transition was one of the most important infrastructure decisions of this period. It decoupled storage driver development from the Kubernetes release cycle — a necessary step for cloud providers to ship storage integrations rapidly and independently.


The Istio Announcement and Service Mesh Wars (May 2017)

Google and IBM announced Istio in May 2017 — a service mesh that layered mTLS, traffic management, and observability on top of existing Kubernetes deployments without changing application code.

Istio’s architecture: sidecar proxies (Envoy) injected into every pod, managed by a control plane. Every service-to-service call passes through the sidecar, enabling:
– Mutual TLS between services (zero-trust networking at the service layer)
– Fine-grained traffic control (canary releases, circuit breaking, retries)
– Distributed tracing and metrics
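In Istio’s current API (not the 2017-era configuration model, which looked quite different), mesh-wide strict mTLS is a single resource — shown here only to illustrate the security model the sidecar approach enables:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placing it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars reject plaintext service-to-service traffic
```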

Linkerd (from Buoyant) had been working on the same problem since 2016. The two projects would compete for the “service mesh standard” throughout 2017–2019.

The service mesh conversation was fundamentally a security architecture conversation: how do you enforce mutual authentication and encryption between services in a Kubernetes cluster without requiring application developers to implement it?


CoreOS Acquisition and the Operator Pattern (2018)

In January 2018, Red Hat acquired CoreOS for $250 million. CoreOS had contributed two things that would permanently shape Kubernetes:

1. The Operator Pattern (introduced by CoreOS engineers Brandon Philips and Josh Wood in 2016): An Operator is a custom controller that uses CRDs to manage the lifecycle of complex, stateful applications. The etcd Operator (CoreOS’s own) was the first — it automated etcd cluster creation, scaling, backup, and failure recovery. The pattern generalized: a Prometheus Operator, a PostgreSQL Operator, a Kafka Operator.

The Operator pattern is the answer to the question “how do you encode operational knowledge into software?” A human operator knows how to deploy, scale, backup, and recover a database. An Operator codifies that knowledge into a controller loop.

# Operator pattern: watch CRD → reconcile → manage application
CRD (EtcdCluster) → Operator Controller watches → creates/updates Pods, Services, Snapshots
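From the user’s side, this reduced etcd cluster management to declaring a custom resource. A sketch of the EtcdCluster resource the CoreOS operator watched (API group per the original project; treat field details as illustrative):

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd
spec:
  size: 3            # the operator creates and replaces pods to maintain 3 members
  version: "3.2.13"  # the operator performs rolling upgrades toward this version
```

Changing `size` or `version` and re-applying was the entire scaling and upgrade workflow — the controller loop did the rest.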

2. etcd: The distributed key-value store that backs the Kubernetes control plane. CoreOS built and maintained etcd. Red Hat acquiring CoreOS meant that the company maintaining Kubernetes’s most critical dependency (after the kernel) was now inside the Red Hat/IBM orbit.


Helm 2 and the Charts Ecosystem

By 2017–2018, Helm had become the de facto package manager for Kubernetes. The public Helm chart repository hosted hundreds of charts — databases (PostgreSQL, MySQL, Redis), monitoring (Prometheus, Grafana), ingress controllers (nginx), CI/CD tools (Jenkins, GitLab Runner).

Helm 2 introduced Tiller — a server-side component that managed release state in the cluster. Tiller became one of the most criticized security decisions in the Kubernetes ecosystem: it ran with cluster-admin privileges by default, so any user who could reach its gRPC endpoint could do anything in the cluster.

Security teams hated Tiller. The Helm team addressed it in Helm 3 (2019) by removing Tiller entirely and storing release state as Kubernetes Secrets instead.


Key Takeaways

  • RBAC going stable in 1.8 was the single most important security event in early Kubernetes history — it gave enterprises the access control model they needed for production
  • CRDs replacing TPRs in 1.7 enabled the entire Operator ecosystem that would define the next phase of Kubernetes
  • Docker Inc.’s October 2017 announcement that it would support Kubernetes in Docker EE effectively ended the container orchestration wars
  • The three major cloud providers (GKE, AKS, EKS) all standardizing on managed Kubernetes drove enterprise adoption faster than any feature announcement could
  • The Operator pattern — Kubernetes controllers that encode operational knowledge — emerged from CoreOS and became the standard model for managing complex stateful applications
  • Helm filled a real gap but Tiller’s cluster-admin model was a security debt the community had to repay in Helm 3

What’s Next

← EP02: The Container Wars | EP04: The Operator Era →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Container Wars: Kubernetes 1.0, CNCF, and the Fight for Orchestration (2014–2016)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain Relevance
Domain 3 — Security Architecture Early Kubernetes security model: flat networking, no RBAC, container isolation gaps
Domain 8 — Software Security Container image trust model begins taking shape; Docker Hub as central registry

Introduction

Three orchestration systems entered the arena in 2015. Only one would still matter three years later.

Docker had created the container revolution. Now everyone needed to run containers at scale, and three camps formed around three very different philosophies. Understanding why Kubernetes won — and how close it came to not winning — explains most of the design choices that still shape Kubernetes today.


The State of Container Orchestration in 2014

When Kubernetes made its public debut at DockerCon 2014, it entered a space that didn’t yet have a name. “Container orchestration” wasn’t a category. It was a problem people had started to feel but not yet articulate.

Three approaches emerged nearly simultaneously:

Docker Swarm (announced December 2014): Docker’s answer to orchestration, built on the premise that the tool you use to run containers should also be the tool you use to cluster them. Swarm used the same Docker CLI and Docker API — zero new concepts for developers already using Docker.

Apache Mesos (Mesosphere Marathon): Mesos predated Docker. It was a distributed systems kernel originally developed at Berkeley, used in production at Twitter, Airbnb, and Apple. Marathon was the framework for running long-running services on top of Mesos. Mesos could run Docker containers, Hadoop jobs, and Spark workloads on the same cluster. Serious infrastructure engineers took it seriously.

Kubernetes: The newcomer with Google’s name behind it, but no track record outside Google, and early versions that required significant operational expertise to run.


Kubernetes v1.0: July 21, 2015

The 1.0 release was announced at OSCON in Portland on July 21, 2015. The timing was deliberate — it coincided with the announcement of the Cloud Native Computing Foundation.

What shipped in 1.0:

  • Pods: The core scheduling unit — one or more containers sharing a network namespace and storage
  • Replication Controllers: Keep N copies of a pod running (later replaced by ReplicaSets and Deployments)
  • Services: A stable virtual IP and DNS name in front of a set of pods
  • Namespaces: Soft multi-tenancy boundaries within a cluster
  • Labels and Selectors: The flexible grouping mechanism that makes everything composable
  • Persistent Volumes (basic): Pods could mount persistent storage
  • kubectl: The command-line interface

What was not in 1.0:
– No RBAC (Role-Based Access Control)
– No network policy
– No autoscaling
– No Ingress resources
– No StatefulSets
– No DaemonSets (added in 1.1)
– Secrets were stored in plaintext in etcd

The security posture of a fresh Kubernetes 1.0 cluster was essentially: “trust everything inside the cluster.” That was the inherited assumption from Borg.


The CNCF Formation

Alongside the 1.0 release, Google donated Kubernetes to the newly formed Cloud Native Computing Foundation — a Linux Foundation project. This was a critical strategic move.

By donating Kubernetes to a neutral foundation, Google:
1. Removed the perception of a single vendor controlling the project
2. Created a governance model that made enterprise adoption politically safe
3. Invited competitors (Red Hat, CoreOS, Docker, Microsoft) to contribute without ceding control to them

The CNCF’s initial Technical Oversight Committee included engineers from Google, Red Hat, Twitter, Cisco, and others. This governance model would later become the template for every CNCF project that followed.


v1.1 — v1.5: Building the Foundation (Late 2015–2016)

Kubernetes 1.1 (November 2015)

  • Horizontal Pod Autoscaler (HPA): Automatically scale pod count based on CPU utilization
  • HTTP load balancing: Ingress API added as alpha — pods could now be exposed via HTTP routing rules
  • Job objects: Run a task to completion, not just keep it running
  • Performance: 30% throughput improvement, with a significantly higher pod scheduling rate (pods per minute)
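In its modern stable form (the 1.1 alpha API differed), an HPA targeting CPU utilization looks like this — names are illustrative:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out when average CPU across pods exceeds 70%
```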

Kubernetes 1.2 (March 2016)

  • Deployments promoted to beta: Rolling updates, rollback, pause/resume — the deployment primitive that engineers actually use for application deployments
  • ConfigMaps: Decouple configuration from container images (no more baking config into images)
  • DaemonSets promoted to beta: Run exactly one pod per node — the pattern for node agents (log shippers, monitoring agents, network plugins)
  • Scale: Tested to 1,000 nodes and 30,000 pods per cluster
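A ConfigMap is just key-value data that pods consume as environment variables or mounted files — names here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  config.yaml: |
    featureFlags:
      newScheduler: false
```

The point of the decoupling: the same container image runs in every environment, with configuration injected at deploy time rather than baked in.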

Kubernetes 1.3 (July 2016)

  • StatefulSets (then called PetSets, alpha): Ordered, persistent-identity pods — the first serious attempt to run databases and stateful applications
  • Cross-cluster federation (alpha): Run workloads across multiple clusters
  • PodDisruptionBudgets (alpha): Control how many pods can be unavailable during voluntary disruptions — critical for safe rolling updates
  • rkt integration (“rktnetes”): An early runtime-pluggability experiment — the kubelet talking to something other than Docker, foreshadowing the formal Container Runtime Interface that arrived in 1.5

Kubernetes 1.4 (September 2016)

  • kubeadm: A tool to bootstrap a Kubernetes cluster in two commands. Before kubeadm, setting up a cluster required following Kelsey Hightower’s “Kubernetes the Hard Way” — valuable for learning, painful for production
  • ScheduledJobs (later renamed CronJobs): Run a job on a schedule
  • PodPresets: Inject common configuration into pods at admission time
  • Init Containers beta: Containers that run to completion before the main application containers start — the clean solution for initialization sequencing
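The “two commands” are literal. Output is elided and the placeholder values are printed by `init` itself — exact flags depend on your environment:

```shell
kubeadm init                                # on the control-plane node: generates certs, starts the control plane
kubeadm join <endpoint> --token <token> \
  --discovery-token-ca-cert-hash <hash>     # on each worker node, using values printed by init
```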

Kubernetes 1.5 (December 2016)

  • StatefulSets promoted to beta
  • PodDisruptionBudgets to beta
  • Windows Server container support (alpha): First step toward a non-Linux node
  • CRI (Container Runtime Interface) alpha: The abstraction layer that would eventually allow Kubernetes to run containerd, CRI-O, and others instead of depending on Docker
  • OpenAPI spec: Machine-readable API documentation, enabling client code generation

Helm: The Missing Package Manager (February 2016)

Kubernetes gave you primitives. It did not give you a way to install applications composed of those primitives. In February 2016, Deis (later acquired by Microsoft) released Helm — a package manager for Kubernetes.

Helm introduced two concepts that stuck:
Charts: A collection of Kubernetes manifests bundled with templating and default values
Releases: An installed instance of a chart, with its own lifecycle (install, upgrade, rollback, delete)

Helm’s immediate adoption signaled something important: the community was already thinking in terms of applications, not just raw primitives. Infrastructure engineers needed a layer of abstraction above YAML.


The Battle Lines Harden

By mid-2016, the three-way contest was becoming clearer:

Docker Swarm’s advantage: Zero friction for existing Docker users. docker swarm init + docker stack deploy. No new CLI, no new API, no new mental model. For small teams running straightforward applications, it was compelling.

Mesos’s advantage: Proven at web scale before Kubernetes existed. Twitter ran Mesos in production. It could run heterogeneous workloads (Docker containers, Hadoop, Spark) on the same cluster. Enterprise data teams already had Mesos expertise.

Kubernetes’s advantage: The Google name, rapidly growing community, and a design that was clearly winning the feature race. But operational complexity was real — running Kubernetes well in 2016 required significant investment.


The Turning Point Nobody Talks About

The real moment that decided the container wars wasn’t a feature announcement. It was cloud provider behavior.

Google Kubernetes Engine (GKE) — then called Google Container Engine — had been running since 2014. It was the first managed Kubernetes service, and it worked. In 2016, both Microsoft and Amazon were working on managed Kubernetes offerings. Neither chose Docker Swarm. Neither chose Mesos.

When cloud providers converge on a technology, the market follows. By the time Amazon announced EKS and Microsoft announced AKS in late 2017, the decision was already made.


The Security Debt Accumulates

Running through the 1.0–1.5 feature list reveals a security architecture that was being designed in flight:

  • etcd stored secrets as base64-encoded strings — not encrypted. Kubernetes 1.7 (2017) would add encryption at rest, but it required explicit configuration
  • The API server was unauthenticated by default in early versions — you needed to configure authentication
  • Network traffic between pods was unrestricted — all pods could reach all other pods on all ports, across all namespaces. NetworkPolicy existed as alpha in 1.3 but required a CNI plugin that supported it
  • The kubelet’s API was open — in early Kubernetes, the kubelet’s HTTP API was accessible without authentication from within the cluster

These weren’t oversights — they were reasonable defaults for an internal cluster managed by a single team. They became liabilities as Kubernetes moved into multi-tenant enterprise environments.
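The now-standard mitigation for the flat network — a default-deny ingress policy per namespace — only became expressible once NetworkPolicy and a supporting CNI plugin existed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod        # illustrative namespace
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress traffic is denied
```

Teams then layer explicit allow rules on top, which inverts the original trust-everything default.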


KubeCon: A Community Forms

The first KubeCon conference ran November 9–11, 2015, in San Francisco — a small gathering of a few hundred engineers. By November 2016, KubeCon North America in Seattle drew thousands. The growth was not marketing-driven; it was practitioners solving real problems and sharing what they learned.

This community dynamic was qualitatively different from the Docker Swarm and Mesos ecosystems. Kubernetes had a contributor culture — pull requests, SIG (Special Interest Group) meetings, public design docs. The project was being built in the open, and engineers could see it happening.


Key Takeaways

  • Kubernetes 1.0 shipped in July 2015 with the basics functional but security model immature — no RBAC, no network policy, secrets stored in plaintext
  • The CNCF governance model was the strategic move that made enterprise adoption politically safe — no single vendor controls the project
  • Helm filled the missing application packaging layer that raw Kubernetes couldn’t provide
  • The container wars were decided not by technical superiority alone, but by cloud provider alignment — when Google, Microsoft, and Amazon all built managed Kubernetes, the market followed
  • v1.1–v1.5 established the core workload primitives: Deployments, StatefulSets, DaemonSets, Jobs, ConfigMaps, HPA — most of these remain the daily vocabulary of Kubernetes operations

What’s Next

← EP01: The Borg Legacy | EP03: Enterprise Awakening →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Borg Legacy: How Google Built the Blueprint for Kubernetes (2003–2014)

Reading Time: 5 minutes


CISSP Domain Mapping

Domain Relevance
Domain 3 — Security Architecture Workload isolation principles from Borg carried directly into Kubernetes namespace and cgroup model
Domain 7 — Security Operations Cluster-wide resource governance and auditing concepts established by Borg

Introduction

Every piece of infrastructure has a lineage. Kubernetes didn’t appear from nowhere in 2014. It is, in almost every meaningful sense, Google’s Borg system rebuilt for the world — with a decade of hard lessons baked in.

To understand Kubernetes, you have to understand what came before it. And what came before it ran (and still runs) more compute than most organizations will ever touch.


Google’s Scale Problem (2003)

By the early 2000s, Google was running hundreds of thousands of jobs across tens of thousands of machines. Web indexing, ads, Gmail, Maps — all of these needed compute, and none of them could afford to waste it.

In 2006, Google engineers (principally Paul Menage and Rohit Seth) began work on a kernel feature eventually named cgroups (control groups) — a mechanism to limit, prioritize, account for, and isolate the resource usage of process groups. The Linux kernel merged cgroups in 2.6.24 (2008). This was the primitive that would later make containers possible.

Simultaneously, Google built Borg — an internal cluster management system that could run hundreds of thousands of jobs, from many thousands of different applications, across many clusters, with each cluster having up to tens of thousands of machines. Borg was never open-sourced. It ran (and still runs) Google’s entire production workload.


What Borg Got Right

Borg introduced concepts that engineers didn’t yet have names for. They became the vocabulary of modern infrastructure:

Workload types:
Borg separated workloads into two classes: long-running services (high-priority, latency-sensitive) and batch jobs (best-effort, preemptible). Kubernetes would later call these Deployments and Jobs.

Declarative specification:
Borg jobs were described in a configuration language (BCL, a dialect of GCL). You declared what you wanted; Borg figured out how to achieve it. Sound familiar?

Resource limits and requests:
Borg tasks had both a request (what you need) and a limit (what you can use). Kubernetes adopted this model directly — resources.requests and resources.limits in pod specs trace directly back to Borg.

Health checking and rescheduling:
Borg monitored task health and automatically rescheduled failed tasks. The kubelet’s liveness and readiness probes are descendants of this.

Cell (cluster) topology:
Borg organized machines into “cells” — what Kubernetes calls clusters. The Borgmaster (control plane) managed the cell.


Omega: The Sequel That Didn’t Ship

Around 2011, Google started building Omega — a more flexible scheduler designed to address Borg’s limitations. Borg had a monolithic scheduler; Omega introduced a shared-state, optimistic-concurrency model where multiple schedulers could operate concurrently without stepping on each other.

A 2013 paper from Google (“Omega: flexible, scalable schedulers for large compute clusters”) made these ideas public. Omega itself stayed internal, but many of its scheduling concepts influenced Kubernetes’ extensible scheduler design.


The Docker Moment (March 2013)

On March 15, 2013, Solomon Hykes took the stage at PyCon and demonstrated Docker in a five-minute lightning talk titled “The future of Linux Containers.” The demo ran a container. That was it. The room understood immediately.

Docker solved the packaging and distribution problem. Linux had had containers (via LXC and cgroups/namespaces) for years, but running one required deep kernel knowledge. Docker wrapped all of that in a UX that a developer could actually use.

Google’s engineers watched. They recognized the pattern: Docker was doing for containers what the smartphone did for mobile computing — making an existing capability accessible to everyone.

The Google engineers building the next generation of infrastructure realized: once containers become ubiquitous, someone will need to orchestrate them at scale. And they had already built that system internally, twice.


The Decision to Open-Source (Fall 2013)

In late 2013, a small group of Google engineers — Brendan Burns, Joe Beda, Craig McLuckie, Ville Aikas, Tim Hockin, Dawn Chen, Brian Grant, and Daniel Smith — began a new project internally codenamed “Project Seven” (a reference to the Borg drone Seven of Nine).

The core insight: Google’s competitive advantage in infrastructure came from what ran on the cluster management system, not the system itself. Open-sourcing a Kubernetes-like system would benefit Google by standardizing the ecosystem around patterns Google already understood better than anyone.

The initial design decisions were deliberate:

  • Go as the implementation language: Fast compilation, good concurrency primitives, easy deployment as static binaries
  • REST API as the primary interface: Everything in Kubernetes is an API resource. This is not accidental — it makes the system composable and automatable from day one
  • Labels and selectors over hierarchical naming: Borg used a hierarchical job/task naming scheme; Kubernetes chose a flat namespace with label-based grouping, which proved far more flexible
  • Reconciliation loops everywhere: Every Kubernetes controller is a loop that watches actual state and drives it toward desired state. This is the controller pattern, and it is the heart of Kubernetes extensibility

First Commit: June 6, 2014

The first public commit landed on GitHub on June 6, 2014: 250 files, 47,501 lines of Go, Bash, and Markdown.

Three days later, on June 10, 2014, Eric Brewer (VP of Infrastructure at Google) announced Kubernetes publicly at DockerCon 2014. The announcement framed it explicitly as bringing Google’s infrastructure learnings to the community.

By July 10, 2014, Microsoft, Red Hat, IBM, and Docker had joined the contributor community.


What Kubernetes Deliberately Left Out of Borg

The designers made intentional decisions about what not to carry forward:

No proprietary language: Borg’s BCL/GCL was Google-internal. Kubernetes used plain JSON (later YAML) manifests — standard formats any tool could read and write.

No magic autoscaling by default: Borg aggressively reclaimed resources. Kubernetes launched without this, adding HPA (Horizontal Pod Autoscaler) later, allowing operators to control the behavior.

No built-in service discovery tied to the scheduler: Borg had tight coupling between scheduling and name resolution. Kubernetes separated these: Services (kube-proxy, DNS) are distinct from the scheduler, allowing them to evolve independently.


The Borg Paper (2015)

In April 2015, Google published “Large-scale cluster management at Google with Borg” — the first public detailed description of the system. Reading it alongside the Kubernetes documentation reveals how directly the design decisions transferred.

Key numbers from the paper:
– Borg ran hundreds of thousands of jobs from thousands of applications
– Typical cell: 10,000 machines
– Utilization improvements from bin-packing: significant enough to justify the entire engineering investment

The paper is required reading for anyone who wants to understand why Kubernetes is designed the way it is — not as a series of arbitrary choices but as a deliberately evolved system.


The Lineage That Matters for Security

From a security architecture perspective, the Borg lineage matters because the isolation model was designed for a trusted-internal environment, not a multi-tenant hostile-external one. This created a debt that Kubernetes has spent years paying down:

  • Namespaces are a soft boundary, not a hard isolation primitive — just as Borg’s cells were
  • The default-allow network model reflects Borg’s assumption of a trusted internal network
  • No built-in admission control at launch — Borg trusted its job submitters

Understanding this history explains why features like NetworkPolicy, PodSecurity, RBAC, and OPA/Gatekeeper were retrofitted over years rather than built-in from day one. The system was designed by and for Google’s internal trust model. The security hardening came as it entered the wild.


Key Takeaways

  • Kubernetes is Google’s Borg system rebuilt for the world, carrying 10+ years of cluster management experience
  • Core Kubernetes primitives — resource requests/limits, declarative specs, health-based rescheduling, label-based grouping — map directly to Borg concepts
  • The decision to open-source was strategic, not altruistic: Google wanted to standardize the ecosystem on patterns it already mastered
  • The security gaps in early Kubernetes (no default network isolation, permissive RBAC, no pod-level security controls) trace directly to Borg’s trusted-internal-network assumptions
  • Docker’s accessibility breakthrough created the demand; Google’s Borg experience supplied the architecture

What’s Next

EP02: The Container Wars → — Kubernetes 1.0, the CNCF formation, and the three-way fight between Docker Swarm, Apache Mesos, and Kubernetes for control of the container orchestration market.


Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Hardening Blueprint as Code — Declare Your OS Baseline in YAML

Reading Time: 6 minutes

OS Hardening as Code, Episode 2
Cloud AMI Security Risks · Linux Hardening as Code



TL;DR

  • A hardening runbook is a list of steps someone runs. A HardeningBlueprint YAML is a build artifact — if it wasn’t applied, the image doesn’t exist
  • Linux hardening as code means declaring your entire OS security baseline in a single YAML file and building it reproducibly across any provider
  • stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws either produces a hardened image or fails — there is no partial state
  • The blueprint includes: target OS/provider, compliance benchmark, Ansible roles, and per-control overrides with documented reasons
  • One blueprint file = one source of truth for your hardening posture, version-controlled and reviewable like any other infrastructure code
  • Post-build OpenSCAP scan runs automatically — the image only snapshots if it passes

The Problem: A Runbook That Gets Skipped Once Is a Runbook That Gets Skipped

Hardening runbook
       │
       ▼
  Human executes
  steps manually
       │
       ├─── 47 deployments: followed correctly
       │
       └─── 1 deployment at 2am: step 12 skipped
                    │
                    ▼
           Instance in production
           without audit logging,
           SSH password auth enabled,
           unnecessary services running

Linux hardening as code eliminates the human decision point. If the blueprint wasn’t applied, the image doesn’t exist.

EP01 showed that default cloud AMIs arrive pre-broken — unnecessary services, no audit logging, weak kernel parameters, SSH configured for convenience not security. The obvious response is a hardening script. But a script run by a human is still a process step. It can be skipped. It can be done halfway. It can drift across different engineers who each interpret “run the hardening script” slightly differently.


A production deployment last year. The platform team had a solid CIS L1 hardening runbook — 68 steps, well-documented, followed consistently. Then a critical incident at 2am required three new instances to be deployed on short notice. The engineer on call ran the provisioning script and, under pressure, skipped the hardening step with the intention of running it the next morning.

They didn’t. The three instances stayed in production unhardened for six weeks before an automated scan caught them. Audit logging wasn’t configured. SSH was accepting password authentication. Two unnecessary services were running that weren’t in the approved software list.

Nothing was breached. But the finding went into the next compliance report as a gap, the team spent a week remediating, and the post-mortem conclusion was “we need better runbook discipline.”

That’s the wrong conclusion. The runbook isn’t the problem. The problem is that hardening was a process step instead of a build constraint.


What Linux Hardening as Code Actually Means

Linux hardening as code is the same principle as infrastructure as code applied to OS security posture: the desired state is declared in a file, the file is the source of truth, and the execution is deterministic and repeatable.

HardeningBlueprint YAML
         │
         ▼
  stratum build
         │
  ┌──────┴──────────────────┐
  │  Provider Layer          │
  │  (cloud-init, disk       │
  │   names, metadata        │
  │   endpoint per provider) │
  └──────┬──────────────────┘
         │
  ┌──────┴──────────────────┐
  │  Ansible-Lockdown        │
  │  (CIS L1/L2, STIG —      │
  │   the hardening steps)   │
  └──────┬──────────────────┘
         │
  ┌──────┴──────────────────┐
  │  OpenSCAP Scanner        │
  │  (post-build verify)     │
  └──────┬──────────────────┘
         │
         ▼
  Golden Image (AMI/GCP image/Azure image)
  + Compliance grade in image metadata

The YAML file is what you write. Stratum handles the rest.


The HardeningBlueprint YAML

The blueprint is the complete, auditable declaration of your OS security posture:

# ubuntu22-cis-l1.yaml
name: ubuntu22-cis-l1
description: Ubuntu 22.04 CIS Level 1 baseline for production workloads
version: "1.0"

target:
  os: ubuntu
  version: "22.04"
  provider: aws
  region: ap-south-1
  instance_type: t3.medium

compliance:
  benchmark: cis-l1
  controls: all

hardening:
  - ansible-lockdown/UBUNTU22-CIS
  - role: custom-audit-logging
    vars:
      audit_log_retention_days: 90
      audit_max_log_file: 100

filesystem:
  tmp:
    type: tmpfs
    options: [nodev, nosuid, noexec]
  home:
    options: [nodev]

controls:
  - id: 1.1.2
    override: compliant
    reason: "tmpfs /tmp implemented via systemd unit — equivalent control"
  - id: 5.2.4
    override: compliant
    reason: "SSH timeout managed by session manager policy, not sshd_config"

Each section is explicit:

target — which OS, which version, which provider. This is the only provider-specific section. The compliance intent below it is portable.

compliance — which benchmark and which controls to apply. controls: all means every CIS L1 control. You can also specify controls: [1.x, 2.x] to scope to specific sections.

hardening — which Ansible roles to run. ansible-lockdown/UBUNTU22-CIS is the community CIS hardening role. You can add custom roles alongside it.

controls — documented exceptions. Not suppressions — overrides with a recorded reason. This is the difference between “we turned off this control” and “this control is satisfied by an equivalent implementation, documented here.”
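A blueprint validator of this kind reduces to a few structural checks. Here is a minimal sketch in Python — the required-section list and the override-needs-a-reason rule are illustrative assumptions, not Stratum’s actual schema:

```python
# Minimal sketch of the structural checks a blueprint validator performs.
# Section names mirror the example blueprint above; the rules themselves
# are illustrative assumptions, not Stratum's actual schema.

REQUIRED_SECTIONS = ("name", "target", "compliance", "hardening")

def validate_blueprint(blueprint: dict) -> list[str]:
    """Return a list of problems; an empty list means the blueprint passes."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if section not in blueprint:
            problems.append(f"missing required section: {section}")
    # An override without a recorded reason is just a suppression —
    # reject it at validation time, before any build instance launches.
    for ctrl in blueprint.get("controls", []):
        if ctrl.get("override") and not ctrl.get("reason"):
            problems.append(f"control {ctrl.get('id')}: override without a reason")
    return problems

if __name__ == "__main__":
    bp = {
        "name": "ubuntu22-cis-l1",
        "target": {"os": "ubuntu", "version": "22.04", "provider": "aws"},
        "compliance": {"benchmark": "cis-l1", "controls": "all"},
        "hardening": ["ansible-lockdown/UBUNTU22-CIS"],
        "controls": [{"id": "1.1.2", "override": "compliant"}],  # reason missing → flagged
    }
    print(validate_blueprint(bp))
```

The point of catching a reason-less override at validation time is the same structural argument as the rest of the pipeline: the undocumented exception never reaches a build.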


Building the Image

# Validate the blueprint before building
stratum blueprint validate ubuntu22-cis-l1.yaml

# Build — this will take 15-20 minutes
stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws

# Output:
# [15:42:01] Launching build instance...
# [15:42:45] Running ansible-lockdown/UBUNTU22-CIS (144 tasks)...
# [15:51:33] Running custom-audit-logging role...
# [15:52:11] Running post-build OpenSCAP scan (benchmark: cis-l1)...
# [15:54:08] Grade: A (98/100 controls passing)
# [15:54:09] 2 controls overridden (documented in blueprint)
# [15:54:10] Creating AMI snapshot: ami-0a7f3c9e82d1b4c05
# [15:54:47] Done. AMI tagged with compliance grade: cis-l1-A-98

If the post-build scan comes back below a configurable threshold, the build fails — no AMI is created. The instance is terminated. The image does not exist.

That is the structural guarantee. You cannot skip a build step at 2am because at 2am you’re calling stratum build, not running steps manually.


The Control Override Mechanism

The override mechanism is what separates this from checkbox compliance.

Every security benchmark has controls that conflict with how production environments actually work. CIS L1 recommends /tmp on a separate partition. Many cloud instances use tmpfs with equivalent nodev, nosuid, noexec mount options. The intent of the control is satisfied. The literal implementation differs.

Without an override mechanism, you have two bad options: fail the scan (noisy, meaningless), or configure the scanner to ignore the control (undocumented, invisible to auditors).

The blueprint’s controls section gives you a third option: record the override, document the reason, and let the scanner count it as compliant. The SARIF output and the compliance grade both reflect the documented state.

controls:
  - id: 1.1.2
    override: compliant
    reason: "tmpfs /tmp implemented via systemd unit — equivalent control"

This appears in the build log, in the SARIF export, and in the image metadata. An auditor reading the output sees: control 1.1.2 — compliant, documented exception, reason recorded. Not: control 1.1.2 — ignored.
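The gate logic is small enough to sketch. A Python illustration of how documented overrides fold into the grade and how the grade blocks image creation — the 0.95 threshold and the pass-count grading are assumptions for the example, not Stratum’s actual algorithm:

```python
# Illustrative sketch of the post-build gate: fold documented overrides
# into raw scan results, compute the grade, and block image creation
# below a threshold. The threshold value and the pass-count grading are
# assumptions for the example, not Stratum's actual logic.

def grade_scan(results: dict[str, bool], overrides: dict[str, str]) -> tuple[int, int]:
    """results maps control id -> raw pass/fail; overrides maps control id -> documented reason."""
    passing = sum(1 for cid, ok in results.items() if ok or cid in overrides)
    return passing, len(results)

def may_snapshot(results: dict[str, bool], overrides: dict[str, str],
                 threshold: float = 0.95) -> bool:
    """True only if the hardened image may be created at all."""
    passing, total = grade_scan(results, overrides)
    return total > 0 and passing / total >= threshold

if __name__ == "__main__":
    results = {f"ctl-{i}": True for i in range(96)}          # 96 raw passes
    results.update({"1.1.2": False, "5.2.4": False,          # overridden → counted compliant
                    "2.3.1": False, "4.1.9": False})         # genuine failures
    overrides = {"1.1.2": "tmpfs /tmp via systemd unit",
                 "5.2.4": "SSH timeout via session manager"}
    print(grade_scan(results, overrides), may_snapshot(results, overrides))
```

With 96 raw passes and 2 documented overrides out of 100 controls, the grade lands at 98/100 — above the example threshold, so the snapshot proceeds; a failing grade returns False and no image exists.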


What the Blueprint Gives You That a Script Doesn’t

                                 Hardening script             HardeningBlueprint YAML
Version-controlled               Possible but not enforced    Always — it’s a file
Auditable exceptions             Typically not                Built-in override mechanism
Post-build verification          Manual or none               Automatic OpenSCAP scan
Image exists only if hardened    No                           Yes — build fails if scan fails
Multi-cloud portability          Requires separate scripts    Provider flag, same YAML
Drift detection                  Not possible                 Rescan instance against original grade
Skippable at 2am                 Yes                          No — you’d have to change the build process

The last row is the one that matters. A script is skippable because there’s a human in the loop. A blueprint is a build artifact — you can’t deploy the image without the blueprint having been applied, because the image is what the blueprint produces.


Validating a Blueprint Before Building

# Syntax and schema validation
stratum blueprint validate ubuntu22-cis-l1.yaml

# Dry-run — show what Ansible tasks will run, what controls will be checked
stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws --dry-run

# Show all available controls for a benchmark
stratum blueprint controls --benchmark cis-l1 --os ubuntu --version 22.04

# Show what a specific control checks
stratum blueprint controls --id 1.1.2 --benchmark cis-l1

The dry-run output shows every Ansible task that will run, every OpenSCAP check that will fire, and flags any controls that might conflict with the provider environment before you’ve launched a build instance.


Production Gotchas

Build time is 15–25 minutes. Ansible-Lockdown applies 144+ tasks for CIS L1. Build this into your pipeline timing — don’t expect golden images in 3 minutes.

Cloud-init ordering matters. On AWS, certain hardening steps (sysctl tuning, PAM configuration) interact with cloud-init. The Stratum provider layer handles sequencing — but if you add custom hardening roles, test the cloud-init interaction explicitly.

Some CIS controls conflict with managed service requirements. AWS Systems Manager Session Manager requires specific SSH configuration. RDS requires specific networking settings. Use the controls override section to document these — don’t suppress them silently.

Kernel parameter hardening requires a reboot. Controls in the 3.x (network parameters) and 1.5.x (kernel modules) sections apply sysctl changes that take effect on reboot. The Stratum build process reboots the instance before the OpenSCAP scan — don’t skip the reboot if you’re building manually.


Key Takeaways

  • Linux hardening as code means the blueprint YAML is the build artifact — the image either exists and is hardened, or it doesn’t exist
  • The controls override mechanism is the difference between undocumented suppressions and auditable, reasoned exceptions
  • Post-build OpenSCAP scan runs automatically — a failing grade blocks image creation
  • One blueprint file is portable across providers (EP03 covers this): the compliance intent stays in the YAML, the cloud-specific details go in the provider layer
  • Version-controlling the blueprint gives you a complete history of what your OS security posture was at any point in time — the same way Terraform state tracks infrastructure

What’s Next

One blueprint, one provider. EP02 showed that the skip-at-2am problem is solved when hardening is a build artifact rather than a process step.

What it didn’t address: what happens when you expand to a second cloud. GCP uses different disk names. Azure cloud-init fires in a different order. The AWS metadata endpoint IP is different from every other provider. If you maintain separate hardening scripts per cloud, they drift within a month.

EP03 covers multi-cloud OS hardening: the same blueprint, six providers, no drift.

Next: multi-cloud OS hardening — one blueprint for AWS, GCP, and Azure

Get EP03 in your inbox when it publishes → linuxcent.com/subscribe

What Is LDAP — and Why It Was Invented to Replace Something Worse

Reading Time: 9 minutes

The Identity Stack, Episode 1
EP01 → EP02: LDAP Internals → EP03 → …

Focus Keyphrase: what is LDAP
Search Intent: Informational
Meta Description: LDAP solved 1980s authentication chaos and still powers enterprise logins today. Learn what it replaced, how it works, and why it’s still in your stack. (155 chars)


TL;DR

  • LDAP (Lightweight Directory Access Protocol) is a protocol for reading and writing directory information — most commonly, who is allowed to do what
  • It was built in 1993 as a “lightweight” alternative to X.500/DAP, which ran over the full OSI stack and was impossible to deploy on anything but mainframe hardware
  • Before LDAP, every server had its own /etc/passwd — 50 machines meant 50 separate user databases, managed manually
  • NIS (Network Information Service) was the first attempt to centralize this — it worked, then became a cleartext-credentials security liability
  • LDAP v3 (RFC 2251, 1997) is the version still in production today — 27 years of backwards compatibility
  • Everything you use today — Active Directory, Okta, Entra ID — is built on top of, or speaks, LDAP

The Big Picture: 50 Years of “Who Are You?”

1969–1980s   /etc/passwd — per-machine, no network auth
     │        50 servers = 50 user databases, managed manually
     │
     ▼
1984         Sun NIS / Yellow Pages — first centralized directory
     │        broadcast-based, no encryption, flat namespace
     │        Revolutionary for its era. A liability by the 1990s.
     │
     ▼
1988         X.500 / DAP — enterprise-grade directory services
     │        OSI protocol stack. Powerful. Impossible to deploy.
     │        Mainframe-class infrastructure required just to run it.
     │
     ▼
1993         RFC 1487 — LDAP v1
     │        Tim Howes, University of Michigan.
     │        Lightweight. TCP/IP. Actually deployable.
     │
     ▼
1997         RFC 2251 — LDAP v3
     │        SASL authentication. TLS. Controls. Referrals.
     │        The version still in production today.
     │
     ▼
2000s–now    Active Directory, OpenLDAP, 389-DS, FreeIPA
             Okta, Entra ID, Google Workspace
             LDAP DNA in every identity system on the planet.

What is LDAP? It’s the protocol that solved one of the most boring and consequential problems in computing: how do you know who someone is, across machines, at scale, without sending their password in cleartext?


The World Before LDAP

Before you understand why LDAP was invented, you need to feel the problem it solved.

Every Unix machine in the 1970s and 1980s managed its own users. When you created an account on a server, your username, UID, and hashed password went into /etc/passwd on that machine. Another machine had no idea you existed. If you needed access to ten servers, an administrator created ten separate accounts — manually, one by one. When you changed your password, each account had to be updated separately.

For a university with 200 machines and 10,000 students, this was chaos. For a company with offices in three cities, it was a full-time job for multiple sysadmins.

Machine A           Machine B           Machine C
/etc/passwd         /etc/passwd         /etc/passwd
vamshi:x:1001       (vamshi unknown)    vamshi:x:1004
alice:x:1002        alice:x:1001        alice:x:1003
bob:x:1003          bob:x:1002          (bob unknown)

Same people, different UIDs, different machines, no central truth.
File permissions become meaningless when UID 1001 means
different users on different hosts.

For every new hire, an admin logged in to every machine — rlogin and telnet in those days; SSH didn’t exist yet — and ran useradd. When someone left, you hoped whoever ran the offboarding remembered all the machines. Most organizations didn’t know their own attack surface because there was no single place to look.


Sun NIS: The First Attempt at Centralization

Sun Microsystems released NIS (Network Information Service) in 1984, originally called Yellow Pages — a name they had to drop after a trademark dispute with British Telecom. The idea was elegant: one server holds the authoritative /etc/passwd (and /etc/group, /etc/hosts, and a dozen other maps), and client machines query it instead of reading local files.

For the first time, you could create an account once and have it work across your entire network. For a generation of Unix administrators, NIS was liberating.

       NIS Master Server
       /var/yp/passwd.byname
              │
    ┌─────────┼──────────┐
    ▼         ▼          ▼
 Client A   Client B   Client C
 (query NIS — no local /etc/passwd needed)

NIS worked well — until it didn’t. The failure modes were structural:

No encryption. NIS responses were cleartext UDP. An attacker on the same network segment could capture the full password database with a packet sniffer. In 1984, “the network” meant a trusted corporate LAN. By the mid-1990s, it meant ethernet segments that included lab workstations, and the assumptions no longer held.

Broadcast-based discovery. NIS clients found servers by broadcasting on the local network. This worked on a single flat ethernet. It failed completely across routers, across buildings, and across WAN links. Multi-site organizations ended up running separate NIS domains with no connection between them — which partially defeated the purpose.

Flat namespace. NIS had no organizational hierarchy. One domain. Everything flat. You couldn’t have engineering and finance as separate administrative units. You couldn’t delegate user management to a department. One person — usually one overworked sysadmin — managed the whole thing.

UIDs had to match across all machines. If alice was UID 1002 on one server but UID 1001 on another, NFS file ownership became wrong. NIS enforced consistency, but onboarding a new machine into an existing network required manually auditing UID conflicts across the entire directory. Get one wrong and files end up owned by the wrong person.
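The UID-consistency audit is easy to make concrete. A Python sketch that finds UIDs claimed by different usernames across a fleet — the host maps below are hypothetical, mirroring the diagram earlier:

```python
# Concrete version of the UID-consistency problem: the same UID meaning
# different users on different hosts. The passwd maps here are hypothetical.

def uid_conflicts(hosts: dict[str, dict[str, int]]) -> dict[int, set[str]]:
    """hosts maps hostname -> {username: uid}. Return every UID claimed
    by more than one distinct username across the fleet."""
    claims: dict[int, set[str]] = {}
    for users in hosts.values():
        for name, uid in users.items():
            claims.setdefault(uid, set()).add(name)
    return {uid: names for uid, names in claims.items() if len(names) > 1}

if __name__ == "__main__":
    fleet = {
        "machine-a": {"vamshi": 1001, "alice": 1002, "bob": 1003},
        "machine-b": {"alice": 1001, "bob": 1002},
        "machine-c": {"vamshi": 1004, "alice": 1003},
    }
    # UID 1001 is vamshi on one host and alice on another — NFS file
    # ownership is now wrong for both of them.
    print(uid_conflicts(fleet))
```

This is exactly the manual audit NIS administrators ran before onboarding a new machine — and exactly the class of bookkeeping a real directory makes unnecessary.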

NIS worked for thousands of installations from 1984 to the mid-1990s. It also ended careers when it failed. What the industry needed was a hierarchical, structured, encrypted, scalable directory service.


X.500 and DAP: The Right Idea, Wrong Protocol

The OSI (Open Systems Interconnection) standards body had an answer: X.500 directory services. X.500 was comprehensive, hierarchical, globally federated. The ITU-T published the standard in 1988, and it looked like exactly what enterprises needed.

X.500 Directory Information Tree (DIT)
              c=US                   ← country
                │
         o=University                ← organization
                │
         ┌──────┴──────┐
     ou=CS           ou=Physics      ← organizational units
         │
     cn=Tim Howes                    ← common name (person)
     telephoneNumber: +1-734-...
     mail: [email protected]

This data model — the hierarchy, the object classes, the distinguished names — is exactly what LDAP inherited. The DIT, the cn=, ou=, dc= notation in every LDAP query you’ve ever read: all of it came from X.500.

The problem was DAP: the Directory Access Protocol that X.500 used to communicate.

DAP ran over the full OSI protocol stack. Not TCP/IP — OSI. Seven layers, all of which required specialized software that in 1988 only mainframe and minicomputer vendors had implemented. A university department wanting to run X.500 needed hardware and software licenses that cost as much as a small car. The vast majority of workstations couldn’t speak OSI at all.

The data model was sound. The transport was impractical.

X.500 / DAP (1988)              LDAP v1 (1993)
──────────────────              ──────────────
Full OSI stack (7 layers)  →    TCP/IP only
Mainframe-class hardware   →    Any Unix box with a TCP stack
$50,000+ deployment cost   →    Free (reference implementation)
Vendor-specific OSI impl.  →    Standard socket API
Zero internet adoption     →    Universities deployed immediately

The Invention: LDAP at the University of Michigan

Tim Howes was at the University of Michigan in the early 1990s. The university was running X.500 for its directory — faculty, staff, student contact information, credentials. The data model was good. The protocol was the problem.

His insight, working with colleagues Wengyik Yeong and Steve Kille: strip X.500 down to what actually needs to function over a TCP/IP connection. Keep the hierarchical data model. Throw away the OSI transport. The result was the Lightweight Directory Access Protocol.

RFC 1487, published July 1993, described LDAP v1. It preserved the X.500 directory information model — the hierarchy, the object classes, the distinguished name format — and mapped it onto a protocol that could run over a simple TCP socket on port 389.

No specialized hardware. No OSI. If you had a Unix machine and TCP/IP, you could run LDAP. By 1993, that meant virtually every workstation and server in every university and most enterprises.

The University of Michigan deployed it immediately. Within two years, organizations across the internet were running the reference implementation.

LDAP v2 (RFC 1777, 1995) cleaned up the protocol. LDAP v3 (RFC 2251, 1997) is the version in production today — adding SASL authentication (which enables Kerberos integration), TLS support, referrals for federated directories, and extensible controls for server-side operations. The RFC that standardized the internet’s primary identity protocol is 27 years old and still running.


What LDAP Actually Is

LDAP is a client-server protocol for reading and writing a directory — a structured, hierarchical database optimized for reads.

Every entry in the directory has a Distinguished Name (DN) that describes its position in the hierarchy, and a set of attributes defined by its object classes. A person entry looks like this:

dn: cn=vamshi,ou=engineers,dc=linuxcent,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: vamshi
uid: vamshi
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/vamshi
loginShell: /bin/bash
mail: [email protected]

The DN reads right-to-left: domain linuxcent.com (dc=linuxcent,dc=com) → organizational unit engineers → common name vamshi. Every entry in the directory has a unique path through the tree — there’s no ambiguity about which vamshi you mean.
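The right-to-left reading can be shown in a few lines of Python. A naive DN splitter — it ignores RFC 4514 escaping (a literal `\,` inside a value), so it’s an illustration, not production parsing:

```python
# Naive DN parser: split into RDNs and walk the hierarchy root-first.
# Ignores RFC 4514 escaping (e.g. "\," inside a value) — fine for
# illustration, wrong for production directory data.

def parse_dn(dn: str) -> list[tuple[str, str]]:
    """Split a DN into (attribute, value) RDN pairs, leftmost first."""
    rdns = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        rdns.append((attr.strip(), value.strip()))
    return rdns

dn = "cn=vamshi,ou=engineers,dc=linuxcent,dc=com"
# The root of the tree is the right-most RDN, so reverse to read top-down:
# dc=com → dc=linuxcent → ou=engineers → cn=vamshi
for attr, value in reversed(parse_dn(dn)):
    print(f"{attr}={value}")
```
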

LDAP defines a small set of protocol operations: Bind (authenticate), Search, Add, Modify, Delete, ModifyDN (rename), Compare, and Abandon, plus Unbind and the Extended operation. Most of what a Linux authentication system does with LDAP reduces to two: Bind (prove you are who you say you are) and Search (tell me everything you know about this user).

When your Linux machine authenticates an SSH login against LDAP:

1. User types password
2. PAM calls pam_sss (or pam_ldap on older systems)
3. SSSD issues a Bind to the LDAP server: "I am cn=vamshi, and here is my credential"
4. LDAP server verifies the bind → success or failure
5. SSSD issues a Search: "give me the posixAccount attributes for uid=vamshi"
6. LDAP returns uidNumber, gidNumber, homeDirectory, loginShell
7. PAM creates the session with those attributes

The entire login flow is two LDAP operations: one Bind, one Search.
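The two-operation flow can be modeled without a directory server at all. A toy in-memory stand-in in Python — the entry and credential are hypothetical, and a real server stores a password hash, never plaintext:

```python
# Toy model of the two-operation login flow: Bind (verify credential),
# then Search (fetch posixAccount attributes). An in-memory stand-in
# for a directory server — not a real LDAP client or server.

DIRECTORY = {
    "cn=vamshi,ou=engineers,dc=linuxcent,dc=com": {
        "password": "s3cret",  # hypothetical; real servers store a hash
        "uidNumber": 1001,
        "gidNumber": 1001,
        "homeDirectory": "/home/vamshi",
        "loginShell": "/bin/bash",
    }
}

def bind(dn: str, password: str) -> bool:
    """Operation 1 — Bind: prove you are who you say you are."""
    entry = DIRECTORY.get(dn)
    return entry is not None and entry["password"] == password

def search(dn: str, attrs: list[str]) -> dict:
    """Operation 2 — Search: return the requested attributes for an entry."""
    entry = DIRECTORY.get(dn, {})
    return {a: entry[a] for a in attrs if a in entry}

def login(dn: str, password: str):
    """What PAM/SSSD effectively does: Bind, then Search, then build a session."""
    if not bind(dn, password):
        return None
    return search(dn, ["uidNumber", "gidNumber", "homeDirectory", "loginShell"])
```

Everything a session needs — UID, GID, home, shell — comes back from the second operation; the first only answers yes or no.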


Try It Right Now

You don’t need to set up an LDAP server to run your first query. There’s a public test LDAP directory at ldap.forumsys.com:

# Query a public LDAP server — no setup required
ldapsearch -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w password \
  "(objectClass=inetOrgPerson)" \
  cn mail uid

# What you get back (abbreviated):
# dn: uid=tesla,dc=example,dc=com
# cn: Tesla
# mail: [email protected]
# uid: tesla
#
# dn: uid=einstein,dc=example,dc=com
# cn: Albert Einstein
# mail: [email protected]
# uid: einstein

Decode what you just ran:

  • -x — simple authentication (username/password bind, not Kerberos/SASL)
  • -H ldap://ldap.forumsys.com — the LDAP server URI, port 389
  • -b "dc=example,dc=com" — the base DN, the top of the subtree to search
  • -D "cn=read-only-admin,dc=example,dc=com" — the bind DN (who you’re authenticating as)
  • -w password — the bind password
  • "(objectClass=inetOrgPerson)" — the search filter: return entries that are people
  • cn mail uid — the attributes to return (default returns all)

That’s a live LDAP query returning real directory entries from a server running RFC 2251 — the same protocol Tim Howes designed in 1993.

On your own Linux system, if you’re joined to AD or LDAP, you can query it the same way with your domain credentials.


Why It Never Went Away

LDAP v3 was finalized in 1997. In 2024, it’s still the protocol every enterprise directory speaks. Why?

Because it became the lingua franca of enterprise identity before any replacement existed. Every application that needs to authenticate users — VPN concentrators, mail servers, network switches, web applications, HR systems — implemented LDAP support. Every directory service Microsoft, Red Hat, Sun, and Novell shipped stored data in an LDAP-accessible tree.

When Microsoft built Active Directory in 1999, they built it on top of LDAP + Kerberos. When your Linux machine joins an AD domain, it speaks LDAP to enumerate users and groups, and Kerberos to verify credentials. When Okta or Entra ID syncs with your on-premises directory, it uses LDAP Sync (or a modern protocol that maps directly to LDAP semantics).

The protocol is old. The ecosystem built on top of it is so deep that replacing LDAP would mean simultaneously replacing every enterprise application that depends on it. Nobody has done that. Nobody has had to.

What happened instead is the stack got taller. LDAP at the bottom, Kerberos for network authentication, SSSD as the local caching daemon, PAM as the Linux integration layer, SAML and OIDC at the top for web-based federation. The directory is still LDAP. The interfaces above it evolved.

That full stack — from the directory at the bottom to Zero Trust at the top — is what this series covers.


⚠ Common Misconceptions

“LDAP is an authentication protocol.” LDAP is a directory protocol. It stores identity information and can verify credentials (via Bind). Authentication in modern stacks is typically Kerberos or OIDC — LDAP provides the directory backing it.

“LDAP is obsolete.” LDAP is the storage layer for Active Directory, OpenLDAP, 389-DS, FreeIPA, and every enterprise IdP’s on-premises sync. It is ubiquitous. What’s changed is the interface layer above it.

“You need Active Directory to run LDAP.” Active Directory uses LDAP. OpenLDAP, 389-DS, FreeIPA, and Apache Directory Server are all standalone LDAP implementations. You can run a directory without Microsoft.

“LDAP and LDAPS are different protocols.” LDAP is the protocol. LDAPS is LDAP over TLS on port 636. StartTLS is LDAP on port 389 with an in-session upgrade to TLS. Same protocol, different transport security.


Framework Alignment

Domain Relevance
CISSP Domain 5: Identity and Access Management LDAP is the foundational directory protocol for centralized identity stores — the base layer of every enterprise IAM stack
CISSP Domain 4: Communications and Network Security Port 389 (LDAP), 636 (LDAPS), 3268/3269 (AD Global Catalog) — transport security decisions affect every directory deployment
CISSP Domain 3: Security Architecture and Engineering DIT hierarchy, schema design, replication topology — directory structure is an architectural security decision
NIST SP 800-63B LDAP as a credential service provider (CSP) backing enterprise authenticators

Key Takeaways

  • LDAP was invented to solve a real, painful problem: the authentication chaos that NIS couldn’t fix and X.500/DAP was too expensive to deploy
  • It inherited the right thing from X.500 (the hierarchical data model) and replaced the right thing (the impractical OSI transport with TCP/IP)
  • NIS was the predecessor that worked until it didn’t — its failure modes (no encryption, flat namespace, broadcast discovery) are exactly what LDAP was designed to fix
  • LDAP v3 (RFC 2251, 1997) is still the production standard — 27 years later
  • Active Directory, OpenLDAP, FreeIPA, Okta, Entra ID — every enterprise identity system either runs LDAP or speaks it
  • The full authentication stack is deeper than LDAP: the next 12 episodes peel it apart layer by layer

What’s Next

EP01 stayed at the design level — the problem, the predecessor failures, the invention, the data model.

EP02 goes inside the wire. The DIT structure, DN syntax, object classes, schema, and the BER-encoded bytes that actually travel from the server to your authentication daemon. Run ldapsearch against your own directory and read every line of what comes back.

Next: LDAP Internals: The Directory Tree, Schema, and What Travels on the Wire

Get EP02 in your inbox when it publishes → linuxcent.com/subscribe

XDP — Packets Processed Before the Kernel Knows They Arrived

Reading Time: 10 minutes

eBPF: From Kernel to Cloud, Episode 7
What Is eBPF? · The BPF Verifier · eBPF vs Kernel Modules · eBPF Program Types · eBPF Maps · CO-RE and libbpf · XDP


TL;DR

  • XDP fires before sk_buff allocation — the earliest possible kernel hook for packet processing
    (sk_buff = the kernel’s socket buffer — every normal packet requires one to be allocated, which adds up fast at scale)
  • Three modes: native (in-driver, full performance), generic (fallback, no perf gain), offloaded (NIC ASIC)
  • XDP context is raw packet bytes — no socket, no cgroup, no pod identity; handle non-IP traffic explicitly
  • Every pointer dereference requires a bounds check against data_end — the verifier enforces this
  • BPF_MAP_TYPE_LPM_TRIE is the right map type for IP prefix blocklists — handles /32 hosts and CIDRs together
  • XDP metadata area enables coordination with TC programs — classify at XDP speed, enforce with pod context at TC

XDP — eXpress Data Path — is the eBPF hook that fires before sk_buff allocation: the earliest possible kernel hook, and the reason iptables rules can be technically correct while still burning CPU at high packet rates. I had iptables DROP rules installed and working during a SYN flood. Packets were being dropped. CPU was still burning at 28% software interrupt time. The rules weren’t wrong. The hook was in the wrong place.


A client’s cluster was under a SYN flood — roughly 1 million packets per second from a rotating set of source IPs. We had iptables DROP rules installed within the first ten minutes, blocklist updated every 30 seconds as new source ranges appeared. The flood traffic dropped in volume. But node CPU stayed high. The %si column in top — software interrupt time — was sitting at 25–30%.

%si in top is the percentage of CPU time spent handling hardware interrupts and kernel-level packet processing — separate from your application’s CPU usage. On a quiet managed cluster (EKS, GKE) this is usually under 1%. Under a packet flood, high %si means the kernel is burning cycles just receiving packets, before your workloads run at all. It’s the metric that tells you the problem is below the application layer.
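The number itself comes from /proc/stat: the `cpu` line’s seventh counter is softirq jiffies. A Python sketch of the arithmetic top performs between two samples, assuming the standard proc(5) field layout:

```python
# %si is derived from /proc/stat: the "cpu" line's 7th field is softirq
# jiffies, taken relative to the sum of all fields, sampled twice.
# Field order per proc(5): user nice system idle iowait irq softirq
# steal guest guest_nice. The sample lines below are synthetic.

def softirq_pct(cpu_line_before: str, cpu_line_after: str) -> float:
    """Compute %si between two 'cpu ...' lines from /proc/stat."""
    def parse(line: str) -> tuple[int, int]:
        fields = [int(x) for x in line.split()[1:]]
        return fields[6], sum(fields)  # (softirq jiffies, total jiffies)
    si0, t0 = parse(cpu_line_before)
    si1, t1 = parse(cpu_line_after)
    return 100.0 * (si1 - si0) / (t1 - t0)

if __name__ == "__main__":
    before = "cpu 0 0 0 0 0 0 0 0 0 0"
    after = "cpu 10 0 10 50 0 0 30 0 0 0"   # 30 of 100 new jiffies in softirq
    print(softirq_pct(before, after))        # 30.0 — a flood-level %si
```

On a live node you would read `/proc/stat` twice with a sleep in between; the synthetic lines here keep the arithmetic visible.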

I didn’t understand why. The iptables rules were matching. Packets were being dropped. Why was the CPU still burning?

The answer is where in the kernel the drop was happening. iptables fires inside the netfilter framework — after the kernel has already allocated an sk_buff for the packet, done DMA from the NIC ring buffer, and traversed several netfilter hooks.

netfilter is the Linux kernel subsystem that handles packet filtering, NAT, and connection tracking. iptables is the userspace CLI that writes rules into it. At high packet rates, the cost isn’t the rule match — it’s the kernel work that happens before the rule is evaluated. At 1 million packets per second, the allocation cost alone is measurable. The attack was “slow” in network terms, but fast enough to keep the kernel memory allocator and netfilter traversal continuously busy.

XDP fires before any of that. Before sk_buff. Before routing. Before the kernel network stack has touched the packet at all. A DROP decision at the XDP layer costs one bounds check and a return value. Nothing else.

Quick Check: Is XDP Running on Your Cluster?

Before the data path walkthrough — a two-command check you can run right now on any cluster node:

# SSH into a worker node, then:
bpftool net list

On a Cilium-managed node, you’ll see something like:

eth0 (index 2):
        xdpdrv  id 44

lxc8a3f21b (index 7):
        tc ingress id 47
        tc egress  id 48

Reading the output:
xdpdrv — XDP in native mode, running in the NIC driver before sk_buff (this is what you want)
xdpgeneric instead of xdpdrv — generic mode, runs after sk_buff allocation, no performance benefit
No XDP line at all — XDP not deployed; your CNI uses iptables for service forwarding

If you’re on EKS with aws-vpc-cni or GKE with kubenet, you likely won’t see XDP here — those CNIs use iptables. This section explains why teams that migrate to Cilium see lower node CPU under the same traffic load.

Where XDP Sits in the Kernel Data Path

The standard Linux packet receive path:

NIC hardware
  ↓
DMA to ring buffer (kernel memory)
  ↓
[XDP hook — fires here, before sk_buff]
  ├── XDP_DROP   → discard, zero further allocation
  ├── XDP_PASS   → continue to kernel network stack
  ├── XDP_TX     → transmit back out the same interface
  └── XDP_REDIRECT → forward to another interface or CPU
  ↓
sk_buff allocated from slab allocator
  ↓
netfilter: PREROUTING
  ↓
IP routing decision
  ↓
netfilter: INPUT or FORWARD
  ↓  [iptables fires somewhere in here]
socket receive queue
  ↓
userspace application

XDP runs at the driver level, in the NAPI poll context — the same context where the driver is processing received packets off the ring buffer. The program runs before the kernel’s general networking code gets involved. There’s no sk_buff, no reference counting, no slab allocation.

NAPI (New API) is how modern Linux receives packets efficiently. Instead of one CPU interrupt per packet (catastrophically expensive at 1Mpps), the NIC fires a single interrupt, then the kernel polls the NIC ring buffer in batches until it’s drained. XDP runs inside this polling loop — as close to the hardware as software gets without running on the NIC itself.

At 1Mpps, the difference between XDP_DROP and an iptables DROP is roughly the cost of allocating and then immediately freeing 1 million sk_buff objects per second — plus netfilter traversal, connection tracking lookup, and the DROP action itself. That’s the CPU time that was burning.

After moving the blocklist to an XDP program, the %si on the same traffic load dropped from 28% to 3%.

XDP Modes

XDP operates in three modes, and which one you get depends on your NIC driver.

Native XDP (XDP_FLAGS_DRV_MODE)

The eBPF program runs directly in the NIC driver’s NAPI poll function — in interrupt context, before sk_buff. This is the only mode that delivers the full performance benefit.

Driver support is required. The widely supported drivers: mlx4, mlx5 (Mellanox/NVIDIA), i40e, ice (Intel), bnxt_en (Broadcom), virtio_net (KVM/QEMU), veth (containers). Check support:

# Verify native XDP support on your driver
ethtool -i eth0 | grep driver
# driver: mlx5_core  ← supports native XDP

# Load in native mode
ip link set dev eth0 xdpdrv obj blocklist.bpf.o sec xdp

The veth driver supporting native XDP is what makes XDP meaningful inside Kubernetes pods — each pod’s veth interface can run an XDP program at wire speed.

Generic XDP (XDP_FLAGS_SKB_MODE)

Fallback for drivers that don’t support native XDP. The program still runs, but it runs after sk_buff allocation, as a hook in the netif_receive_skb path. No performance benefit over early netfilter. sk_buff is still allocated and freed for every packet.

# Generic mode — development and testing only
ip link set dev eth0 xdpgeneric obj blocklist.bpf.o sec xdp

Use this for development on a laptop with a NIC that lacks native XDP support. Never benchmark with it and never use it in production expecting performance gains.

Offloaded XDP

Runs on the NIC’s own processing unit (SmartNIC). Zero CPU involvement — the XDP decision happens in NIC hardware. Supported by Netronome Agilio NICs. Rare in production, but the theoretical ceiling for XDP performance.

The XDP Context: What Your Program Can See

XDP programs receive one argument: struct xdp_md.

struct xdp_md {
    __u32 data;           // offset of first packet byte in the ring buffer page
    __u32 data_end;       // offset past the last byte
    __u32 data_meta;      // metadata area before data (XDP metadata for TC cooperation)
    __u32 ingress_ifindex;
    __u32 rx_queue_index;
};

data and data_end are used as follows:

void *data     = (void *)(long)ctx->data;
void *data_end = (void *)(long)ctx->data_end;

// Every pointer dereference must be bounds-checked first
struct ethhdr *eth = data;
if ((void *)(eth + 1) > data_end)
    return XDP_PASS;  // malformed or truncated packet

The verifier enforces these bounds checks — every pointer derived from ctx->data must be validated before use. The error invalid mem access 'inv' means you dereferenced a pointer without checking the bounds. This is the most common cause of XDP program rejection.

For operators (not writing XDP code): you’ll see invalid mem access 'inv' in logs when an eBPF program is rejected at load time — most commonly during a Cilium upgrade, or when a custom tool is deployed on a kernel it wasn’t built for. The fix lives in the eBPF source or the tool version, not in cluster config: upgrade the tool or check its supported kernel matrix.

What XDP cannot see:
– Socket state — no socket buffer exists yet
– Cgroup hierarchy — no pod identity
– Process information — no PID, no container
– Connection tracking state (unless you maintain it yourself in a map)

XDP is ingress-only. It fires on packets arriving at an interface, not departing. For egress, TC is the hook.

What This Means on Your Cluster Right Now

Every Cilium-managed node has XDP programs running. Here’s how to see them:

# All XDP programs on all interfaces — this is the full picture
bpftool net list
# Sample output on a Cilium node:
#
# eth0 (index 2):
#         xdpdrv  id 44         ← XDP in native mode on the node uplink
#
# lxc8a3f21b (index 7):
#         tc ingress id 47      ← TC enforces NetworkPolicy on pod ingress
#         tc egress  id 48      ← TC enforces NetworkPolicy on pod egress
#
# "xdpdrv"     = native mode (runs in NIC driver, before sk_buff — full performance)
# "xdpgeneric" = fallback mode (after sk_buff — no performance benefit over iptables)

# Which mode is active?
ip link show eth0 | grep xdp
# xdp mode drv  ← native (full performance)
# xdp mode generic  ← fallback (no perf benefit)

# Details on the XDP program ID
bpftool prog show id $(bpftool net show dev eth0 | grep xdp | awk '{print $NF}')
# Shows: loaded_at, tag, xlated bytes, jited bytes, map IDs

The map IDs in that output are the BPF maps the XDP program is using — typically the service VIP table for DNAT, and in security tools, the blocklist or allowlist. To see what’s in them:

# List maps used by the XDP program
bpftool prog show id <PROG_ID> | grep map_ids

# Dump the service map (for a Cilium node — this is the load balancer table)
bpftool map dump id <MAP_ID> | head -40

For a blocklist scenario — like the SYN flood mitigation above — BPF_MAP_TYPE_LPM_TRIE is the standard data structure. A lookup for 192.168.1.45 hits a 192.168.1.0/24 entry in the same map, so one lookup handles both host /32s and CIDR ranges. The practical operational check:

# Count entries in an XDP filter map
bpftool map dump id <BLOCKLIST_MAP_ID> | grep -c "key"

# Verify XDP is active and inspect program details
bpftool net show dev eth0
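For readers who want the lookup semantics spelled out, this userspace sketch emulates what the kernel LPM trie returns for the 192.168.1.45-vs-192.168.1.0/24 case. A linear scan stands in for the trie, and addresses are in host byte order for readability — the real kernel key is struct bpf_lpm_trie_key with a prefixlen followed by the address in network byte order:

```c
/* Longest-prefix-match semantics of BPF_MAP_TYPE_LPM_TRIE, simulated.
   One lookup matches both /32 host entries and CIDR ranges. */
#include <stdint.h>

struct lpm_entry { uint32_t prefix; uint8_t prefixlen; };  /* host order */

/* Return the longest matching prefix length, or -1 for no match. */
int lpm_lookup(const struct lpm_entry *tbl, int n, uint32_t addr)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        uint32_t mask = tbl[i].prefixlen ? ~0u << (32 - tbl[i].prefixlen) : 0;
        if ((addr & mask) == (tbl[i].prefix & mask) && tbl[i].prefixlen > best)
            best = tbl[i].prefixlen;
    }
    return best;
}
```

The kernel trie gives you this answer in one O(key-length) walk instead of a scan — which is why a single map can hold a mix of individual hosts and whole subnets.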

XDP Metadata: Cooperating with TC

Think of it as a sticky note attached to the packet. XDP writes the note at line speed (no context about pods or sockets). TC reads it later when full context is available, and acts on it. The packet carries the note between them.

More precisely: XDP can write metadata into the area before ctx->data — a small scratch space that survives as the packet moves from XDP to the TC hook. This is the coordination mechanism between the two eBPF layers.

The pattern is: XDP classifies at speed (no sk_buff overhead), TC enforces with pod context (where you have socket identity). XDP writes a classification tag into the metadata area. TC reads it and makes the policy decision.
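The handoff can be sketched in userspace. This toy model places a scratch region directly before the packet bytes, has the XDP-side function stamp a tag, and has the TC-side function read it. In real eBPF the region is reserved with bpf_xdp_adjust_meta() and read through the TC program's data_meta pointer; the tag names and values here are invented:

```c
/* Toy model of the XDP→TC metadata handoff: a scratch area before the
   packet bytes carries a classification tag between the two hooks. */
#include <stdint.h>
#include <string.h>

#define META_LEN 4
enum { CLASS_OK = 0, CLASS_SUSPECT = 1 };   /* hypothetical tag values */

struct frame { uint8_t meta[META_LEN]; uint8_t pkt[1500]; };

/* XDP stage: classify on raw bytes only, stamp the tag. */
void xdp_classify(struct frame *f, int looks_suspect)
{
    uint32_t tag = looks_suspect ? CLASS_SUSPECT : CLASS_OK;
    memcpy(f->meta, &tag, sizeof(tag));
}

/* TC stage: read the tag where pod/socket context is available. */
int tc_enforce(const struct frame *f)
{
    uint32_t tag;
    memcpy(&tag, f->meta, sizeof(tag));
    return tag == CLASS_SUSPECT ? 1 /* drop */ : 0 /* allow */;
}
```

The division of labor is the point: the expensive per-packet classification runs where packets are cheapest to touch, and the context-dependent decision runs where context exists.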

This is the architecture behind tools like Pro-NDS: the fast-path pattern matching (connection tracking, signature matching) happens at XDP before any kernel allocation. The enforcement action — which requires knowing which pod sent this — happens at TC using the metadata XDP already wrote.

From an operational standpoint, when you see two eBPF programs on the same interface (one XDP, one TC), this pipeline is the likely explanation. The bpftool net list output shows both:

bpftool net list
# xdpdrv id 44 on eth0       ← XDP classifier running at line rate
# tc ingress id 47 on eth0   ← TC enforcer reading XDP metadata

How Cilium Uses XDP

Not running Cilium? On EKS with aws-vpc-cni or GKE with kubenet, service forwarding uses iptables NAT rules and conntrack instead. You can see this with iptables -t nat -L -n on a node — look for the KUBE-SVC-* chains. Those chains are what XDP replaces in a Cilium cluster. This is why teams migrating from kube-proxy to Cilium report lower node CPU at high connection rates — it’s not magic, it’s hook placement.

On a Cilium node, XDP handles the load balancing path for ClusterIP services. When a packet arrives at the node destined for a ClusterIP:

  1. XDP program checks the destination IP against a BPF LRU hash map of known service VIPs
  2. On a match, it performs DNAT — rewriting the destination IP to a backend pod IP
  3. Returns XDP_TX or XDP_REDIRECT to forward directly

No iptables NAT rules. No conntrack state machine. No socket buffer allocation for the routing decision. The lookup is O(1) in a BPF hash map.
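The lookup-and-rewrite step can be sketched as plain C. A small static array stands in for the BPF hash map, and all addresses are invented; a real XDP program must also update the IP header checksum (and any L4 checksum) after rewriting the destination:

```c
/* Sketch of the ClusterIP DNAT step: look the destination VIP up in a
   service table and rewrite it to a backend pod IP. An array stands in
   for the BPF hash map; checksum fixup is omitted (see lead-in). */
#include <stdint.h>

struct vip_entry { uint32_t vip; uint32_t backend; };

static const struct vip_entry svc_table[] = {
    { 0x0A600001u, 0x0A0F0203u },   /* 10.96.0.1  -> 10.15.2.3  */
    { 0x0A60002Au, 0x0A0F0411u },   /* 10.96.0.42 -> 10.15.4.17 */
};

/* Return 1 and rewrite *daddr if it is a known service VIP (the
   XDP_TX/XDP_REDIRECT path); return 0 to hand the packet on
   unchanged (the XDP_PASS path). */
int dnat_lookup(uint32_t *daddr)
{
    for (unsigned i = 0; i < sizeof(svc_table) / sizeof(svc_table[0]); i++) {
        if (svc_table[i].vip == *daddr) {
            *daddr = svc_table[i].backend;
            return 1;
        }
    }
    return 0;
}
```

In the real program the table is a BPF LRU hash map populated by the Cilium agent as Endpoints change, so the data path never consults userspace.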

# See Cilium's XDP program on the node uplink
ip link show eth0 | grep xdp
# xdp  (attached, native mode)

# The XDP program details
bpftool prog show pinned /sys/fs/bpf/cilium/xdp

# Load time, instruction count, JIT-compiled size
bpftool prog show id $(bpftool net list | grep xdp | awk '{print $NF}')

At production scale — 500+ nodes, 50k+ services — removing iptables from the service forwarding path with XDP reduces per-node CPU utilization measurably. The effect is most visible on nodes handling high connection rates to cluster services.

Operational Inspection

# All XDP programs on all interfaces
bpftool net list

# Check XDP mode (native, generic, offloaded)
ip link show | grep xdp

# Per-interface stats — note many drivers do not count XDP_DROP here;
# driver-specific XDP counters usually live under ethtool -S
cat /sys/class/net/eth0/statistics/rx_dropped

# XDP drop counters exposed via bpftool
bpftool map dump id <stats_map_id>

# Verify XDP is active and show program details
bpftool net show dev eth0

Common Mistakes

| Mistake | Impact | Fix |
| --- | --- | --- |
| Missing bounds check before pointer dereference | Verifier rejects: “invalid mem access” | Always check (void *)(ptr + 1) > data_end before use |
| Using generic XDP for performance testing | Misleading numbers — sk_buff still allocated | Test in native mode only; check ip link output for mode |
| Not handling non-IP traffic (ARP, IPv6, VLAN) | ARP breaks, IPv6 drops, VLAN-tagged frames dropped | Check eth->h_proto and return XDP_PASS for non-IP |
| XDP for egress or pod identity | No socket context at XDP; XDP is ingress only | Use TC egress for pod-identity-aware egress policy |
| Forgetting BPF_F_NO_PREALLOC on LPM trie | Full memory allocated at map creation for all entries | Always set this flag for sparse prefix tries |
| Blocking ARP by accident in a /24 blocklist | Loss of layer-2 reachability within the blocked subnet | Separate ARP handling before the IP blocklist check |

Key Takeaways

  • XDP fires before sk_buff allocation — the earliest possible kernel hook for packet processing
  • Three modes: native (in-driver, full performance), generic (fallback, no perf gain), offloaded (NIC ASIC)
  • XDP context is raw packet bytes — no socket, no cgroup, no pod identity; handle non-IP traffic explicitly
  • Every pointer dereference requires a bounds check against data_end — the verifier enforces this
  • BPF_MAP_TYPE_LPM_TRIE is the right map for IP prefix blocklists — handles /32 hosts and CIDRs together
  • XDP metadata area enables coordination with TC programs — classify at XDP speed, enforce with pod context at TC

What’s Next

XDP handles ingress at the fastest possible point but has no visibility into which pod sent a packet. EP08 covers TC eBPF — the hook that fires after sk_buff allocation, where socket and cgroup context exist.

TC is how Cilium implements pod-to-pod network policy without iptables. It’s also where stale programs from failed Cilium upgrades leave ghost filters that cause intermittent packet drops. Knowing how TC programs chain — and how to find and remove stale ones — is a specific, concrete operational skill.

Next: TC eBPF — pod-level network policy without iptables

Get EP08 in your inbox when it publishes → linuxcent.com/subscribe