The Kubernetes Controller Reconcile Loop: How CRDs Come Alive at Runtime

Reading Time: 7 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 6
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • The Kubernetes controller reconcile loop is the mechanism that makes CRDs do something — it watches custom resources, compares desired state (spec) to actual state, and takes actions to close the gap
    (reconcile = “make actual match desired”; the loop runs repeatedly because the world is not static — things drift, fail, and change)
  • Controllers do not receive events the way webhooks do — they receive object names from a work queue, then re-read the full object from the local informer cache
  • The reconcile function is idempotent: calling it ten times with the same object must produce the same result as calling it once
  • controller-runtime is the Go library that provides the informer cache, work queue, and reconciler interface — kubebuilder scaffolds controllers on top of it
  • Kubernetes uses the same reconcile loop internally — the Deployment controller, ReplicaSet controller, and node lifecycle controller all follow this exact pattern
  • A failed reconcile returns an error or explicit requeue request; the controller retries with exponential backoff, not an infinite tight loop

The Big Picture

  THE KUBERNETES CONTROLLER RECONCILE LOOP

  etcd
   │ change event
   ▼
  Informer cache
  (list+watch against the API server,
   local in-memory replica)
   │ cache update → enqueue object name
   ▼
  Work queue
  (rate-limited, deduplicating)
   │ dequeue: "demo/nightly"
   ▼
  Reconcile(ctx, Request{Name, Namespace})
   │
   ├── 1. Fetch object from cache
   │        if not found → ignore (already deleted)
   │
   ├── 2. Read spec (desired state)
   │
   ├── 3. Read actual state
   │        (check child resources, external systems)
   │
   ├── 4. Compare: actual vs desired
   │
   ├── 5. Act: create/update/delete child resources
   │        OR update external system
   │
   └── 6. Update status with outcome
           └── return Result{}, nil      → done
               return Result{Requeue}, nil → retry after delay
               return Result{}, err     → immediate retry + backoff

The Kubernetes controller reconcile loop is what separates a CRD (validated storage) from an operator (automated behavior). Understanding this loop is the prerequisite for writing controllers that work correctly under failure, partial completion, and concurrent modification.


What “Reconcile” Actually Means

Reconcile means: look at what the user asked for (spec), look at what actually exists, and do whatever is needed to make actual match desired.

The key insight is that this is not event-driven in the traditional sense. A controller does not receive a “diff” — it receives a name. It reads the full current state of the object and acts accordingly.

This matters because:

  1. Multiple events get deduplicated. If a BackupPolicy is updated five times in one second, the work queue delivers one reconcile call, not five.
  2. The reconcile is stateless. The controller should not maintain in-memory state about what it “did last time.” It re-reads everything on each reconcile.
  3. Partial failure is safe. If the reconcile fails halfway through, the next reconcile re-reads actual state and continues from where it left off.
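
These properties fall directly out of the reconcile function's shape. A minimal skeleton against controller-runtime — a sketch only, with placeholder names (BackupPolicyReconciler, storagev1alpha1) for the types that EP07 scaffolds:

// Assumes the kubebuilder scaffold: the reconciler embeds client.Client,
// ctrl is sigs.k8s.io/controller-runtime, client is its pkg/client package.
func (r *BackupPolicyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // The request carries only a name — re-read the full object every time.
    var policy storagev1alpha1.BackupPolicy
    if err := r.Get(ctx, req.NamespacedName, &policy); err != nil {
        // Already deleted: nothing to do. Any other error: retry with backoff.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Read actual state (child resources, external systems), compare it with
    // policy.Spec, and act — without consulting memory of previous reconciles.

    return ctrl.Result{}, nil
}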

The Informer Cache

Controllers do not call the API server directly for every read. They use an informer — a list-and-watch mechanism that maintains a local in-memory copy of all objects of a given type.

  HOW THE INFORMER CACHE WORKS

  Controller startup:
  ┌─────────────────────────────────────────────────────┐
  │ 1. List all BackupPolicies from API server          │
  │    → populate local cache                           │
  │ 2. Establish a Watch stream                         │
  │    → receive incremental updates                    │
  │ 3. For each update: update cache + enqueue object   │
  └─────────────────────────────────────────────────────┘

  On reconcile:
  ┌─────────────────────────────────────────────────────┐
  │ controller reads from LOCAL cache (not API server)  │
  │ → fast, no network round-trip per reconcile         │
  │ → cache is eventually consistent                    │
  └─────────────────────────────────────────────────────┘

Cache consistency: After writing a change (creating a child Secret, for example), re-reading from the cache may return the old state for a brief period. This is normal and expected. Well-written controllers handle this by returning a requeue rather than assuming the write is immediately visible.
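
A sketch of that pattern: after creating a child object, return and let a short requeue (or the watch event from the new child) trigger the next pass, instead of assuming the cache already reflects the write:

// After a write, a cached read may still return the old state — requeue briefly.
if err := r.Create(ctx, cronJob); err != nil {
    return ctrl.Result{}, err
}
return ctrl.Result{RequeueAfter: time.Second}, nil // re-observe on the next pass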


Walking Through a Reconcile for BackupPolicy

Suppose a user creates this BackupPolicy:

apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: nightly
  namespace: demo
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
  targets:
    - namespace: production

The controller’s reconcile function runs. Here is what it does conceptually:

Reconcile(ctx, {Namespace: "demo", Name: "nightly"})

Step 1: Fetch BackupPolicy "demo/nightly" from cache
  → found; spec.schedule = "0 2 * * *", spec.retentionDays = 30

Step 2: Check if a CronJob for this BackupPolicy exists
  → equivalent of: kubectl get cronjob nightly-backup -n demo (the controller reads from its cache)
  → not found

Step 3: Gap detected: CronJob should exist but doesn't
  → Create CronJob "nightly-backup" in namespace "demo"
    spec.schedule = "0 2 * * *"
    spec.jobTemplate.spec.template.spec.containers[0].args = ["--retention=30"]

Step 4: Set owner reference on CronJob pointing to BackupPolicy
  → CronJob is now garbage-collected if BackupPolicy is deleted

Step 5: Update BackupPolicy status
  → conditions: [{type: Ready, status: True, reason: CronJobCreated}]
  → lastScheduleTime: null (not yet run)

Step 6: Return Result{}, nil   → reconcile complete

Next time the BackupPolicy is modified (e.g., suspended: true):

Reconcile(ctx, {Namespace: "demo", Name: "nightly"})

Step 1: Fetch → spec.suspended = true

Step 2: Fetch CronJob "nightly-backup"
  → found; spec.suspend = false  ← actual state

Step 3: Gap: CronJob.spec.suspend should be true but is false
  → Patch CronJob: set spec.suspend = true

Step 4: Update status
  → conditions: [{type: Ready, status: True, reason: Suspended}]

Step 5: Return Result{}, nil

Idempotency: The Essential Property

The reconcile function must be idempotent. If it runs ten times with the same object state, the result must be the same as if it ran once.

Why? Because the controller framework delivers at-least-once semantics — your reconcile function will be called more than once for the same object state, especially at startup (the informer re-lists all objects) and after controller restarts.

Non-idempotent (wrong):

// Creates a new CronJob every time, even if one already exists
err := r.Create(ctx, cronJob)

Idempotent (correct):

// Only creates if it doesn't exist; updates if it does
existing := &batchv1.CronJob{}
err := r.Get(ctx, types.NamespacedName{Name: jobName, Namespace: ns}, existing)
if apierrors.IsNotFound(err) {
    err = r.Create(ctx, cronJob)
} else if err == nil {
    // update if spec differs
    existing.Spec = cronJob.Spec
    err = r.Update(ctx, existing)
}

The get-before-create pattern is the most basic idempotency mechanism. controller-runtime provides CreateOrUpdate helpers that codify this.
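
A sketch of that helper — controllerutil is sigs.k8s.io/controller-runtime/pkg/controller/controllerutil, and the CronJob fields shown are illustrative (the required jobTemplate is omitted):

cronJob := &batchv1.CronJob{
    ObjectMeta: metav1.ObjectMeta{Name: jobName, Namespace: ns},
}
// CreateOrUpdate fetches the object, runs the mutate function, then creates
// the object if it is absent or updates it if the mutate function changed it.
_, err := controllerutil.CreateOrUpdate(ctx, r.Client, cronJob, func() error {
    cronJob.Spec.Schedule = policy.Spec.Schedule
    // Owner reference: the CronJob is garbage-collected with its BackupPolicy.
    return controllerutil.SetControllerReference(&policy, cronJob, r.Scheme)
})
if err != nil {
    return ctrl.Result{}, err
}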


Requeue and Retry Semantics

The reconcile function returns a (Result, error) pair:

return Result{}, nil
  → Reconcile succeeded. Re-run only if object changes again.

return Result{RequeueAfter: 5 * time.Minute}, nil
  → Reconcile succeeded, but requeue in 5 minutes regardless.
  → Used for: polling external system, TTL-based refresh.

return Result{Requeue: true}, nil
  → Requeue immediately (with rate limiting).
  → Used for: cache not yet consistent after a write.

return Result{}, err
  → Reconcile failed. Retry with exponential backoff.
  → Used for: API errors, transient failures.

  RETRY BEHAVIOR

  First failure  → retry after ~1s
  Second failure → retry after ~2s
  Third failure  → retry after ~4s
  ...
  Max backoff    → ~16min (controller-runtime default)

  Object changes (new version from informer) → reset backoff, reconcile immediately

Do not return Result{Requeue: true}, nil in a tight loop — this saturates the work queue and starves other objects. If you need to poll, use RequeueAfter with a meaningful interval.
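
For example, polling an external backup system for completion is a successful reconcile that simply asks to run again later. A sketch — the BackupClient field and its JobFinished method are hypothetical, standing in for whatever external API the operator talks to:

done, err := r.BackupClient.JobFinished(ctx, policy.Name) // hypothetical external call
if err != nil {
    return ctrl.Result{}, err // transient failure: exponential backoff
}
if !done {
    // Not an error — check again in five minutes without saturating the queue.
    return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
}
return ctrl.Result{}, nil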


Watches: What Triggers a Reconcile

The controller does not only watch the primary resource (BackupPolicy). It also watches child resources and maps child changes back to the parent:

  WATCH CONFIGURATION (conceptual)

  Controller watches:
    BackupPolicy (primary) → reconcile when BackupPolicy changes
    CronJob (child/owned)  → reconcile BackupPolicy owner when CronJob changes
    ConfigMap (watched)    → reconcile BackupPolicy when referenced ConfigMap changes
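
With controller-runtime's builder, that wiring looks roughly like this — a sketch; the ConfigMap watch needs a mapping function whose exact signature depends on the controller-runtime version, so it is only noted in a comment:

func (r *BackupPolicyReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&storagev1alpha1.BackupPolicy{}). // primary: enqueue on every change
        Owns(&batchv1.CronJob{}).             // child: enqueue the owning BackupPolicy
        // Referenced ConfigMaps are added with .Watches(...) plus a function
        // that maps each ConfigMap to the BackupPolicies referencing it.
        Complete(r)
}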

If a user accidentally deletes the CronJob that the controller created:

  1. CronJob deletion event arrives in the informer
  2. Controller maps the deleted CronJob → its owner BackupPolicy
  3. BackupPolicy is enqueued
  4. Reconcile runs, detects missing CronJob, recreates it

This “self-healing” behavior — where controllers reconcile the world back to desired state — is the core operational value of operators. It is not magic; it is the result of watching child resources and re-running reconcile when they drift.


Level-Triggered vs Edge-Triggered

Kubernetes controllers are level-triggered, not edge-triggered. This distinction matters:

  EDGE-TRIGGERED (not what Kubernetes uses)
  → "BackupPolicy was updated FROM retained-30 TO retained-7"
  → If event is lost, the update is lost forever

  LEVEL-TRIGGERED (what Kubernetes uses)
  → "BackupPolicy exists with retentionDays=7"
  → On every reconcile, the controller reads the current level (state)
  → Missing an event is safe — the next reconcile corrects the state

Level-triggered design is why controllers survive restarts, network partitions, and lost events gracefully. The reconcile does not need to track “what changed” — it only needs to know “what is the desired state right now.”


The Same Pattern in Kubernetes Core

Every built-in Kubernetes controller follows this loop:

  Controller                   Watches      Manages           Reconciles
  Deployment controller        Deployment   ReplicaSets       desired replicas ↔ actual ReplicaSet count
  ReplicaSet controller        ReplicaSet   Pods              desired replicas ↔ running Pod count
  Node lifecycle controller    Node         Node conditions   NotReady nodes → taint, evict pods
  Service controller (cloud)   Service      LoadBalancer      cloud LB exists ↔ Service spec

The BackupPolicy controller you will build in EP07 follows exactly the same structure as the Deployment controller.


⚠ Common Mistakes

Reading from the API server directly instead of the cache. If every reconcile reads straight from the API server rather than the informer cache, API server load grows with the number of objects times the reconcile frequency. Always read via the controller’s cached client.

Not handling “not found” on object fetch. If a reconcile is triggered but the object has been deleted by the time reconcile runs, the cache returns “not found.” This is normal — the correct response is to return Result{}, nil, not an error.
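
The usual shape at the top of a reconcile, separating a vanished object from a real read error (apierrors is k8s.io/apimachinery/pkg/api/errors; client.IgnoreNotFound collapses the two branches into one return):

var policy storagev1alpha1.BackupPolicy
if err := r.Get(ctx, req.NamespacedName, &policy); err != nil {
    if apierrors.IsNotFound(err) {
        return ctrl.Result{}, nil // deleted between enqueue and reconcile: done
    }
    return ctrl.Result{}, err // real error: retry with backoff
}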

Tight requeue loop on recoverable error. Returning Result{Requeue: true}, nil or Result{}, err on every call creates an infinite busy-loop. Use RequeueAfter for expected wait conditions, and only return errors for unexpected failures that should back off.

Mutable reconcile state. Do not store reconcile state in struct fields on the reconciler. The reconciler is shared across goroutines; mutable fields cause race conditions. Everything transient must be local to the reconcile function.


Quick Reference

Reconcile input:
  ctx context.Context
  req ctrl.Request   → {Namespace: "demo", Name: "nightly"}

Reconcile output:
  (ctrl.Result, error)

Common returns:
  Result{}, nil                            → done, wait for next change
  Result{Requeue: true}, nil               → retry now (rate limited)
  Result{RequeueAfter: 5*time.Minute}, nil → retry in 5 minutes
  Result{}, err                            → retry with backoff

Key operations:
  r.Get(ctx, req.NamespacedName, &obj)     → fetch from cache
  r.Create(ctx, &obj)                      → create in API server
  r.Update(ctx, &obj)                      → full update
  r.Patch(ctx, &obj, patch)                → partial update
  r.Delete(ctx, &obj)                      → delete
  r.Status().Update(ctx, &obj)             → update status only

Key Takeaways

  • The reconcile loop reads desired state from spec, reads actual state from the cluster, and closes the gap — on every trigger, not just on changes
  • Controllers use an informer cache for reads — fast, eventually consistent, does not hammer the API server
  • Idempotency is not optional: the reconcile function will be called multiple times with the same state
  • Level-triggered design means missing events is safe — the next reconcile corrects any drift
  • Return values from reconcile control retry behavior: RequeueAfter for polling, err for failures, nil for success

What’s Next

EP07: Build a Simple Kubernetes Operator with controller-runtime puts the reconcile loop into practice — kubebuilder scaffold, a complete reconciler for BackupPolicy, RBAC markers, and running the operator locally against a real cluster.

Get EP07 in your inbox when it publishes → subscribe at linuxcent.com

Kubernetes CRD CEL Validation: Replace Admission Webhooks for Schema Rules

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 5
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • Kubernetes CRD CEL validation (x-kubernetes-validations) lets you write arbitrary validation rules in the CRD schema — no admission webhook needed
    (CEL = Common Expression Language, a lightweight expression language built into Kubernetes — enabled by default for CRD validation rules since 1.25, GA in 1.29; replaces most reasons you would write a validating admission webhook)
  • CEL rules are evaluated by the API server at admission time — the same place as OpenAPI schema validation, before etcd
  • self refers to the current object’s field; oldSelf refers to the previous value (for update rules)
  • Cross-field validation: “if storageClass is premium, retentionDays must be ≤ 90” — impossible with plain OpenAPI schema, trivial with CEL
  • Immutable fields: oldSelf == self with reason: Immutable prevents users from changing values after creation
  • CEL rules run in ~microseconds inside the API server; no external service, no TLS, no latency budget to manage

The Big Picture

  CEL VALIDATION: WHERE IT FITS IN THE ADMISSION CHAIN

  kubectl apply -f backup.yaml
         │
         ▼
  API Server admission chain
  ┌────────────────────────────────────────────────────┐
  │                                                    │
  │  1. Mutating admission webhooks (modify object)    │
  │  2. Schema validation (OpenAPI types, required,    │
  │     minimum/maximum, pattern)                      │
  │  3. CEL validation (x-kubernetes-validations)     │ ← THIS EPISODE
  │  4. Validating admission webhooks (external)       │
  │                                                    │
  └────────────────────────────────────────────────────┘
         │
         ▼ (passes all checks)
  etcd storage

Kubernetes CRD CEL validation sits between schema validation and external webhooks. For most validation requirements, CEL eliminates the need for a webhook entirely — which means no separate deployment to maintain, no TLS certificates to rotate, no availability dependency between your CRD and a webhook server.


Why CEL Replaces Most Admission Webhooks

Before CEL validation rules (enabled by default since Kubernetes 1.25, GA in 1.29), the only way to express “if field A has value X, field B must be present” was an admission webhook — a separate HTTP server that Kubernetes called synchronously during every API request.

Webhooks work, but they have real costs:

  • Availability dependency: if the webhook is down, creates/updates for that resource type fail
  • TLS management: webhook endpoints require valid TLS certs that must be rotated
  • Deployment overhead: another Deployment, Service, and certificate to manage
  • Latency: every API operation waits for an HTTP round-trip

CEL runs inside the API server process. There is no network call, no certificate, no separate deployment. Rules are compiled once and evaluated in microseconds.

The trade-off: CEL cannot make network calls or access state outside the object being validated. For rules that need to look up other resources (e.g., “does this referenced Secret exist?”), you still need a webhook or a controller that validates via status conditions.


CEL Syntax Basics

CEL expressions are small programs. In Kubernetes CRD validation, the key variables are:

Variable   Meaning
self       The current field value (or the root object at the top level)
oldSelf    The previous value of the field (rules that reference it are evaluated only on updates)

A CEL rule evaluates to true (validation passes) or false (validation fails and the API server rejects the request with the rule’s message).

Common patterns:

# String not empty
self.size() > 0

# String matches format
self.matches('^[a-z][a-z0-9-]*$')

# Integer in range
self >= 1 && self <= 365

# Field present (for optional fields)
has(self.fieldName)

# Conditional: if A then B
!has(self.premium) || self.retentionDays <= 90

# List not empty
self.size() > 0

# All items in list satisfy condition
self.all(item, item.namespace.size() > 0)

# Cross-field: access sibling field via parent
self.retentionDays >= self.minRetentionDays

Adding CEL Rules to the BackupPolicy CRD

Start from the CRD built in EP04. Add x-kubernetes-validations at the levels where you need them.

Rule 1: Cron expression validation

The OpenAPI pattern field can validate basic structure, but a proper cron regex is unwieldy. CEL is cleaner:

spec:
  type: object
  required: ["schedule", "retentionDays"]
  x-kubernetes-validations:
    - rule: "self.schedule.matches('^(\\\\*|[0-9,\\\\-\\\\/]+) (\\\\*|[0-9,\\\\-\\\\/]+) (\\\\*|[0-9,\\\\-\\\\/]+) (\\\\*|[0-9,\\\\-\\\\/]+) (\\\\*|[0-9,\\\\-\\\\/]+)$')"
      message: "schedule must be a valid 5-field cron expression"

Rule 2: Cross-field validation

spec:
  type: object
  x-kubernetes-validations:
    - rule: "!(self.storageClass == 'premium') || self.retentionDays <= 90"
      message: "premium storage class supports at most 90 days retention"
    - rule: "!self.suspended || !has(self.pausedBy) || self.pausedBy.size() > 0"
      message: "when suspended is true, pausedBy must be non-empty if provided"

Rule 3: Immutable fields

Once a BackupPolicy is created, the schedule field should not be changeable without deleting and recreating:

schedule:
  type: string
  x-kubernetes-validations:
    - rule: "self == oldSelf"
      message: "schedule is immutable after creation"
      reason: FieldValueForbidden

reason field: Available reasons are FieldValueInvalid (default), FieldValueForbidden, FieldValueRequired, and FieldValueDuplicate. For immutability rules, FieldValueForbidden signals that the change is not allowed; the API server rejects the update and returns the rule’s message.
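
If you generate the CRD from Go types with kubebuilder and controller-gen (the EP07 workflow), the same rules are attached as XValidation markers on the spec struct. A sketch, assuming a BackupPolicySpec type whose fields mirror the YAML above:

type BackupPolicySpec struct {
    // +kubebuilder:validation:XValidation:rule="self == oldSelf",message="schedule is immutable after creation"
    Schedule string `json:"schedule"`

    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=365
    RetentionDays int32 `json:"retentionDays"`
}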

Rule 4: Conditional required field

If storageClass is encrypted, then encryptionKeyRef must be present:

spec:
  type: object
  x-kubernetes-validations:
    - rule: "self.storageClass != 'encrypted' || has(self.encryptionKeyRef)"
      message: "encryptionKeyRef is required when storageClass is 'encrypted'"

Rule 5: List element validation

Ensure each target namespace is a valid RFC 1123 DNS label:

targets:
  type: array
  items:
    type: object
    x-kubernetes-validations:
      - rule: "self.namespace.matches('^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')"
        message: "namespace must be a valid DNS label"

The Complete Updated CRD with CEL

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backuppolicies.storage.example.com
spec:
  group: storage.example.com
  scope: Namespaced
  names:
    plural:     backuppolicies
    singular:   backuppolicy
    kind:       BackupPolicy
    shortNames: [bp]
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          required: ["spec"]
          properties:
            spec:
              type: object
              required: ["schedule", "retentionDays"]
              x-kubernetes-validations:
                - rule: "!(self.storageClass == 'premium') || self.retentionDays <= 90"
                  message: "premium storage class supports at most 90 days retention"
              properties:
                schedule:
                  type: string
                  x-kubernetes-validations:
                    - rule: "self == oldSelf"
                      message: "schedule is immutable after creation"
                      reason: FieldValueForbidden
                retentionDays:
                  type: integer
                  minimum: 1
                  maximum: 365
                storageClass:
                  type: string
                  default: "standard"
                  enum: ["standard", "premium", "encrypted", "archive"]
                encryptionKeyRef:
                  type: string
                targets:
                  type: array
                  maxItems: 20
                  items:
                    type: object
                    required: ["namespace"]
                    x-kubernetes-validations:
                      - rule: "self.namespace.matches('^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')"
                        message: "namespace must be a valid DNS label"
                    properties:
                      namespace:
                        type: string
                      includeSecrets:
                        type: boolean
                        default: false
                suspended:
                  type: boolean
                  default: false
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}
      additionalPrinterColumns:
        - name: Schedule
          type: string
          jsonPath: .spec.schedule
        - name: Retention
          type: integer
          jsonPath: .spec.retentionDays
        - name: Ready
          type: string
          jsonPath: .status.conditions[?(@.type=='Ready')].status
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp

Testing CEL Rules

Apply the updated CRD:

kubectl apply -f backuppolicies-crd-cel.yaml

Test cross-field validation:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: premium-long
  namespace: demo
spec:
  schedule: "0 2 * * *"
  retentionDays: 180          # violates: premium + > 90 days
  storageClass: premium
EOF
The BackupPolicy "premium-long" is invalid:
  spec: Invalid value: "object":
    premium storage class supports at most 90 days retention

Test immutability:

# Create valid policy
kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: immutable-test
  namespace: demo
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
EOF

# Try to change the schedule
kubectl patch bp immutable-test -n demo \
  --type=merge -p '{"spec":{"schedule":"0 3 * * *"}}'
The BackupPolicy "immutable-test" is invalid:
  spec.schedule: Invalid value: "0 3 * * *":
    schedule is immutable after creation

Test list element validation:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: bad-namespace
  namespace: demo
spec:
  schedule: "0 2 * * *"
  retentionDays: 7
  targets:
    - namespace: "UPPERCASE_IS_INVALID"
EOF
The BackupPolicy "bad-namespace" is invalid:
  spec.targets[0]: Invalid value: "object":
    namespace must be a valid DNS label

CEL Cost and Limits

CEL expressions are evaluated at admission time in the API server. Kubernetes imposes cost limits to prevent expressions from consuming excessive CPU:

  • Each expression is assigned a cost based on its operations (string matches, list iteration, etc.)
  • If the expression cost exceeds the per-validation limit, the API server rejects the CRD itself when you apply it
  • Complex all() over large lists is the most common way to hit cost limits

If you hit a cost limit error:

CustomResourceDefinition is invalid: spec.validation.openAPIV3Schema...
  CEL expression cost exceeds budget

Solutions:
– Reduce list traversal in CEL rules; enforce list length with maxItems instead
– Split one expensive rule into multiple simpler rules
– Move the expensive validation to a controller (status condition) rather than admission


⚠ Common Mistakes

Assuming oldSelf rules run on create. Rules that reference oldSelf are transition rules: the API server evaluates them only on updates, where a previous value exists, and skips them on create. A rule like self == oldSelf therefore enforces nothing at creation time — it only blocks later changes. Keep oldSelf rules limited to immutability-style checks, and express create-time requirements as ordinary rules or required fields.

Forgetting has() checks for optional fields. If encryptionKeyRef is optional (not in required) and you write a rule like self.encryptionKeyRef.size() > 0, it will fail with a “no such key” error when the field is absent. Always guard optional field access with has(self.fieldName).

Overloading CEL for what a controller should do. CEL validates fields at admission. If your rule needs to verify that a referenced Secret actually exists, CEL cannot do that — it only sees the object being submitted. Use a controller status condition for existence checks, not CEL.
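
A sketch of that controller-side check — it assumes a typed BackupPolicy with a standard metav1.Condition slice in its status, the encryptionKeyRef field from this episode’s CRD, and the meta helper from k8s.io/apimachinery/pkg/api/meta:

key := types.NamespacedName{Namespace: policy.Namespace, Name: policy.Spec.EncryptionKeyRef}
var secret corev1.Secret
err := r.Get(ctx, key, &secret)
if apierrors.IsNotFound(err) {
    // Surface the problem on status instead of rejecting at admission.
    meta.SetStatusCondition(&policy.Status.Conditions, metav1.Condition{
        Type:    "Ready",
        Status:  metav1.ConditionFalse,
        Reason:  "EncryptionKeyMissing",
        Message: "referenced encryption key Secret does not exist",
    })
    if updateErr := r.Status().Update(ctx, &policy); updateErr != nil {
        return ctrl.Result{}, updateErr
    }
    return ctrl.Result{RequeueAfter: time.Minute}, nil // re-check later
}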


Quick Reference: Common CEL Patterns

# String not empty
self.size() > 0

# String matches regex
self.matches('^[a-z][a-z0-9-]{1,62}$')

# Optional field guard
!has(self.fieldName) || self.fieldName.size() > 0

# Conditional requirement
!(condition) || has(self.requiredWhenConditionIsTrue)

# Immutable field (update only)
self == oldSelf

# All list items satisfy condition
self.all(item, item.namespace.size() > 0)

# At least one list item satisfies condition
self.exists(item, item.type == 'primary')

# Cross-field comparison
self.minReplicas <= self.maxReplicas

# Enum-style check
self in ['standard', 'premium', 'archive']

Key Takeaways

  • x-kubernetes-validations with CEL rules replaces most validating admission webhooks for CRD-specific logic
  • CEL runs inside the API server — no external service, no TLS, no separate deployment
  • Cross-field validation, immutable fields, and conditional requirements are all expressible in CEL
  • Use has() guards for optional fields; rules that reference oldSelf are transition rules, evaluated only on updates
  • CEL has cost limits — avoid unbounded list iteration; use maxItems to bound lists first

What’s Next

EP06: The Kubernetes Controller Reconcile Loop explains how a controller watches BackupPolicy objects and acts on them — the mechanism that makes CRDs useful beyond validated configuration storage. Before writing code in EP07, you need to understand the reconcile loop conceptually.

Get EP06 in your inbox when it publishes → subscribe at linuxcent.com

Write Your First Kubernetes CRD: A Hands-On YAML Walkthrough

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 4
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • Writing a Kubernetes CRD takes three YAML files in this episode: the CRD itself, RBAC ClusterRoles for controllers, editors, and viewers, and a sample custom resource
  • The BackupPolicy CRD built in this episode is the running example throughout the rest of the series — operators, versioning, and production patterns all use it
  • Apply the CRD, verify it with kubectl get crds, create a custom resource, and watch the API server validate your spec
  • RBAC for CRDs follows the same Role/ClusterRole model as built-in resources — rules name the API group and the plural resource (kubectl shorthand: {plural}.{group})
  • Schema validation fires at apply time: bad field types, missing required fields, and out-of-range values all return clear errors before anything reaches etcd
  • Without a controller, a BackupPolicy is stored in etcd but nothing acts on it — that is the topic of EP05 and EP07

The Big Picture

  WHAT WE'RE BUILDING IN THIS EPISODE

  1. backuppolicies-crd.yaml        ← registers the BackupPolicy type
  2. backuppolicies-rbac.yaml       ← controls who can create/view/delete
  3. nightly-backup.yaml            ← our first custom resource instance

  After applying:

  kubectl get crds | grep backup      ← BackupPolicy type exists
  kubectl get backuppolicies -n demo  ← nightly instance exists
  kubectl describe bp nightly -n demo ← spec visible, status empty
  kubectl apply -f bad-backup.yaml    ← schema validation rejects bad data

Writing your first Kubernetes CRD is the step that bridges understanding CRDs conceptually to operating them in a real cluster. This episode is hands-on — every block of YAML is something you apply and verify.


Prerequisites

You need a running Kubernetes cluster and kubectl configured. Any of these work:

# Local options
kind create cluster --name crd-demo
# or
minikube start

# Verify cluster access
kubectl cluster-info
kubectl get nodes

Step 1: Write the CRD

Save this as backuppolicies-crd.yaml:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backuppolicies.storage.example.com
spec:
  group: storage.example.com
  scope: Namespaced
  names:
    plural:     backuppolicies
    singular:   backuppolicy
    kind:       BackupPolicy
    shortNames:
      - bp
    categories:
      - storage
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          required: ["spec"]
          properties:
            spec:
              type: object
              required: ["schedule", "retentionDays"]
              properties:
                schedule:
                  type: string
                  description: "Cron expression (e.g. '0 2 * * *' for 02:00 daily)"
                retentionDays:
                  type: integer
                  minimum: 1
                  maximum: 365
                  description: "How many days to retain backup snapshots"
                storageClass:
                  type: string
                  default: "standard"
                  description: "StorageClass to use for backup volumes"
                targets:
                  type: array
                  description: "Namespaces and resources to include in the backup"
                  maxItems: 20
                  items:
                    type: object
                    required: ["namespace"]
                    properties:
                      namespace:
                        type: string
                      includeSecrets:
                        type: boolean
                        default: false
                suspended:
                  type: boolean
                  default: false
                  description: "Set to true to pause backup execution"
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}
      additionalPrinterColumns:
        - name: Schedule
          type: string
          jsonPath: .spec.schedule
        - name: Retention
          type: integer
          jsonPath: .spec.retentionDays
        - name: Suspended
          type: boolean
          jsonPath: .spec.suspended
        - name: Ready
          type: string
          jsonPath: .status.conditions[?(@.type=='Ready')].status
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp

Apply it:

kubectl apply -f backuppolicies-crd.yaml

Verify it registered correctly:

kubectl get crds backuppolicies.storage.example.com
NAME                                    CREATED AT
backuppolicies.storage.example.com      2026-04-25T08:00:00Z

Check the API server now knows about it:

kubectl api-resources | grep backuppolic
backuppolicies    bp    storage.example.com/v1alpha1    true    BackupPolicy

Check it is Established:

kubectl get crd backuppolicies.storage.example.com \
  -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
True

If you see False or empty output, wait a few seconds and retry — the API server takes a moment to register new CRDs.


Step 2: Write RBAC

CRDs follow the same RBAC model as built-in resources. Rules reference the API group (storage.example.com) and the plural resource name (backuppolicies); kubectl shorthand such as --resource=backuppolicies.storage.example.com combines them as {plural}.{group}.

Save this as backuppolicies-rbac.yaml:

# ClusterRole for operators/controllers that manage BackupPolicy objects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backuppolicy-controller
rules:
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies/finalizers"]
    verbs: ["update"]
---
# Role for application teams to manage BackupPolicies in their namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backuppolicy-editor
rules:
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Read-only role for auditors
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backuppolicy-viewer
rules:
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies"]
    verbs: ["get", "list", "watch"]

Apply it:

kubectl apply -f backuppolicies-rbac.yaml

Verify the roles exist:

kubectl get clusterrole | grep backuppolicy
backuppolicy-controller   2026-04-25T08:01:00Z
backuppolicy-editor       2026-04-25T08:01:00Z
backuppolicy-viewer       2026-04-25T08:01:00Z

Note on backuppolicies/status: The separate status RBAC rule is only meaningful if you enabled the status subresource (we did). Without it, status and spec share the same update path.


Step 3: Create a Namespace and Your First Custom Resource

kubectl create namespace demo

Save this as nightly-backup.yaml:

apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: nightly
  namespace: demo
  labels:
    app.kubernetes.io/managed-by: manual
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
  storageClass: standard
  targets:
    - namespace: production
      includeSecrets: false
    - namespace: staging
      includeSecrets: false
  suspended: false

Apply it:

kubectl apply -f nightly-backup.yaml

Get it back:

kubectl get backuppolicies -n demo
NAME      SCHEDULE    RETENTION   SUSPENDED   READY   AGE
nightly   0 2 * * *   30          false       <none>  5s

The Ready column is <none> because there is no controller writing status yet. The custom resource exists and is stored in etcd, but nothing is acting on it.

Describe it:

kubectl describe bp nightly -n demo
Name:         nightly
Namespace:    demo
Labels:       app.kubernetes.io/managed-by=manual
Annotations:  <none>
API Version:  storage.example.com/v1alpha1
Kind:         BackupPolicy
Metadata:
  Creation Timestamp:  2026-04-25T08:05:00Z
  ...
Spec:
  Retention Days:  30
  Schedule:        0 2 * * *
  Storage Class:   standard
  Suspended:       false
  Targets:
    Include Secrets:  false
    Namespace:        production
    Include Secrets:  false
    Namespace:        staging
Status:
Events:  <none>

Step 4: Test Schema Validation

The API server now validates every BackupPolicy against the schema. Try creating an invalid one:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: bad-policy
  namespace: demo
spec:
  schedule: "not-a-cron"
  retentionDays: 500
EOF
The BackupPolicy "bad-policy" is invalid:
  spec.retentionDays: Invalid value: 500:
    spec.retentionDays in body should be less than or equal to 365

Missing required field:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: missing-schedule
  namespace: demo
spec:
  retentionDays: 7
EOF
The BackupPolicy "missing-schedule" is invalid:
  spec.schedule: Required value

Wrong type:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: wrong-type
  namespace: demo
spec:
  schedule: "0 2 * * *"
  retentionDays: "thirty"
EOF
The BackupPolicy "wrong-type" is invalid:
  spec.retentionDays: Invalid value: "string":
    spec.retentionDays in body must be of type integer: "string"

All validation fires at the API boundary — before etcd, before any controller sees the object.


Step 5: Verify Default Values Apply

The schema defines storageClass: default: "standard" and suspended: default: false. Verify they are applied even when not specified:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: minimal
  namespace: demo
spec:
  schedule: "0 0 * * 0"
  retentionDays: 7
EOF

kubectl get bp minimal -n demo -o jsonpath='{.spec.storageClass}'
standard
kubectl get bp minimal -n demo -o jsonpath='{.spec.suspended}'
false

Defaults are injected by the API server at admission time. They appear in etcd and in every kubectl get -o yaml output — the stored object includes the defaults even if the user did not specify them.


Step 6: Explore the API Endpoints

Your custom resource is now available at standard REST endpoints:

kubectl proxy --port=8001 &

# List all BackupPolicies in the demo namespace
curl -s http://localhost:8001/apis/storage.example.com/v1alpha1/namespaces/demo/backuppolicies \
  | jq '.items[].metadata.name'
"nightly"
"minimal"

# Get a specific BackupPolicy
curl -s http://localhost:8001/apis/storage.example.com/v1alpha1/namespaces/demo/backuppolicies/nightly \
  | jq '.spec'

This is how controllers discover and watch custom resources — via the same API server endpoints, using informers that wrap these REST calls with efficient list-and-watch semantics.
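
The same list, performed from Go with the dynamic client instead of curl — a self-contained sketch that assumes a local kubeconfig; controllers layer informers on top of exactly these calls:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build client config from the default kubeconfig (~/.kube/config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    dyn := dynamic.NewForConfigOrDie(cfg)

    // GroupVersionResource for the BackupPolicy CRD registered above.
    gvr := schema.GroupVersionResource{
        Group:    "storage.example.com",
        Version:  "v1alpha1",
        Resource: "backuppolicies",
    }
    list, err := dyn.Resource(gvr).Namespace("demo").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, item := range list.Items {
        fmt.Println(item.GetName()) // nightly, minimal
    }
}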


Step 7: Clean Up

kubectl delete namespace demo
kubectl delete -f backuppolicies-rbac.yaml
kubectl delete -f backuppolicies-crd.yaml   # WARNING: deletes all BackupPolicy instances first

⚠ Common Mistakes

metadata.name does not match {plural}.{group}. The most common error. If you name the CRD backuppolicy.storage.example.com (singular) but the spec says plural: backuppolicies, the API server rejects it. The name must always be {plural}.{group}.

No required fields on spec. Without required constraints, kubectl apply accepts an empty spec: {}. The controller then receives objects with no configuration and has to handle the nil case. Define required fields in the schema.

Forgetting subresources: status: {}. Without this, controllers writing .status also overwrite .spec on full PUT updates. This causes status updates to reset user edits. Enable the status subresource from day one.

Not testing validation errors. Schema validation is the first line of defense. Always explicitly test that your required fields are required, types are enforced, and range constraints work — before deploying the controller.


Quick Reference

# All kubectl operations work on custom resources
kubectl get      backuppolicies -n demo
kubectl get      bp -n demo                  # shortName
kubectl describe bp nightly -n demo
kubectl edit     bp nightly -n demo
kubectl delete   bp nightly -n demo

# Output formats
kubectl get bp -n demo -o yaml
kubectl get bp -n demo -o json
kubectl get bp -n demo -o jsonpath='{.items[*].metadata.name}'

# Watch for changes
kubectl get bp -n demo -w

# List across all namespaces
kubectl get bp -A

# Patch spec
kubectl patch bp nightly -n demo \
  --type=merge -p '{"spec":{"suspended":true}}'

Key Takeaways

  • A working CRD deployment needs: the CRD YAML, RBAC ClusterRoles, and at least one sample custom resource
  • The API server validates all custom resources against the schema at apply time — errors are surfaced immediately, not inside the controller
  • Default values in the schema are injected at admission time and appear in every stored object
  • RBAC rules for custom resources name the API group and the plural resource name — status and finalizers are separate subresources with their own verbs
  • Without a controller, custom resources are stored in etcd and serve as validated configuration — nothing acts on them until a controller is deployed

What’s Next

EP05: Kubernetes CRD CEL Validation extends schema validation beyond simple type and range checks — cross-field rules (“if storageClass is premium, retentionDays must be at most 90”), regex validation beyond pattern, and immutable field enforcement. All without an admission webhook.

Get EP05 in your inbox when it publishes → subscribe at linuxcent.com

Kubernetes CRD Schema Explained: Versions, Validation, and Status Subresource

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 3
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • The Kubernetes CRD schema is defined in spec.versions[].schema.openAPIV3Schema — the API server uses it to validate every custom resource create and update before storing in etcd
    (OpenAPI v3 schema = a JSON Schema dialect that describes the structure, types, and constraints of your resource’s fields)
  • spec.versions is a list — CRDs can serve multiple API versions simultaneously; exactly one version must have storage: true
  • scope: Namespaced vs scope: Cluster controls whether custom resources live inside a namespace or at cluster level (like PersistentVolumeClaim vs PersistentVolume)
  • spec.names defines the plural, singular, kind, and optional shortNames used in kubectl and RBAC
  • The status subresource (subresources.status: {}) separates user writes (spec) from controller writes (status) — enabling optimistic concurrency and kubectl status support
  • The scale subresource (subresources.scale) makes your custom resource compatible with kubectl scale and the HorizontalPodAutoscaler

The Big Picture

  ANATOMY OF A CUSTOMRESOURCEDEFINITION

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: {plural}.{group}        ← MUST be exactly this format
  spec:
    group: {group}                ← API group (e.g. storage.example.com)
    scope: Namespaced | Cluster   ← where instances live
    names:                        ← how kubectl refers to this resource
      plural: backuppolicies
      singular: backuppolicy
      kind: BackupPolicy
      shortNames: [bp]
    versions:                     ← can be a list; one must have storage: true
      - name: v1alpha1
        served: true              ← API server responds to this version
        storage: true             ← etcd stores objects in this version
        schema:
          openAPIV3Schema:        ← validation schema for ALL objects of this type
            type: object
            properties:
              spec: {...}
              status: {...}
        subresources:
          status: {}              ← enables separate status write path
          scale:                  ← enables kubectl scale + HPA
            specReplicasPath: .spec.replicas
            statusReplicasPath: .status.replicas
        additionalPrinterColumns: ← extra columns in kubectl get output
          - name: Schedule
            type: string
            jsonPath: .spec.schedule

Understanding the Kubernetes CRD schema is the prerequisite for writing a CRD that behaves correctly in production — validation catches bad data at the API boundary, the status subresource prevents controller race conditions, and scope determines your entire RBAC and multi-tenancy model.


spec.group and metadata.name

The group is a reverse-DNS identifier for your API. Convention:

storage.example.com     ← domain you control + functional area
monitoring.myteam.io
databases.platform.company.com

The CRD’s metadata.name must be exactly {plural}.{group}:

metadata:
  name: backuppolicies.storage.example.com
spec:
  group: storage.example.com
  names:
    plural: backuppolicies

If these do not match, the API server rejects the CRD with a validation error. This is the most common first-timer mistake.


spec.scope: Namespaced vs Cluster

  SCOPE DETERMINES WHERE INSTANCES LIVE

  Namespaced (scope: Namespaced)       Cluster (scope: Cluster)
  ─────────────────────────────         ──────────────────────────
  kubectl get backuppolicies -n prod    kubectl get clusterbackuppolicies
  kubectl get backuppolicies -A         (no -n flag, no namespace)

  Analogous to: Pod, Deployment,        Analogous to: PersistentVolume,
                ConfigMap                             ClusterRole, Node

Namespaced: Use when instances are per-tenant or per-application. Users with namespace-scoped RBAC can manage their own instances without cluster-admin. Most CRDs should be namespaced.

Cluster-scoped: Use when instances represent cluster-wide configuration — a ClusterIssuer (cert-manager), ClusterSecretStore (ESO), a StorageClass-like concept. Requires cluster-level RBAC to create/modify.

You cannot change scope after a CRD is created without deleting and recreating it (which deletes all instances). Choose carefully.


spec.versions: Serving Multiple API Versions

spec:
  versions:
    - name: v1alpha1
      served: true
      storage: false       # not stored; converted on read
      schema:
        openAPIV3Schema: {...}
    - name: v1beta1
      served: true
      storage: false
      schema:
        openAPIV3Schema: {...}
    - name: v1
      served: true
      storage: true        # etcd stores in this version
      schema:
        openAPIV3Schema: {...}

Rules:
served: true means the API server accepts requests at this version
served: false means the API server returns 404 for that version — use to deprecate
– Exactly one version must have storage: true — this is what gets written to etcd
– When a client requests a non-storage version, the API server converts on the fly (or calls your conversion webhook — see EP08)

Early in development, start with v1alpha1 storage: true. Promote to v1 when the schema is stable. EP08 covers how to do this without losing data.


spec.names: What kubectl Sees

spec:
  names:
    plural:     backuppolicies     # kubectl get backuppolicies
    singular:   backuppolicy       # kubectl get backuppolicy (also works)
    kind:       BackupPolicy       # used in YAML apiVersion/kind
    listKind:   BackupPolicyList   # optional; auto-derived if omitted
    shortNames:                    # kubectl get bp
      - bp
    categories:                    # kubectl get all includes this type
      - all

categories is worth noting: if you add all to categories, your custom resources appear when someone runs kubectl get all -n mynamespace. Most CRDs deliberately do not add this — it clutters get all output. Only add it if your resource is a primary operational concern.


schema.openAPIV3Schema: Validation

The schema is where you define field types, required fields, constraints, and descriptions. The API server validates every create and update against this schema before writing to etcd.

schema:
  openAPIV3Schema:
    type: object
    required: ["spec"]
    properties:
      spec:
        type: object
        required: ["schedule", "retentionDays"]
        properties:
          schedule:
            type: string
            description: "Cron expression for backup schedule"
            pattern: '^(\*|[0-9,\-\/]+)\s+(\*|[0-9,\-\/]+)\s+(\*|[0-9,\-\/]+)\s+(\*|[0-9,\-\/]+)\s+(\*|[0-9,\-\/]+)$'
          retentionDays:
            type: integer
            minimum: 1
            maximum: 365
          storageClass:
            type: string
            default: "standard"        # default value (Kubernetes 1.17+)
          targets:
            type: array
            maxItems: 10
            items:
              type: object
              required: ["name"]
              properties:
                name:
                  type: string
                namespace:
                  type: string
                  default: "default"
      status:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # controllers write arbitrary status

Field types available

Type      Usage
string    Text values; supports format, pattern, enum, minLength, maxLength
integer   Whole numbers; supports minimum, maximum
number    Floating point
boolean   true/false
object    Nested structure; use properties to define fields
array     List; use items to define element schema; supports minItems, maxItems

x-kubernetes-preserve-unknown-fields: true

This tells the API server not to prune fields it does not know about. Use it on status (controllers write whatever they need) and on fields that are intentionally free-form (like a config field that accepts arbitrary YAML). Avoid it on spec — it bypasses validation.

Validation behavior in practice

# This will fail with a clear error:
kubectl apply -f - <<EOF
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: bad
  namespace: default
spec:
  schedule: "not-a-cron"    # fails pattern validation
  retentionDays: 500         # fails maximum: 365
EOF
The BackupPolicy "bad" is invalid:
  spec.schedule: Invalid value: "not-a-cron": spec.schedule in body should match
    '^(\*|[0-9,\-\/]+)\s+...'
  spec.retentionDays: Invalid value: 500: spec.retentionDays in body should be
    less than or equal to 365

Schema validation catches configuration mistakes at apply time, not at runtime inside a pod. This is one of the core advantages of expressing domain configuration as CRDs rather than ConfigMaps.


additionalPrinterColumns: What kubectl get Shows

By default, kubectl get backuppolicies shows only NAME and AGE. You can add columns:

additionalPrinterColumns:
  - name: Schedule
    type: string
    jsonPath: .spec.schedule
    description: Cron schedule for backups
  - name: Retention
    type: integer
    jsonPath: .spec.retentionDays
    priority: 1          # 0 = always shown; 1 = only with -o wide
  - name: Ready
    type: string
    jsonPath: .status.conditions[?(@.type=='Ready')].status
  - name: Age
    type: date
    jsonPath: .metadata.creationTimestamp

Result:

NAME        SCHEDULE      READY   AGE
nightly     0 2 * * *     True    3d
weekly      0 0 * * 0     False   7d

Good printer columns turn kubectl get into a useful operational dashboard. Include Ready (from status conditions) so operators can immediately see which custom resources are healthy without running kubectl describe.


The Status Subresource

subresources:
  status: {}

Without the status subresource, spec and status are part of the same object. Any user with update permission on the CRD can modify both. Controllers write status through the same path as users write spec.

With the status subresource enabled:
kubectl apply / kubectl patch only update spec — the status block is stripped
– Controllers use the /status subresource endpoint to write status
– RBAC can grant update on backuppolicies (spec) independently from update on backuppolicies/status

  WITHOUT status subresource:         WITH status subresource:
  ─────────────────────────            ──────────────────────────
  PUT /backuppolicies/nightly          PUT /backuppolicies/nightly
  → updates spec AND status            → updates spec only

                                       PUT /backuppolicies/nightly/status
                                       → updates status only (controller path)

Always enable the status subresource on production CRDs. The split between spec and status is fundamental to the Kubernetes API contract. Without it, a controller updating status can accidentally overwrite spec changes made by a user at the same time.
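
From the controller side (controller-runtime), the two write paths look like this — a sketch assuming a typed BackupPolicy object:

// r.Update(ctx, &policy) writes spec and metadata via the main endpoint —
// the same path a user's kubectl apply takes.

// r.Status().Update(ctx, &policy) writes through the /status endpoint:
policy.Status.Conditions = []metav1.Condition{{
    Type:               "Ready",
    Status:             metav1.ConditionTrue,
    Reason:             "CronJobCreated",
    Message:            "backup CronJob exists",
    LastTransitionTime: metav1.Now(),
}}
if err := r.Status().Update(ctx, &policy); err != nil {
    return ctrl.Result{}, err
}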


The Scale Subresource

subresources:
  scale:
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
    labelSelectorPath: .status.labelSelector

This makes your custom resource compatible with:

kubectl scale backuppolicy nightly --replicas=3

And with HorizontalPodAutoscaler targeting your custom resource. If your CRD manages something replica-based (workers, shards, connections), enabling the scale subresource lets it plug into the standard Kubernetes autoscaling ecosystem without extra plumbing.


⚠ Common Mistakes

Forgetting x-kubernetes-preserve-unknown-fields: true on status. If you validate the status field with a strict schema but do not add this, the API server will prune any status fields the controller writes that are not in the schema. The controller’s status updates will silently lose fields. Either define the full status schema or use x-kubernetes-preserve-unknown-fields: true.

Using scope: Cluster for resources that should be namespaced. Once a CRD is created as cluster-scoped, you cannot make it namespaced without deleting and recreating it. Plan scope before deploying to production.

Not enabling the status subresource. Without it, controllers writing status can race with users updating spec. It also means kubectl patch --subresource=status does not work and some tooling behaves unexpectedly. Enable it from the start.

Loose schema with no required fields. An openAPIV3Schema with no required constraint accepts objects with empty spec. This usually means your controller gets called with a resource that is missing mandatory configuration. Define required fields and validate them at the API boundary, not inside the controller.


Quick Reference

# Inspect the full schema of a CRD
kubectl get crd backuppolicies.storage.example.com -o yaml | \
  yq '.spec.versions[0].schema'

# Check what subresources are enabled
kubectl get crd certificates.cert-manager.io \
  -o jsonpath='{.spec.versions[0].subresources}'

# See all served versions for a CRD
kubectl get crd prometheuses.monitoring.coreos.com \
  -o jsonpath='{.spec.versions[*].name}'

# Check which version is the storage version
kubectl get crd certificates.cert-manager.io \
  -o jsonpath='{.spec.versions[?(@.storage==true)].name}'

# Describe the printer columns for a CRD
kubectl get crd scaledobjects.keda.sh \
  -o jsonpath='{.spec.versions[0].additionalPrinterColumns}'

Key Takeaways

  • spec.versions allows serving and storing multiple API versions; only one version has storage: true
  • scope (Namespaced vs Cluster) cannot be changed after creation — choose deliberately
  • openAPIV3Schema validates every CR at the API boundary, before etcd storage
  • The status subresource separates the user write path (spec) from the controller write path (status) — always enable it
  • additionalPrinterColumns makes kubectl get operationally useful; include a Ready column from status conditions

What’s Next

EP04: Write Your First Kubernetes CRD puts the anatomy into practice — a complete hands-on walkthrough building a BackupPolicy CRD from scratch, applying it to a cluster, creating instances, and verifying validation, RBAC, and status behavior.

Get EP04 in your inbox when it publishes → subscribe at linuxcent.com

CRDs You Already Use: cert-manager, KEDA, and External Secrets Explained

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 2
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • cert-manager, KEDA, and External Secrets Operator are all CRD-based systems — understanding their custom resources shows you what a well-designed CRD looks like before you build one
  • cert-manager’s Certificate CRD expresses desired TLS state; the cert-manager controller reconciles that state by issuing, renewing, and storing certificates in Secrets
  • KEDA’s ScaledObject extends the HorizontalPodAutoscaler with external metrics (queue depth, Kafka lag, Prometheus queries) — the KEDA operator translates ScaledObjects into native HPA objects
  • External Secrets Operator’s ExternalSecret abstracts over secret backends (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) — the controller pulls values and writes Kubernetes Secrets
  • All three follow the same pattern: you describe desired state in a custom resource; the operator reconciles actual state to match
  • Kubernetes custom resources examples like these are the fastest way to internalize the CRD mental model before writing your own

The Big Picture

  THREE CRD-BASED OPERATORS AND WHAT THEY MANAGE

  ┌─────────────────────────────────────────────────────────────┐
  │  cert-manager                                               │
  │  Certificate CR  →  controller issues cert  →  TLS Secret  │
  └─────────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────────┐
  │  KEDA                                                       │
  │  ScaledObject CR  →  controller creates HPA  →  Pod count  │
  └─────────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────────┐
  │  External Secrets Operator                                  │
  │  ExternalSecret CR  →  controller pulls  →  K8s Secret      │
  │                         from Vault/AWS/GCP                  │
  └─────────────────────────────────────────────────────────────┘

  In every case:
  User creates CR  →  Operator watches CR  →  Operator acts  →  Status updated

Kubernetes custom resources examples from real tools like these reveal the design pattern you will use in every CRD you build: express desired state declaratively, let the controller bridge the gap to actual state, surface the outcome in the status subresource.


Why Look at Existing CRDs First?

Before designing your own CRD, you want to understand what good CRD design looks like from the user’s perspective. The engineers behind cert-manager (Jetstack), KEDA (KEDACore), and the External Secrets Operator have collectively solved the same problems you will face:

  • What goes in spec vs status?
  • How do you reference other Kubernetes objects?
  • How do you handle secrets and credentials securely?
  • What does a healthy vs unhealthy custom resource look like?

Studying these before writing your own saves you from the most common first-timer mistakes.


cert-manager: The Certificate CRD

cert-manager is the most widely deployed CRD-based system in Kubernetes. It manages TLS certificates from Let’s Encrypt, internal CAs, and cloud providers.

The core CRDs

kubectl get crds | grep cert-manager
certificates.cert-manager.io
certificaterequests.cert-manager.io
challenges.acme.cert-manager.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
orders.acme.cert-manager.io

The one you interact with most is Certificate. Here is a real example:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: production
spec:
  secretName: api-tls-cert        # cert-manager writes the TLS Secret here
  duration: 2160h                 # 90 days
  renewBefore: 720h               # renew 30 days before expiry
  subject:
    organizations:
      - example.com
  dnsNames:
    - api.example.com
    - api-internal.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

What happens after you apply this:

  1. cert-manager controller sees the new Certificate object
  2. It contacts the referenced ClusterIssuer (Let’s Encrypt in this case)
  3. It completes the ACME challenge, obtains the certificate
  4. It writes the certificate and private key into the api-tls-cert Secret
  5. It updates the Certificate object’s status to reflect success
kubectl describe certificate api-tls -n production
Status:
  Conditions:
    Last Transition Time:  2026-04-10T08:00:00Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2026-07-09T08:00:00Z
  Not Before:              2026-04-10T08:00:00Z
  Renewal Time:            2026-06-09T08:00:00Z

What this teaches you about CRD design

  • spec.secretName — the CR references an output object by name. The controller creates or updates that object.
  • spec.issuerRef — the CR references another custom resource (ClusterIssuer) by name. This is a common pattern for separating configuration concerns.
  • status.conditions — the standard Kubernetes condition pattern: type, status, reason, message. You will use the same structure in your own CRDs.
  • The controller owns status — users own spec. This separation is a core convention.
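
The ClusterIssuer referenced by issuerRef above is itself a custom resource. A plausible minimal shape (the email, account-key Secret name, and solver choice are illustrative, not taken from this example):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform-team@example.com          # illustrative contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # Secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx           # assumes an NGINX ingress; older cert-manager versions use `class`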

KEDA: The ScaledObject CRD

KEDA (Kubernetes Event-Driven Autoscaling) extends Kubernetes autoscaling beyond CPU and memory. It can scale deployments based on queue depth, Kafka consumer lag, Prometheus metric values, and dozens of other event sources.

The core CRDs

kubectl get crds | grep keda
clustertriggerauthentications.keda.sh
scaledjobs.keda.sh
scaledobjects.keda.sh
triggerauthentications.keda.sh

A ScaledObject ties a Deployment to an external scaler:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  namespace: production
spec:
  scaleTargetRef:
    name: order-processor        # the Deployment to scale
  minReplicaCount: 0             # scale to zero when idle
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789/orders
        queueLength: "5"         # target: 5 messages per pod
        awsRegion: us-east-1
      authenticationRef:
        name: keda-sqs-auth      # TriggerAuthentication for AWS credentials

What KEDA does with this:

  1. KEDA controller sees the ScaledObject
  2. It creates a native HorizontalPodAutoscaler object targeting the order-processor Deployment
  3. KEDA’s metrics adapter polls the SQS queue depth and exposes it as a custom metric
  4. The HPA uses that metric to scale replicas — including to zero when the queue is empty
kubectl get scaledobject order-processor-scaler -n production
NAME                       SCALETARGETKIND      SCALETARGETNAME    MIN   MAX   TRIGGERS         READY   ACTIVE
order-processor-scaler     apps/Deployment      order-processor    0     50    aws-sqs-queue    True    True

What this teaches you about CRD design

  • spec.scaleTargetRef — targeting another object by name. The controller acts on that object, not on the CR itself.
  • spec.triggers — a list of trigger specifications. Lists of typed sub-objects are a recurring CRD pattern.
  • spec.minReplicaCount: 0 — expressing scale-to-zero as a first-class concept in the API. Built-in HPA does not support this; KEDA’s CRD extends the vocabulary of what is expressible.
  • The KEDA operator translates ScaledObject → native HPA. The CRD is an abstraction over a more complex Kubernetes object. This “translate and manage child resources” pattern is extremely common in operators.
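
The authenticationRef above points at a TriggerAuthentication, another KEDA CRD. A plausible minimal shape for static SQS credentials (the Secret name and keys are illustrative; workload identity via podIdentity is the other common option):

apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-sqs-auth
  namespace: production
spec:
  secretTargetRef:
    - parameter: awsAccessKeyID          # parameter names the aws-sqs-queue scaler expects
      name: aws-sqs-credentials          # illustrative Secret
      key: AWS_ACCESS_KEY_ID
    - parameter: awsSecretAccessKey
      name: aws-sqs-credentials
      key: AWS_SECRET_ACCESS_KEY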

External Secrets Operator: The ExternalSecret CRD

External Secrets Operator (ESO) solves a specific problem: secrets live in external systems (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager), but Kubernetes workloads need them as Kubernetes Secrets. ESO bridges the gap.

The core CRDs

kubectl get crds | grep external-secrets
clusterexternalsecrets.external-secrets.io
clustersecretstores.external-secrets.io
externalsecrets.external-secrets.io
secretstores.external-secrets.io

A SecretStore defines the backend connection:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: eso-sa            # uses IRSA/workload identity

An ExternalSecret defines what to pull and how to map it:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-creds
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: database-secret          # Kubernetes Secret to create/update
    creationPolicy: Owner
  data:
    - secretKey: username          # key in the K8s Secret
      remoteRef:
        key: prod/database         # path in AWS Secrets Manager
        property: username         # property within that secret
    - secretKey: password
      remoteRef:
        key: prod/database
        property: password

After ESO reconciles this:

kubectl get secret database-secret -n production -o jsonpath='{.data.username}' | base64 -d
# outputs: db_user
kubectl describe externalsecret database-creds -n production
Status:
  Conditions:
    Last Transition Time:   2026-04-10T08:00:00Z
    Message:                Secret was synced
    Reason:                 SecretSynced
    Status:                 True
    Type:                   Ready
  Refresh Time:             2026-04-10T09:00:00Z
  Synced Resource Version:  1-abc123

What this teaches you about CRD design

  • spec.secretStoreRef — referencing a configuration CRD (SecretStore) from an operational CRD (ExternalSecret). This layering of CRDs to separate concerns is a mature pattern.
  • spec.refreshInterval — the CR expresses a desired behavior (periodic sync), not just a desired state snapshot. CRDs can express temporal behaviors.
  • spec.target.creationPolicy: Owner — ESO will set an owner reference on the created Secret, so deleting the ExternalSecret cascades to deleting the Secret. This is how controllers manage lifecycle.
  • Sensitive values never appear in the CR — only paths and references. The controller handles the actual secret retrieval. This is a key security pattern in CRD design.
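
To see creationPolicy: Owner in effect, check that the generated Secret carries an owner reference back to the ExternalSecret (names match the example above):

kubectl get secret database-secret -n production \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# expected: ExternalSecret/database-creds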

The Common Pattern Across All Three

  OPERATOR PATTERN (cert-manager / KEDA / ESO / every other operator)

  User applies CR
        │
        ▼
  Controller watches CRDs
  (informer cache, events queue)
        │
        ▼
  Controller reconciles:
  actual state ──→ compare ──→ desired state
        │              │
        │         (gap found)
        │              │
        ▼              ▼
  Takes action      Updates status
  (issue cert,      conditions in CR
   create HPA,
   sync Secret)
        │
        └──── loops back, watches for next change

The design contract:
Users write spec — what they want
Controllers read spec, write status — what actually happened
Status conditions are truth — Ready: True/False with reason and message tell operators what the controller knows

This pattern, explained in depth in EP06, is why CRDs and controllers are designed the way they are.


⚠ Common Mistakes

Installing CRDs without the controller. If you install cert-manager’s CRDs from the crds.yaml manifest without installing cert-manager itself, Certificate objects will be accepted by the API server but never reconciled. The Ready condition will never appear. Always install the operator alongside its CRDs.

Editing status fields directly. Many teams try kubectl patch or kubectl edit to update a custom resource’s status to work around a stuck controller. Most well-written controllers overwrite status every reconcile loop — your manual change will be wiped. Fix the underlying issue, not the status display.

Assuming CRD deletion is safe. Covered in EP01 but worth repeating: deleting a CRD cascades to deleting all instances. If you kubectl delete crd certificates.cert-manager.io, every Certificate object in every namespace is gone and cert-manager will stop issuing. Back up CRDs and their instances before any CRD deletion.


Quick Reference

# See all CRDs installed by cert-manager
kubectl get crds | grep cert-manager.io

# Get all Certificates across all namespaces
kubectl get certificates -A

# Watch cert-manager reconcile a new Certificate
kubectl get certificate api-tls -n production -w

# See all ScaledObjects and their current state
kubectl get scaledobjects -A

# Check ESO sync status for all ExternalSecrets
kubectl get externalsecrets -A

# Inspect what APIs a CRD exposes
kubectl api-resources | grep cert-manager

Key Takeaways

  • cert-manager, KEDA, and ESO are canonical examples of well-designed CRD-based operators
  • All three follow the same pattern: user writes spec, controller reconciles to actual state, status reflects outcome
  • spec expresses desired state declaratively; the controller figures out how to achieve it
  • Status conditions (type, status, reason, message) are the standard way to surface controller outcomes
  • Sensitive values never appear in the CR — controllers retrieve them from external systems using references and credentials

What’s Next

EP03: CRD Anatomy opens the YAML of a CRD itself — spec.versions, OpenAPI schema properties, scope, names, and subresources. You have seen CRDs from the outside; next we look at how they are structured on the inside.

Get EP03 in your inbox when it publishes → subscribe at linuxcent.com

What Is a Kubernetes CRD? How Custom Resources Extend the API

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 1
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • A Kubernetes CRD (Custom Resource Definition) is how you add new resource types to the Kubernetes API — the same way Deployment and Service exist natively, you can make BackupPolicy or Certificate exist too
    (CRD = the schema/blueprint; Custom Resource = an instance of that schema, just like a Pod is an instance of the Pod schema)
  • Every kubectl get crds on a real cluster shows dozens of them — cert-manager, KEDA, Prometheus Operator, Crossplane all ship their own CRDs
  • CRDs are served by the same API server as built-in resources — kubectl, RBAC, watches, and events all work identically
  • A CRD alone does nothing — a controller watches the custom resources and acts on them; together they form an Operator
  • CRDs live in etcd just like Pods and Deployments — they survive API server restarts and cluster upgrades
  • You do not need to modify Kubernetes source code or restart the API server to add a CRD

The Big Picture

  HOW KUBERNETES CRDs EXTEND THE API

  ┌──────────────────────────────────────────────────────────────┐
  │  Kubernetes API Server                                       │
  │                                                              │
  │  Built-in resources          Custom resources (via CRD)      │
  │  ─────────────────           ──────────────────────────      │
  │  Pod                         Certificate     (cert-manager)  │
  │  Deployment                  ScaledObject    (KEDA)          │
  │  Service                     ExternalSecret  (ESO)           │
  │  ConfigMap                   BackupPolicy    (your team)     │
  │  ...                         ...                             │
  │                                                              │
  │  All resources: same API, same kubectl, same RBAC, same etcd │
  └──────────────────────────────────────────────────────────────┘
            ▲                          ▲
            │ built in                 │ registered at runtime
            │                         │
         Kubernetes              CustomResourceDefinition
          binary                    (a YAML you apply)

What is a Kubernetes CRD? It is a resource that defines resources — a schema registration that teaches the API server about a new object type you want to use in your cluster.


What Problem CRDs Solve

Kubernetes ships with roughly 50 resource types: Pods, Deployments, Services, ConfigMaps, Secrets, PersistentVolumes, and so on. These cover the general-purpose building blocks for running containerized workloads.

But the moment you operate real infrastructure, you hit the edges. You want to express:

  • “This database should have three replicas with point-in-time recovery enabled” — not a Deployment
  • “This TLS certificate for api.example.com should renew 30 days before expiry” — not a Secret
  • “This queue consumer should scale to zero when the queue is empty” — not a HorizontalPodAutoscaler

Before CRDs (pre-2017), the only options were: use ConfigMaps as a poor substitute (no schema, no validation, no dedicated RBAC), or fork Kubernetes and add the resource natively (impractical for everyone outside the core team).

CRDs, introduced as stable in Kubernetes 1.16, solved this by letting you register a new resource type with the API server at runtime — without touching Kubernetes source code, without restarting the API server, without any special access beyond being able to create cluster-scoped resources.


The Kubernetes API: A Brief Mental Model

Before CRDs make sense, the API model needs to be clear.

  KUBERNETES API STRUCTURE

  apiVersion: apps/v1       ← API group (apps) + version (v1)
  kind: Deployment          ← resource type
  metadata:
    name: web               ← instance name
    namespace: default      ← namespace scope
  spec:
    replicas: 3             ← desired state

Every Kubernetes resource has:
– A group (e.g., apps, batch, networking.k8s.io) — or no group for core resources
– A version (e.g., v1, v1beta1)
– A kind (e.g., Deployment, Pod)
– A scope: namespaced or cluster-wide

The API server is a registry. Each group/version/kind combination maps to a Go struct that knows how to validate, store, and serve that resource type.

A CRD registers a new entry in that registry. You supply the group, version, kind, and schema. The API server handles everything else — serving it via REST, storing it in etcd, exposing it to kubectl.


What a CRD Looks Like

Here is the smallest possible CRD — it creates a new BackupPolicy resource type in the storage.example.com API group:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backuppolicies.storage.example.com
spec:
  group: storage.example.com
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                retentionDays:
                  type: integer
  scope: Namespaced
  names:
    plural: backuppolicies
    singular: backuppolicy
    kind: BackupPolicy
    shortNames:
      - bp

Apply it:

kubectl apply -f backuppolicy-crd.yaml

Now create an instance:

apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: nightly
  namespace: default
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
kubectl apply -f nightly-backup.yaml
kubectl get backuppolicies
kubectl get bp            # shortName works
kubectl describe bp nightly

The API server validates the spec against the schema, stores it in etcd, and returns it via all the standard API endpoints — all without a single line of custom code.
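
You can watch the schema do its job by feeding it a value of the wrong type. The exact error text varies by Kubernetes version, but the rejection happens at the API boundary, before anything reaches etcd:

kubectl apply -f - <<EOF
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: bad-policy
spec:
  schedule: "0 2 * * *"
  retentionDays: "thirty"      # schema declares an integer
EOF
# The apply fails with a validation error pointing at spec.retentionDays.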


CRD vs Built-In Resource: What Is Different?

Not much, deliberately.

Capability                         Built-in resource   Custom resource (CRD)
kubectl get / describe / delete    Yes                 Yes
RBAC (Roles, ClusterRoles)         Yes                 Yes
Watch (informers, events)          Yes                 Yes
Stored in etcd                     Yes                 Yes
OpenAPI schema validation          Yes                 Yes (you define the schema)
Admission webhooks                 Yes                 Yes
Status subresource                 Yes                 Optional (you enable it)
Scale subresource                  Yes                 Optional (you enable it)
Built-in controller behavior       Yes                 No — you write the controller

The last row is the critical one. When you create a Deployment, the deployment controller immediately starts managing ReplicaSets. When you create a BackupPolicy, nothing happens — until you write and deploy a controller that watches BackupPolicy objects and acts on them.

That controller + the CRD is what people call an Operator.
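
The RBAC row in the table above is worth making concrete: a Role for the custom type looks exactly like one for a built-in resource. A minimal sketch (the Role name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backuppolicy-editor
  namespace: default
rules:
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]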


A Real Cluster: What You Actually See

Run this on any cluster running cert-manager, Prometheus Operator, or any other tooling:

kubectl get crds

Sample output (abbreviated):

NAME                                                  CREATED AT
certificates.cert-manager.io                          2024-11-01T08:12:00Z
certificaterequests.cert-manager.io                   2024-11-01T08:12:00Z
issuers.cert-manager.io                               2024-11-01T08:12:00Z
clusterissuers.cert-manager.io                        2024-11-01T08:12:00Z
scaledobjects.keda.sh                                 2024-11-01T08:13:00Z
scaledjobs.keda.sh                                    2024-11-01T08:13:00Z
externalsecrets.external-secrets.io                   2024-11-01T08:14:00Z
prometheuses.monitoring.coreos.com                    2024-11-01T08:15:00Z
servicemonitors.monitoring.coreos.com                 2024-11-01T08:15:00Z

Every tool that ships as a CRD-based system registers its resource types here first. The count often surprises engineers: a production cluster with a typical toolchain easily has 40–80 CRDs.

Check how many are on your cluster:

kubectl get crds --no-headers | wc -l

How the API Server Handles a CRD

When you apply a CRD, the API server does three things:

  CRD REGISTRATION FLOW

  kubectl apply -f my-crd.yaml
          │
          ▼
  1. API server validates the CRD manifest
     (is the schema valid OpenAPI v3? are names correct?)
          │
          ▼
  2. CRD stored in etcd
     (under /registry/apiextensions.k8s.io/customresourcedefinitions/)
          │
          ▼
  3. New REST endpoints activated immediately:
     GET  /apis/storage.example.com/v1alpha1/namespaces/{ns}/backuppolicies
     POST /apis/storage.example.com/v1alpha1/namespaces/{ns}/backuppolicies
     ...

From this point, any kubectl get backuppolicies or API call to those endpoints is handled exactly like a built-in resource call — the API server serves it from etcd, applies RBAC, runs admission webhooks, and returns standard JSON.

No restart required. The new endpoints appear within seconds.
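
You can hit the new endpoints directly through the API server (jq is optional, for readability):

# Discovery document for the new group/version
kubectl get --raw /apis/storage.example.com/v1alpha1 | jq '.resources[].name'

# The namespaced collection itself
kubectl get --raw /apis/storage.example.com/v1alpha1/namespaces/default/backuppolicies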


The Difference Between CRD and CR

Two terms that are easily confused:

  • CRD (CustomResourceDefinition) — the schema/blueprint. There is one CRD per resource type. certificates.cert-manager.io is a CRD.
  • CR (Custom Resource) — an instance of a CRD. Every Certificate object you create is a custom resource. You can have thousands of CRs per CRD.
  CRD (one)          →  Custom Resource (many)
  ─────────             ─────────────────────
  certificates          web-tls           (namespace: production)
  .cert-manager.io      api-tls           (namespace: production)
                        admin-tls         (namespace: staging)
                        ...

The CRD is applied once (usually by the tool’s Helm chart). Custom resources are created by your users, your CI pipeline, or your GitOps system throughout the life of the cluster.


Where CRDs Fit in the Kubernetes Extension Model

CRDs are one of three ways to extend Kubernetes:

  KUBERNETES EXTENSION MECHANISMS

  1. CRDs + Controllers (Operators)
     Add new resource types + behavior
     → cert-manager, KEDA, Argo CD, Crossplane
     Used for: domain-specific abstractions, infrastructure management

  2. Admission Webhooks
     Intercept API requests to validate or mutate objects
     → OPA/Gatekeeper, Kyverno, Istio injection
     Used for: policy enforcement, sidecar injection, defaulting

  3. API Aggregation (AA)
     Register a fully separate API server behind the main API server
     → metrics-server, custom autoscalers
     Used for: when you need non-CRUD semantics (e.g. exec, attach, streaming)

For 95% of use cases, CRDs + controllers are the right mechanism. API aggregation is complex and only warranted for non-standard API semantics. Admission webhooks are complementary to CRDs, not an alternative.


⚠ Common Mistakes

Confusing the CRD with the controller. The CRD is just a schema registration — it does not execute code. If you apply a CRD but do not deploy its controller, creating custom resources will succeed (the API server accepts them) but nothing will happen. This catches many people the first time they try to use cert-manager by only applying the CRDs without installing the cert-manager controller.

Assuming CRD deletion is safe. Deleting a CRD deletes all custom resources of that type from etcd. There is no “are you sure?” prompt. If you delete the certificates.cert-manager.io CRD, every Certificate object in every namespace is gone.

Treating CRDs as ConfigMap replacements. Some teams store configuration in CRDs purely to get schema validation. This works, but without a controller, the custom resources are inert data. If you only need configuration storage with validation, a CRD is viable — just be explicit that there is no reconciliation loop.


Quick Reference

# List all CRDs in the cluster
kubectl get crds

# Inspect a specific CRD's schema
kubectl get crd certificates.cert-manager.io -o yaml

# List all custom resources of a type
kubectl get certificates -A

# Get details on a specific custom resource
kubectl describe certificate web-tls -n production

# Delete a CRD (WARNING: deletes all instances)
kubectl delete crd backuppolicies.storage.example.com

# Check if a CRD is established (ready to use)
kubectl get crd backuppolicies.storage.example.com \
  -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
# Returns: True

Key Takeaways

  • A Kubernetes CRD registers a new resource type with the API server — no source code changes, no restart required
  • Custom resources behave identically to built-in resources: kubectl, RBAC, watches, etcd, admission webhooks all work the same way
  • The CRD is just the schema; a controller gives custom resources behavior — together they form an Operator
  • Every production cluster running modern tooling already uses dozens of CRDs
  • Deleting a CRD deletes all its instances — treat CRDs as production-critical objects

What’s Next

EP02: CRDs You Already Use makes this concrete before we go deeper — we walk through cert-manager’s Certificate, KEDA’s ScaledObject, and External Secrets’ ExternalSecret as working examples, so you understand what a well-designed CRD looks like from a user’s perspective before you design your own.

Get EP02 in your inbox when it publishes → subscribe at linuxcent.com

LDAP Internals: The Directory Tree, Schema, and What Travels on the Wire

Reading Time: 12 minutes

The Identity Stack, Episode 2
EP01: What Is LDAP → EP02 → EP03: LDAP Authentication on Linux → …


TL;DR

  • The Directory Information Tree (DIT) is the hierarchical database LDAP stores — every entry lives at a unique path described by its Distinguished Name (DN)
  • Object classes define what attributes an entry is allowed or required to have — posixAccount adds UID, GID, and home directory; inetOrgPerson adds email and display name
  • Schema is the rulebook: which attribute types exist across the entire directory, what syntax each follows, and which object classes require or permit them
  • An LDAP Search sends four things: a base DN, a scope (base/one/sub), a filter like (uid=vamshi), and a list of attributes to return — the server traverses the tree and returns LDIF
  • Every LDAP message on the wire is BER-encoded (Basic Encoding Rules, a subset of ASN.1) — a compact binary format, not text
  • ldapsearch output is LDIF (LDAP Data Interchange Format) — the human-readable representation of what the BER payload carried

The Big Picture: From ldapsearch to Directory Entry

ldapsearch -x -H ldap://dc.corp.com -b "dc=corp,dc=com" "(uid=vamshi)" cn mail uidNumber
     │
     │  TCP port 389 (or 636 for LDAPS)
     │  BER-encoded SearchRequest
     ▼
┌─────────────────────────────────────────────────┐
│  LDAP Server (AD / OpenLDAP / 389-DS / FreeIPA)  │
│                                                   │
│  Directory Information Tree                       │
│                                                   │
│  dc=corp,dc=com                    ← search base  │
│    └── ou=engineers                ← scope: sub   │
│          ├── uid=alice                            │
│          └── uid=vamshi  ← filter match           │
│                cn: vamshi                         │
│                mail: vamshi@corp.com              │
│                uidNumber: 1001                    │
└─────────────────────────────────────────────────┘
     │
     │  BER-encoded SearchResultEntry
     ▼
# LDIF output on your terminal
dn: uid=vamshi,ou=engineers,dc=corp,dc=com
cn: vamshi
mail: vamshi@corp.com
uidNumber: 1001

LDAP internals are the mechanics between the command you type and the directory entry you get back. EP01 explained why LDAP was invented. This episode explains what it actually does when you run it.


The Directory Information Tree

EP01 introduced the DIT as a concept inherited from X.500. Here’s what it actually looks like inside a directory.

Every LDAP directory has a root — the base DN — from which all entries descend. For a company called Corp with a domain corp.com, the base is typically dc=corp,dc=com. Below that, the tree branches into organizational units, and below those, individual entries for people, groups, services, and anything else the directory administrator decided to model.

dc=corp,dc=com                          ← domain root (base DN)
│
├── ou=people                           ← organizational unit: people
│     ├── uid=alice                     ← user entry
│     ├── uid=vamshi
│     └── uid=bob
│
├── ou=groups                           ← organizational unit: groups
│     ├── cn=engineers
│     └── cn=ops
│
├── ou=services                         ← organizational unit: service accounts
│     ├── cn=jenkins
│     └── cn=gitlab-runner
│
└── ou=hosts                            ← organizational unit: machines
      ├── cn=web01.corp.com
      └── cn=db01.corp.com

This hierarchy is not a file system and not a relational database. It is specifically optimized for reads — the query “give me everything about this user” is the operation the protocol is built around. Writes are infrequent. Reads are constant.

Every entry in the tree has exactly one parent. There are no cross-links between branches, no foreign keys. The tree is the structure. An entry’s position in the tree is what defines it.


Distinguished Names: Reading the Path

The Distinguished Name (DN) is how you address any entry in the directory. It reads right-to-left, from the leaf to the root, with each component separated by a comma.

uid=vamshi,ou=engineers,dc=corp,dc=com

Reading right-to-left:
  dc=corp,dc=com       ← domain: corp.com
  ou=engineers         ← organizational unit: engineers
  uid=vamshi           ← this specific entry: user "vamshi"

Each component of a DN — uid=vamshi, ou=engineers, dc=corp — is a Relative Distinguished Name (RDN). The RDN is the attribute-value pair that uniquely identifies the entry within its parent container. Two users in the same ou=engineers cannot both have uid=vamshi — that would create two entries with identical DNs, which the directory won’t allow.

Common RDN attribute types and what they mean:

Attribute   Stands for            Typical use
dc          Domain Component      Domain name segments (dc=corp,dc=com = corp.com)
ou          Organizational Unit   Container for grouping entries
cn          Common Name           Groups, service accounts, human-readable name
uid         User ID               Linux username — the standard RDN for user entries
o           Organization          Top-level org containers (less common in modern setups)

When your Linux system calls getent passwd vamshi, SSSD translates that into an LDAP Search for an entry where uid=vamshi somewhere under the configured base DN. The full DN comes back with the result, but what your system cares about are the attributes inside it.


Object Classes and Schema

Every entry in the directory has an objectClass attribute — usually several values. Object classes define what attributes the entry is allowed or required to have.

# A typical user entry's object classes
dn: uid=vamshi,ou=engineers,dc=corp,dc=com
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount

Each object class contributes a set of attributes — some required (MUST), some optional (MAY):

objectClass: posixAccount
  MUST: cn, uid, uidNumber, gidNumber, homeDirectory
  MAY:  userPassword, loginShell, gecos, description

objectClass: inetOrgPerson
  MUST: sn (surname), cn
  MAY:  mail, telephoneNumber, displayName, jpegPhoto, ...

objectClass: shadowAccount
  MUST: uid
  MAY:  shadowLastChange, shadowMin, shadowMax, shadowWarning, ...

When Linux authenticates a user via LDAP, it needs the posixAccount attributes: uidNumber (the numeric UID), gidNumber, homeDirectory, and loginShell. Without posixAccount, the user entry exists in the directory but can’t be used for Linux logins — getent passwd will return nothing.
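
A direct way to check whether an entry is usable for Linux logins (server address and base DN follow the examples in this episode):

ldapsearch -x -H ldap://dc.corp.com -b "dc=corp,dc=com" \
  "(&(objectClass=posixAccount)(uid=vamshi))" \
  uidNumber gidNumber homeDirectory loginShell
# No entry returned → the account exists in LDAP but not as a POSIX account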

Groups in LDAP use their own object class:

objectClass: groupOfNames
  MUST: cn, member
  MAY:  description, owner, ...

# A group entry looks like this:
dn: cn=engineers,ou=groups,dc=corp,dc=com
objectClass: groupOfNames
cn: engineers
member: uid=vamshi,ou=engineers,dc=corp,dc=com
member: uid=alice,ou=engineers,dc=corp,dc=com

groupOfNames stores members as full DNs — which is why the SSSD group search filter is (member=uid=vamshi,ou=...) rather than (member=vamshi). The directory stores the exact path to each member entry. posixGroup is the alternative: it stores memberUid as a bare username string instead of a DN. Active Directory’s group class likewise stores members as DNs; pure POSIX environments often use posixGroup.

Object classes are grouped into three kinds:

Structural — defines what the entry fundamentally is. Every entry must have exactly one structural class. inetOrgPerson is structural.

Auxiliary — adds additional attributes to an existing entry. posixAccount and shadowAccount are auxiliary — the schema output in the next section shows AUXILIARY for posixAccount. You can stack multiple auxiliary classes on a single entry.

Abstract — base classes that other classes inherit from. top is the root abstract class that every entry implicitly has. You never add top to an entry; it’s always there.

Schema: The Directory’s Type System

Schema is the global rulebook for the entire directory. It defines:

  • Attribute type definitions — what each attribute is named, what syntax it uses (a string? an integer? a binary blob?), whether it’s case-sensitive, whether multiple values are allowed
  • Object class definitions — which attributes each class requires or permits
  • Matching rules — how equality comparisons work for each attribute type

The schema is stored in the directory itself, under a special entry at cn=schema,cn=config (OpenLDAP) or cn=Schema,cn=Configuration (Active Directory). You can query it:

# View the schema for the posixAccount object class
ldapsearch -x -H ldap://your-dc \
  -b "cn=schema,cn=config" \
  "(objectClass=olcObjectClasses)" \
  olcObjectClasses | grep -A 10 "posixAccount"

# Output:
# olcObjectClasses: ( 1.3.6.1.1.1.2.0
#   NAME 'posixAccount'
#   DESC 'Abstraction of an account with POSIX attributes'
#   SUP top
#   AUXILIARY
#   MUST ( cn $ uid $ uidNumber $ gidNumber $ homeDirectory )
#   MAY ( userPassword $ loginShell $ gecos $ description ) )

That OID (1.3.6.1.1.1.2.0) is the globally unique identifier for the posixAccount object class. Every object class and attribute type in every LDAP directory on the planet has a unique OID assigned by an authority. This is how schema interoperability works across different directory implementations — OpenLDAP, Active Directory, and 389-DS can all understand each other’s posixAccount entries because they share the same OID.


LDAP Operations: What Actually Runs

LDAP v3 defines a small set of protocol operations; the core ones are listed below (Unbind and Extended operations round out the set). Day-to-day authentication uses two: Bind and Search.

LDAP Operation Set
──────────────────
Bind        ← authenticate (prove identity)
Search      ← query the directory
Add         ← create a new entry
Modify      ← change attributes on an existing entry
Delete      ← remove an entry
ModifyDN    ← rename or move an entry
Compare     ← test if an attribute has a specific value
Abandon     ← cancel an outstanding operation

Bind: Proving Who You Are

Before any authenticated operation, the client sends a Bind request. There are two types:

Simple Bind — the client sends its DN and password in the clear (or over TLS). This is what -x in ldapsearch means: simple authentication.

# Simple bind as a service account
ldapsearch -x \
  -D "cn=svc-ldap-reader,ou=services,dc=corp,dc=com" \
  -w "service-account-password" \
  -H ldap://dc.corp.com \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)"

SASL Bind — the client uses an authentication mechanism registered with SASL (Simple Authentication and Security Layer). Kerberos (via the GSSAPI mechanism) is the most common. EP05 covers Kerberos in detail.

# SASL bind using Kerberos (after kinit)
ldapsearch -Y GSSAPI \
  -H ldap://dc.corp.com \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)"

An anonymous Bind (no DN, no password) is also valid for directories configured to allow anonymous reads. Many public LDAP directories (and some internal ones, misconfigured) allow this.
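
An anonymous search is just the same command with -D and -w omitted; it only works where the server permits anonymous reads:

ldapsearch -x \
  -H ldap://dc.corp.com \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)" cn mail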

Search: The Core Operation

A Search request is built from five parameters you care about in practice (the wire format adds size/time limits and a typesOnly flag, visible in the BER dump later):

baseObject   — where in the DIT to start (e.g., "dc=corp,dc=com")
scope        — how deep to look
               base    = only the base entry itself
               one     = one level below base (immediate children)
               sub     = entire subtree below base (most common)
derefAliases — how to handle alias entries (usually derefAlways)
filter       — what to match (e.g., "(uid=vamshi)")
attributes   — which attributes to return (empty = return all)

When SSSD authenticates a user login, it runs exactly two Search operations:

Search 1 — find the user's entry
  base:       dc=corp,dc=com
  scope:      sub
  filter:     (uid=vamshi)
  attributes: dn, uid, uidNumber, gidNumber, homeDirectory, loginShell

Search 2 — find the user's group memberships
  base:       dc=corp,dc=com
  scope:      sub
  filter:     (member=uid=vamshi,ou=engineers,dc=corp,dc=com)
  attributes: dn, cn, gidNumber

The first search locates the user entry and retrieves the POSIX attributes. The second finds all group entries that contain the user’s DN as a member. These two queries are the complete basis for a Linux login over LDAP.

Search Filters

LDAP filters follow a prefix (Polish notation) syntax. Every filter is wrapped in parentheses:

# Simple equality
(uid=vamshi)

# Presence — entry has this attribute at all
(mail=*)

# Substring match
(cn=vam*)

# Comparison
(uidNumber>=1000)

# Logical AND — both conditions must match
(&(objectClass=posixAccount)(uid=vamshi))

# Logical OR — either condition matches
(|(uid=vamshi)(mail=vamshi@corp.com))

# Logical NOT
(!(uid=guest))

# Combined — posixAccount entries with UID >= 1000 and no disabled flag
(&(objectClass=posixAccount)(uidNumber>=1000)(!(pwdAccountLockedTime=*)))

The & and | operators take any number of operands. Filter syntax looks strange the first time but is unambiguous and compact — which matters when you’re encoding it into BER for the wire.


What Actually Travels on the Wire

Every LDAP message is encoded in BER (Basic Encoding Rules), a binary subset of ASN.1. LDAP is not a text protocol.

When you run ldapsearch, the tool constructs a BER-encoded SearchRequest message and sends it over TCP. The server responds with one or more SearchResultEntry messages (one per matching entry), followed by a SearchResultDone. All of these are BER.

BER uses a type-length-value (TLV) encoding:

Tag byte(s)    — what type of data this is
Length byte(s) — how many bytes of data follow
Value byte(s)  — the actual data

A minimal LDAP SearchRequest for ldapsearch -x -b "dc=corp,dc=com" "(uid=vamshi)" uid looks like this on the wire:

30 3a          ← SEQUENCE (LDAPMessage), 58 bytes follow
  02 01 01     ← INTEGER 1 (messageID = 1)
  63 35        ← [APPLICATION 3] SearchRequest, 53 bytes
    04 0e       ← OCTET STRING: baseObject, 14 bytes
      64 63 3d  ← "dc=corp,dc=com"
      63 6f 72
      70 2c 64
      63 3d 63
      6f 6d
    0a 01 02   ← ENUMERATED: scope = wholeSubtree (2)
    0a 01 03   ← ENUMERATED: derefAliases = derefAlways (3)
    02 01 00   ← INTEGER: sizeLimit = 0 (unlimited)
    02 01 00   ← INTEGER: timeLimit = 0 (unlimited)
    01 01 00   ← BOOLEAN: typesOnly = false
    a3 0d      ← [3] equalityMatch filter, 13 bytes
      04 03 75 69 64   ← attributeDesc: "uid"
      04 06 76 61 6d   ← assertionValue: "vamshi"
             73 68 69
    30 05      ← SEQUENCE: AttributeDescriptionList
      04 03 75 69 64   ← "uid"

You don’t need to read BER by hand in practice. But knowing it’s binary — not HTTP, not JSON, not plain text — explains some things:

  • Why tcpdump port 389 shows binary output you can’t read directly
  • Why LDAP on port 389 looks different in Wireshark than HTTP traffic
  • Why ldapsearch output (LDIF) is a transformation of the wire data, not the wire data itself

To see the wire protocol in action:

# Run ldapsearch with debug output (level 1 = protocol tracing)
ldapsearch -d 1 -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w password \
  "(uid=tesla)" cn

# You'll see output like:
# ldap_connect_to_host: TCP ldap.forumsys.com:389
# ldap_new_connection 1 1 0
# ldap_connect_to_host: Trying ldap.forumsys.com:389
# ldap_pvt_connect: fd: 5 tm: -1 async: 0
# TLS: can't connect.
# ldap_open_defconn: successful
# ber_scanf fmt ({it) ber:     ← BER decoding of the response
# ber_scanf fmt ({) ber:
# ber_scanf fmt (W) ber:
# ...

The ber_scanf lines are the BER decoder working through the server’s response. Each line represents one TLV element being read off the wire.


Reading ldapsearch Output: Every Field

ldapsearch output is LDIF (LDAP Data Interchange Format), defined in RFC 2849. It’s the standard text serialization of LDAP entries.

ldapsearch -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w password \
  "(uid=tesla)" \
  cn mail uid uidNumber objectClass

Output, annotated:

# extended LDIF
#
# LDAPv3                              ← protocol version confirmed
# base <dc=example,dc=com> with scope subtree
# filter: (uid=tesla)                 ← your search filter echoed back
# requesting: cn mail uid uidNumber objectClass
#

# tesla, example.com                  ← comment: CN, base DN
dn: uid=tesla,dc=example,dc=com      ← Distinguished Name — full path in the tree

objectClass: inetOrgPerson           ← structural class: person with org attrs
objectClass: organizationalPerson    ← superclass of inetOrgPerson: adds title, telephoneNumber
objectClass: person                  ← superclass: adds sn (surname) and cn
objectClass: top                     ← every entry has this implicitly
cn: Tesla                            ← common name (from inetOrgPerson MUST)
mail: [email protected]        ← email (from inetOrgPerson MAY)
uid: tesla                           ← userid (from inetOrgPerson MAY)

# search result
search: 2                            ← messageID of the SearchResultDone
result: 0 Success                    ← 0 = no error; 32 = no such object; 49 = invalid credentials

# numResponses: 2                    ← 1 result entry + 1 SearchResultDone
# numEntries: 1

The result: line is the one to watch when debugging. LDAP result codes:

Code   Meaning                  What it tells you
0      Success                  Query ran, results returned (or no results found — check numEntries)
32     No Such Object           Base DN doesn’t exist in this directory
49     Invalid Credentials      Bind failed — wrong DN, wrong password, or account locked
50     Insufficient Access      Your bind DN doesn’t have read permission on these entries
53     Unwilling to Perform     Server refused the operation (e.g., password policy, anonymous bind disabled)
65     Object Class Violation   Add/Modify would violate schema (missing MUST attribute, unrecognized object class)

Ports: 389, 636, and 3268

Port 389   — LDAP (plaintext, or StartTLS in-session upgrade)
Port 636   — LDAPS (LDAP wrapped in TLS from the start)
Port 3268  — Active Directory Global Catalog (plain)
Port 3269  — Active Directory Global Catalog over TLS

Port 389 vs 636: Both carry the same BER-encoded LDAP protocol. The difference is when TLS starts. On 636 (LDAPS), the TLS handshake happens before the first LDAP message. On 389 with StartTLS, the client sends a plaintext ExtendedRequest with OID 1.3.6.1.4.1.1466.20037 to initiate the TLS upgrade, then both sides continue over TLS. In production, use one or the other — never unencrypted port 389. Your credentials transit the wire on every Bind.

Ports 3268/3269 — Active Directory Global Catalog: AD organizes domains into forests. Each domain controller holds the full LDAP tree for its own domain. The Global Catalog is a read-only, partial replica of every domain in the forest — just the most-queried attributes from every object. When an application needs to find a user across domains in the same forest (not just in one domain), it queries the Global Catalog on 3268/3269 instead of a domain-specific DC on 389/636.

Forest: corp.com
  ├── Domain: corp.com       → DC at port 389/636   (full copy of corp.com)
  ├── Domain: emea.corp.com  → DC at port 389/636   (full copy of emea.corp.com)
  └── Global Catalog        → GC at port 3268/3269  (partial copy of ALL domains)

If your SSSD or application is configured to use port 3268 instead of 389, it’s talking to the Global Catalog — useful for forest-wide user lookups, but missing some less-common attributes that aren’t replicated to the GC.


Try It: ldapsearch Against Your Own Directory

If your Linux machine is joined to AD or connected to an LDAP directory, you can run these right now:

# 1. Confirm your SSSD knows where the LDAP server is
grep -E "ldap_uri|ad_domain|krb5_server" /etc/sssd/sssd.conf

# 2. Look up your own user entry
ldapsearch -x \
  -H "$(grep ldap_uri /etc/sssd/sssd.conf | awk -F= '{print $2}' | tr -d ' ')" \
  -b "dc=$(hostname -d | sed 's/\./,dc=/g')" \
  "(uid=$(whoami))" \
  dn objectClass uid uidNumber gidNumber homeDirectory loginShell

# 3. Find the groups you're in
ldapsearch -x \
  -H ldap://your-dc \
  -b "dc=corp,dc=com" \
  "(member=$(ldapsearch -x ... "(uid=$(whoami))" dn | grep ^dn | cut -d' ' -f2-))" \
  cn gidNumber

# 4. Check what object classes your entry has
ldapsearch -x \
  -H ldap://your-dc \
  -b "dc=corp,dc=com" \
  "(uid=$(whoami))" \
  objectClass

On a machine joined to Active Directory, the ldap_uri in sssd.conf is your domain controller’s address. On FreeIPA or OpenLDAP, it’s the directory server. The same ldapsearch commands work against all of them — because they all speak LDAP v3.


⚠ Common Misconceptions

“The DN is like a file path.” The analogy holds for reading it, but the DIT is not a file system. Entries don’t inherit permissions from parent containers the way files inherit from directories. Access control in LDAP is defined by ACLs on the server — not by position in the tree.

“LDAP is case-sensitive.” It depends on the attribute. Most string attributes (like cn and mail) use case-insensitive matching by default — (cn=Vamshi) and (cn=vamshi) return the same results. But some attributes (like userPassword and most binary types) are case-sensitive. The schema’s matching rules define this per-attribute.

“You need the full DN to search for a user.” No. The Search operation with a sub scope searches the entire subtree below the base DN. You search with a filter like (uid=vamshi) without knowing the full DN. The DN comes back in the result.

“LDAP accounts and Linux accounts are the same thing.” An LDAP user entry becomes a Linux account only if the entry has a posixAccount object class with the required POSIX attributes (uidNumber, gidNumber, homeDirectory). An LDAP entry without posixAccount can exist in the directory but getent passwd will not return it.

“The objectClass attribute can be changed freely.” Structural object classes cannot be changed after an entry is created — you’d have to delete and recreate the entry. Auxiliary classes can be added or removed. This is why correctly choosing the structural class at entry creation time matters.


Framework Alignment

Domain Relevance
CISSP Domain 5: Identity and Access Management DIT structure, DN addressing, object classes, and schema are the data model underpinning every enterprise identity store — understanding them is foundational to managing directory-based IAM
CISSP Domain 4: Communications and Network Security BER on port 389 is unencrypted; LDAPS (port 636) or StartTLS is required for production — wire-level understanding informs the transport security decision
CISSP Domain 3: Security Architecture and Engineering Schema design and DIT hierarchy are architectural decisions with security consequences: overly permissive schemas enable privilege escalation; flat DITs make access delegation harder

Key Takeaways

  • The DIT is a hierarchical database — every entry has a unique DN that describes its path from leaf to root
  • Object classes define the schema rules for each entry: what attributes are required (MUST) vs optional (MAY), and what the entry fundamentally is
  • For a user to be usable for Linux logins, the directory entry needs the posixAccount object class with uidNumber, gidNumber, and homeDirectory populated
  • An LDAP login is two operations: a Bind (authenticate), then a Search (retrieve POSIX attributes and group memberships)
  • Everything on the wire is BER-encoded binary — ldapsearch output is LDIF, a human-readable transformation of what the wire actually carries
  • LDAP result code 0 means success; 49 means bad credentials; 32 means the base DN doesn’t exist — these are the three you’ll debug most often


Run ldapsearch against your own directory and look at the object classes on your entry. Does it have posixAccount? Does it have shadowAccount? What attributes is your SSSD actually reading on every login — and what does it do when the LDAP server is unreachable? 👇


What’s Next

EP02 showed what’s inside the directory: the tree structure, the schema, the operations, and the wire protocol. What it left open is how Linux actually uses this information to grant a login.

LDAP is not, by itself, an authentication protocol. The Bind operation can verify a password — but that’s a tiny piece of what happens when you SSH into a machine joined to Active Directory. The full login flow runs through PAM, NSS, and SSSD before LDAP ever gets queried. EP03 traces that path.

Next: LDAP Authentication on Linux: PAM, NSS, and the Login Stack

Get EP03 in your inbox when it publishes → linuxcent.com/subscribe

Kubernetes Today: v1.33 to v1.35, In-Place Resize GA, and What Comes Next

Reading Time: 6 minutes


Introduction

Ten years after the first commit, Kubernetes is not exciting in the way it was in 2015. That’s a compliment. The system is stable. The APIs are mature. The migrations — dockershim, PSP, cloud provider code — are behind us.

What the 1.33–1.35 cycle shows is a project focused on precision: removing edge cases, promoting long-running alpha features to stable, and making the scheduler, storage, and security model more correct rather than more powerful. That’s what a mature infrastructure platform looks like.

Here’s what happened and where the project is headed.


Kubernetes 1.33 — Sidecar Resize, In-Place Resize Beta (April 2025)

Code name: Octarine

In-Place Pod Vertical Scaling reaches Beta

After landing as alpha in 1.27, in-place pod resource resizing became beta in 1.33 — enabled by default via the InPlacePodVerticalScaling feature gate.

The capability: change CPU and memory requests/limits on a running container without terminating and restarting the pod.

# Resize a running container's CPU limit without restart (via the resize subresource)
kubectl patch pod api-pod-xyz --subresource resize --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/requests/cpu",
    "value": "2"
  },
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/limits/cpu",
    "value": "4"
  }
]'

# Verify the resize was applied
kubectl get pod api-pod-xyz -o jsonpath='{.status.containerStatuses[0].resources}'

Why this matters operationally: Before in-place resize, vertical scaling meant terminating the pod, losing in-memory state, waiting for a new pod to become ready. For databases with warm buffer pools, JVM applications with loaded heap caches, or any workload where startup cost is significant, this was a serious limitation. Vertical Pod Autoscaler (VPA) worked around it by restarting pods — acceptable for stateless workloads, problematic for stateful ones.

In 1.33, resizing also works for sidecar containers.

Sidecar Containers — Full Maturity

Native sidecar containers (init containers with restartPolicy: Always) graduated to stable in 1.33, and combined with in-place resize you can now vertically scale a service mesh proxy (an Envoy sidecar) without restarting the application pod. For high-traffic services where the proxy itself becomes the CPU bottleneck, this is directly actionable.


Gateway API v1.4 (October 2025)

Gateway API continued its rapid iteration with v1.4:

BackendTLSPolicy (Standard channel): Configure TLS between the gateway and the backend service — not just TLS termination at the gateway, but end-to-end encryption:

apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: api-backend-tls
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: api-service
  validation:
    caCertificateRefs:
    - name: internal-ca
      group: ""
      kind: ConfigMap
    hostname: api.internal.corp

Gateway Client Certificate Validation: The gateway can now validate client certificates — mutual TLS for ingress traffic, not just between services.

TLSRoute to Standard: TLS routing (based on SNI, not HTTP host headers) graduated to the standard channel — enabling TCP workloads with TLS passthrough through the Gateway API model.

ListenerSet: Group multiple Gateway listeners — useful for shared infrastructure where multiple teams need to attach routes to the same gateway without managing separate Gateway resources.


Kubernetes 1.34 — Scheduler Improvements, DRA Continues (August 2025)

The 1.34 release focused on the scheduler and Dynamic Resource Allocation:

DRA structured parameters stabilization: The Dynamic Resource Allocation API matured its parameter model — resource drivers can expose structured claims that the scheduler understands, enabling topology-aware placement of GPU workloads:

apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
      selectors:
      - cel:
          expression: device.attributes["nvidia.com/gpu-product"].string() == "A100-SXM4-80GB"
      count: 2

Scheduler QueueingHint stable: Plugins can now tell the scheduler when to re-queue a pod for scheduling — instead of the scheduler periodically retrying all unschedulable pods, plugins signal when relevant cluster state has changed. This significantly reduces scheduler CPU consumption in large clusters with many unschedulable pods.

Fine-grained node authorization improvements: Kubelets can now be restricted from accessing Service resources they don’t need — further reducing the blast radius of a compromised kubelet.


Kubernetes 1.35 — In-Place Resize GA, Memory Limits Unlocked (December 2025)

In-Place Pod Vertical Scaling Graduates to Stable

After alpha in 1.27 and beta in 1.33, in-place resize graduated to GA in 1.35. Two significant improvements accompanied GA:

Memory limit decreases now permitted: Previously, you could increase memory limits in-place but not decrease them. The restriction existed because the kernel doesn’t immediately reclaim memory when the limit is lowered — the OOM killer would need to run. 1.35 lifts this restriction with proper handling: the kernel is instructed to reclaim, and the pod status reflects the resize progress.
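
A sketch of what a decrease looks like through the resize subresource (the pod name is illustrative, and the exact status fields reporting resize progress vary by version):

kubectl patch pod api-pod-xyz --subresource resize --type='json' -p='[
  {"op": "replace", "path": "/spec/containers/0/resources/limits/memory", "value": "2Gi"}
]'

# Watch the applied values settle as memory is reclaimed
kubectl get pod api-pod-xyz -o jsonpath='{.status.containerStatuses[0].resources}'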

Pod-Level Resources (alpha in 1.35): Specify resource requests and limits at the pod level rather than per-container — with in-place resize support. Useful for init containers and sidecar patterns where total pod resources matter more than per-container allocation.

spec:
  # Pod-level resources (alpha) — total budget for all containers
  resources:
    requests:
      cpu: "4"
      memory: "8Gi"
  containers:
  - name: application
    image: myapp:latest
    # No per-container resources; pod-level applies
  - name: log-collector
    image: fluentbit:latest
    restartPolicy: Always  # sidecar

Other 1.35 Highlights

Topology Spread Constraints improvements: Better handling of unschedulable scenarios — whenUnsatisfiable: ScheduleAnyway now has smarter fallback behavior.

VolumeAttributesClass stable: Change storage performance characteristics (IOPS, throughput) of a PersistentVolume without re-provisioning — the storage equivalent of in-place pod resize.

# Change volume IOPS without re-provisioning
kubectl patch pvc database-pvc --type='merge' -p='
  {"spec": {"volumeAttributesClassName": "high-performance"}}'

Job success policy improvements: Declare a Job successful when a subset of pods complete successfully — for distributed training jobs where not all workers need to finish.
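
As a sketch, an Indexed Job that is declared successful once the leader (index 0) completes, even if other workers are still running (image and counts are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: distributed-training
spec:
  completionMode: Indexed
  completions: 8
  parallelism: 8
  successPolicy:
    rules:
    - succeededIndexes: "0"   # Job succeeds once index 0 completes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: registry.example.com/trainer:latest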


What’s in Kubernetes 1.36 (April 22, 2026)

Kubernetes 1.36 is on track for April 22, 2026 release. Based on the enhancement tracking and KEP (Kubernetes Enhancement Proposal) pipeline, expected highlights include:

  • DRA continuing toward stable
  • Pod-level resources moving to beta
  • Scheduler improvements for AI/ML workload placement
  • Further Gateway API integration as core networking model

The project has reached a rhythm: four releases per year, each focused on advancing a predictable set of features through alpha → beta → stable. The drama of the 2019–2022 period (PSP, dockershim, API removals) is behind it.


The State of the Ecosystem in 2026

Control Plane Deployment Models

Model                      Examples              Best For
Managed (cloud provider)   GKE, EKS, AKS         Most organizations; no control plane ops
Self-managed               kubeadm, k3s, Talos   Air-gapped, on-prem, specific compliance requirements
Managed (platform)         Rancher, OpenShift    Enterprises that need multi-cluster management + vendor support

CNI Landscape

CNI       Model              Notable Feature
Cilium    eBPF               kube-proxy replacement, network policy at kernel, Hubble observability
Calico    eBPF or iptables   BGP-based networking, hybrid cloud routing
Flannel   VXLAN/host-gw      Simple, low overhead, no network policy
Weave     Mesh overlay       Easy multi-host setup

eBPF-based CNIs (Cilium, Calico in eBPF mode) are now the default recommendation for production clusters. The iptables era of Kubernetes networking is ending.

Security Stack in 2026

A hardened Kubernetes cluster in 2026 runs:

Cluster provisioning:    Cluster API + GitOps (Flux/ArgoCD)
Admission control:       Pod Security Admission (restricted) + Kyverno or OPA/Gatekeeper
Runtime security:        Falco (eBPF-based syscall monitoring)
Network security:        Cilium NetworkPolicy + Cilium Cluster Mesh for multi-cluster
Image security:          Cosign signing in CI + admission webhook for signature verification
Secret management:       External Secrets Operator → HashiCorp Vault or cloud KMS
Observability:           Prometheus + Grafana + Hubble (network flows) + OpenTelemetry

The Permanent Principles That Haven’t Changed

Looking across twelve years and 35 minor versions, some things have not changed:

The API as the universal interface: Everything in Kubernetes is a resource. This remains the most important architectural decision — it makes every tool, every controller, every GitOps system work with the same model.

Reconciliation loops: Every Kubernetes controller watches actual state and drives it toward desired state. The controller pattern from 2014 is unchanged. CRDs and Operators are just more instances of it.

Labels and selectors: The flexible grouping mechanism from 1.0 is still the primary way Kubernetes components find each other. Services find pods. HPA finds Deployments. Operators find their managed resources.

Declarative, not imperative: You describe what you want. Kubernetes figures out how to achieve and maintain it. This principle, inherited from Borg’s BCL configuration, underlies everything from Deployments to Crossplane’s cloud resource management.


What’s Coming: The Next Five Years

WebAssembly on Kubernetes: The Wasm ecosystem (wasmCloud, SpinKube) is building toward running WebAssembly workloads as first-class Kubernetes pods — near-native performance, smaller images, stronger isolation than containers. Still early, but gaining real adoption.

AI inference as infrastructure: LLM serving is becoming a cluster primitive. Tools like KServe and vLLM on Kubernetes are moving from research to production. The scheduler, resource model, and networking will continue adapting to inference workload patterns.

Confidential computing: AMD SEV, Intel TDX, and ARM CCA provide hardware-level memory encryption for pods. The RuntimeClass mechanism and ongoing kernel work are making confidential Kubernetes workloads operational rather than experimental.

Leaner distributions: k3s, k0s, Talos, and Flatcar-based minimal Kubernetes distributions are growing in adoption for edge, IoT, and resource-constrained environments. The pressure is toward smaller, more auditable control planes.


Key Takeaways

  • In-place pod vertical scaling went from alpha (1.27) to stable (1.35) — live CPU and memory resize without pod restart changes the economics of stateful workload management
  • Gateway API v1.4 completes the ingress replacement story: BackendTLSPolicy, client certificate validation, and TLSRoute in standard channel
  • VolumeAttributesClass stable (1.35): Change storage performance in-place — the storage parallel to pod resource resize
  • The eBPF era of Kubernetes networking is established: Cilium as default CNI in GKE, growing in EKS/AKS, replacing iptables-based kube-proxy
  • The Kubernetes project in 2026 is focused on precision — promoting mature features to stable, reducing edge cases, improving scheduler efficiency — not adding new abstractions
  • WebAssembly, confidential computing, and AI inference scheduling are the frontiers to watch

Series Wrap-Up

Era          Defining Change
2003–2014    Borg and Omega build the playbook internally at Google
2014–2016    Kubernetes 1.0, CNCF, and winning the container orchestration wars
2016–2018    RBAC stable, CRDs, cloud providers all-in on managed K8s
2018–2020    Operators, service mesh, OPA/Gatekeeper — the extensibility era
2020–2022    Supply chain crisis, PSP deprecated, API removals, dockershim exit
2022–2023    Dockershim and PSP removed, eBPF networking takes over
2023–2025    GitOps standard, sidecar stable, DRA, AI/ML workloads
2025–2026    In-place resize GA, VolumeAttributesClass, Gateway API complete

From 47,501 lines of Go in a 250-file GitHub commit to the operating system of the cloud — and still reconciling.


← EP07: Platform Engineering Era

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Hardening Blueprint as Code — Declare Your OS Baseline in YAML

Reading Time: 6 minutes

OS Hardening as Code, Episode 2
Cloud AMI Security Risks · Linux Hardening as Code


TL;DR

  • A hardening runbook is a list of steps someone runs. A HardeningBlueprint YAML is a build artifact — if it wasn’t applied, the image doesn’t exist
  • Linux hardening as code means declaring your entire OS security baseline in a single YAML file and building it reproducibly across any provider
  • stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws either produces a hardened image or fails — there is no partial state
  • The blueprint includes: target OS/provider, compliance benchmark, Ansible roles, and per-control overrides with documented reasons
  • One blueprint file = one source of truth for your hardening posture, version-controlled and reviewable like any other infrastructure code
  • Post-build OpenSCAP scan runs automatically — the image only snapshots if it passes

The Problem: A Runbook That Gets Skipped Once Is a Runbook That Gets Skipped

Hardening runbook
       │
       ▼
  Human executes
  steps manually
       │
       ├─── 47 deployments: followed correctly
       │
       └─── 1 deployment at 2am: step 12 skipped
                    │
                    ▼
           Instance in production
           without audit logging,
           SSH password auth enabled,
           unnecessary services running

Linux hardening as code eliminates the human decision point. If the blueprint wasn’t applied, the image doesn’t exist.

EP01 showed that default cloud AMIs arrive pre-broken — unnecessary services, no audit logging, weak kernel parameters, SSH configured for convenience not security. The obvious response is a hardening script. But a script run by a human is still a process step. It can be skipped. It can be done halfway. It can drift across different engineers who each interpret “run the hardening script” slightly differently.


A production deployment last year. The platform team had a solid CIS L1 hardening runbook — 68 steps, well-documented, followed consistently. Then a critical incident at 2am required three new instances to be deployed on short notice. The engineer on call ran the provisioning script and, under pressure, skipped the hardening step with the intention of running it the next morning.

They didn’t. The three instances stayed in production unhardened for six weeks before an automated scan caught them. Audit logging wasn’t configured. SSH was accepting password authentication. Two unnecessary services were running that weren’t in the approved software list.

Nothing was breached. But the finding went into the next compliance report as a gap, the team spent a week remediating, and the post-mortem conclusion was “we need better runbook discipline.”

That’s the wrong conclusion. The runbook isn’t the problem. The problem is that hardening was a process step instead of a build constraint.


What Linux Hardening as Code Actually Means

Linux hardening as code is the same principle as infrastructure as code applied to OS security posture: the desired state is declared in a file, the file is the source of truth, and the execution is deterministic and repeatable.

HardeningBlueprint YAML
         │
         ▼
  stratum build
         │
  ┌──────┴──────────────────┐
  │  Provider Layer          │
  │  (cloud-init, disk       │
  │   names, metadata        │
  │   endpoint per provider) │
  └──────┬──────────────────┘
         │
  ┌──────┴──────────────────┐
  │  Ansible-Lockdown        │
  │  (CIS L1/L2, STIG —      │
  │   the hardening steps)   │
  └──────┬──────────────────┘
         │
  ┌──────┴──────────────────┐
  │  OpenSCAP Scanner        │
  │  (post-build verify)     │
  └──────┬──────────────────┘
         │
         ▼
  Golden Image (AMI/GCP image/Azure image)
  + Compliance grade in image metadata

The YAML file is what you write. Stratum handles the rest.


The HardeningBlueprint YAML

The blueprint is the complete, auditable declaration of your OS security posture:

# ubuntu22-cis-l1.yaml
name: ubuntu22-cis-l1
description: Ubuntu 22.04 CIS Level 1 baseline for production workloads
version: "1.0"

target:
  os: ubuntu
  version: "22.04"
  provider: aws
  region: ap-south-1
  instance_type: t3.medium

compliance:
  benchmark: cis-l1
  controls: all

hardening:
  - ansible-lockdown/UBUNTU22-CIS
  - role: custom-audit-logging
    vars:
      audit_log_retention_days: 90
      audit_max_log_file: 100

filesystem:
  tmp:
    type: tmpfs
    options: [nodev, nosuid, noexec]
  home:
    options: [nodev]

controls:
  - id: 1.1.2
    override: compliant
    reason: "tmpfs /tmp implemented via systemd unit — equivalent control"
  - id: 5.2.4
    override: compliant
    reason: "SSH timeout managed by session manager policy, not sshd_config"

Each section is explicit:

target — which OS, which version, which provider. This is the only provider-specific section. The compliance intent below it is portable.

compliance — which benchmark and which controls to apply. controls: all means every CIS L1 control. You can also specify controls: [1.x, 2.x] to scope to specific sections.

hardening — which Ansible roles to run. ansible-lockdown/UBUNTU22-CIS is the community CIS hardening role. You can add custom roles alongside it.

controls — documented exceptions. Not suppressions — overrides with a recorded reason. This is the difference between “we turned off this control” and “this control is satisfied by an equivalent implementation, documented here.”


Building the Image

# Validate the blueprint before building
stratum blueprint validate ubuntu22-cis-l1.yaml

# Build — this will take 15-20 minutes
stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws

# Output:
# [15:42:01] Launching build instance...
# [15:42:45] Running ansible-lockdown/UBUNTU22-CIS (144 tasks)...
# [15:51:33] Running custom-audit-logging role...
# [15:52:11] Running post-build OpenSCAP scan (benchmark: cis-l1)...
# [15:54:08] Grade: A (98/100 controls passing)
# [15:54:09] 2 controls overridden (documented in blueprint)
# [15:54:10] Creating AMI snapshot: ami-0a7f3c9e82d1b4c05
# [15:54:47] Done. AMI tagged with compliance grade: cis-l1-A-98

If the post-build scan comes back below a configurable threshold, the build fails — no AMI is created. The instance is terminated. The image does not exist.

That is the structural guarantee. You cannot skip a build step at 2am because at 2am you’re calling stratum build, not running steps manually.
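
If the blueprint schema exposes that threshold, it belongs next to the benchmark declaration. The field below is purely illustrative (check the stratum schema for the real key):

compliance:
  benchmark: cis-l1
  controls: all
  minimum_score: 95   # hypothetical field: below this, the build fails and no image is created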


The Control Override Mechanism

The override mechanism is what separates this from checkbox compliance.

Every security benchmark has controls that conflict with how production environments actually work. CIS L1 recommends /tmp on a separate partition. Many cloud instances use tmpfs with equivalent nodev, nosuid, noexec mount options. The intent of the control is satisfied. The literal implementation differs.

Without an override mechanism, you have two bad options: fail the scan (noisy, meaningless), or configure the scanner to ignore the control (undocumented, invisible to auditors).

The blueprint’s controls section gives you a third option: record the override, document the reason, and let the scanner count it as compliant. The SARIF output and the compliance grade both reflect the documented state.

controls:
  - id: 1.1.2
    override: compliant
    reason: "tmpfs /tmp implemented via systemd unit — equivalent control"

This appears in the build log, in the SARIF export, and in the image metadata. An auditor reading the output sees: control 1.1.2 — compliant, documented exception, reason recorded. Not: control 1.1.2 — ignored.


What the Blueprint Gives You That a Script Doesn’t

                                 Hardening script             HardeningBlueprint YAML
Version-controlled               Possible but not enforced    Always — it’s a file
Auditable exceptions             Typically not                Built-in override mechanism
Post-build verification          Manual or none               Automatic OpenSCAP scan
Image exists only if hardened    No                           Yes — build fails if scan fails
Multi-cloud portability          Requires separate scripts    Provider flag, same YAML
Drift detection                  Not possible                 Rescan instance against original grade
Skippable at 2am                 Yes                          No — you’d have to change the build process

The last row is the one that matters. A script is skippable because there’s a human in the loop. A blueprint is a build artifact — you can’t deploy the image without the blueprint having been applied, because the image is what the blueprint produces.


Validating a Blueprint Before Building

# Syntax and schema validation
stratum blueprint validate ubuntu22-cis-l1.yaml

# Dry-run — show what Ansible tasks will run, what controls will be checked
stratum build --blueprint ubuntu22-cis-l1.yaml --provider aws --dry-run

# Show all available controls for a benchmark
stratum blueprint controls --benchmark cis-l1 --os ubuntu --version 22.04

# Show what a specific control checks
stratum blueprint controls --id 1.1.2 --benchmark cis-l1

The dry-run output shows every Ansible task that will run, every OpenSCAP check that will fire, and flags any controls that might conflict with the provider environment before you’ve launched a build instance.


Production Gotchas

Build time is 15–25 minutes. Ansible-Lockdown applies 144+ tasks for CIS L1. Build this into your pipeline timing — don’t expect golden images in 3 minutes.

Cloud-init ordering matters. On AWS, certain hardening steps (sysctl tuning, PAM configuration) interact with cloud-init. The Stratum provider layer handles sequencing — but if you add custom hardening roles, test the cloud-init interaction explicitly.

Some CIS controls conflict with managed service requirements. AWS Systems Manager Session Manager requires specific SSH configuration. RDS requires specific networking settings. Use the controls override section to document these — don’t suppress them silently.

Kernel parameter hardening requires a reboot. Controls in the 3.x (network parameters) and 1.5.x (kernel modules) sections apply sysctl changes that take effect on reboot. The Stratum build process reboots the instance before the OpenSCAP scan — don’t skip the reboot if you’re building manually.
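
A quick spot-check after the reboot confirms the kernel parameters actually took effect. Two representative CIS 3.x network controls:

# Both should report 0 on a CIS L1-hardened image
sysctl net.ipv4.conf.all.accept_redirects
sysctl net.ipv4.conf.all.send_redirects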


Key Takeaways

  • Linux hardening as code means the blueprint YAML is the build artifact — the image either exists and is hardened, or it doesn’t exist
  • The controls override mechanism is the difference between undocumented suppressions and auditable, reasoned exceptions
  • Post-build OpenSCAP scan runs automatically — a failing grade blocks image creation
  • One blueprint file is portable across providers (EP03 covers this): the compliance intent stays in the YAML, the cloud-specific details go in the provider layer
  • Version-controlling the blueprint gives you a complete history of what your OS security posture was at any point in time — the same way Terraform state tracks infrastructure

What’s Next

One blueprint, one provider. EP02 showed that the skip-at-2am problem is solved when hardening is a build artifact rather than a process step.

What it didn’t address: what happens when you expand to a second cloud. GCP uses different disk names. Azure cloud-init fires in a different order. The AWS metadata endpoint IP is different from every other provider. If you maintain separate hardening scripts per cloud, they drift within a month.

EP03 covers multi-cloud OS hardening: the same blueprint, six providers, no drift.

Next: multi-cloud OS hardening — one blueprint for AWS, GCP, and Azure

Get EP03 in your inbox when it publishes → linuxcent.com/subscribe

What Is LDAP — and Why It Was Invented to Replace Something Worse

Reading Time: 9 minutes

The Identity Stack, Episode 1
EP01 → EP02: LDAP Internals → EP03 → …


TL;DR

  • LDAP (Lightweight Directory Access Protocol) is a protocol for reading and writing directory information — most commonly, who is allowed to do what
  • It was built in 1993 as a “lightweight” alternative to X.500/DAP, which ran over the full OSI stack and was impossible to deploy on anything but mainframe hardware
  • Before LDAP, every server had its own /etc/passwd — 50 machines meant 50 separate user databases, managed manually
  • NIS (Network Information Service) was the first attempt to centralize this — it worked, then became a cleartext-credentials security liability
  • LDAP v3 (RFC 2251, 1997) is the version still in production today — 27 years of backwards compatibility
  • Everything you use today — Active Directory, Okta, Entra ID — is built on top of, or speaks, LDAP

The Big Picture: 50 Years of “Who Are You?”

1969–1980s   /etc/passwd — per-machine, no network auth
     │        50 servers = 50 user databases, managed manually
     │
     ▼
1984         Sun NIS / Yellow Pages — first centralized directory
     │        broadcast-based, no encryption, flat namespace
     │        Revolutionary for its era. A liability by the 1990s.
     │
     ▼
1988         X.500 / DAP — enterprise-grade directory services
     │        OSI protocol stack. Powerful. Impossible to deploy.
     │        Mainframe-class infrastructure required just to run it.
     │
     ▼
1993         RFC 1487 — LDAP v1
     │        Tim Howes, University of Michigan.
     │        Lightweight. TCP/IP. Actually deployable.
     │
     ▼
1997         RFC 2251 — LDAP v3
     │        SASL authentication. TLS. Controls. Referrals.
     │        The version still in production today.
     │
     ▼
2000s–now    Active Directory, OpenLDAP, 389-DS, FreeIPA
             Okta, Entra ID, Google Workspace
             LDAP DNA in every identity system on the planet.

What is LDAP? It’s the protocol that solved one of the most boring and consequential problems in computing: how do you know who someone is, across machines, at scale, without sending their password in cleartext?


The World Before LDAP

Before you understand why LDAP was invented, you need to feel the problem it solved.

Every Unix machine in the 1970s and 1980s managed its own users. When you created an account on a server, your username, UID, and hashed password went into /etc/passwd on that machine. Another machine had no idea you existed. If you needed access to ten servers, an administrator created ten separate accounts — manually, one by one. When you changed your password, each account had to be updated separately.

For a university with 200 machines and 10,000 students, this was chaos. For a company with offices in three cities, it was a full-time job for multiple sysadmins.

Machine A           Machine B           Machine C
/etc/passwd         /etc/passwd         /etc/passwd
vamshi:x:1001       (vamshi unknown)    vamshi:x:1004
alice:x:1002        alice:x:1001        alice:x:1003
bob:x:1003          bob:x:1002          (bob unknown)

Same people, different UIDs, different machines, no central truth.
File permissions become meaningless when UID 1001 means
different users on different hosts.

For every new hire, an admin SSHed to every machine and ran useradd. When someone left, you hoped whoever ran the offboarding remembered all the machines. Most organizations didn’t know their own attack surface because there was no single place to look.


Sun NIS: The First Attempt at Centralization

Sun Microsystems released NIS (Network Information Service) in 1984, originally called Yellow Pages — a name they had to drop after a trademark dispute with British Telecom. The idea was elegant: one server holds the authoritative /etc/passwd (and /etc/group, /etc/hosts, and a dozen other maps), and client machines query it instead of reading local files.

For the first time, you could create an account once and have it work across your entire network. For a generation of Unix administrators, NIS was liberating.

       NIS Master Server
       /var/yp/passwd.byname
              │
    ┌─────────┼──────────┐
    ▼         ▼          ▼
 Client A   Client B   Client C
 (query NIS — no local /etc/passwd needed)

NIS worked well — until it didn’t. The failure modes were structural:

No encryption. NIS responses were cleartext UDP. An attacker on the same network segment could capture the full password database with a packet sniffer. In 1984, “the network” meant a trusted corporate LAN. By the mid-1990s, it meant ethernet segments that included lab workstations, and the assumptions no longer held.
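
The exposure was not only on the wire. Any machine bound to the NIS domain could dump the entire map, password hashes included, with one unprivileged command (output abbreviated and anonymized):

ypcat passwd
# vamshi:$1$Wb0s...:1001:1001:Vamshi:/home/vamshi:/bin/bash
# alice:$1$k9Qf...:1002:1002:Alice:/home/alice:/bin/bash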

Broadcast-based discovery. NIS clients found servers by broadcasting on the local network. This worked on a single flat ethernet. It failed completely across routers, across buildings, and across WAN links. Multi-site organizations ended up running separate NIS domains with no connection between them — which partially defeated the purpose.

Flat namespace. NIS had no organizational hierarchy. One domain. Everything flat. You couldn’t have engineering and finance as separate administrative units. You couldn’t delegate user management to a department. One person — usually one overworked sysadmin — managed the whole thing.

UIDs had to match across all machines. If alice was UID 1002 on one server but UID 1001 on another, NFS file ownership became wrong. NIS enforced consistency, but onboarding a new machine into an existing network required manually auditing UID conflicts across the entire directory. Get one wrong and files end up owned by the wrong person.

NIS worked for thousands of installations from 1984 to the mid-1990s. It also ended careers when it failed. What the industry needed was a hierarchical, structured, encrypted, scalable directory service.


X.500 and DAP: The Right Idea, Wrong Protocol

The OSI (Open Systems Interconnection) standards body had an answer: X.500 directory services. X.500 was comprehensive, hierarchical, globally federated. The ITU-T published the standard in 1988, and it looked like exactly what enterprises needed.

X.500 Directory Information Tree (DIT)
              c=US                   ← country
                │
         o=University                ← organization
                │
         ┌──────┴──────┐
     ou=CS           ou=Physics      ← organizational units
         │
     cn=Tim Howes                    ← common name (person)
     telephoneNumber: +1-734-...
     mail: [email protected]

This data model — the hierarchy, the object classes, the distinguished names — is exactly what LDAP inherited. The DIT and the cn=, ou=, o= notation in every LDAP query you’ve ever read came straight from X.500 (the dc= domain-component style arrived later, with LDAP’s internet-domain naming).

The problem was DAP: the Directory Access Protocol that X.500 used to communicate.

DAP ran over the full OSI protocol stack. Not TCP/IP — OSI. Seven layers, all of which required specialized software that in 1988 only mainframe and minicomputer vendors had implemented. A university department wanting to run X.500 needed hardware and software licenses that cost as much as a small car. The vast majority of workstations couldn’t speak OSI at all.

The data model was sound. The transport was impractical.

X.500 / DAP (1988)              LDAP v1 (1993)
──────────────────              ──────────────
Full OSI stack (7 layers)  →    TCP/IP only
Mainframe-class hardware   →    Any Unix box with a TCP stack
$50,000+ deployment cost   →    Free (reference implementation)
Vendor-specific OSI impl.  →    Standard socket API
Zero internet adoption     →    Universities deployed immediately

The Invention: LDAP at the University of Michigan

Tim Howes was at the University of Michigan in the early 1990s. The university was running X.500 for its directory — faculty, staff, student contact information, credentials. The data model was good. The protocol was the problem.

His insight, working with colleagues Wengyik Yeong and Steve Kille: strip X.500 down to what actually needs to function over a TCP/IP connection. Keep the hierarchical data model. Throw away the OSI transport. The result was the Lightweight Directory Access Protocol.

RFC 1487, published July 1993, described LDAP v1. It preserved the X.500 directory information model — the hierarchy, the object classes, the distinguished name format — and mapped it onto a protocol that could run over a simple TCP socket on port 389.

No specialized hardware. No OSI. If you had a Unix machine and TCP/IP, you could run LDAP. By 1993, that meant virtually every workstation and server in every university and most enterprises.

The University of Michigan deployed it immediately. Within two years, organizations across the internet were running the reference implementation.

LDAP v2 (RFC 1777, 1995) cleaned up the protocol. LDAP v3 (RFC 2251, 1997) is the version in production today — adding SASL authentication (which enables Kerberos integration), TLS support, referrals for federated directories, and extensible controls for server-side operations. The RFC that standardized the internet’s primary identity protocol is 27 years old and still running.


What LDAP Actually Is

LDAP is a client-server protocol for reading and writing a directory — a structured, hierarchical database optimized for reads.

Every entry in the directory has a Distinguished Name (DN) that describes its position in the hierarchy, and a set of attributes defined by its object classes. A person entry looks like this:

dn: cn=vamshi,ou=engineers,dc=linuxcent,dc=com

objectClass: inetOrgPerson
objectClass: posixAccount
cn: vamshi
uid: vamshi
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/vamshi
loginShell: /bin/bash
mail: [email protected]

The DN reads right-to-left: domain linuxcent.com (dc=linuxcent,dc=com) → organizational unit engineers → common name vamshi. Every entry in the directory has a unique path through the tree — there’s no ambiguity about which vamshi you mean.

LDAP v3 defines a small set of protocol operations: Bind (authenticate), Unbind, Search, Compare, Add, Modify, Delete, ModifyDN (rename), Abandon, and Extended. Most of what a Linux authentication system does with LDAP reduces to two: Bind (prove you are who you say you are) and Search (tell me everything you know about this user).

When your Linux machine authenticates an SSH login against LDAP:

1. User types password
2. PAM calls pam_sss (or pam_ldap on older systems)
3. SSSD issues a Bind to the LDAP server: "I am cn=vamshi, and here is my credential"
4. LDAP server verifies the bind → success or failure
5. SSSD issues a Search: "give me the posixAccount attributes for uid=vamshi"
6. LDAP returns uidNumber, gidNumber, homeDirectory, loginShell
7. PAM creates the session with those attributes

The entire login flow is two LDAP operations: one Bind, one Search.


Try It Right Now

You don’t need to set up an LDAP server to run your first query. There’s a public test LDAP directory at ldap.forumsys.com:

# Query a public LDAP server — no setup required
ldapsearch -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w readonly \
  "(objectClass=inetOrgPerson)" \
  cn mail uid

# What you get back (abbreviated):
# dn: uid=tesla,dc=example,dc=com
# cn: Tesla
# mail: [email protected]
# uid: tesla
#
# dn: uid=einstein,dc=example,dc=com
# cn: Albert Einstein
# mail: [email protected]
# uid: einstein

Decode what you just ran:

  • -x — simple authentication (username/password bind, not Kerberos/SASL)
  • -H ldap://ldap.forumsys.com — the LDAP server URI, port 389
  • -b "dc=example,dc=com" — the base DN, the top of the subtree to search
  • -D "cn=read-only-admin,dc=example,dc=com" — the bind DN (who you’re authenticating as)
  • -w readonly — the bind password
  • "(objectClass=inetOrgPerson)" — the search filter: return entries that are people
  • cn mail uid — the attributes to return (default returns all)

That’s a live LDAP query returning real directory entries from a server running RFC 2251 — the same protocol Tim Howes designed in 1993.

On your own Linux system, if you’re joined to AD or LDAP, you can query it the same way with your domain credentials.
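
For example, a sketch of the posixAccount lookup from the login flow above, pointed at a hypothetical internal directory (server and base DN are illustrative):

ldapsearch -x \
  -H ldap://ldap.linuxcent.com \
  -b "dc=linuxcent,dc=com" \
  "(uid=vamshi)" \
  uidNumber gidNumber homeDirectory loginShell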


Why It Never Went Away

LDAP v3 was finalized in 1997. In 2024, it’s still the protocol every enterprise directory speaks. Why?

Because it became the lingua franca of enterprise identity before any replacement existed. Every application that needs to authenticate users — VPN concentrators, mail servers, network switches, web applications, HR systems — implemented LDAP support. Every directory service Microsoft, Red Hat, Sun, and Novell shipped stored data in an LDAP-accessible tree.

When Microsoft built Active Directory in 1999, they built it on top of LDAP + Kerberos. When your Linux machine joins an AD domain, it speaks LDAP to enumerate users and groups, and Kerberos to verify credentials. When Okta or Entra ID syncs with your on-premises directory, it uses LDAP Sync (or a modern protocol that maps directly to LDAP semantics).

The protocol is old. The ecosystem built on top of it is so deep that replacing LDAP would mean simultaneously replacing every enterprise application that depends on it. Nobody has done that. Nobody has had to.

What happened instead is the stack got taller. LDAP at the bottom, Kerberos for network authentication, SSSD as the local caching daemon, PAM as the Linux integration layer, SAML and OIDC at the top for web-based federation. The directory is still LDAP. The interfaces above it evolved.

That full stack — from the directory at the bottom to Zero Trust at the top — is what this series covers.


⚠ Common Misconceptions

“LDAP is an authentication protocol.” LDAP is a directory protocol. It stores identity information and can verify credentials (via Bind). Authentication in modern stacks is typically Kerberos or OIDC — LDAP provides the directory backing it.

“LDAP is obsolete.” LDAP is the storage layer for Active Directory, OpenLDAP, 389-DS, FreeIPA, and every enterprise IdP’s on-premises sync. It is ubiquitous. What’s changed is the interface layer above it.

“You need Active Directory to run LDAP.” Active Directory uses LDAP. OpenLDAP, 389-DS, FreeIPA, and Apache Directory Server are all standalone LDAP implementations. You can run a directory without Microsoft.

“LDAP and LDAPS are different protocols.” LDAP is the protocol. LDAPS is LDAP over TLS on port 636. StartTLS is LDAP on port 389 with an in-session upgrade to TLS. Same protocol, different transport security.
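
The difference shows up only in how you invoke the client. Same query, two transport choices (server name illustrative):

# LDAPS: TLS from the first byte, port 636
ldapsearch -x -H ldaps://ldap.example.com -b "dc=example,dc=com" "(uid=vamshi)"

# StartTLS: plain port 389, upgraded to TLS in-session (-ZZ requires the upgrade to succeed)
ldapsearch -x -ZZ -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=vamshi)"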


Framework Alignment

Domain                                                   Relevance
CISSP Domain 5: Identity and Access Management           LDAP is the foundational directory protocol for centralized identity stores — the base layer of every enterprise IAM stack
CISSP Domain 4: Communications and Network Security      Port 389 (LDAP), 636 (LDAPS), 3268/3269 (AD Global Catalog) — transport security decisions affect every directory deployment
CISSP Domain 3: Security Architecture and Engineering    DIT hierarchy, schema design, replication topology — directory structure is an architectural security decision
NIST SP 800-63B                                          LDAP as a credential service provider (CSP) backing enterprise authenticators

Key Takeaways

  • LDAP was invented to solve a real, painful problem: the authentication chaos that NIS couldn’t fix and X.500/DAP was too expensive to deploy
  • It inherited the right thing from X.500 (the hierarchical data model) and replaced the right thing (the impractical OSI transport with TCP/IP)
  • NIS was the predecessor that worked until it didn’t — its failure modes (no encryption, flat namespace, broadcast discovery) are exactly what LDAP was designed to fix
  • LDAP v3 (RFC 2251, 1997) is still the production standard — 27 years later
  • Active Directory, OpenLDAP, FreeIPA, Okta, Entra ID — every enterprise identity system either runs LDAP or speaks it
  • The full authentication stack is deeper than LDAP: the next 12 episodes peel it apart layer by layer

What’s Next

EP01 stayed at the design level — the problem, the predecessor failures, the invention, the data model.

EP02 goes inside the wire. The DIT structure, DN syntax, object classes, schema, and the BER-encoded bytes that actually travel from the server to your authentication daemon. Run ldapsearch against your own directory and read every line of what comes back.

Next: LDAP Internals: The Directory Tree, Schema, and What Travels on the Wire

Get EP02 in your inbox when it publishes → linuxcent.com/subscribe

XDP — Packets Processed Before the Kernel Knows They Arrived

Reading Time: 10 minutes

eBPF: From Kernel to Cloud, Episode 7
What Is eBPF? · The BPF Verifier · eBPF vs Kernel Modules · eBPF Program Types · eBPF Maps · CO-RE and libbpf · XDP



Introduction

EP01 through EP06 covered what eBPF is, how the verifier keeps it safe, and how the toolchain compiles and loads programs across kernel versions. This episode is where that foundation meets production networking.

XDP — eXpress Data Path — is the earliest hook in the Linux kernel packet path. It fires before sk_buff allocation, before routing, before netfilter. A DROP decision at XDP costs one bounds check and a return value. Everything else is skipped. At 1 million packets per second, that difference shows up directly as CPU.

This episode explains where XDP sits, what it can and cannot see, how Cilium uses it, and what every Kubernetes operator needs to know about it — even if they never write an eBPF program.


Architecture Overview

[Figure: XDP pre-stack packet hook — XDP fires before sk_buff allocation, the earliest possible kernel hook for zero-copy packet processing.]

TL;DR

  • XDP fires before sk_buff allocation — the earliest possible kernel hook for packet processing
    (sk_buff = the kernel’s socket buffer — every normal packet requires one to be allocated, which adds up fast at scale)
  • Three modes: native (in-driver, full performance), generic (fallback, no perf gain), offloaded (NIC ASIC)
  • XDP context is raw packet bytes — no socket, no cgroup, no pod identity; handle non-IP traffic explicitly
  • Every pointer dereference requires a bounds check against data_end — the verifier enforces this
  • BPF_MAP_TYPE_LPM_TRIE is the right map type for IP prefix blocklists — handles /32 hosts and CIDRs together
  • XDP metadata area enables coordination with TC programs — classify at XDP speed, enforce with pod context at TC

Quick Check: Is XDP Running on Your Cluster?

Before the data path walkthrough — a two-command check you can run right now on any cluster node:

# SSH into a worker node, then:
bpftool net list

On a Cilium-managed node, you’ll see something like:

eth0 (index 2):
        xdpdrv  id 44

lxc8a3f21b (index 7):
        tc ingress id 47
        tc egress  id 48

Reading the output:
– xdpdrv — XDP in native mode, running in the NIC driver before sk_buff (this is what you want)
– xdpgeneric instead of xdpdrv — generic mode, runs after sk_buff allocation, no performance benefit
– No XDP line at all — XDP not deployed; your CNI uses iptables for service forwarding

If you’re on EKS with aws-vpc-cni or GKE with kubenet, you likely won’t see XDP here — those CNIs use iptables. Understanding this section explains why teams migrating to Cilium see lower node CPU under the same traffic load.


Where XDP Sits in the Kernel Data Path

A client’s cluster was under a SYN flood — roughly 1 million packets per second from a rotating set of source IPs. We had iptables DROP rules installed within the first ten minutes, blocklist updated every 30 seconds as new source ranges appeared. The flood traffic dropped in volume. But node CPU stayed high. The %si column in top — software interrupt time — was sitting at 25–30%.

%si in top is the percentage of CPU time spent handling hardware interrupts and kernel-level packet processing — separate from your application’s CPU usage. On a quiet managed cluster (EKS, GKE) this is usually under 1%. Under a packet flood, high %si means the kernel is burning cycles just receiving packets, before your workloads run at all. It’s the metric that tells you the problem is below the application layer.

The iptables rules were matching. Packets were being dropped. CPU was still burning. The answer is where in the kernel the drop was happening. iptables fires inside the netfilter framework — after the kernel has already allocated an sk_buff for the packet, done DMA from the NIC ring buffer, and traversed several netfilter hooks. At 1Mpps, the allocation cost alone is measurable.

XDP fires before any of that.

The standard Linux packet receive path:

NIC hardware
  ↓
DMA to ring buffer (kernel memory)
  ↓
[XDP hook — fires here, before sk_buff]
  ├── XDP_DROP   → discard, zero further allocation
  ├── XDP_PASS   → continue to kernel network stack
  ├── XDP_TX     → transmit back out the same interface
  └── XDP_REDIRECT → forward to another interface or CPU
  ↓
sk_buff allocated from slab allocator
  ↓
netfilter: PREROUTING
  ↓
IP routing decision
  ↓
netfilter: INPUT or FORWARD
  ↓  [iptables fires somewhere in here]
socket receive queue
  ↓
userspace application

XDP runs at the driver level, in the NAPI poll context — the same context where the driver is processing received packets off the ring buffer. The program runs before the kernel’s general networking code gets involved. There’s no sk_buff, no reference counting, no slab allocation.

NAPI (New API) is how modern Linux receives packets efficiently. Instead of one CPU interrupt per packet (catastrophically expensive at 1Mpps), the NIC fires a single interrupt, then the kernel polls the NIC ring buffer in batches until it’s drained. XDP runs inside this polling loop — as close to the hardware as software gets without running on the NIC itself.

At 1Mpps, the difference between XDP_DROP and an iptables DROP is roughly the cost of allocating and then immediately freeing 1 million sk_buff objects per second — plus netfilter traversal, connection tracking lookup, and the DROP action itself. That’s the CPU time that was burning.

After moving the blocklist to an XDP program, the %si on the same traffic load dropped from 28% to 3%.


XDP Modes

XDP operates in three modes, and which one you get depends on your NIC driver.

Native XDP (XDP_FLAGS_DRV_MODE)

The eBPF program runs directly in the NIC driver’s NAPI poll function — in interrupt context, before sk_buff. This is the only mode that delivers the full performance benefit.

Driver support is required. The widely supported drivers: mlx4, mlx5 (Mellanox/NVIDIA), i40e, ice (Intel), bnxt_en (Broadcom), virtio_net (KVM/QEMU), veth (containers). Check support:

# Verify native XDP support on your driver
ethtool -i eth0 | grep driver
# driver: mlx5_core  ← supports native XDP

# Load in native mode
ip link set dev eth0 xdpdrv obj blocklist.bpf.o sec xdp

The veth driver supporting native XDP is what makes XDP meaningful inside Kubernetes pods — each pod’s veth interface can run an XDP program at wire speed.

Generic XDP (XDP_FLAGS_SKB_MODE)

Fallback for drivers that don’t support native XDP. The program still runs, but it runs after sk_buff allocation, as a hook in the netif_receive_skb path. No performance benefit over early netfilter. sk_buff is still allocated and freed for every packet.

# Generic mode — development and testing only
ip link set dev eth0 xdpgeneric obj blocklist.bpf.o sec xdp

Use this for development on a laptop with a NIC that lacks native XDP support. Never benchmark with it and never use it in production expecting performance gains.

Offloaded XDP

Runs on the NIC’s own processing unit (SmartNIC). Zero CPU involvement — the XDP decision happens in NIC hardware. Supported by Netronome Agilio NICs. Rare in production, but the theoretical ceiling for XDP performance.


The XDP Context: What Your Program Can See

XDP programs receive one argument: struct xdp_md.

struct xdp_md {
    __u32 data;           // offset of first packet byte in the ring buffer page
    __u32 data_end;       // offset past the last byte
    __u32 data_meta;      // metadata area before data (XDP metadata for TC cooperation)
    __u32 ingress_ifindex;
    __u32 rx_queue_index;
};

data and data_end are used as follows:

void *data     = (void *)(long)ctx->data;
void *data_end = (void *)(long)ctx->data_end;

// Every pointer dereference must be bounds-checked first
struct ethhdr *eth = data;
if ((void *)(eth + 1) > data_end)
    return XDP_PASS;  // malformed or truncated packet

The verifier enforces these bounds checks — every pointer derived from ctx->data must be validated before use. The error invalid mem access 'inv' means you dereferenced a pointer without checking the bounds. This is the most common cause of XDP program rejection.

For operators (not writing XDP code): You’ll see invalid mem access 'inv' in logs when an eBPF program is rejected at load time — most commonly during a Cilium upgrade or a custom tool deployment on a kernel the tool wasn’t built for. The fix is in the eBPF source or the tool version, not the cluster config.

What XDP cannot see:
– Socket state — no socket buffer exists yet
– Cgroup hierarchy — no pod identity
– Process information — no PID, no container
– Connection tracking state (unless you maintain it yourself in a map)

XDP is ingress-only. It fires on packets arriving at an interface, not departing. For egress, TC is the hook.


What This Means on Your Cluster Right Now

Every Cilium-managed node has XDP programs running. Here’s how to see them:

# All XDP programs on all interfaces — this is the full picture
bpftool net list
# Sample output on a Cilium node:
#
# eth0 (index 2):
#         xdpdrv  id 44         ← XDP in native mode on the node uplink
#
# lxc8a3f21b (index 7):
#         tc ingress id 47      ← TC enforces NetworkPolicy on pod ingress
#         tc egress  id 48      ← TC enforces NetworkPolicy on pod egress
#
# "xdpdrv"     = native mode (runs in NIC driver, before sk_buff — full performance)
# "xdpgeneric" = fallback mode (after sk_buff — no performance benefit over iptables)

# Which mode is active?
ip link show eth0 | grep xdp
# xdp mode drv  ← native (full performance)
# xdp mode generic  ← fallback (no perf benefit)

# Details on the XDP program ID
bpftool prog show id $(bpftool net show dev eth0 | grep xdp | awk '{print $NF}')
# Shows: loaded_at, tag, xlated bytes, jited bytes, map IDs

The map IDs in that output are the BPF maps the XDP program is using — typically the service VIP table for DNAT, and in security tools, the blocklist or allowlist. To see what’s in them:

# List maps used by the XDP program
bpftool prog show id <PROG_ID> | grep map_ids

# Dump the service map (for a Cilium node — this is the load balancer table)
bpftool map dump id <MAP_ID> | head -40

For a blocklist scenario — like the SYN flood mitigation above — the BPF_MAP_TYPE_LPM_TRIE is the standard data structure. A lookup for 192.168.1.45 hits a 192.168.1.0/24 entry in the same map, handling both host /32s and CIDR ranges in one lookup.

# Count entries in an XDP filter map
bpftool map dump id <BLOCKLIST_MAP_ID> | grep -c "key"

# Verify XDP is active and inspect program details
bpftool net show dev eth0
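
For completeness, here is a minimal sketch of what such a blocklist program looks like in eBPF C. Names, map size, and the one-byte value are illustrative, not taken from any production program:

// Minimal XDP blocklist sketch (illustrative). Compile with clang -O2 -target bpf.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct lpm_key {
    __u32 prefixlen;   // required first field for LPM trie keys
    __u32 addr;        // IPv4 address, network byte order
};

struct {
    __uint(type, BPF_MAP_TYPE_LPM_TRIE);
    __uint(max_entries, 65536);
    __type(key, struct lpm_key);
    __type(value, __u8);
    __uint(map_flags, BPF_F_NO_PREALLOC);    // mandatory for LPM tries
} blocklist SEC(".maps");

SEC("xdp")
int xdp_blocklist(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                     // truncated frame

    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                     // let ARP, IPv6, VLAN-tagged frames through

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    struct lpm_key key = {
        .prefixlen = 32,                     // full-length key; trie matches the longest stored prefix
        .addr      = ip->saddr,              // entries must be inserted in network byte order too
    };
    if (bpf_map_lookup_elem(&blocklist, &key))
        return XDP_DROP;                     // source matches a blocked /32 or CIDR

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";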

XDP Metadata: Cooperating with TC

Think of it as a sticky note attached to the packet. XDP writes the note at line speed (no context about pods or sockets). TC reads it later when full context is available, and acts on it. The packet carries the note between them.

More precisely: XDP can write metadata into the area before ctx->data — a small scratch space that survives as the packet moves from XDP to the TC hook. This is the coordination mechanism between the two eBPF layers.

The pattern: XDP classifies at speed (no sk_buff overhead), TC enforces with pod context (where you have socket identity). XDP writes a classification tag into the metadata area. TC reads it and makes the policy decision.
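
A minimal sketch of the XDP half of that handshake (the 4-byte tag and its value are illustrative; the TC ingress program on the same interface reads the tag from its own context):

// Sketch: reserve 4 bytes of metadata in front of the packet and stamp a tag
// for a TC ingress program to read later.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_classify(struct xdp_md *ctx)
{
    // Grow the metadata area by 4 bytes; not every driver supports metadata
    if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(__u32)))
        return XDP_PASS;

    // Pointers must be reloaded after any adjust call
    void *data      = (void *)(long)ctx->data;
    void *data_meta = (void *)(long)ctx->data_meta;

    __u32 *tag = data_meta;
    if ((void *)(tag + 1) > data)      // bounds-check metadata against packet start
        return XDP_PASS;

    *tag = 0x1;                        // classification decided here, enforced at TC
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";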

From an operational standpoint, when you see two eBPF programs on the same interface (one XDP, one TC), this pipeline is the likely explanation:

bpftool net list
# xdpdrv id 44 on eth0       ← XDP classifier running at line rate
# tc ingress id 47 on eth0   ← TC enforcer reading XDP metadata

How Cilium Uses XDP

Not running Cilium? On EKS with aws-vpc-cni or GKE with kubenet, service forwarding uses iptables NAT rules and conntrack instead. You can see this with iptables -t nat -L -n on a node — look for the KUBE-SVC-* chains. Those chains are what XDP replaces in a Cilium cluster. This is why teams migrating from kube-proxy to Cilium report lower node CPU at high connection rates — it’s not magic, it’s hook placement.

On a Cilium node, XDP handles the load balancing path for ClusterIP services. When a packet arrives at the node destined for a ClusterIP:

  1. XDP program checks the destination IP against a BPF LRU hash map of known service VIPs
  2. On a match, it performs DNAT — rewriting the destination IP to a backend pod IP
  3. Returns XDP_TX or XDP_REDIRECT to forward directly

No iptables NAT rules. No conntrack state machine. No socket buffer allocation for the routing decision. The lookup is O(1) in a BPF hash map.

# See Cilium's XDP program on the node uplink
ip link show eth0 | grep xdp
# xdp  (attached, native mode)

# The XDP program details
bpftool prog show pinned /sys/fs/bpf/cilium/xdp

# Load time, instruction count, JIT-compiled size
bpftool prog show id $(bpftool net list | grep xdp | awk '{print $NF}')

At production scale — 500+ nodes, 50k+ services — removing iptables from the service forwarding path with XDP reduces per-node CPU utilization measurably. The effect is most visible on nodes handling high connection rates to cluster services.


Operational Inspection

# All XDP programs on all interfaces
bpftool net list

# Check XDP mode (native, generic, offloaded)
ip link show | grep xdp

# Per-interface stats — includes XDP drop/pass counters
cat /sys/class/net/eth0/statistics/rx_dropped

# XDP drop counters exposed via bpftool
bpftool map dump id <stats_map_id>

# Verify XDP is active and show program details
bpftool net show dev eth0

Common Mistakes

Mistake                                           Impact                                                   Fix
Missing bounds check before pointer dereference   Verifier rejects: “invalid mem access”                   Always check ptr + sizeof(*ptr) > data_end before use
Using generic XDP for performance testing         Misleading numbers — sk_buff still allocated             Test in native mode only; check ip link output for mode
Not handling non-IP traffic (ARP, IPv6, VLAN)     ARP breaks, IPv6 drops, VLAN-tagged frames dropped       Check eth->h_proto and return XDP_PASS for non-IP
XDP for egress or pod identity                    No socket context at XDP; XDP is ingress only            Use TC egress for pod-identity-aware egress policy
Forgetting BPF_F_NO_PREALLOC on LPM trie          Full memory allocated at map creation for all entries    Always set this flag for sparse prefix tries
Blocking ARP by accident in a /24 blocklist       Loss of layer-2 reachability within the blocked subnet   Separate ARP handling before the IP blocklist check

Key Takeaways

  • XDP fires before sk_buff allocation — the earliest possible kernel hook for packet processing
  • Three modes: native (in-driver, full performance), generic (fallback, no perf gain), offloaded (NIC ASIC)
  • XDP context is raw packet bytes — no socket, no cgroup, no pod identity; handle non-IP traffic explicitly
  • Every pointer dereference requires a bounds check against data_end — the verifier enforces this
  • BPF_MAP_TYPE_LPM_TRIE is the right map for IP prefix blocklists — handles /32 hosts and CIDRs together
  • XDP metadata area enables coordination with TC programs — classify at XDP speed, enforce with pod context at TC

What’s Next

XDP handles ingress at the fastest possible point but has no visibility into which pod sent a packet. EP08 covers TC eBPF — the hook that fires after sk_buff allocation, where socket and cgroup context exist.

TC is how Cilium implements pod-to-pod network policy without iptables. It’s also where stale programs from failed Cilium upgrades leave ghost filters that cause intermittent packet drops. Knowing how TC programs chain — and how to find and remove stale ones — is a specific, concrete operational skill.

Next: TC eBPF — pod-level network policy without iptables

Get EP08 in your inbox when it publishes → linuxcent.com/subscribe