CRDs You Already Use: cert-manager, KEDA, and External Secrets Explained

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 2
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • cert-manager, KEDA, and External Secrets Operator are all CRD-based systems — understanding their custom resources shows you what a well-designed CRD looks like before you build one
  • cert-manager’s Certificate CRD expresses desired TLS state; the cert-manager controller reconciles that state by issuing, renewing, and storing certificates in Secrets
  • KEDA’s ScaledObject extends the HorizontalPodAutoscaler with external metrics (queue depth, Kafka lag, Prometheus queries) — the KEDA operator translates ScaledObjects into native HPA objects
  • External Secrets Operator’s ExternalSecret abstracts over secret backends (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) — the controller pulls values and writes Kubernetes Secrets
  • All three follow the same pattern: you describe desired state in a custom resource; the operator reconciles actual state to match
  • Kubernetes custom resource examples like these are the fastest way to internalize the CRD mental model before writing your own

The Big Picture

  THREE CRD-BASED OPERATORS AND WHAT THEY MANAGE

  ┌─────────────────────────────────────────────────────────────┐
  │  cert-manager                                               │
  │  Certificate CR  →  controller issues cert  →  TLS Secret  │
  └─────────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────────┐
  │  KEDA                                                       │
  │  ScaledObject CR  →  controller creates HPA  →  Pod count  │
  └─────────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────────┐
  │  External Secrets Operator                                  │
  │  ExternalSecret CR  →  controller pulls  →  K8s Secret      │
  │                         from Vault/AWS/GCP                  │
  └─────────────────────────────────────────────────────────────┘

  In every case:
  User creates CR  →  Operator watches CR  →  Operator acts  →  Status updated

Kubernetes custom resource examples from real tools like these reveal the design pattern you will use in every CRD you build: express desired state declaratively, let the controller bridge the gap to actual state, and surface the outcome in the status subresource.


Why Look at Existing CRDs First?

Before designing your own CRD, you want to understand what good CRD design looks like from the user’s perspective. The maintainers of cert-manager (originally Jetstack), KEDA (KedaCore), and External Secrets Operator have collectively solved the same problems you will face:

  • What goes in spec vs status?
  • How do you reference other Kubernetes objects?
  • How do you handle secrets and credentials securely?
  • What does a healthy vs unhealthy custom resource look like?

Studying these before writing your own saves you from the most common first-timer mistakes.


cert-manager: The Certificate CRD

cert-manager is among the most widely deployed CRD-based systems in Kubernetes. It manages TLS certificates from Let’s Encrypt, internal CAs, and cloud providers.

The core CRDs

kubectl get crds | grep cert-manager
certificates.cert-manager.io
certificaterequests.cert-manager.io
challenges.acme.cert-manager.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
orders.acme.cert-manager.io

The one you interact with most is Certificate. Here is a real example:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: production
spec:
  secretName: api-tls-cert        # cert-manager writes the TLS Secret here
  duration: 2160h                 # 90 days
  renewBefore: 720h               # renew 30 days before expiry
  subject:
    organizations:
      - example.com
  dnsNames:
    - api.example.com
    - api-internal.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

What happens after you apply this:

  1. cert-manager controller sees the new Certificate object
  2. It contacts the referenced ClusterIssuer (Let’s Encrypt in this case)
  3. It completes the ACME challenge, obtains the certificate
  4. It writes the certificate and private key into the api-tls-cert Secret
  5. It updates the Certificate object’s status to reflect success

kubectl describe certificate api-tls -n production
Status:
  Conditions:
    Last Transition Time:  2026-04-10T08:00:00Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2026-07-09T08:00:00Z
  Not Before:              2026-04-10T08:00:00Z
  Renewal Time:            2026-06-09T08:00:00Z
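The issuerRef in the Certificate above points at a ClusterIssuer, which is configured once per cluster. A minimal sketch of what a Let’s Encrypt ClusterIssuer might look like — the email address, account-key Secret name, and ingress class here are illustrative placeholders, not values from this cluster:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@example.com            # illustrative contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx                   # illustrative ingress class
```

Because it is cluster-scoped, one ClusterIssuer can serve Certificate objects in every namespace.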

What this teaches you about CRD design

  • spec.secretName — the CR references an output object by name. The controller creates or updates that object.
  • spec.issuerRef — the CR references another custom resource (ClusterIssuer) by name. This is a common pattern for separating configuration concerns.
  • status.conditions — the standard Kubernetes condition pattern: type, status, reason, message. You will use the same structure in your own CRDs.
  • The controller owns status — users own spec. This separation is a core convention.
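The condition structure shown in the status output transfers directly to CRDs you design yourself. A sketch of what a status block for a hypothetical BackupPolicy resource might carry, following the same convention (all names illustrative):

```yaml
# Hypothetical status block for your own CRD, following the standard
# Kubernetes condition convention (type / status / reason / message).
status:
  conditions:
    - type: Ready                   # one well-known condition type
      status: "True"                # "True" | "False" | "Unknown"
      reason: BackupSucceeded       # machine-readable, CamelCase
      message: Last backup completed at 02:00 UTC
      lastTransitionTime: "2026-04-10T02:00:00Z"
```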

KEDA: The ScaledObject CRD

KEDA (Kubernetes Event-Driven Autoscaling) extends Kubernetes autoscaling beyond CPU and memory. It can scale deployments based on queue depth, Kafka consumer lag, Prometheus metric values, and dozens of other event sources.

The core CRDs

kubectl get crds | grep keda
clustertriggerauthentications.keda.sh
scaledjobs.keda.sh
scaledobjects.keda.sh
triggerauthentications.keda.sh

A ScaledObject ties a Deployment to an external scaler:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  namespace: production
spec:
  scaleTargetRef:
    name: order-processor        # the Deployment to scale
  minReplicaCount: 0             # scale to zero when idle
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789/orders
        queueLength: "5"         # target: 5 messages per pod
        awsRegion: us-east-1
      authenticationRef:
        name: keda-sqs-auth      # TriggerAuthentication for AWS credentials

What KEDA does with this:

  1. KEDA controller sees the ScaledObject
  2. It creates a native HorizontalPodAutoscaler object targeting the order-processor Deployment
  3. KEDA’s metrics adapter polls the SQS queue depth and exposes it as a custom metric
  4. The HPA uses that metric to scale replicas — including to zero when the queue is empty

kubectl get scaledobject order-processor-scaler -n production
NAME                       SCALETARGETKIND      SCALETARGETNAME    MIN   MAX   TRIGGERS         READY   ACTIVE
order-processor-scaler     apps/Deployment      order-processor    0     50    aws-sqs-queue    True    True
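The authenticationRef in the ScaledObject points at a TriggerAuthentication, which is itself a KEDA CRD. A minimal sketch of what keda-sqs-auth might look like when credentials come from a Kubernetes Secret — the Secret name and keys here are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-sqs-auth
  namespace: production
spec:
  secretTargetRef:                    # pull credentials from a K8s Secret
    - parameter: awsAccessKeyID       # scaler parameter to populate
      name: aws-credentials           # hypothetical Secret name
      key: AWS_ACCESS_KEY_ID
    - parameter: awsSecretAccessKey
      name: aws-credentials
      key: AWS_SECRET_ACCESS_KEY
```

Keeping credentials in their own CRD means many ScaledObjects can share one authentication definition.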

What this teaches you about CRD design

  • spec.scaleTargetRef — targeting another object by name. The controller acts on that object, not on the CR itself.
  • spec.triggers — a list of trigger specifications. Lists of typed sub-objects are a recurring CRD pattern.
  • spec.minReplicaCount: 0 — expressing scale-to-zero as a first-class concept in the API. Built-in HPA does not support this; KEDA’s CRD extends the vocabulary of what is expressible.
  • The KEDA operator translates ScaledObject → native HPA. The CRD is an abstraction over a more complex Kubernetes object. This “translate and manage child resources” pattern is extremely common in operators.
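The translation in the last bullet is observable on the cluster: KEDA creates an HPA named keda-hpa-&lt;scaledobject-name&gt; that consumes the queue depth as an external metric. Roughly like the sketch below — the exact external metric name varies by KEDA version, and the values shown are illustrative:

```yaml
# Sketch of the HPA KEDA would generate for the ScaledObject above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-order-processor-scaler
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-processor
  minReplicas: 1                  # an HPA cannot go below 1;
  maxReplicas: 50                 # KEDA handles the 0 <-> 1 step itself
  metrics:
    - type: External
      external:
        metric:
          name: s0-aws-sqs-queue  # KEDA-exposed metric (naming varies by version)
        target:
          type: AverageValue
          averageValue: "5"       # target: 5 messages per pod
```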

External Secrets Operator: The ExternalSecret CRD

External Secrets Operator (ESO) solves a specific problem: secrets live in external systems (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager), but Kubernetes workloads need them as Kubernetes Secrets. ESO bridges the gap.

The core CRDs

kubectl get crds | grep external-secrets
clusterexternalsecrets.external-secrets.io
clustersecretstores.external-secrets.io
externalsecrets.external-secrets.io
secretstores.external-secrets.io

A SecretStore defines the backend connection:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: eso-sa            # uses IRSA/workload identity

An ExternalSecret defines what to pull and how to map it:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-creds
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: database-secret          # Kubernetes Secret to create/update
    creationPolicy: Owner
  data:
    - secretKey: username          # key in the K8s Secret
      remoteRef:
        key: prod/database         # path in AWS Secrets Manager
        property: username         # property within that secret
    - secretKey: password
      remoteRef:
        key: prod/database
        property: password

After ESO reconciles this:

kubectl get secret database-secret -n production -o jsonpath='{.data.username}' | base64 -d
# outputs: db_user

kubectl describe externalsecret database-creds -n production
Status:
  Conditions:
    Last Transition Time:   2026-04-10T08:00:00Z
    Message:                Secret was synced
    Reason:                 SecretSynced
    Status:                 True
    Type:                   Ready
  Refresh Time:             2026-04-10T09:00:00Z
  Synced Resource Version:  1-abc123

What this teaches you about CRD design

  • spec.secretStoreRef — referencing a configuration CRD (SecretStore) from an operational CRD (ExternalSecret). This layering of CRDs to separate concerns is a mature pattern.
  • spec.refreshInterval — the CR expresses a desired behavior (periodic sync), not just a desired state snapshot. CRDs can express temporal behaviors.
  • spec.target.creationPolicy: Owner — ESO will set an owner reference on the created Secret, so deleting the ExternalSecret cascades to deleting the Secret. This is how controllers manage lifecycle.
  • Sensitive values never appear in the CR — only paths and references. The controller handles the actual secret retrieval. This is a key security pattern in CRD design.
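The effect of creationPolicy: Owner is visible on the generated Secret: ESO sets an ownerReference back to the ExternalSecret, so Kubernetes garbage collection deletes the Secret when the ExternalSecret is deleted. A sketch of what the generated Secret might look like — the uid is a placeholder and the data values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: database-secret
  namespace: production
  ownerReferences:                   # set by ESO because creationPolicy: Owner
    - apiVersion: external-secrets.io/v1beta1
      kind: ExternalSecret
      name: database-creds
      uid: 0000-illustrative-uid     # placeholder, assigned by the API server
      controller: true
type: Opaque
data:
  username: ZGJfdXNlcg==             # base64("db_user")
  password: "..."                    # never stored in the CR, only here
```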

The Common Pattern Across All Three

  OPERATOR PATTERN (cert-manager / KEDA / ESO / every other operator)

  User applies CR
        │
        ▼
  Controller watches CRDs
  (informer cache, events queue)
        │
        ▼
  Controller reconciles:
  actual state ──→ compare ──→ desired state
        │              │
        │         (gap found)
        │              │
        ▼              ▼
  Takes action      Updates status
  (issue cert,      conditions in CR
   create HPA,
   sync Secret)
        │
        └──── loops back, watches for next change

The design contract:
Users write spec — what they want
Controllers read spec, write status — what actually happened
Status conditions are the source of truth: Ready True/False with reason and message tell operators what the controller knows

This pattern, explained in depth in EP06, is why CRDs and controllers are designed the way they are.
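The loop above can be sketched in a few lines of plain Python. This is a toy level-triggered reconciler for illustration only — the object names and the dict standing in for the cluster are invented, and real controllers use informers, work queues, and the Kubernetes API instead:

```python
# Toy level-triggered reconcile loop (illustrative only -- real controllers
# use informers, work queues, and the Kubernetes API, not a dict).
def reconcile(cr: dict, cluster: dict) -> dict:
    """Compare desired state (spec) to actual state, act, update status."""
    name = cr["metadata"]["name"]
    desired = cr["spec"]["replicas"]
    actual = cluster.get(name, 0)

    if actual != desired:                        # gap found -> take action
        cluster[name] = desired                  # "create/scale the child object"
        reason = "Scaled"
        message = f"scaled {name} from {actual} to {desired}"
    else:                                        # already converged -> no-op
        reason = "InSync"
        message = f"{name} already at {desired} replicas"

    # Controller owns status; users own spec.
    cr["status"] = {"conditions": [
        {"type": "Ready", "status": "True", "reason": reason, "message": message}
    ]}
    return cr

cluster = {}                                     # fake "actual state"
cr = {"metadata": {"name": "web"}, "spec": {"replicas": 3}}
cr = reconcile(cr, cluster)                      # first pass: acts on the gap
print(cr["status"]["conditions"][0]["reason"])   # -> Scaled
cr = reconcile(cr, cluster)                      # second pass: converged, no-op
print(cr["status"]["conditions"][0]["reason"])   # -> InSync
```

The second call doing nothing is the point: reconciliation is idempotent, so the loop can run on every change event or periodic resync without side effects.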


⚠ Common Mistakes

Installing CRDs without the controller. If you install cert-manager’s CRDs from the crds.yaml manifest without installing cert-manager itself, Certificate objects will be accepted by the API server but never reconciled. The Ready condition will never appear. Always install the operator alongside its CRDs.

Editing status fields directly. Many teams try kubectl patch or kubectl edit to update a custom resource’s status to work around a stuck controller. Most well-written controllers overwrite status every reconcile loop — your manual change will be wiped. Fix the underlying issue, not the status display.

Assuming CRD deletion is safe. Covered in EP01 but worth repeating: deleting a CRD cascades to deleting all instances. If you kubectl delete crd certificates.cert-manager.io, every Certificate object in every namespace is gone and cert-manager will stop issuing. Back up CRDs and their instances before any CRD deletion.


Quick Reference

# See all CRDs installed by cert-manager
kubectl get crds | grep cert-manager.io

# Get all Certificates across all namespaces
kubectl get certificates -A

# Watch cert-manager reconcile a new Certificate
kubectl get certificate api-tls -n production -w

# See all ScaledObjects and their current state
kubectl get scaledobjects -A

# Check ESO sync status for all ExternalSecrets
kubectl get externalsecrets -A

# Inspect what APIs a CRD exposes
kubectl api-resources | grep cert-manager

Key Takeaways

  • cert-manager, KEDA, and ESO are canonical examples of well-designed CRD-based operators
  • All three follow the same pattern: user writes spec, controller reconciles to actual state, status reflects outcome
  • spec expresses desired state declaratively; the controller figures out how to achieve it
  • Status conditions (type, status, reason, message) are the standard way to surface controller outcomes
  • Sensitive values never appear in the CR — controllers retrieve them from external systems using references and credentials

What’s Next

EP03: CRD Anatomy opens the YAML of a CRD itself — spec.versions, OpenAPI schema properties, scope, names, and subresources. You have seen CRDs from the outside; next we look at how they are structured on the inside.

Get EP03 in your inbox when it publishes → subscribe at linuxcent.com

What Is a Kubernetes CRD? How Custom Resources Extend the API

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 1
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • A Kubernetes CRD (Custom Resource Definition) is how you add new resource types to the Kubernetes API — the same way Deployment and Service exist natively, you can make BackupPolicy or Certificate exist too
    (CRD = the schema/blueprint; Custom Resource = an instance of that schema, just like a Pod is an instance of the Pod schema)
  • Every kubectl get crds on a real cluster shows dozens of them — cert-manager, KEDA, Prometheus Operator, Crossplane all ship their own CRDs
  • CRDs are served by the same API server as built-in resources — kubectl, RBAC, watches, and events all work identically
  • A CRD alone does nothing — a controller watches the custom resources and acts on them; together they form an Operator
  • CRDs live in etcd just like Pods and Deployments — they survive API server restarts and cluster upgrades
  • You do not need to modify Kubernetes source code or restart the API server to add a CRD

The Big Picture

  HOW KUBERNETES CRDs EXTEND THE API

  ┌──────────────────────────────────────────────────────────────┐
  │  Kubernetes API Server                                       │
  │                                                              │
  │  Built-in resources          Custom resources (via CRD)      │
  │  ─────────────────           ──────────────────────────      │
  │  Pod                         Certificate     (cert-manager)  │
  │  Deployment                  ScaledObject    (KEDA)          │
  │  Service                     ExternalSecret  (ESO)           │
  │  ConfigMap                   BackupPolicy    (your team)     │
  │  ...                         ...                             │
  │                                                              │
  │  All resources: same API, same kubectl, same RBAC, same etcd │
  └──────────────────────────────────────────────────────────────┘
            ▲                          ▲
            │ built in                 │ registered at runtime
            │                          │
         Kubernetes              CustomResourceDefinition
          binary                    (a YAML you apply)

What is a Kubernetes CRD? It is a resource that defines resources — a schema registration that teaches the API server about a new object type you want to use in your cluster.


What Problem CRDs Solve

Kubernetes ships with roughly 50 resource types: Pods, Deployments, Services, ConfigMaps, Secrets, PersistentVolumes, and so on. These cover the general-purpose building blocks for running containerized workloads.

But the moment you operate real infrastructure, you hit the edges. You want to express:

  • “This database should have three replicas with point-in-time recovery enabled” — not a Deployment
  • “This TLS certificate for api.example.com should renew 30 days before expiry” — not a Secret
  • “This queue consumer should scale to zero when the queue is empty” — not a HorizontalPodAutoscaler

Before CRDs (introduced in 2017, replacing the earlier ThirdPartyResource mechanism), the realistic options were: use ConfigMaps as a poor substitute (no schema, no validation, no dedicated RBAC), or fork Kubernetes and add the resource natively (impractical for everyone outside the core team).

CRDs, introduced as stable in Kubernetes 1.16, solved this by letting you register a new resource type with the API server at runtime — without touching Kubernetes source code, without restarting the API server, without any special access beyond being able to create cluster-scoped resources.


The Kubernetes API: A Brief Mental Model

Before CRDs make sense, the API model needs to be clear.

  KUBERNETES API STRUCTURE

  apiVersion: apps/v1       ← API group (apps) + version (v1)
  kind: Deployment          ← resource type
  metadata:
    name: web               ← instance name
    namespace: default      ← namespace scope
  spec:
    replicas: 3             ← desired state

Every Kubernetes resource has:
– A group (e.g., apps, batch, networking.k8s.io) — or no group for core resources
– A version (e.g., v1, v1beta1)
– A kind (e.g., Deployment, Pod)
– A scope: namespaced or cluster-wide

The API server is a registry. Each group/version/kind combination maps to a Go struct that knows how to validate, store, and serve that resource type.

A CRD registers a new entry in that registry. You supply the group, version, kind, and schema. The API server handles everything else — serving it via REST, storing it in etcd, exposing it to kubectl.
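The registry mental model can be made concrete with a toy: a mapping from (group, version, kind) to a schema. This is purely illustrative — the real API server does far more than store entries in a map — but it captures what a CRD conceptually adds:

```python
# Toy group/version/kind registry (illustrates the mental model only).
registry = {}

def register(group: str, version: str, kind: str, schema: dict) -> None:
    """What applying a CRD conceptually does: add a GVK entry at runtime."""
    registry[(group, version, kind)] = schema

# Built-in types ship in the Kubernetes binary...
register("apps", "v1", "Deployment", {"spec": {"replicas": "integer"}})

# ...a CRD registers a new entry through the same machinery, at runtime.
register("storage.example.com", "v1alpha1", "BackupPolicy",
         {"spec": {"schedule": "string", "retentionDays": "integer"}})

# The API server can now serve both kinds identically.
print(("storage.example.com", "v1alpha1", "BackupPolicy") in registry)  # True
```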


What a CRD Looks Like

Here is the smallest possible CRD — it creates a new BackupPolicy resource type in the storage.example.com API group:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backuppolicies.storage.example.com
spec:
  group: storage.example.com
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                retentionDays:
                  type: integer
  scope: Namespaced
  names:
    plural: backuppolicies
    singular: backuppolicy
    kind: BackupPolicy
    shortNames:
      - bp

Apply it:

kubectl apply -f backuppolicy-crd.yaml

Now create an instance:

apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: nightly
  namespace: default
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
kubectl apply -f nightly-backup.yaml
kubectl get backuppolicies
kubectl get bp            # shortName works
kubectl describe bp nightly

The API server validates the spec against the schema, stores it in etcd, and returns it via all the standard API endpoints — all without a single line of custom code.
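The schema enforcement is worth seeing once. A manifest like the following, with retentionDays as a string instead of an integer, would be rejected by the API server at admission — the exact error text varies by Kubernetes version, but nothing reaches etcd:

```yaml
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: bad-policy
  namespace: default
spec:
  schedule: "0 2 * * *"
  retentionDays: "thirty"   # schema says integer -> kubectl apply is rejected
```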


CRD vs Built-In Resource: What Is Different?

Not much, deliberately.

  Capability                        Built-in resource   Custom resource (CRD)
  kubectl get / describe / delete   Yes                 Yes
  RBAC (Roles, ClusterRoles)        Yes                 Yes
  Watch (informers, events)         Yes                 Yes
  Stored in etcd                    Yes                 Yes
  OpenAPI schema validation         Yes                 Yes (you define the schema)
  Admission webhooks                Yes                 Yes
  Status subresource                Yes                 Optional (you enable it)
  Scale subresource                 Yes                 Optional (you enable it)
  Built-in controller behavior      Yes                 No — you write the controller

The last row is the critical one. When you create a Deployment, the deployment controller immediately starts managing ReplicaSets. When you create a BackupPolicy, nothing happens — until you write and deploy a controller that watches BackupPolicy objects and acts on them.

That controller + the CRD is what people call an Operator.


A Real Cluster: What You Actually See

Run this on any cluster running cert-manager, Prometheus Operator, or any other tooling:

kubectl get crds

Sample output (abbreviated):

NAME                                                  CREATED AT
certificates.cert-manager.io                          2024-11-01T08:12:00Z
certificaterequests.cert-manager.io                   2024-11-01T08:12:00Z
issuers.cert-manager.io                               2024-11-01T08:12:00Z
clusterissuers.cert-manager.io                        2024-11-01T08:12:00Z
scaledobjects.keda.sh                                 2024-11-01T08:13:00Z
scaledjobs.keda.sh                                    2024-11-01T08:13:00Z
externalsecrets.external-secrets.io                   2024-11-01T08:14:00Z
prometheuses.monitoring.coreos.com                    2024-11-01T08:15:00Z
servicemonitors.monitoring.coreos.com                 2024-11-01T08:15:00Z

Every tool that ships as a CRD-based system registers its resource types here first. The count often surprises engineers: a production cluster with a typical toolchain easily has 40–80 CRDs.

Check how many are on your cluster:

kubectl get crds --no-headers | wc -l

How the API Server Handles a CRD

When you apply a CRD, the API server does three things:

  CRD REGISTRATION FLOW

  kubectl apply -f my-crd.yaml
          │
          ▼
  1. API server validates the CRD manifest
     (is the schema valid OpenAPI v3? are names correct?)
          │
          ▼
  2. CRD stored in etcd
     (under /registry/apiextensions.k8s.io/customresourcedefinitions/)
          │
          ▼
  3. New REST endpoints activated immediately:
     GET  /apis/storage.example.com/v1alpha1/namespaces/{ns}/backuppolicies
     POST /apis/storage.example.com/v1alpha1/namespaces/{ns}/backuppolicies
     ...

From this point, any kubectl get backuppolicies or API call to those endpoints is handled exactly like a built-in resource call — the API server serves it from etcd, applies RBAC, runs admission webhooks, and returns standard JSON.

No restart required. The new endpoints appear within seconds.


The Difference Between CRD and CR

Two terms that are easily confused:

  • CRD (CustomResourceDefinition) — the schema/blueprint. There is one CRD per resource type. certificates.cert-manager.io is a CRD.
  • CR (Custom Resource) — an instance of a CRD. Every Certificate object you create is a custom resource. You can have thousands of CRs per CRD.

  CRD (one)          →  Custom Resource (many)
  ─────────             ─────────────────────
  certificates          web-tls           (namespace: production)
  .cert-manager.io      api-tls           (namespace: production)
                        admin-tls         (namespace: staging)
                        ...

The CRD is applied once (usually by the tool’s Helm chart). Custom resources are created by your users, your CI pipeline, or your GitOps system throughout the life of the cluster.


Where CRDs Fit in the Kubernetes Extension Model

CRDs are one of three ways to extend Kubernetes:

  KUBERNETES EXTENSION MECHANISMS

  1. CRDs + Controllers (Operators)
     Add new resource types + behavior
     → cert-manager, KEDA, Argo CD, Crossplane
     Used for: domain-specific abstractions, infrastructure management

  2. Admission Webhooks
     Intercept API requests to validate or mutate objects
     → OPA/Gatekeeper, Kyverno, Istio injection
     Used for: policy enforcement, sidecar injection, defaulting

  3. API Aggregation (AA)
     Register a fully separate API server behind the main API server
     → metrics-server, custom autoscalers
     Used for: when you need non-CRUD semantics (e.g. exec, attach, streaming)

For 95% of use cases, CRDs + controllers are the right mechanism. API aggregation is complex and only warranted for non-standard API semantics. Admission webhooks are complementary to CRDs, not an alternative.


⚠ Common Mistakes

Confusing the CRD with the controller. The CRD is just a schema registration — it does not execute code. If you apply a CRD but do not deploy its controller, creating custom resources will succeed (the API server accepts them) but nothing will happen. This catches many people the first time they try to use cert-manager by only applying the CRDs without installing the cert-manager controller.

Assuming CRD deletion is safe. Deleting a CRD deletes all custom resources of that type from etcd. There is no “are you sure?” prompt. If you delete the certificates.cert-manager.io CRD, every Certificate object in every namespace is gone.

Treating CRDs as ConfigMap replacements. Some teams store configuration in CRDs purely to get schema validation. This works, but without a controller, the custom resources are inert data. If you only need configuration storage with validation, a CRD is viable — just be explicit that there is no reconciliation loop.


Quick Reference

# List all CRDs in the cluster
kubectl get crds

# Inspect a specific CRD's schema
kubectl get crd certificates.cert-manager.io -o yaml

# List all custom resources of a type
kubectl get certificates -A

# Get details on a specific custom resource
kubectl describe certificate web-tls -n production

# Delete a CRD (WARNING: deletes all instances)
kubectl delete crd backuppolicies.storage.example.com

# Check if a CRD is established (ready to use)
kubectl get crd backuppolicies.storage.example.com \
  -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
# Returns: True

Key Takeaways

  • A Kubernetes CRD registers a new resource type with the API server — no source code changes, no restart required
  • Custom resources behave identically to built-in resources: kubectl, RBAC, watches, etcd, admission webhooks all work the same way
  • The CRD is just the schema; a controller gives custom resources behavior — together they form an Operator
  • Every production cluster running modern tooling already uses dozens of CRDs
  • Deleting a CRD deletes all its instances — treat CRDs as production-critical objects

What’s Next

EP02: CRDs You Already Use makes this concrete before we go deeper — we walk through cert-manager’s Certificate, KEDA’s ScaledObject, and External Secrets’ ExternalSecret as working examples, so you understand what a well-designed CRD looks like from a user’s perspective before you design your own.

Get EP02 in your inbox when it publishes → subscribe at linuxcent.com