Kubernetes CRD Versioning: From v1alpha1 to v1 Without Breaking Clients

Reading Time: 6 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 8
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • Kubernetes CRD versioning lets you evolve your API from v1alpha1 to v1 without deleting existing custom resources or breaking clients still using the old version
    (storage version = the version etcd actually stores objects in; served versions = the versions the API server responds to; you can serve v1alpha1 and v1 simultaneously while migrating)
  • The hub-and-spoke model is the recommended conversion architecture: one “hub” version (usually v1) that every other version converts to/from
  • Without a conversion webhook (strategy: None), the API server only rewrites the apiVersion field when converting — all served versions must share the same schema, and any schema difference between served versions requires a webhook
  • The kube-storage-version-migrator (or a manual read-and-rewrite of every object) migrates existing objects from the old storage version to the new one after you update storage: true
  • Changing field names between versions without a conversion webhook corrupts data silently — always test conversion round-trips before promoting a version

The Big Picture

  CRD VERSION LIFECYCLE

  Stage 1: Alpha                 Stage 2: Beta              Stage 3: Stable
  ──────────────────             ──────────────             ──────────────
  v1alpha1                       v1alpha1 (deprecated)      v1alpha1 (removed)
    served: true                   served: true               served: false
    storage: true                  storage: false             storage: false
                                 v1beta1                    v1beta1 (deprecated)
                                   served: true               served: true
                                   storage: false             storage: false
                                 v1                         v1
                                   served: true               served: true
                                   storage: true              storage: true

  Clients using v1alpha1:         The API server converts     Eventually remove
  still work via conversion       on the fly                  old served versions
  webhook

Kubernetes CRD versioning is what allows you to ship BackupPolicy v1alpha1 today, learn from real usage, evolve the schema to v1 with renamed fields and new constraints, and keep existing clusters running without a migration window.


Why Versioning Is Necessary

When BackupPolicy v1alpha1 shipped, the spec used retentionDays. After six months of production use, the team has learned:

  • retentionDays should be renamed to retention.days (nested under a retention object for future extensibility)
  • A new required field backupFormat needs to be added with a default of tar.gz
  • The targets field should be renamed to includedNamespaces

These are breaking changes. Clients (GitOps repos, Helm charts, other operators) still have YAML referencing v1alpha1 with the old field names. You cannot simply rename the fields.

The solution: add v1 with the new schema, run both versions simultaneously via a conversion webhook, migrate objects to the new storage version, then deprecate v1alpha1.


Simple Case: Non-Breaking Addition (No Webhook Needed)

If you only add new optional fields to the schema — no renames, no removals — you can add a new version without a conversion webhook. The simplest safe pattern is to serve only the new version, so the default None conversion strategy never has to bridge two different schemas.

versions:
  - name: v1alpha1
    served: false      # stop serving old version
    storage: false
    schema: ...
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              schedule:
                type: string
              retentionDays:
                type: integer
              backupFormat:          # new optional field
                type: string
                default: "tar.gz"

Existing objects stored as v1alpha1 are served as v1 with the new field defaulted. This works for purely additive changes because the stored bytes are compatible with the new schema.

When this is not enough: field renames, type changes, field removal, or structural reorganization all require a conversion webhook.


The Hub-and-Spoke Model

For breaking schema changes, the API server needs a conversion webhook. The recommended architecture is hub-and-spoke:

  HUB-AND-SPOKE CONVERSION

       v1alpha1
          │
          ▼ convert to hub
         v1  (hub)
          ▲
          │ convert to hub
       v1beta1

  Every version converts TO the hub and FROM the hub.
  The hub is always the storage version.
  Two-version conversion: v1alpha1 → v1 → v1beta1
  Never directly: v1alpha1 → v1beta1

This means you write one pair of conversion functions per spoke version rather than one for every pair of versions — conversion code grows linearly as you add versions, not quadratically.
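
controller-runtime expresses this model with two small interfaces in sigs.k8s.io/controller-runtime/pkg/conversion, reproduced here for orientation — the hub type only marks itself, and every spoke converts to and from the hub. The next section implements them for BackupPolicy.

// From sigs.k8s.io/controller-runtime/pkg/conversion (shown for orientation).
package conversion

import "k8s.io/apimachinery/pkg/runtime"

// Hub marks the storage ("hub") version. It carries no conversion logic itself.
type Hub interface {
    runtime.Object
    Hub()
}

// Convertible is implemented by every spoke version; both methods convert
// against the hub, never against another spoke directly.
type Convertible interface {
    runtime.Object
    ConvertTo(dst Hub) error
    ConvertFrom(src Hub) error
}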


Writing a Conversion Webhook

The conversion webhook is an HTTPS endpoint that the API server calls when it needs to convert an object between versions.

1. Define the conversion hub

In the kubebuilder project, mark v1 as the hub:

In api/v1/backuppolicy_conversion.go:

package v1

// Hub marks this type as the conversion hub.
func (*BackupPolicy) Hub() {}
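
The conversion functions in step 2 assume v1 spec types along these lines — a sketch based on the field mapping described above, since this episode does not show the v1 types file itself:

// api/v1/backuppolicy_types.go (excerpt, illustrative)

// RetentionSpec nests retention settings for future extensibility.
type RetentionSpec struct {
    Days int32 `json:"days"`
}

// NamespaceTarget replaces the v1alpha1 BackupTarget.
type NamespaceTarget struct {
    Namespace      string `json:"namespace"`
    IncludeSecrets bool   `json:"includeSecrets,omitempty"`
}

// BackupPolicySpec (v1): retentionDays → retention.days,
// targets → includedNamespaces, plus the new backupFormat field.
type BackupPolicySpec struct {
    Schedule           string            `json:"schedule"`
    BackupFormat       string            `json:"backupFormat,omitempty"`
    StorageClass       string            `json:"storageClass,omitempty"`
    Suspended          bool              `json:"suspended,omitempty"`
    Retention          RetentionSpec     `json:"retention"`
    IncludedNamespaces []NamespaceTarget `json:"includedNamespaces,omitempty"`
}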

2. Implement conversion in v1alpha1

In api/v1alpha1/backuppolicy_conversion.go:

package v1alpha1

import (
    v1 "github.com/example/backup-operator/api/v1"
    "sigs.k8s.io/controller-runtime/pkg/conversion"
)

// ConvertTo converts v1alpha1 BackupPolicy to v1 (the hub).
func (src *BackupPolicy) ConvertTo(dstRaw conversion.Hub) error {
    dst := dstRaw.(*v1.BackupPolicy)

    // Metadata
    dst.ObjectMeta = src.ObjectMeta

    // Field mapping: v1alpha1 → v1
    dst.Spec.Schedule     = src.Spec.Schedule
    dst.Spec.StorageClass = src.Spec.StorageClass
    dst.Spec.Suspended    = src.Spec.Suspended

    // New field: default for old objects, unless the round-trip annotation
    // (written by ConvertFrom below) preserved a non-default value.
    dst.Spec.BackupFormat = "tar.gz"
    if v, ok := src.Annotations["storage.example.com/backup-format"]; ok {
        dst.Spec.BackupFormat = v
    }

    // Renamed field: retentionDays → retention.days
    dst.Spec.Retention = v1.RetentionSpec{
        Days: src.Spec.RetentionDays,
    }

    // Renamed field: targets → includedNamespaces
    for _, t := range src.Spec.Targets {
        dst.Spec.IncludedNamespaces = append(dst.Spec.IncludedNamespaces,
            v1.NamespaceTarget{
                Namespace:      t.Namespace,
                IncludeSecrets: t.IncludeSecrets,
            })
    }

    dst.Status = v1.BackupPolicyStatus(src.Status)
    return nil
}

// ConvertFrom converts v1 (hub) BackupPolicy back to v1alpha1.
func (dst *BackupPolicy) ConvertFrom(srcRaw conversion.Hub) error {
    src := srcRaw.(*v1.BackupPolicy)

    dst.ObjectMeta = src.ObjectMeta

    dst.Spec.Schedule      = src.Spec.Schedule
    dst.Spec.StorageClass  = src.Spec.StorageClass
    dst.Spec.Suspended     = src.Spec.Suspended
    dst.Spec.RetentionDays = src.Spec.Retention.Days

    for _, n := range src.Spec.IncludedNamespaces {
        dst.Spec.Targets = append(dst.Spec.Targets, BackupTarget{
            Namespace:      n.Namespace,
            IncludeSecrets: n.IncludeSecrets,
        })
    }

    // backupFormat cannot be round-tripped to v1alpha1 (no such field)
    // Store it in an annotation to preserve the value if the object is
    // re-converted back to v1.
    if src.Spec.BackupFormat != "" && src.Spec.BackupFormat != "tar.gz" {
        if dst.Annotations == nil {
            dst.Annotations = make(map[string]string)
        }
        dst.Annotations["storage.example.com/backup-format"] = src.Spec.BackupFormat
    }

    dst.Status = BackupPolicyStatus(src.Status)
    return nil
}
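
Before wiring up the webhook, it is worth unit-testing the round trip that the checklist below demands. A minimal sketch, assuming the v1alpha1 types from EP07 and the v1 types sketched earlier (file name illustrative, e.g. api/v1alpha1/backuppolicy_conversion_test.go):

package v1alpha1

import (
    "testing"

    "k8s.io/apimachinery/pkg/api/equality"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    v1 "github.com/example/backup-operator/api/v1"
)

// TestConversionRoundTrip converts a v1alpha1 object to the hub and back,
// then checks that nothing in the spec was lost.
func TestConversionRoundTrip(t *testing.T) {
    original := &BackupPolicy{
        ObjectMeta: metav1.ObjectMeta{Name: "nightly", Namespace: "demo"},
        Spec: BackupPolicySpec{
            Schedule:      "0 2 * * *",
            RetentionDays: 30,
            Targets:       []BackupTarget{{Namespace: "prod", IncludeSecrets: true}},
        },
    }

    hub := &v1.BackupPolicy{}
    if err := original.ConvertTo(hub); err != nil {
        t.Fatalf("ConvertTo: %v", err)
    }

    restored := &BackupPolicy{}
    if err := restored.ConvertFrom(hub); err != nil {
        t.Fatalf("ConvertFrom: %v", err)
    }

    if !equality.Semantic.DeepEqual(original.Spec, restored.Spec) {
        t.Errorf("spec not preserved:\noriginal: %+v\nrestored: %+v", original.Spec, restored.Spec)
    }
}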

3. Register the webhook

kubebuilder create webhook \
  --group storage \
  --version v1alpha1 \
  --kind BackupPolicy \
  --conversion

This generates the webhook server setup. Deploy it with a serving TLS certificate — the kubebuilder scaffold ships cert-manager manifests under config/certmanager that issue the certificate and inject the CA bundle into the CRD's conversion configuration.


Updating the CRD to Reference the Webhook

spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          name: backup-operator-webhook-service
          namespace: backup-operator-system
          path: /convert
      conversionReviewVersions: ["v1", "v1beta1"]
  versions:
    - name: v1alpha1
      served: true
      storage: false
      schema: ...
    - name: v1
      served: true
      storage: true
      schema: ...

Once applied, kubectl get backuppolicies.v1alpha1.storage.example.com/nightly and kubectl get backuppolicies.v1.storage.example.com/nightly both work — the API server converts transparently.


Migrating Existing Objects to the New Storage Version

After changing storage: true from v1alpha1 to v1, existing objects in etcd are still stored as v1alpha1 bytes. They are served correctly (via conversion) but are not yet migrated.

Migrate them:

# Option 1: Manual rewrite (works for small object counts)
# Reading every object and writing it back makes the API server persist it
# in the new storage version (kubectl replace always issues the write).
kubectl get backuppolicies -A --no-headers \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
while read ns name; do
  kubectl get backuppolicy "$name" -n "$ns" -o yaml | kubectl replace -f -
done

# Option 2: Storage Version Migrator (automated, for large clusters)
# Install: https://github.com/kubernetes-sigs/kube-storage-version-migrator
kubectl apply -f storageVersionMigration.yaml

After migration, all objects in etcd are stored as v1. You can then set served: false on v1alpha1 to stop serving it, and — once v1alpha1 has also been removed from the CRD's status.storedVersions (patch the status subresource after migration) — drop the old version from the CRD entirely.


Storage Version Migration Checklist

  SAFE VERSION PROMOTION CHECKLIST

  □ New version (v1) has served: true, storage: true
  □ Old version (v1alpha1) has served: true, storage: false
  □ Conversion webhook deployed and healthy
  □ Round-trip conversion tested (v1alpha1 → v1 → v1alpha1 preserves all data)
  □ kubectl get backuppolicies works at both versions
  □ Existing objects migrated (re-applied or migration job run)
  □ Old version set to served: false (stop serving)
  □ Old version removed from CRD after N release cycles

⚠ Common Mistakes

Changing the storage version without a conversion webhook. If you flip storage: true from v1alpha1 to v1 while the two schemas differ and no webhook is configured, the API server simply relabels stored v1alpha1 bytes as v1 — renamed fields come back empty and are pruned on the next write. Always deploy the conversion webhook before changing the storage version.

Lossy conversion. If ConvertFrom (v1 → v1alpha1) drops a field that exists in v1, objects are silently corrupted when a v1alpha1 client reads and re-saves them. Round-trip test every conversion: original → hub → original must produce identical objects (or use annotations to preserve fields that cannot round-trip).

Forgetting to migrate existing objects. After changing the storage version, existing objects are still stored in the old format. They convert on read, but etcd still holds old bytes. Until migrated, your etcd backup/restore story is broken — restoring from backup would restore old-format bytes that need conversion.


Quick Reference

# Check which version is currently the storage version
kubectl get crd backuppolicies.storage.example.com \
  -o jsonpath='{.status.storedVersions}'
# output: ["v1alpha1"]  or  ["v1alpha1","v1"]  or  ["v1"]

# Verify conversion webhook is reachable
kubectl get crd backuppolicies.storage.example.com \
  -o jsonpath='{.spec.conversion.webhook.clientConfig}'

# Read an object at a specific version
kubectl get backuppolicies.v1alpha1.storage.example.com/nightly -n demo -o yaml
kubectl get backuppolicies.v1.storage.example.com/nightly -n demo -o yaml

# Check CRD conditions (NamesAccepted, Established)
kubectl describe crd backuppolicies.storage.example.com | grep -A5 Conditions

Key Takeaways

  • CRD versioning lets you evolve the schema without a migration window — old and new versions coexist via a conversion webhook
  • The hub-and-spoke model minimizes conversion code — it grows linearly with the number of versions, not quadratically — and the hub is always the storage version
  • Never change the storage version without a deployed conversion webhook for breaking schema changes
  • Conversion must be lossless — fields that cannot round-trip should be preserved in annotations
  • Migrate existing objects to the new storage version after promoting it, then deprecate the old served version

What’s Next

EP09: Admission Webhooks completes the Kubernetes extension picture — validating and mutating webhooks that intercept API requests before they reach etcd, when to use them alongside CRDs, and how they differ from CEL validation.

Get EP09 in your inbox when it publishes → subscribe at linuxcent.com

Build a Simple Kubernetes Operator with controller-runtime and kubebuilder

Reading Time: 7 minutes

Kubernetes CRDs & Operators: Extending the API, Episode 7
What Is a CRD? · CRDs You Already Use · CRD Anatomy · Write Your First CRD · CEL Validation · Controller Loop · Build an Operator · CRD Versioning · Admission Webhooks · CRDs in Production


TL;DR

  • Building a Kubernetes operator means writing a Go reconciler with controller-runtime — kubebuilder scaffolds the project structure, RBAC markers, and Makefile targets so you focus on the reconcile logic
    (kubebuilder = a CLI and framework that generates the operator project scaffold; controller-runtime = the Go library that provides the informer cache, work queue, and reconciler interface)
  • The reconciler for BackupPolicy in this episode creates and manages a CronJob — it is the behavior layer for the CRD built in EP03–EP05
  • RBAC is expressed as Go code comments (//+kubebuilder:rbac:...) — kubebuilder generates the ClusterRole YAML from them
  • Run the operator locally with make run during development; no cluster deployment needed until ready
  • The same project that builds the operator also builds and installs the CRD — make install applies the CRD YAML generated from your Go types
  • Testing: the operator ships with envtest — a local API server + etcd for controller testing without a real cluster

The Big Picture

  OPERATOR PROJECT STRUCTURE (kubebuilder scaffold)

  backup-operator/
  ├── api/v1alpha1/
  │   ├── backuppolicy_types.go     ← Go types that define CRD schema
  │   └── groupversion_info.go
  ├── internal/controller/
  │   └── backuppolicy_controller.go ← reconcile logic (our main focus)
  ├── config/
  │   ├── crd/                       ← generated CRD YAML
  │   ├── rbac/                      ← generated RBAC YAML
  │   └── manager/                   ← controller Deployment YAML
  ├── cmd/main.go                    ← entrypoint, sets up the manager
  └── Makefile                       ← build, test, install, deploy targets

  FLOW:
  Go types → kubebuilder generate → CRD YAML + RBAC YAML
  Reconcile function → runs in cluster → watches BackupPolicy → manages CronJobs

Building a Kubernetes operator with controller-runtime is where CRDs become living infrastructure — the BackupPolicy objects created in EP04 now get actual behavior attached to them.


Prerequisites

# Go 1.22+
go version

# kubebuilder CLI
curl -L -o kubebuilder \
  https://github.com/kubernetes-sigs/kubebuilder/releases/latest/download/kubebuilder_linux_amd64
chmod +x kubebuilder
sudo mv kubebuilder /usr/local/bin/

# A running cluster (kind works well for development)
kind create cluster --name operator-dev

# Verify kubectl works
kubectl cluster-info --context kind-operator-dev

Step 1: Scaffold the Project

mkdir backup-operator && cd backup-operator

# Initialize the Go module and project structure
kubebuilder init \
  --domain example.com \
  --repo github.com/example/backup-operator

# Create the API (Go types + controller scaffold)
kubebuilder create api \
  --group storage \
  --version v1alpha1 \
  --kind BackupPolicy \
  --resource \
  --controller

Because --resource and --controller were passed explicitly, kubebuilder skips its interactive prompts (without them it asks Create Resource [y/n] and Create Controller [y/n]).

The generated directory tree:

backup-operator/
├── api/
│   └── v1alpha1/
│       ├── backuppolicy_types.go
│       └── groupversion_info.go
├── internal/
│   └── controller/
│       └── backuppolicy_controller.go
├── cmd/
│   └── main.go
├── config/
│   ├── crd/bases/
│   ├── rbac/
│   └── manager/
├── go.mod
├── go.sum
└── Makefile

Step 2: Define the Go Types

Edit api/v1alpha1/backuppolicy_types.go to match the schema from EP03:

package v1alpha1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BackupTarget specifies a namespace to include in the backup.
type BackupTarget struct {
    Namespace      string `json:"namespace"`
    IncludeSecrets bool   `json:"includeSecrets,omitempty"`
}

// BackupPolicySpec defines the desired state of BackupPolicy.
type BackupPolicySpec struct {
    // Schedule is a cron expression for when to run backups.
    // +kubebuilder:validation:Pattern=`^(\*|[0-9,\-\/]+) (\*|[0-9,\-\/]+) (\*|[0-9,\-\/]+) (\*|[0-9,\-\/]+) (\*|[0-9,\-\/]+)$`
    Schedule string `json:"schedule"`

    // RetentionDays is how long to keep backup snapshots.
    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=365
    RetentionDays int32 `json:"retentionDays"`

    // StorageClass is the storage class to use for backup volumes.
    // +kubebuilder:default=standard
    // +kubebuilder:validation:Enum=standard;premium;encrypted;archive
    StorageClass string `json:"storageClass,omitempty"`

    // Targets lists the namespaces and resources to include.
    // +kubebuilder:validation:MaxItems=20
    Targets []BackupTarget `json:"targets,omitempty"`

    // Suspended pauses backup execution when true.
    // +kubebuilder:default=false
    Suspended bool `json:"suspended,omitempty"`
}

// BackupPolicyStatus defines the observed state of BackupPolicy.
type BackupPolicyStatus struct {
    // Conditions reflect the current state of the BackupPolicy.
    Conditions []metav1.Condition `json:"conditions,omitempty"`

    // LastBackupTime is when the most recent backup completed.
    LastBackupTime *metav1.Time `json:"lastBackupTime,omitempty"`

    // CronJobName is the name of the managed CronJob.
    CronJobName string `json:"cronJobName,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName=bp
// +kubebuilder:printcolumn:name="Schedule",type=string,JSONPath=`.spec.schedule`
// +kubebuilder:printcolumn:name="Retention",type=integer,JSONPath=`.spec.retentionDays`
// +kubebuilder:printcolumn:name="Suspended",type=boolean,JSONPath=`.spec.suspended`
// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=`.status.conditions[?(@.type=='Ready')].status`
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`

// BackupPolicy is the Schema for the backuppolicies API.
type BackupPolicy struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   BackupPolicySpec   `json:"spec,omitempty"`
    Status BackupPolicyStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// BackupPolicyList contains a list of BackupPolicy.
type BackupPolicyList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []BackupPolicy `json:"items"`
}

func init() {
    SchemeBuilder.Register(&BackupPolicy{}, &BackupPolicyList{})
}

Regenerate the CRD YAML and DeepCopy methods:

make generate   # regenerates zz_generated.deepcopy.go
make manifests  # regenerates CRD YAML under config/crd/bases/

Step 3: Write the Reconciler

Edit internal/controller/backuppolicy_controller.go:

package controller

import (
    "context"
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    storagev1alpha1 "github.com/example/backup-operator/api/v1alpha1"
)

// BackupPolicyReconciler reconciles BackupPolicy objects.
type BackupPolicyReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

// RBAC markers — kubebuilder generates ClusterRole YAML from these comments.
//+kubebuilder:rbac:groups=storage.example.com,resources=backuppolicies,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=storage.example.com,resources=backuppolicies/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=storage.example.com,resources=backuppolicies/finalizers,verbs=update
//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete

func (r *BackupPolicyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    // Step 1: Fetch the BackupPolicy
    bp := &storagev1alpha1.BackupPolicy{}
    if err := r.Get(ctx, req.NamespacedName, bp); err != nil {
        if apierrors.IsNotFound(err) {
            // Object deleted before we could reconcile — nothing to do.
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, fmt.Errorf("fetching BackupPolicy: %w", err)
    }

    // Step 2: Define the desired CronJob name
    cronJobName := fmt.Sprintf("%s-backup", bp.Name)

    // Step 3: Fetch the existing CronJob (if any)
    existing := &batchv1.CronJob{}
    err := r.Get(ctx, types.NamespacedName{Name: cronJobName, Namespace: bp.Namespace}, existing)
    notFound := apierrors.IsNotFound(err)
    if err != nil && !notFound {
        return ctrl.Result{}, fmt.Errorf("fetching CronJob: %w", err)
    }

    // Step 4: Build the desired CronJob
    desired := r.buildCronJob(bp, cronJobName)

    // Step 5: Create or update
    if notFound {
        logger.Info("Creating CronJob", "name", cronJobName)
        if err := r.Create(ctx, desired); err != nil {
            return ctrl.Result{}, fmt.Errorf("creating CronJob: %w", err)
        }
    } else {
        // Update schedule and suspend state if they differ
        // (compare the pointed-to suspend values, not the pointers).
        suspendDiffers := existing.Spec.Suspend == nil ||
            *existing.Spec.Suspend != *desired.Spec.Suspend
        if existing.Spec.Schedule != desired.Spec.Schedule || suspendDiffers {
            existing.Spec.Schedule = desired.Spec.Schedule
            existing.Spec.Suspend = desired.Spec.Suspend
            logger.Info("Updating CronJob", "name", cronJobName)
            if err := r.Update(ctx, existing); err != nil {
                return ctrl.Result{}, fmt.Errorf("updating CronJob: %w", err)
            }
        }
    }

    // Step 6: Update status
    bpCopy := bp.DeepCopy()
    meta.SetStatusCondition(&bpCopy.Status.Conditions, metav1.Condition{
        Type:               "Ready",
        Status:             metav1.ConditionTrue,
        Reason:             "CronJobReady",
        Message:            fmt.Sprintf("CronJob %s is configured", cronJobName),
        ObservedGeneration: bp.Generation,
    })
    bpCopy.Status.CronJobName = cronJobName

    if err := r.Status().Update(ctx, bpCopy); err != nil {
        return ctrl.Result{}, fmt.Errorf("updating status: %w", err)
    }

    return ctrl.Result{}, nil
}

func (r *BackupPolicyReconciler) buildCronJob(bp *storagev1alpha1.BackupPolicy, name string) *batchv1.CronJob {
    suspend := bp.Spec.Suspended
    retentionArg := fmt.Sprintf("--retention-days=%d", bp.Spec.RetentionDays)

    cj := &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{
            Name:      name,
            Namespace: bp.Namespace,
            Labels: map[string]string{
                "app.kubernetes.io/managed-by": "backup-operator",
                "backuppolicy":                 bp.Name,
            },
        },
        Spec: batchv1.CronJobSpec{
            Schedule: bp.Spec.Schedule,
            Suspend:  &suspend,
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyOnFailure,
                            Containers: []corev1.Container{
                                {
                                    Name:    "backup",
                                    Image:   "backup-tool:latest",
                                    Args:    []string{retentionArg},
                                },
                            },
                        },
                    },
                },
            },
        },
    }

    // Set owner reference — CronJob is garbage-collected when BackupPolicy is deleted.
    // (Error ignored for brevity; it only fails if the CronJob already has another
    // controller owner or the namespaces do not match.)
    _ = ctrl.SetControllerReference(bp, cj, r.Scheme)
    return cj
}

// SetupWithManager registers the controller with the manager and declares what to watch.
func (r *BackupPolicyReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&storagev1alpha1.BackupPolicy{}).
        Owns(&batchv1.CronJob{}).    // reconcile BackupPolicy when owned CronJob changes
        Complete(r)
}
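
For orientation, the scaffolded cmd/main.go is what wires this reconciler into the manager. You normally do not write it by hand — kubebuilder generates it — but abridged it looks roughly like this:

package main

import (
    "os"

    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"

    storagev1alpha1 "github.com/example/backup-operator/api/v1alpha1"
    "github.com/example/backup-operator/internal/controller"
)

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func main() {
    ctrl.SetLogger(zap.New())

    // Register built-in and custom types with the scheme.
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))
    utilruntime.Must(storagev1alpha1.AddToScheme(scheme))

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
    if err != nil {
        setupLog.Error(err, "unable to start manager")
        os.Exit(1)
    }

    // Wire the reconciler into the manager.
    if err := (&controller.BackupPolicyReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "BackupPolicy")
        os.Exit(1)
    }

    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}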

Step 4: Install the CRD and Run Locally

# Install the CRD into the cluster
make install
customresourcedefinition.apiextensions.k8s.io/backuppolicies.storage.example.com created
# Run the controller locally (outside the cluster)
make run
2026-04-25T08:00:00Z  INFO  Starting manager
2026-04-25T08:00:00Z  INFO  Starting workers  {"controller": "backuppolicy", "worker count": 1}

In a separate terminal:

kubectl apply -f - <<'EOF'
apiVersion: storage.example.com/v1alpha1
kind: BackupPolicy
metadata:
  name: nightly
  namespace: default
spec:
  schedule: "0 2 * * *"
  retentionDays: 30
EOF

Watch the controller output:

2026-04-25T08:01:00Z  INFO  Creating CronJob  {"name": "nightly-backup"}

Check the result:

kubectl get bp nightly
NAME      SCHEDULE    RETENTION   SUSPENDED   READY   AGE
nightly   0 2 * * *   30          false       True    10s
kubectl get cronjob nightly-backup
NAME             SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
nightly-backup   0 2 * * *   False     0        <none>          10s

Test self-healing — delete the CronJob and watch the controller recreate it:

kubectl delete cronjob nightly-backup
# Controller output:
# 2026-04-25T08:02:00Z  INFO  Creating CronJob  {"name": "nightly-backup"}

kubectl get cronjob nightly-backup
# Back within seconds

Test suspend:

kubectl patch bp nightly --type=merge -p '{"spec":{"suspended":true}}'
kubectl get cronjob nightly-backup -o jsonpath='{.spec.suspend}'
# true

Step 5: Deploy to Cluster

When ready for in-cluster deployment:

# Build and push the controller image
make docker-build docker-push IMG=your-registry/backup-operator:v0.1.0

# Deploy to cluster (creates Deployment, RBAC, CRD)
make deploy IMG=your-registry/backup-operator:v0.1.0
kubectl get pods -n backup-operator-system
NAME                                          READY   STATUS    RESTARTS   AGE
backup-operator-controller-manager-abc123     2/2     Running   0          30s

Understanding the RBAC Markers

The //+kubebuilder:rbac:... comments in the controller generate the ClusterRole YAML when you run make manifests:

//+kubebuilder:rbac:groups=storage.example.com,resources=backuppolicies,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=storage.example.com,resources=backuppolicies/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete

Generated YAML under config/rbac/role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["storage.example.com"]
    resources: ["backuppolicies/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: ["batch"]
    resources: ["cronjobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

This approach keeps RBAC co-located with the code that needs it — if you add a new resource access in the controller, you add the marker next to it.


⚠ Common Mistakes

Not setting an owner reference on child resources. Without ctrl.SetControllerReference(parent, child, scheme), deleting the BackupPolicy leaves orphaned CronJobs. Owner references enable automatic garbage collection of child resources.

Updating the object after r.Get() without handling conflicts. If two reconciles run concurrently (possible after a controller restart), both may try to update the same resource. The API server uses resource version for optimistic concurrency — you will get a conflict error. Retry the reconcile on conflict errors rather than failing.
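
One common pattern (a sketch, not part of the kubebuilder scaffold; the helper name is illustrative) is client-go's retry.RetryOnConflict, which re-runs the update closure — including the re-fetch — whenever the API server returns a conflict:

import (
    "context"

    "k8s.io/client-go/util/retry"
    ctrl "sigs.k8s.io/controller-runtime"

    storagev1alpha1 "github.com/example/backup-operator/api/v1alpha1"
)

// updateStatusWithRetry re-fetches the latest object and reapplies the status
// change on every 409 Conflict, up to the default backoff limit.
func (r *BackupPolicyReconciler) updateStatusWithRetry(ctx context.Context, req ctrl.Request, cronJobName string) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        latest := &storagev1alpha1.BackupPolicy{}
        if err := r.Get(ctx, req.NamespacedName, latest); err != nil {
            return err
        }
        latest.Status.CronJobName = cronJobName
        return r.Status().Update(ctx, latest)
    })
}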

Writing to bp directly instead of bp.DeepCopy() for status updates. If the status update fails and you retry, the original bp object now has the modified status in memory. Always update a deep copy when writing status so the in-memory state stays consistent with what was actually persisted.

Not watching owned resources. If you forget .Owns(&batchv1.CronJob{}) in SetupWithManager, the controller will not reconcile when a CronJob is deleted. Self-healing requires watching the resources you manage.


Quick Reference

# Scaffold a new API + controller
kubebuilder create api --group mygroup --version v1alpha1 --kind MyKind

# Regenerate deep copy methods after changing types
make generate

# Regenerate CRD YAML + RBAC from markers
make manifests

# Install CRD into current cluster
make install

# Run controller locally (outside cluster)
make run
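
# Run the envtest-based test suite (local API server + etcd; no cluster needed)
make test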

# Build + push image, then deploy to cluster
make docker-build docker-push IMG=registry/operator:tag
make deploy IMG=registry/operator:tag

# Uninstall CRD (WARNING: deletes all instances)
make uninstall

Key Takeaways

  • kubebuilder scaffolds the project; you write the types and the reconcile function
  • Go struct markers (//+kubebuilder:...) generate the CRD YAML and RBAC — keep them close to the code they describe
  • ctrl.SetControllerReference enables automatic garbage collection of child resources
  • Always deep-copy the object before writing status; retry on conflict errors
  • make run runs the controller locally — no Docker build needed during development

What’s Next

EP08: Kubernetes CRD Versioning covers how to evolve the BackupPolicy schema from v1alpha1 to v1 without breaking existing clients — storage versions, conversion webhooks, and the hub-and-spoke model for safe API evolution in production clusters.

Get EP08 in your inbox when it publishes → subscribe at linuxcent.com