CISSP Domain Mapping
| Domain | Relevance |
|---|---|
| Domain 3 — Security Architecture | Pod Security Standards replace PSP; namespace-level security profiles |
| Domain 7 — Security Operations | Supply chain security: Sigstore, image signing, SBOM; SolarWinds context |
| Domain 8 — Software Security | Container image provenance; admission-time image verification |
Introduction
The 2020–2022 period redefined what “secure Kubernetes” meant. A global pandemic moved workloads to cloud-native infrastructure faster than security practices could follow. SolarWinds happened. Log4Shell happened. The software supply chain became a crisis.
At the same time, the Kubernetes project was doing something it had been reluctant to do: removing APIs and features, including PodSecurityPolicy — the primary security primitive that most enterprise clusters depended on. The replacement was simpler, but the migration was not.
Kubernetes 1.19 — LTS Behavior, Ingress Stable (August 2020)
1.19 extended the support window to one year (from nine months). This was an acknowledgment that enterprise organizations couldn’t upgrade four times per year — a common complaint from operations teams.
- Ingress graduated to stable: networking.k8s.io/v1 — after years as a beta resource, Ingress finally had a stable API
- Immutable ConfigMaps and Secrets to beta: Configuration protection becomes broadly available
- EndpointSlices to GA: The replacement for Endpoints — shards pod-to-service mappings to avoid the single large Endpoints object that caused control plane stress at scale (10,000+ endpoints for a single service)
- Structured logging (alpha): Machine-parseable log output from Kubernetes control plane components — a prerequisite for reliable SIEM integration
# EndpointSlice: distributed representation of service endpoints
kubectl get endpointslices -n production -l kubernetes.io/service-name=api-service
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
api-service-abc12 IPv4 8080 10.0.1.5,10.0.1.6,10.0.1.7 + 47 more... 2d
api-service-def34 IPv4 8080 10.0.2.1,10.0.2.2,10.0.2.3 + 47 more... 2d
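The immutable ConfigMaps that reached beta in this release need only one extra top-level field. A minimal sketch (the name and data are hypothetical):

```yaml
# Immutable ConfigMap (beta in 1.19) — the API server rejects any update to its data
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
  namespace: production
data:
  LOG_LEVEL: "info"
immutable: true             # to change the data, delete and recreate the ConfigMap
```

Beyond protecting configuration from accidental edits, immutability lets the kubelet stop watching the object, which reduces API server load in large clusters.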
Kubernetes 1.20 — Dockershim Deprecated (December 2020)
The announcement in 1.20 that dockershim was deprecated caused more panic than any previous Kubernetes deprecation. Many misread the message as “Kubernetes is dropping Docker support” — the ensuing PR crisis prompted the Kubernetes blog to publish a dedicated clarification post, “Don’t Panic: Kubernetes and Docker”.
The reality: Docker-built images continued to work on Kubernetes. What was being removed was the code in the kubelet that talked directly to Docker’s daemon using a non-standard interface, rather than through the Container Runtime Interface (CRI). Docker images conform to the OCI (Open Container Initiative) image specification — they run on any CRI-compliant runtime.
The migration path:
- containerd: The runtime that Docker itself used internally. Moving to containerd meant removing the Docker layer entirely — the kubelet talks directly to containerd via CRI
- CRI-O: An OCI-focused runtime designed specifically for Kubernetes, minimal and purpose-built
# Before (Docker socket): kubelet → dockershim → Docker daemon → containerd → runc
# After (direct CRI): kubelet → containerd → runc
# or: kubelet → CRI-O → runc
# Check runtime in use on a node
kubectl get node worker-1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# containerd://1.6.4
Also in 1.20:
- API Priority and Fairness beta: Rate-limit API server requests by priority — prevents a runaway controller from starving other API clients
- kubectl debug beta: Troubleshoot running pods using ephemeral debug containers
- Volume snapshot operations stable
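As a sketch of how API Priority and Fairness is configured, a FlowSchema maps requests from a given client to a priority level. Field names follow the v1beta1 API as of 1.20; the FlowSchema name and service account are hypothetical:

```yaml
# Route a misbehaving controller's requests to a low-priority queue
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: throttle-noisy-controller      # hypothetical
spec:
  priorityLevelConfiguration:
    name: workload-low                 # one of the built-in priority levels
  matchingPrecedence: 1000             # lower values are evaluated first
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: noisy-controller         # hypothetical
        namespace: default
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```

Requests matched by this schema share the concurrency budget of the `workload-low` priority level instead of competing with everything else.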
The SolarWinds Context (December 2020)
The SolarWinds supply chain attack, disclosed in December 2020, didn’t directly target Kubernetes. But it accelerated an existing conversation in the cloud-native community: if the build pipeline is compromised, signed binaries mean nothing. If the image registry is compromised, admission control on image names means nothing.
The attack catalyzed work on several fronts:
- Sigstore: An open-source project (Google, Red Hat, Purdue University) for signing and verifying software artifacts, including container images
- SLSA (Supply-chain Levels for Software Artifacts): A framework for incrementally improving supply chain security, from basic build provenance to hermetic builds with verified dependencies
- SBOM (Software Bill of Materials): A machine-readable inventory of the software components in an image — required by US Executive Order 14028 (May 2021) for software sold to the federal government
Kubernetes 1.21 — PodSecurityPolicy Deprecation (April 2021)
PodSecurityPolicy was deprecated in 1.21, with removal announced for 1.25. The deprecation was contentious — PSP was the only built-in mechanism for enforcing pod security constraints, and every security-conscious cluster depended on it, despite its many flaws.
The replacement approach: Pod Security Standards — three predefined security profiles:
| Profile | Description | Use Case |
|---|---|---|
| Privileged | No restrictions | System-level workloads, trusted components |
| Baseline | Prevents known privilege escalations | General application workloads |
| Restricted | Hardened; follows current best practices | High-security workloads |
Other 1.21 highlights:
- CronJobs stable: batch/v1 graduates after years in beta
- Immutable ConfigMaps and Secrets stable
- Graceful node shutdown beta: The kubelet gracefully terminates pods when a node shuts down (not just when the kubelet stops)
- PodDisruptionBudget stable: policy/v1 replaces policy/v1beta1
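With PodDisruptionBudget now stable, the policy/v1 form is compact. A minimal sketch (the name and label selector are hypothetical):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb              # hypothetical
spec:
  minAvailable: 2            # voluntary disruptions (e.g. node drains) must leave at least 2 pods running
  selector:
    matchLabels:
      app: api
```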
Kubernetes 1.22 — The Great API Removal (August 2021)
1.22 was the most disruptive Kubernetes release for operations teams since 1.0. Several long-lived beta APIs were removed:
| Removed API | Replacement | Used By |
|---|---|---|
| networking.k8s.io/v1beta1 Ingress | networking.k8s.io/v1 | Every ingress resource |
| batch/v1beta1 CronJob | batch/v1 | Every scheduled job |
| apiextensions.k8s.io/v1beta1 CRD | apiextensions.k8s.io/v1 | Every CRD definition |
| rbac.authorization.k8s.io/v1beta1 | rbac.authorization.k8s.io/v1 | RBAC resources |
Teams with Helm charts, Terraform modules, and CI/CD pipelines built against beta API versions had to update their manifests. This was the moment that finally drove home the message: beta APIs in Kubernetes are not stable — they will be removed.
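The Ingress migration was more than a version-string bump: the v1 schema restructured backends, made pathType mandatory, and moved the ingress class from an annotation into the spec. A minimal migrated manifest (names and host are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress                 # hypothetical
spec:
  ingressClassName: nginx           # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix            # required in v1
        backend:
          service:                  # v1beta1 used serviceName/servicePort here
            name: api
            port:
              number: 8080
```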
Also in 1.22:
- Server-Side Apply stable: Apply semantics moved server-side — field ownership tracking, conflict detection, and merge strategies are handled by the API server rather than client-side kubectl
- Memory manager beta: Better NUMA-aware memory allocation for latency-sensitive workloads
- Bound Service Account Token Volumes stable: Time-limited, audience-bound tokens for pods — replacing the long-lived, cluster-wide service account tokens that were a persistent security concern
# Bound service account token — expires, audience-restricted
# Projected volume mounts a time-limited token (default 1h expiry)
volumes:
- name: token
projected:
sources:
- serviceAccountToken:
audience: api
expirationSeconds: 3600
path: token
The bound token change was significant from a security perspective: previously, a service account token extracted from a pod would be valid indefinitely, for any audience. Projected tokens expire and are tied to a specific audience.
Pod Security Admission (alpha in Kubernetes 1.22, GA in 1.25)
The replacement for PodSecurityPolicy was Pod Security Admission — an admission controller built into the API server (no webhook required) that enforces the three Pod Security Standards at the namespace level:
# Namespace-level security enforcement
apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/enforce-version: v1.25
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: v1.25
The three modes:
- enforce: Reject pods that violate the policy
- audit: Allow the pod but add an audit annotation
- warn: Allow the pod and send a warning to the client
Pod Security Admission is deliberately simpler than PSP. It does less — it enforces three fixed profiles, not arbitrary rules. For arbitrary policy, you still need OPA/Gatekeeper or Kyverno. But the simplicity means it works reliably, with no authorization edge cases.
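In practice, a pod passes the restricted profile only if it opts into a handful of hardening fields. A sketch of the usual minimum (the pod name and image are hypothetical; see the Pod Security Standards documentation for the full field list):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                  # hypothetical
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true                # the container must not run as UID 0
    seccompProfile:
      type: RuntimeDefault            # restricted requires an explicit seccomp profile
  containers:
  - name: app
    image: ghcr.io/org/app:v1.0.0     # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```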
Kubernetes 1.23 — Dual-Stack Stable, HPA v2 Stable (December 2021)
- IPv4/IPv6 dual-stack stable: Pods and Services can have both IPv4 and IPv6 addresses — critical for organizations running mixed-stack networks or migrating from IPv4 to IPv6
- HPA v2 stable: Horizontal Pod Autoscaler with support for multiple metrics (CPU, memory, custom metrics from Prometheus, external metrics). Scale on Prometheus metrics, not just CPU:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: api-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api
minReplicas: 2
maxReplicas: 20
metrics:
- type: Pods
pods:
metric:
name: http_requests_per_second
target:
type: AverageValue
        averageValue: "1000"
- FlexVolume deprecated (in favor of CSI): Another step in moving storage drivers out of the core tree
The Log4Shell Moment (December 2021)
Log4Shell (CVE-2021-44228) hit on December 9, 2021. The vulnerability allowed unauthenticated remote code execution in any Java application using Log4j 2.x. The blast radius was enormous — Log4j was in everything.
For Kubernetes operators, Log4Shell crystallized several operational realities:
Inventory problem: Do you know which of your pods is running a Java application? Do you know which version of Log4j it includes? Without an SBOM pipeline and admission-time image scanning, you probably don’t have a reliable answer.
Patch velocity problem: Once you know which images are vulnerable, how quickly can you rebuild and redeploy? Organizations with GitOps pipelines and image update automation (Flux’s image reflector, ArgoCD Image Updater) could respond in hours. Organizations without this infrastructure measured response time in days.
Runtime detection problem: Can you detect exploitation attempts in real time? Falco rules for Log4Shell JNDI lookup patterns were available within hours of disclosure — but only organizations already running Falco could use them.
Log4Shell made the case for supply chain security, image scanning, SBOM generation, and runtime detection tooling more effectively than any conference talk.
Sigstore and the Supply Chain Response
In 2021, Sigstore reached a point where its tooling — cosign (image signing), rekor (transparency log), fulcio (keyless signing via OIDC) — was production-ready.
The keyless signing model was significant: instead of managing long-lived signing keys (which themselves become a supply chain risk), fulcio issues short-lived certificates tied to an OIDC identity (a GitHub Actions workflow, a GitLab CI job). The signature proves that a specific workflow built the image.
# Sign an image as part of CI (keyless, OIDC-based)
cosign sign --yes ghcr.io/org/app:v1.0.0
# Verify before deploying
cosign verify \
--certificate-identity-regexp "https://github.com/org/app/.github/workflows/build.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
ghcr.io/org/app:v1.0.0
Policy engines (OPA/Gatekeeper, Kyverno) could be configured to reject pods using unsigned or unverified images at admission time — closing the loop from build provenance to runtime enforcement.
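As a sketch of that admission-time loop, a Kyverno policy can require that images from an organization's registry carry a valid keyless signature. Field names follow Kyverno's image-verification API (check them against your Kyverno version); the registry pattern, issuer, and subject are illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images         # hypothetical
spec:
  validationFailureAction: Enforce    # reject, rather than just report, failing pods
  rules:
  - name: verify-keyless-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences: ["ghcr.io/org/*"]
      attestors:
      - entries:
        - keyless:                    # matches the cosign keyless flow shown above
            issuer: https://token.actions.githubusercontent.com
            subject: https://github.com/org/app/.github/workflows/build.yml@refs/heads/main
```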
Key Takeaways
- Dockershim deprecation in 1.20 was about removing the non-standard interface, not about dropping Docker image compatibility — containers built with Docker run on containerd or CRI-O without changes
- The API removals in 1.22 were operationally painful but necessary — beta APIs in Kubernetes are not production-stable commitments
- Pod Security Admission (PSP’s replacement) trades power for reliability — three fixed profiles enforced at the namespace level, built into the API server, no authorization edge cases
- SolarWinds and Log4Shell made supply chain security a board-level concern; Sigstore, SBOM, and admission-time image verification moved from “nice to have” to operational requirements
- Bound service account tokens (1.22 stable) addressed a persistent security gap: pod tokens that expire and are audience-restricted rather than long-lived cluster-wide credentials
What’s Next
← EP04: The Operator Era | EP06: The Runtime Reckoning →
Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com