Kubernetes Today: v1.33 to v1.35, In-Place Resize GA, and What Comes Next

Reading Time: 6 minutes


Introduction

More than a decade after the first commit, Kubernetes is not exciting in the way it was in 2015. That’s a compliment. The system is stable. The APIs are mature. The migrations — dockershim, PSP, cloud provider code — are behind us.

What the 1.33–1.35 cycle shows is a project focused on precision: removing edge cases, promoting long-running alpha features to stable, and making the scheduler, storage, and security model more correct rather than more powerful. That’s what a mature infrastructure platform looks like.

Here’s what happened and where the project is headed.


Kubernetes 1.33 — Sidecar Resize, In-Place Resize Beta (April 2025)

Code name: Octarine

In-Place Pod Vertical Scaling reaches Beta

After landing as alpha in 1.27, in-place pod resource resizing became beta in 1.33 — enabled by default via the InPlacePodVerticalScaling feature gate.

The capability: change CPU and memory requests/limits on a running container without terminating and restarting the pod.

# Resize a running container's CPU without restart — since 1.33, resource
# mutations on a running pod go through the resize subresource
kubectl patch pod api-pod-xyz --subresource resize --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/requests/cpu",
    "value": "2"
  },
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/limits/cpu",
    "value": "4"
  }
]'

# Verify the resize was applied
kubectl get pod api-pod-xyz -o jsonpath='{.status.containerStatuses[0].resources}'

Why this matters operationally: Before in-place resize, vertical scaling meant terminating the pod, losing in-memory state, waiting for a new pod to become ready. For databases with warm buffer pools, JVM applications with loaded heap caches, or any workload where startup cost is significant, this was a serious limitation. Vertical Pod Autoscaler (VPA) worked around it by restarting pods — acceptable for stateless workloads, problematic for stateful ones.
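Whether a given resize restarts the container is controlled per resource through the container's resizePolicy field. A minimal sketch (the container name and image are illustrative):

```yaml
spec:
  containers:
  - name: database
    image: postgres:16
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # apply CPU changes live
    - resourceName: memory
      restartPolicy: RestartContainer  # restart this container to apply memory changes
    resources:
      requests:
        cpu: "1"
        memory: 4Gi
      limits:
        cpu: "2"
        memory: 4Gi
```

NotRequired applies the change in place; RestartContainer restarts only that container, the safer choice for memory on runtimes that cannot shrink a heap that is already allocated.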

In 1.33, in-place resizing also works for sidecar containers; native sidecars themselves graduated to stable in the same release, so the two features combine.

Sidecar Containers — Full Maturity

Putting the two together: you can now vertically scale a service mesh proxy (an Envoy sidecar) without restarting the application pod. For high-traffic services where the proxy itself becomes the CPU bottleneck, this is directly actionable.


Gateway API v1.4 (October 2025)

Gateway API continued its rapid iteration with v1.4:

BackendTLSPolicy (Standard channel): Configure TLS between the gateway and the backend service — not just TLS termination at the gateway, but end-to-end encryption:

apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: api-backend-tls
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: api-service
  validation:
    caCertificateRefs:
    - name: internal-ca
      group: ""
      kind: ConfigMap
    hostname: api.internal.corp

Gateway Client Certificate Validation: The gateway can now validate client certificates — mutual TLS for ingress traffic, not just between services.

TLSRoute to Standard: TLS routing (based on SNI, not HTTP host headers) graduated to the standard channel — enabling TCP workloads with TLS passthrough through the Gateway API model.

ListenerSet: Group multiple Gateway listeners — useful for shared infrastructure where multiple teams need to attach routes to the same gateway without managing separate Gateway resources.


Kubernetes 1.34 — Scheduler Improvements, DRA Continues (August 2025)

The 1.34 release focused on the scheduler and Dynamic Resource Allocation:

DRA structured parameters stabilization: The Dynamic Resource Allocation API matured its parameter model — resource drivers can expose structured claims that the scheduler understands, enabling topology-aware placement of GPU workloads:

apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
      selectors:
      - cel:
          expression: device.attributes["nvidia.com/gpu-product"].string() == "A100-SXM4-80GB"
      count: 2

Scheduler QueueingHint stable: Plugins can now tell the scheduler when to re-queue a pod for scheduling — instead of the scheduler periodically retrying all unschedulable pods, plugins signal when relevant cluster state has changed. This significantly reduces scheduler CPU consumption in large clusters with many unschedulable pods.

Fine-grained node authorization improvements: Kubelets can now be restricted from accessing Service resources they don’t need — further reducing the blast radius of a compromised kubelet.


Kubernetes 1.35 — In-Place Resize GA, Memory Limits Unlocked (December 2025)

In-Place Pod Vertical Scaling Graduates to Stable

After alpha in 1.27 and beta in 1.33, in-place resize graduated to GA in 1.35. Two significant improvements accompanied GA:

Memory limit decreases now permitted: Previously, you could increase memory limits in-place but not decrease them. The restriction existed because the kernel doesn’t immediately reclaim memory when the limit is lowered — the OOM killer would need to run. 1.35 lifts this restriction with proper handling: the kernel is instructed to reclaim, and the pod status reflects the resize progress.

Pod-Level Resources (alpha in 1.35): Specify resource requests and limits at the pod level rather than per-container — with in-place resize support. Useful for init containers and sidecar patterns where total pod resources matter more than per-container allocation.

spec:
  # Pod-level resources (alpha): total budget for all containers
  resources:
    requests:
      cpu: "4"
      memory: "8Gi"
  initContainers:
  - name: log-collector
    image: fluentbit:latest
    restartPolicy: Always  # native sidecar: an init container that keeps running
  containers:
  - name: application
    image: myapp:latest
    # No per-container resources; the pod-level budget applies

Other 1.35 Highlights

Topology Spread Constraints improvements: Better handling of unschedulable scenarios — whenUnsatisfiable: ScheduleAnyway now has smarter fallback behavior.

VolumeAttributesClass stable: Change storage performance characteristics (IOPS, throughput) of a PersistentVolume without re-provisioning — the storage equivalent of in-place pod resize.

# Change volume IOPS without re-provisioning
kubectl patch pvc database-pvc --type='merge' -p='
  {"spec": {"volumeAttributesClassName": "high-performance"}}'
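The referenced class is itself a small object. A sketch, assuming the v1 API after GA and the AWS EBS CSI driver (parameter names are driver-specific):

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: high-performance
driverName: ebs.csi.aws.com   # the CSI driver that applies the new attributes
parameters:
  iops: "16000"
  throughput: "600"
```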

Job success policy improvements: Declare a Job successful when a subset of pods complete successfully — for distributed training jobs where not all workers need to finish.
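A hedged sketch of what such a policy looks like. successPolicy requires an Indexed Job, and the image name here is hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: distributed-training
spec:
  completions: 8
  parallelism: 8
  completionMode: Indexed        # successPolicy requires an Indexed Job
  successPolicy:
    rules:
    - succeededIndexes: "0"      # Job succeeds once the leader (index 0) completes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: training-worker:latest   # hypothetical image
```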


What’s in Kubernetes 1.36 (April 22, 2026)

Kubernetes 1.36 is on track for release on April 22, 2026. Based on enhancement tracking and the KEP (Kubernetes Enhancement Proposal) pipeline, expected highlights include:

  • DRA continuing toward stable
  • Pod-level resources moving to beta
  • Scheduler improvements for AI/ML workload placement
  • Further Gateway API integration as core networking model

The project has reached a rhythm: three releases per year, each focused on advancing a predictable set of features through alpha → beta → stable. The drama of the 2019–2022 period (PSP, dockershim, API removals) is behind it.


The State of the Ecosystem in 2026

Control Plane Deployment Models

| Model | Examples | Best For |
|---|---|---|
| Managed (cloud provider) | GKE, EKS, AKS | Most organizations; no control plane ops |
| Self-managed | kubeadm, k3s, Talos | Air-gapped, on-prem, specific compliance requirements |
| Managed (platform) | Rancher, OpenShift | Enterprises that need multi-cluster management + vendor support |

CNI Landscape

| CNI | Model | Notable Feature |
|---|---|---|
| Cilium | eBPF | kube-proxy replacement, network policy at kernel level, Hubble observability |
| Calico | eBPF or iptables | BGP-based networking, hybrid cloud routing |
| Flannel | VXLAN/host-gw | Simple, low overhead, no network policy |
| Weave | Mesh overlay | Easy multi-host setup (largely unmaintained since Weaveworks shut down in 2024) |

eBPF-based CNIs (Cilium, Calico in eBPF mode) are now the default recommendation for production clusters. The iptables era of Kubernetes networking is ending.

Security Stack in 2026

A hardened Kubernetes cluster in 2026 runs:

Cluster provisioning:    Cluster API + GitOps (Flux/ArgoCD)
Admission control:       Pod Security Admission (restricted) + Kyverno or OPA/Gatekeeper
Runtime security:        Falco (eBPF-based syscall monitoring)
Network security:        Cilium NetworkPolicy + Cilium Cluster Mesh for multi-cluster
Image security:          Cosign signing in CI + admission webhook for signature verification
Secret management:       External Secrets Operator → HashiCorp Vault or cloud KMS
Observability:           Prometheus + Grafana + Hubble (network flows) + OpenTelemetry

The Permanent Principles That Haven’t Changed

Looking across twelve years and 35 minor versions, some things have not changed:

The API as the universal interface: Everything in Kubernetes is a resource. This remains the most important architectural decision — it makes every tool, every controller, every GitOps system work with the same model.

Reconciliation loops: Every Kubernetes controller watches actual state and drives it toward desired state. The controller pattern from 2014 is unchanged. CRDs and Operators are just more instances of it.

Labels and selectors: The flexible grouping mechanism from 1.0 is still the primary way Kubernetes components find each other. Services find pods. HPA finds Deployments. Operators find their managed resources.

Declarative, not imperative: You describe what you want. Kubernetes figures out how to achieve and maintain it. This principle, inherited from Borg’s BCL configuration, underlies everything from Deployments to Crossplane’s cloud resource management.


What’s Coming: The Next Five Years

WebAssembly on Kubernetes: The Wasm ecosystem (wasmCloud, SpinKube) is building toward running WebAssembly workloads as first-class Kubernetes pods — near-native performance, smaller images, stronger isolation than containers. Still early, but gaining real adoption.

AI inference as infrastructure: LLM serving is becoming a cluster primitive. Tools like KServe and vLLM on Kubernetes are moving from research to production. The scheduler, resource model, and networking will continue adapting to inference workload patterns.

Confidential computing: AMD SEV, Intel TDX, and ARM CCA provide hardware-level memory encryption for pods. The RuntimeClass mechanism and ongoing kernel work are making confidential Kubernetes workloads operational rather than experimental.

Leaner distributions: k3s, k0s, Talos, and Flatcar-based minimal Kubernetes distributions are growing in adoption for edge, IoT, and resource-constrained environments. The pressure is toward smaller, more auditable control planes.


Key Takeaways

  • In-place pod vertical scaling went from alpha (1.27) to stable (1.35) — live CPU and memory resize without pod restart changes the economics of stateful workload management
  • Gateway API v1.4 completes the ingress replacement story: BackendTLSPolicy, client certificate validation, and TLSRoute in standard channel
  • VolumeAttributesClass stable (1.35): Change storage performance in-place — the storage parallel to pod resource resize
  • The eBPF era of Kubernetes networking is established: Cilium as default CNI in GKE, growing in EKS/AKS, replacing iptables-based kube-proxy
  • The Kubernetes project in 2026 is focused on precision — promoting mature features to stable, reducing edge cases, improving scheduler efficiency — not adding new abstractions
  • WebAssembly, confidential computing, and AI inference scheduling are the frontiers to watch

Series Wrap-Up

| Era | Defining Change |
|---|---|
| 2003–2014 | Borg and Omega build the playbook internally at Google |
| 2014–2016 | Kubernetes 1.0, CNCF, and winning the container orchestration wars |
| 2016–2018 | RBAC stable, CRDs, cloud providers all-in on managed K8s |
| 2018–2020 | Operators, service mesh, OPA/Gatekeeper — the extensibility era |
| 2020–2022 | Supply chain crisis, PSP deprecated, API removals, dockershim exit |
| 2022–2023 | Dockershim and PSP removed, eBPF networking takes over |
| 2023–2025 | GitOps standard, sidecar stable, DRA, AI/ML workloads |
| 2025–2026 | In-place resize GA, VolumeAttributesClass, Gateway API complete |

From 47,501 lines of Go in a 250-file GitHub commit to the operating system of the cloud — and still reconciling.



Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Kubernetes RBAC and AWS IAM: The Two-Layer Access Model for EKS

Reading Time: 9 minutes





TL;DR

  • Kubernetes RBAC and cloud IAM are separate authorization layers — strong cloud IAM with weak Kubernetes RBAC is still a vulnerable cluster
  • cluster-admin ClusterRoleBindings are the first thing to audit — a compromised pod with cluster-admin controls the entire cluster
  • Disable automountServiceAccountToken on pods that don’t call the Kubernetes API — most application pods don’t need it mounted
  • Use OIDC for human access instead of X.509 client certificates — client certs cannot be revoked without rotating the CA
  • Bind groups from IdP, not individual usernames — revocation propagates automatically when someone leaves
  • A ServiceAccount that can create pods or create rolebindings is a privilege escalation path: the same class of risk as iam:PassRole

The Big Picture

  TWO AUTHORIZATION LAYERS — NEITHER COMPENSATES FOR THE OTHER

  ┌─────────────────────────────────────────────────────────────────┐
  │  CLOUD IAM LAYER  (AWS IAM / GCP IAM / Azure RBAC)             │
  │  Controls: S3, DynamoDB, Lambda, RDS, cloud services           │
  │  Human: federated identity from IdP (SAML / OIDC)             │
  │  Machine: IRSA annotation → IAM role / GKE WI / AKS WI        │
  │  Audit: CloudTrail, GCP Audit Logs, Azure Monitor              │
  └─────────────────────────────────────────────────────────────────┘
           ↕ separate systems — no inheritance in either direction
  ┌─────────────────────────────────────────────────────────────────┐
  │  KUBERNETES RBAC LAYER  (within the cluster)                   │
  │  Controls: pods, secrets, deployments, configmaps, namespaces  │
  │  Human: OIDC groups → ClusterRoleBinding (or RoleBinding)      │
  │  Machine: ServiceAccount → Role / ClusterRole                  │
  │  Audit: kube-apiserver audit log                               │
  └─────────────────────────────────────────────────────────────────┘

  Attack path: exploit app pod → SA has cluster-admin → own the cluster
  Audit finding: cluster-admin on app SA, regardless of cloud IAM posture

Introduction

I spent a long time in Kubernetes environments thinking cloud IAM and Kubernetes RBAC were related in a way that meant securing one partially covered the other. They don’t. They’re separate authorization systems that happen to share infrastructure.

The moment this crystallized for me: I was auditing an EKS cluster for a fintech company. Their AWS IAM posture was actually quite good — least privilege roles, no wildcard policies, SCPs in place at the org level. I was about to give them a clean bill of health when I ran one command:

kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | {name:.metadata.name, subjects:.subjects}'

The output showed five ClusterRoleBindings to cluster-admin. Two of them bound it to service accounts in production namespaces. One of those service accounts was used by an application that processed customer transactions.

cluster-admin in Kubernetes is the equivalent of AdministratorAccess in AWS. An attacker who compromises a pod running as that service account doesn’t just have access to the application’s data. They have control of the entire cluster: reading every secret in every namespace, deploying arbitrary workloads, modifying RBAC bindings to create persistence.

None of this showed up in the AWS IAM audit. AWS IAM and Kubernetes RBAC are separate systems. Securing one tells you nothing about the other.


Kubernetes RBAC Architecture

Kubernetes RBAC works with four object types:

| Object | Scope | What It Does |
|---|---|---|
| Role | Single namespace | Defines permissions within one namespace |
| ClusterRole | Cluster-wide | Permissions across all namespaces, or for non-namespaced resources |
| RoleBinding | Single namespace | Binds a Role (or ClusterRole) to subjects, scoped to one namespace |
| ClusterRoleBinding | Cluster-wide | Binds a ClusterRole to subjects with cluster-wide scope |

Subjects — the identities that receive the binding — are:
– User: an external identity (Kubernetes has no native user objects; users come from the authenticator)
– Group: a group of external identities
– ServiceAccount: a Kubernetes-native machine identity, namespaced

The scoping matters. A ClusterRole defines what permissions exist. A RoleBinding applies that ClusterRole within a single namespace. A ClusterRoleBinding applies it everywhere. The same permissions, dramatically different blast radius.


Roles and ClusterRoles

# Role: read pods and their logs — scoped to the default namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" = core API group (pods, secrets, configmaps, etc.)
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
# ClusterRole: manage Deployments across all namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-manager
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

The verbs map to HTTP methods against the Kubernetes API: get reads a specific resource, list returns a collection, watch streams changes, create/update/patch/delete are mutations.

One that consistently surprises people: list on secrets returns the full Secret objects, values included, not just names and metadata. If a service account only needs to check whether a secret exists or read one known secret, grant get on that specific secret name. Avoid list on the secrets resource.
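The narrow alternative looks like this: resourceNames restricts the rule to specific named objects (note that get honors resourceNames, while list and watch cannot be restricted by name since the names aren't known at authorization time):

```yaml
# Allow reading one named secret only: no list, no other secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-db-credentials"]   # hypothetical secret name
  verbs: ["get"]
```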

The Wildcard Risk

# This is effectively cluster-admin in the default namespace — avoid
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Any * in RBAC rules is an audit finding. In practice I find wildcards most often in:
– Operator and controller service accounts (understandable, but worth reviewing)
– “Temporary” RBAC that became permanent
– Developer tooling given cluster-admin “because it was easier”

Run this to find all ClusterRoles with wildcard verbs:

kubectl get clusterroles -o json | \
  jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'

Bindings — Connecting Identities to Roles

# RoleBinding: alice can read pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-pod-reader
  namespace: default
subjects:
- kind: User
  name: alice@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# ClusterRoleBinding: Prometheus can read cluster-wide (monitoring use case)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-cluster-reader
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

An important pattern: a RoleBinding can reference a ClusterRole. This lets you define a role once at the cluster level (the ClusterRole) and bind it within specific namespaces through RoleBindings. The permissions are still scoped to the namespace where the RoleBinding lives. This is the right pattern for shared role definitions — define the permission set once, instantiate it with appropriate scope.
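A sketch of the pattern, reusing the deployment-manager ClusterRole defined earlier; the namespace and group name are illustrative:

```yaml
# RoleBinding referencing a ClusterRole: permissions apply only in "staging"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-deployers
  namespace: staging
subjects:
- kind: Group
  name: oidc:platform-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: deployment-manager   # defined once, cluster-wide; scoped here by the binding
  apiGroup: rbac.authorization.k8s.io
```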

Default to RoleBinding over ClusterRoleBinding for namespace-scoped work. ClusterRoleBinding should be reserved for genuinely cluster-wide operations: monitoring agents, network plugins, cluster operators, security tooling.


Service Accounts — The Machine Identity in Kubernetes

Every pod in Kubernetes runs as a service account. If you don’t specify one, it uses the default service account in the pod’s namespace.

The default service account is where many RBAC misconfigurations accumulate. When someone creates a RoleBinding without thinking about which SA to use, they often bind the permission to default. Now every pod in that namespace that doesn’t explicitly set a service account — including pods deployed by developers who aren’t thinking about RBAC — inherits that binding.

# Create a dedicated SA for each application
kubectl create serviceaccount app-backend -n production

# Check what any SA can currently do — use this in every audit
kubectl auth can-i --list --as=system:serviceaccount:production:app-backend -n production

# Check a specific action
kubectl auth can-i get secrets \
  --as=system:serviceaccount:production:app-backend -n production

kubectl auth can-i create pods \
  --as=system:serviceaccount:production:app-backend -n production

Disable Auto-Mounting the SA Token

By default, Kubernetes mounts the service account token into every pod at /var/run/secrets/kubernetes.io/serviceaccount/token. A pod that doesn’t need to call the Kubernetes API doesn’t need this token. Having it mounted increases the blast radius if the pod is compromised — the token can be used to call the K8s API with whatever RBAC permissions the SA has.

# Disable at the pod level
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  automountServiceAccountToken: false
  serviceAccountName: app-backend
  containers:
  - name: app
    image: my-app:latest
---
# Or at the service account level (applies to all pods using this SA)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
automountServiceAccountToken: false

For most application pods — anything that isn’t a Kubernetes operator, controller, or management tool — the K8s API token is unnecessary. Disable it.


Human Access to Kubernetes — Get Off Client Certificates

Kubernetes doesn’t manage human users natively. Authentication is delegated to an external mechanism. The most common approaches:

| Method | Notes |
|---|---|
| X.509 client certificates | Common for initial cluster setup; credentials are embedded in kubeconfig; cannot be revoked without rotating the CA |
| Static bearer tokens | Long-lived; avoid |
| OIDC via external IdP | Preferred for human access: supports SSO, MFA, and revocation via the IdP |
| Webhook auth | Flexible; requires custom infrastructure |

X.509 certificates are the bootstrap pattern. Every managed Kubernetes offering generates an admin kubeconfig with a client certificate. The problem: you can't revoke individual certificates without rotating the CA, because the kube-apiserver does not check certificate revocation lists. If you give human engineers access via client certificates, an engineer who leaves keeps cluster access until the certificate expires.
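You can see what is baked into such a certificate with openssl. A self-contained sketch: we mint a throwaway cert the way a cluster CA would (Kubernetes maps the CN to the username and O to groups), then inspect it:

```shell
# Generate a throwaway client cert: CN becomes the K8s username, O the group
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=alice/O=dev-team" \
  -keyout /tmp/alice.key -out /tmp/alice.crt 2>/dev/null

# Identity and expiry are fixed at issue time; nothing here can be revoked
openssl x509 -noout -subject -enddate -in /tmp/alice.crt
```

With a real kubeconfig you would extract the embedded cert the same way: `kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -subject -enddate`.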

OIDC is the right model. Configure the kube-apiserver to accept JWTs from your IdP, bind RBAC permissions to groups from the IdP, and revocation becomes “remove from IdP group” rather than “hope the certificate expires soon”:

# kube-apiserver flags for OIDC (managed clusters configure this via provider settings)
--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=my-cluster-client-id
--oidc-username-claim=email
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
# User's kubeconfig — uses an exec plugin to fetch an OIDC token
users:
- name: alice
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl-oidc-login
      args:
        - get-token
        - --oidc-issuer-url=https://dex.company.com
        - --oidc-client-id=kubernetes

With managed clusters:

# EKS: add IAM role as a cluster access entry (replaces the aws-auth ConfigMap)
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/DevTeamRole \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/DevTeamRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=namespace,namespaces=production,staging

# GKE: get credentials; IAM roles map to cluster permissions
gcloud container clusters get-credentials my-cluster --region us-central1
# roles/container.developer → edit permissions
# But: prefer namespace-scoped RoleBindings for fine-grained control rather than relying on broad GCP IAM roles

# AKS: bind Entra ID groups to Kubernetes RBAC
az aks get-credentials --name my-aks --resource-group rg-prod
kubectl create clusterrolebinding dev-team-view \
  --clusterrole=view \
  --group=ENTRA_GROUP_OBJECT_ID

Cloud IAM + Kubernetes RBAC: The Integration Points

EKS Pod Identity / IRSA (revisited)

The annotation on the Kubernetes ServiceAccount is the bridge:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/AppBackendRole

Kubernetes RBAC controls what the pod can do inside the cluster. The IAM role controls what the pod can do in AWS. Both must be explicitly granted; neither inherits from the other.
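On the AWS side, the trust relationship is what pins the role to that exact service account. A sketch of the trust policy (account ID and OIDC issuer ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/CLUSTER_ID"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.us-east-1.amazonaws.com/id/CLUSTER_ID:sub": "system:serviceaccount:production:app-backend",
        "oidc.eks.us-east-1.amazonaws.com/id/CLUSTER_ID:aud": "sts.amazonaws.com"
      }
    }
  }]
}
```

Without the sub condition, any service account in the cluster could assume the role.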

GKE Workload Identity

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    iam.gke.io/gcp-service-account: app-backend@PROJECT_ID.iam.gserviceaccount.com

AKS Workload Identity

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    azure.workload.identity/client-id: "MANAGED_IDENTITY_CLIENT_ID"
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: app-backend

RBAC Audit — What to Check First

# Start here: who has cluster-admin?
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | 
      {binding: .metadata.name, subjects: .subjects}'
# cluster-admin should bind to almost nobody — review every result

# Find ClusterRoles with wildcard permissions
kubectl get clusterroles -o json | \
  jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'

# What can the default SA do in each namespace?
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
  echo "=== $ns ==="
  # head keeps output manageable; don't grep lines out, resource names can contain "no"
  kubectl auth can-i --list --as=system:serviceaccount:${ns}:default -n ${ns} 2>/dev/null | head -15
done

# What can a specific SA do?
kubectl auth can-i --list \
  --as=system:serviceaccount:production:app-backend \
  -n production

# Check whether an SA can escalate — key risk indicators
kubectl auth can-i get secrets -n production \
  --as=system:serviceaccount:production:app-backend
kubectl auth can-i create pods -n production \
  --as=system:serviceaccount:production:app-backend
kubectl auth can-i create rolebindings -n production \
  --as=system:serviceaccount:production:app-backend

Creating pods and creating rolebindings are privilege escalation primitives. A service account that can create pods can run a pod with a different, more powerful SA. A service account that can create rolebindings can grant itself more permissions.
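The pod-creation path is concrete: with nothing more than create pods, an attacker can schedule a pod that runs under a more privileged SA that already exists in the namespace. Illustrative only; the privileged SA name is hypothetical:

```yaml
# A pod that borrows a more privileged service account's identity
apiVersion: v1
kind: Pod
metadata:
  name: escalate-demo
  namespace: production
spec:
  serviceAccountName: cluster-operator   # hypothetical privileged SA in the namespace
  automountServiceAccountToken: true     # that SA's API token lands in this pod
  containers:
  - name: shell
    image: alpine:3.20
    command: ["sleep", "infinity"]
```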

Useful Tools

# rbac-tool — visualize and analyze RBAC (install: kubectl krew install rbac-tool)
kubectl rbac-tool viz                              # generate a graph of all bindings
kubectl rbac-tool who-can get secrets -n production
kubectl rbac-tool lookup alice@example.com

# rakkess — access matrix for a subject
kubectl rakkess --sa production:app-backend

# audit2rbac — generate minimal RBAC from audit logs
audit2rbac --filename /var/log/kubernetes/audit.log \
  --serviceaccount production:app-backend

Common RBAC Misconfigurations

| Misconfiguration | Risk | Fix |
|---|---|---|
| cluster-admin bound to application SA | Full cluster takeover from compromised pod | Minimal ClusterRole; scope to namespace where possible |
| list or wildcard on secrets | Read all secrets in scope — includes credentials, API keys | Grant get on specific named secrets only |
| default SA with non-trivial permissions | Every pod in the namespace inherits the permission | Bind permissions to dedicated SAs; automountServiceAccountToken: false on default |
| ClusterRoleBinding for namespace-scoped work | Namespace work with cluster-wide permission | Always prefer RoleBinding; ClusterRoleBinding only for genuinely cluster-wide needs |
| Binding users by username string | Hard to revoke; doesn't sync with IdP | Bind groups from IdP; revocation propagates through group membership |
| SA can create pods or create rolebindings | Privilege escalation path | Audit and remove these from non-privileged SAs |

Framework Alignment

| Framework Reference | What It Covers Here |
|---|---|
| CISSP Domain 5 — Identity and Access Management | Kubernetes RBAC operates as a full IAM system at the platform layer, independent of cloud IAM |
| CISSP Domain 3 — Security Architecture | Two independent authorization layers (cloud + K8s) must each be designed and audited — one does not compensate for the other |
| ISO 27001:2022 5.15 Access control | Kubernetes RBAC Roles, ClusterRoles, and bindings implement access control within the container platform |
| ISO 27001:2022 5.18 Access rights | Service account provisioning, OIDC-based human access, and workload identity integration with cloud IAM |
| ISO 27001:2022 8.2 Privileged access rights | cluster-admin and wildcard RBAC bindings represent the highest-privilege grants in Kubernetes |
| SOC 2 CC6.1 | Kubernetes RBAC is the access control mechanism for the container platform layer in CC6.1 |
| SOC 2 CC6.3 | Binding revocation, SA token disabling, and OIDC group-based access removal satisfy CC6.3 requirements |

Key Takeaways

  • Kubernetes RBAC and cloud IAM are separate authorization layers — both must be secured; strong cloud IAM with weak K8s RBAC is still a vulnerable cluster
  • cluster-admin bindings are the first thing to audit in any cluster — the blast radius of a compromised pod with cluster-admin is the entire cluster
  • Disable automountServiceAccountToken on service accounts and pods that don’t call the Kubernetes API — most application pods don’t need it
  • Use OIDC for human access rather than client certificates; revocation via IdP is instant and reliable
  • Bind groups from IdP rather than individual usernames; revocation propagates automatically when someone leaves
  • A service account that can create pods or create rolebindings is a privilege escalation path — audit for these in every namespace

What’s Next

EP12 is the capstone: Zero Trust IAM — how all the concepts in this series come together into an architecture that assumes nothing is implicitly trusted, verifies everything explicitly, and limits blast radius through least privilege enforced at every layer.

Next: Zero trust access in the cloud

Get EP12 in your inbox when it publishes → linuxcent.com/subscribe

OIDC Workload Identity: Eliminate Cloud Access Keys Entirely

Reading Time: 12 minutes





TL;DR

  • Workload identity federation replaces static cloud access keys with short-lived tokens tied to runtime identity — no key to rotate, no secret to leak
  • The OIDC token exchange pattern is consistent across AWS (IRSA / Pod Identity), GCP (Workload Identity), and Azure (AKS Workload Identity) — learn one, translate the others
  • AWS EKS: use Pod Identity for new clusters; IRSA is the pattern for existing ones — both eliminate static keys
  • GCP GKE: --workload-pool at cluster level + roles/iam.workloadIdentityUser binding on the GCP service account
  • Azure AKS: federated credential on a managed identity + azure.workload.identity/use: "true" pod label
  • Cross-cloud federation works: an AWS IAM role can call GCP APIs without a GCP key file on the AWS side
  • Enforce IMDSv2 everywhere; pin OIDC trust conditions to specific service account names; give each workload its own identity

The Big Picture

  WORKLOAD IDENTITY FEDERATION — BEFORE AND AFTER

  ── STATIC CREDENTIALS (the broken model) ────────────────────────────────

  IAM user created → access key generated
         ↓
  Key distributed to pods / CI / servers → stored in Secrets, env vars, .env
         ↓
  Valid indefinitely — never expires on its own
         ↓
  Rotation is manual, painful, deferred ("there's a ticket for that")
         ↓
  Key proliferates across environments — you lose track of every copy
         ↓
  Leaked key → unlimited blast radius until someone notices and revokes it

  ── WORKLOAD IDENTITY FEDERATION (the current model) ─────────────────────

  No key created. No key distributed. No key to rotate.

  Workload starts → requests signed JWT from its native IdP
         │           (EKS OIDC issuer, GitHub Actions, GKE metadata server)
         ↓
  JWT carries workload claims: namespace, service account, repo, instance ID
         ↓
  Cloud STS / token endpoint validates JWT signature + trust conditions
         ↓
  Short-lived credential issued  (AWS STS: 1–12h  |  GCP/Azure: ~1h)
         ↓
  Credential expires automatically — nothing to clean up
         ↓
  Token stolen → usable for 1 hour maximum, audience-bound, not reusable

Workload identity federation is the architectural answer to static credential sprawl. The workload’s proof of identity is its runtime environment — the cluster it runs in, the repository it belongs to, the service account it uses. The cloud provider never issues a persistent secret. This episode covers how that exchange works across all three clouds and Kubernetes.


Introduction

Workload identity federation eliminates static cloud credentials by replacing them with short-lived tokens that the runtime environment generates and the cloud provider validates against a registered trust relationship. No key to distribute, no rotation schedule to maintain, no proliferation to track.

A while back I was reviewing a Kubernetes cluster that had been running in production for about two years. The team had done good work — solid app code, reasonable cluster configuration. But when I started looking at how pods were authenticating to AWS, I found what I find in roughly 60% of environments I look at.

Twelve service accounts. Twelve access key pairs. Keys created 6 to 24 months ago. Stored as Kubernetes Secrets. Mounted into pods as environment variables. Never rotated because “the app would need to be restarted” and nobody owned the rotation schedule. Two of the keys belonged to AWS IAM users who no longer worked at the company — the users had been deactivated, but the access keys were still valid because in AWS, access keys live independently of console login status.

When I asked who was responsible for rotating these, the answer I got was: “There’s a ticket for that.”

There’s always a ticket for that.

The engineering problem here isn’t that the team was careless. It’s that static credentials are fundamentally unmanageable at scale. Workload identity removes the problem at its root.


Why Static Credentials Are the Wrong Model for Machines

Before getting into solutions, let me be precise about why this is a security problem, not just an operational inconvenience.

Static credentials have four fundamental failure modes:

They don’t expire. An AWS access key created in 2022 is valid in 2026 unless someone explicitly rotates it. GitGuardian’s 2024 data puts the average time from secret creation to detection at 328 days. That’s almost a year of exposure window before anyone even knows.

They lose origin context. When an API call arrives at AWS with an access key, the authorization system can tell you what key was used — not whether it was used by your Lambda function, by a developer debugging something, or by an attacker using a stolen copy. Static credentials are context-blind.

They proliferate invisibly. One key, distributed to a team, copied into three environments, cached on developer laptops, stored in a CI/CD pipeline, pasted into a config file in a test environment that got committed. By the time you need to rotate it, you don’t know all the places it lives.

They are painful to rotate. Creating a new key, updating every place the old key lives, removing the old key — while ensuring nothing breaks during the transition — is a coordination exercise that organizations consistently defer. Every month the rotation doesn’t happen is another month of accumulated risk.

Workload identity solves all four by replacing persistent credentials with short-lived tokens that are generated from the runtime environment and verified by the cloud provider against a registered trust relationship.


The OIDC Exchange — What’s Actually Happening

All three major cloud providers have converged on the same underlying mechanism: OIDC token exchange.

Workload (pod, GitHub Actions runner, EC2 instance, on-prem server)
    │
    │  1. Request a signed JWT from the native identity provider
    │     (EKS OIDC server, GitHub's token.actions.githubusercontent.com,
    │      GKE metadata server, Azure IMDS)
    ▼
Native IdP issues a JWT. It contains claims about the workload:
    - What repository triggered this CI run
    - What Kubernetes namespace and service account this pod uses
    - What EC2 instance ID this request came from
    │
    │  2. Workload presents the JWT to the cloud STS / federation endpoint
    ▼
Cloud IAM evaluates:
    - Is the JWT signature valid? (verified against the IdP's public keys)
    - Does the issuer match a registered trust relationship?
    - Do the claims match the conditions in the trust policy?
    │
    │  3. If all checks pass: short-lived cloud credentials issued
    │     (AWS: temporary STS credentials, expiry 1-12 hours)
    │     (GCP: OAuth2 access token, expiry ~1 hour)
    │     (Azure: access token, expiry ~1 hour)
    ▼
Workload calls cloud API with short-lived credentials.
Credentials expire. Nothing to clean up. Nothing to rotate.

No static secret is stored anywhere. The workload’s identity is its runtime environment — the cluster it runs in, the repository it belongs to, the service account it uses. If someone steals the short-lived token, it expires in an hour. If someone tries to use a token for a different resource than it was issued for, the audience claim doesn’t match and it’s rejected.
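To make the JWT step concrete, here is a minimal sketch you can run locally. The payload is a fabricated example of the claims an EKS-issued token carries — not a real token — and the pipeline mimics how you would inspect the middle (claims) segment of any JWT:

```shell
# Fabricated JWT payload (the middle segment of header.payload.signature)
# illustrating the claims the cloud STS endpoint evaluates: iss, sub, aud.
PAYLOAD='{"iss":"https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE","sub":"system:serviceaccount:production:app-backend","aud":"sts.amazonaws.com"}'

# JWT segments are base64url-encoded with padding stripped; encode the same way
SEGMENT=$(printf '%s' "$PAYLOAD" | base64 | tr -d '=\n' | tr '+/' '-_')

# To decode a segment, restore the padding and map base64url back to base64
PAD=$(( (4 - ${#SEGMENT} % 4) % 4 ))
PADDED="$SEGMENT$(printf '%*s' "$PAD" '' | tr ' ' '=')"
printf '%s\n' "$PADDED" | tr '_-' '/+' | base64 -d
```

The same decode pipeline works on a real projected token’s middle segment — which is exactly what the trust policy conditions later in this episode match against.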


AWS: IRSA and Pod Identity for EKS

IRSA — The Original Pattern

IRSA (IAM Roles for Service Accounts) federates a Kubernetes service account identity with an AWS IAM role. Each pod’s service account is the proof of identity; AWS issues temporary credentials in exchange for the OIDC JWT.

# Step 1: get the OIDC issuer URL for your EKS cluster
OIDC_ISSUER=$(aws eks describe-cluster \
  --name my-cluster \
  --query "cluster.identity.oidc.issuer" \
  --output text)

# Step 2: register this OIDC issuer with IAM
# (openssl needs the bare host — strip the scheme and the /id/... path)
OIDC_HOST="${OIDC_ISSUER#https://}"; OIDC_HOST="${OIDC_HOST%%/*}"
aws iam create-open-id-connect-provider \
  --url "${OIDC_ISSUER}" \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list "$(openssl s_client -servername "${OIDC_HOST}" -connect "${OIDC_HOST}:443" </dev/null 2>/dev/null \
    | openssl x509 -fingerprint -sha1 -noout | cut -d= -f2 | tr -d ':')"

# Step 3: create an IAM role with a trust policy scoped to a specific service account
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_ID="${OIDC_ISSUER#https://}"

cat > irsa-trust.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_ID}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "${OIDC_ID}:sub": "system:serviceaccount:production:app-backend",
        "${OIDC_ID}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
EOF

aws iam create-role \
  --role-name app-backend-s3-role \
  --assume-role-policy-document file://irsa-trust.json

aws iam put-role-policy \
  --role-name app-backend-s3-role \
  --policy-name AppBackendPolicy \
  --policy-document file://app-backend-policy.json
# Step 4: annotate the Kubernetes service account with the role ARN
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-backend-s3-role

EKS’s mutating admission webhook (the open-source amazon-eks-pod-identity-webhook) injects two environment variables into any pod using this service account: AWS_WEB_IDENTITY_TOKEN_FILE pointing to a projected token, and AWS_ROLE_ARN. The AWS SDK reads these automatically. The application doesn’t know any of this is happening — it just calls S3 and it works, using credentials that were never stored anywhere and expire automatically.

The trust policy’s sub condition is the security boundary. system:serviceaccount:production:app-backend means: only pods in the production namespace using the app-backend service account can assume this role. A pod in a different namespace, even with the same service account name, gets a different sub claim and the assumption fails.

EKS Pod Identity — The Simpler Modern Approach

AWS released Pod Identity in late 2023 as a simpler alternative to IRSA. No OIDC provider setup, no manual trust policy with OIDC conditions:

# Enable the Pod Identity agent addon on the cluster
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name eks-pod-identity-agent

# Create the association — this replaces the OIDC trust policy setup
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace production \
  --service-account app-backend \
  --role-arn arn:aws:iam::123456789012:role/app-backend-s3-role

Same result, less ceremony. For new clusters, Pod Identity is the path I’d recommend. IRSA remains important to understand for the many existing clusters already using it.
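The IAM role itself still needs a trust policy, but it trusts the Pod Identity service principal rather than the cluster’s OIDC provider — a sketch of the shape (note the extra sts:TagSession action Pod Identity requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "pods.eks.amazonaws.com" },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}
```

Because the principal is a fixed AWS service, the same trust policy works for any cluster — the namespace/service-account scoping moves into the pod identity association instead.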

IAM Roles Anywhere — For On-Premises Workloads

Not everything runs in Kubernetes. For on-premises servers and workloads outside AWS, IAM Roles Anywhere issues temporary credentials to servers that present an X.509 certificate signed by a trusted CA:

# Register your internal CA as a trust anchor
# (x509CertificateData expects the PEM text itself, not a base64 of the file)
aws rolesanywhere create-trust-anchor \
  --name "OnPremCA" \
  --source "sourceType=CERTIFICATE_BUNDLE,sourceData={x509CertificateData=$(cat ca-cert.pem)}"

# Create a profile mapping the CA to allowed roles
aws rolesanywhere create-profile \
  --name "OnPremServers" \
  --role-arns "arn:aws:iam::123456789012:role/OnPremAppRole" \
  --trust-anchor-arns "${TRUST_ANCHOR_ARN}"

# On the on-prem server — exchange the certificate for AWS credentials
aws_signing_helper credential-process \
  --certificate /etc/pki/server.crt \
  --private-key /etc/pki/server.key \
  --trust-anchor-arn "${TRUST_ANCHOR_ARN}" \
  --profile-arn "${PROFILE_ARN}" \
  --role-arn "arn:aws:iam::123456789012:role/OnPremAppRole"

The server’s certificate (managed by your internal PKI or an ACM Private CA) is the proof of identity. No access key distributed to the server — just a certificate that your CA signed and that you can revoke through your existing certificate revocation infrastructure.


GCP: Workload Identity for GKE

For GKE clusters, Workload Identity is enabled at the cluster level and creates a bridge between Kubernetes service accounts and GCP service accounts:

# Enable Workload Identity on the cluster
gcloud container clusters update my-cluster \
  --workload-pool=my-project.svc.id.goog

# Enable on the node pool (required for the metadata server to work)
gcloud container node-pools update default-pool \
  --cluster=my-cluster \
  --workload-metadata=GKE_METADATA

# Create the GCP service account for the workload
gcloud iam service-accounts create app-backend \
  --project=my-project

SA_EMAIL="[email protected]"

# Grant the GCP SA the permissions it needs
gcloud storage buckets add-iam-policy-binding gs://app-data \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/storage.objectViewer"

# Create the trust relationship: K8s SA → GCP SA
gcloud iam service-accounts add-iam-policy-binding "${SA_EMAIL}" \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[production/app-backend]"
# Annotate the Kubernetes service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    iam.gke.io/gcp-service-account: [email protected]

When the pod makes a GCP API call using ADC (Application Default Credentials), the GKE metadata server intercepts the credential request. It validates the pod’s Kubernetes identity, checks the IAM binding, and returns a short-lived GCP access token. The GCP service account key file never exists. There’s nothing to protect, nothing to rotate, nothing to leak.
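The binding above hinges on one composed string. A quick sketch of how the IAM member that roles/iam.workloadIdentityUser matches against is built — project, namespace, and KSA name are the example values from this section:

```shell
# The workload identity pool is always PROJECT_ID.svc.id.goog; the principal
# GCP evaluates is pool[namespace/ksa-name], prefixed with serviceAccount:
PROJECT_ID="my-project"; NAMESPACE="production"; KSA="app-backend"
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA}]"
echo "$MEMBER"
# → serviceAccount:my-project.svc.id.goog[production/app-backend]
```

If the binding and the pod’s actual namespace/KSA disagree by even one character, the metadata server refuses to mint a token — which makes typos here a common first thing to check when Workload Identity “doesn’t work.”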


Azure: Workload Identity for AKS

Azure’s workload identity for Kubernetes replaced the older AAD Pod Identity approach — which required a DaemonSet, had known TOCTOU vulnerabilities, and was operationally fragile. The current implementation uses the OIDC pattern:

# Enable OIDC issuer and workload identity on the AKS cluster
az aks update \
  --name my-aks \
  --resource-group rg-prod \
  --enable-oidc-issuer \
  --enable-workload-identity

# Get the OIDC issuer URL for this cluster
OIDC_ISSUER=$(az aks show \
  --name my-aks --resource-group rg-prod \
  --query "oidcIssuerProfile.issuerUrl" -o tsv)

# Create a user-assigned managed identity for the workload
az identity create --name app-backend-identity --resource-group rg-identities
CLIENT_ID=$(az identity show --name app-backend-identity -g rg-identities --query clientId -o tsv)
PRINCIPAL_ID=$(az identity show --name app-backend-identity -g rg-identities --query principalId -o tsv)

# Grant the identity the access it needs
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/SUB_ID/resourceGroups/rg-prod/providers/Microsoft.Storage/storageAccounts/appstore

# Federate: trust the K8s service account from this cluster
az identity federated-credential create \
  --name aks-app-backend-binding \
  --identity-name app-backend-identity \
  --resource-group rg-identities \
  --issuer "${OIDC_ISSUER}" \
  --subject "system:serviceaccount:production:app-backend" \
  --audience "api://AzureADTokenExchange"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-backend
  namespace: production
  annotations:
    azure.workload.identity/client-id: "CLIENT_ID_HERE"
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    azure.workload.identity/use: "true"   # triggers token injection
spec:
  serviceAccountName: app-backend
  containers:
  - name: app
    image: my-app:latest
    # Azure SDK DefaultAzureCredential picks up the injected token automatically

Cross-Cloud Federation — When AWS Talks to GCP

The same OIDC mechanism works cross-cloud. An AWS Lambda or EC2 instance can call GCP APIs without any GCP service account key on the AWS side:

# GCP side: create a workload identity pool that trusts AWS
gcloud iam workload-identity-pools create "aws-workloads" --location=global

gcloud iam workload-identity-pools providers create-aws "aws-provider" \
  --location=global \
  --workload-identity-pool="aws-workloads" \
  --account-id="AWS_ACCOUNT_ID"

# Bind the specific AWS role to the GCP service account
gcloud iam service-accounts add-iam-policy-binding [email protected] \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/GCP_PROJ_NUM/locations/global/workloadIdentityPools/aws-workloads/attribute.aws_role/arn:aws:sts::AWS_ACCOUNT:assumed-role/MyAWSRole"

The AWS workload presents its STS-issued credentials to GCP’s token exchange endpoint. GCP verifies the AWS signature, checks the attribute mapping (only MyAWSRole from that AWS account), and issues a short-lived GCP access token. No GCP service account key is ever distributed to the AWS side.
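On the AWS side, the workload consumes this trust through a credential configuration file that GCP client libraries pick up via Application Default Credentials. It is normally generated with gcloud iam workload-identity-pools create-cred-config; an illustrative sketch of its approximate shape (all identifiers are placeholders — generate yours rather than hand-writing it):

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/GCP_PROJ_NUM/locations/global/workloadIdentityPools/aws-workloads/providers/aws-provider",
  "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "http://169.254.169.254/latest/meta-data/placement/availability-zone",
    "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials"
  }
}
```

Note what the file does not contain: a private key. It only tells the library where to fetch the AWS-signed identity proof and which GCP endpoints to exchange it at.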


The Threat Model — What Workload Identity Doesn’t Solve

Workload identity dramatically reduces the attack surface, but it doesn’t eliminate it:

  • Token theft from the container filesystem — the projected token is readable with container filesystem access. Mitigation: short TTL (default 1h); tokens are audience-bound — a K8s token can’t call Azure APIs
  • SSRF to the metadata service — an SSRF vulnerability can fetch credentials from the metadata endpoint. Mitigation: enforce IMDSv2 on AWS; use metadata server restrictions on GKE/AKS
  • Overpermissioned service account — workload identity doesn’t enforce least privilege; the SA can still be over-granted. Mitigation: one SA per workload; review permissions against actual usage
  • Trust policy too broad — an OIDC trust policy can allow any service account in a namespace. Mitigation: always pin to a specific SA name in the sub condition

The SSRF-to-metadata-service path deserves particular attention. IMDSv2 requires a PUT request to obtain a session token before any metadata can be read, which blocks most SSRF scenarios — a typical SSRF primitive can only issue GET requests. Enforce it:

# Enforce IMDSv2 at instance launch
aws ec2 run-instances \
  --metadata-options HttpTokens=required,HttpPutResponseHopLimit=1

# Enforce org-wide via SCP — no instance can launch without IMDSv2
{
  "Effect": "Deny",
  "Action": "ec2:RunInstances",
  "Resource": "arn:aws:ec2:*:*:instance/*",
  "Condition": {
    "StringNotEquals": {
      "ec2:MetadataHttpTokens": "required"
    }
  }
}

⚠ Production Gotchas

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 1 — Trust policy scoped to namespace, not service account ║
║                                                                      ║
║  A condition like "sub": "system:serviceaccount:production:*"        ║
║  grants any pod in the production namespace the ability to assume    ║
║  the role. A compromised or new workload in that namespace gets      ║
║  access automatically.                                               ║
║                                                                      ║
║  Fix: always pin the sub condition to the exact service account      ║
║  name. "system:serviceaccount:production:app-backend" — not a glob.  ║
╚══════════════════════════════════════════════════════════════════════╝

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 2 — Shared service accounts across workloads             ║
║                                                                      ║
║  Reusing one service account for multiple workloads saves setup      ║
║  time and creates a lateral movement path. A compromised workload    ║
║  that shares a service account with a payment processor has payment  ║
║  processor permissions.                                              ║
║                                                                      ║
║  Fix: one service account per workload. The overhead is low.         ║
║  The blast radius reduction is significant.                          ║
╚══════════════════════════════════════════════════════════════════════╝

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 3 — IMDSv1 still reachable after enabling IMDSv2        ║
║                                                                      ║
║  Enabling IMDSv2 on new instances doesn't affect existing ones.      ║
║  The SCP approach enforces it at the org level going forward, but    ║
║  existing instances need explicit remediation.                       ║
║                                                                      ║
║  Fix: audit existing instances for IMDSv1 exposure.                 ║
║  aws ec2 describe-instances --query                                  ║
║    "Reservations[].Instances[?MetadataOptions.HttpTokens!='required']║
║    .[InstanceId,Tags]"                                               ║
╚══════════════════════════════════════════════════════════════════════╝

Quick Reference

┌────────────────────────────────┬───────────────────────────────────────────────────────┐
│ Term                           │ What it means                                         │
├────────────────────────────────┼───────────────────────────────────────────────────────┤
│ Workload identity federation   │ OIDC-based exchange: runtime JWT → short-lived token  │
│ IRSA                           │ IAM Roles for Service Accounts — EKS + OIDC pattern   │
│ EKS Pod Identity               │ Newer, simpler IRSA replacement — no OIDC setup       │
│ GKE Workload Identity          │ K8s SA → GCP SA via workload pool + IAM binding       │
│ AKS Workload Identity          │ K8s SA → managed identity via federated credential    │
│ IAM Roles Anywhere             │ AWS temp credentials for on-prem via X.509 cert       │
│ IMDSv2                         │ Token-gated AWS metadata service — blocks SSRF        │
│ OIDC sub claim                 │ Workload's unique identity string — use for pinning   │
│ Projected service account token│ K8s-injected JWT — the OIDC token pods present to AWS │
└────────────────────────────────┴───────────────────────────────────────────────────────┘

Key commands:
┌────────────────────────────────────────────────────────────────────────────────────────┐
│  # AWS — list OIDC providers registered in this account                               │
│  aws iam list-open-id-connect-providers                                               │
│                                                                                        │
│  # AWS — list Pod Identity associations for a cluster                                 │
│  aws eks list-pod-identity-associations --cluster-name my-cluster                     │
│                                                                                        │
│  # AWS — verify what credentials a pod is actually using                              │
│  aws sts get-caller-identity   # run from inside the pod                              │
│                                                                                        │
│  # AWS — audit instances missing IMDSv2                                               │
│  aws ec2 describe-instances \                                                          │
│    --query "Reservations[].Instances[?MetadataOptions.HttpTokens!='required']          │
│    .[InstanceId]" --output text                                                        │
│                                                                                        │
│  # GCP — verify workload identity binding on a GCP service account                   │
│  gcloud iam service-accounts get-iam-policy SA_EMAIL                                  │
│                                                                                        │
│  # GCP — list workload identity pools                                                 │
│  gcloud iam workload-identity-pools list --location=global                            │
│                                                                                        │
│  # Azure — list federated credentials on a managed identity                           │
│  az identity federated-credential list \                                               │
│    --identity-name app-backend-identity --resource-group rg-identities                │
└────────────────────────────────────────────────────────────────────────────────────────┘

Framework Alignment

  • CISSP Domain 5 — Identity and Access Management: non-human identities dominate cloud environments; workload identity federation is the modern machine authentication pattern
  • CISSP Domain 1 — Security & Risk Management: static credential sprawl is a measurable, eliminable risk; workload identity removes it at the root
  • ISO 27001:2022 5.17 (Authentication information): managing machine credentials — workload identity replaces long-lived secrets with short-lived, environment-bound tokens
  • ISO 27001:2022 8.5 (Secure authentication): OIDC token exchange is the secure authentication mechanism for machine identities
  • ISO 27001:2022 5.18 (Access rights): service account provisioning and deprovisioning — workload identity ties access to the runtime environment, not a stored secret
  • SOC 2 CC6.1: workload identity federation is the preferred technical control for machine-to-cloud authentication
  • SOC 2 CC6.7: short-lived, audience-bound tokens restrict credential reuse across systems — addresses transmission and access controls

Key Takeaways

  • Static credentials for machine identities are the problem, not the solution — workload identity federation eliminates them at the root
  • The OIDC token exchange pattern is consistent across AWS (IRSA/Pod Identity), GCP (Workload Identity), and Azure (AKS Workload Identity) — learn one, the others are a translation
  • AWS EKS: use Pod Identity for new clusters; IRSA remains the pattern for existing ones — both eliminate static keys
  • GCP GKE: Workload Identity enabled at cluster level, SA annotation at the K8s service account level
  • Azure AKS: federated credential on the managed identity, azure.workload.identity/use: "true" label on pods
  • Cross-cloud federation works — an AWS IAM role can call GCP APIs without a GCP key file
  • Enforce IMDSv2 everywhere; pin OIDC trust conditions to specific service account names; apply least privilege to the underlying cloud identity

What’s Next

You’ve eliminated the static credential problem. The next question is: what happens when the IAM configuration itself is the vulnerability? AWS IAM privilege escalation goes into the attack paths — how iam:PassRole, iam:CreateAccessKey, and misconfigured trust policies turn IAM misconfigurations into full account compromise. If you’re designing or auditing cloud access control, you need to know these paths before an attacker finds them.

Next: AWS IAM Privilege Escalation: How iam:PassRole Leads to Full Compromise

Get EP08 in your inbox when it publishes → linuxcent.com/subscribe

The Runtime Reckoning: Dockershim Out, eBPF In, and PSP Finally Dies (2022–2023)

Reading Time: 6 minutes


Introduction

2022 is the year Kubernetes dealt with its legacy. The Docker shim that everyone had been warned about for two years was actually removed. PodSecurityPolicy — the broken security primitive that clusters had depended on since 1.3 — was deleted. And eBPF started displacing iptables as the networking substrate.

These weren’t additions to Kubernetes. They were the removal of technical debt accumulated over eight years. And the migrations they forced were the most operationally significant events since RBAC went stable.


Kubernetes 1.24 — Dockershim Removed (May 2022)

The dockershim was removed in 1.24. The deprecation had been announced in 1.20 (December 2020) — 18 months of warning. It didn’t matter. Operators who hadn’t migrated still scrambled.

The actual migration was straightforward for most environments:

# On each node, before upgrading to 1.24:
# 1. Install containerd
apt-get install -y containerd.io

# 2. Configure containerd
containerd config default | tee /etc/containerd/config.toml
# Edit: set SystemdCgroup = true in runc options

# 3. Update kubelet to use containerd socket
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Add: --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# 4. Restart
systemctl daemon-reload && systemctl restart kubelet

What the migration revealed: how many teams were depending on the Docker socket being present on nodes. Tools that mounted /var/run/docker.sock to talk to the Docker daemon — build tools, CI agents, some monitoring agents — broke. The ecosystem had to adapt to nerdctl (containerd’s Docker-compatible CLI), Kaniko, Buildah, or mounting the containerd socket instead.
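For tools that genuinely needed runtime access, the fix was usually a one-line change to the volume mount plus a CRI-aware client — a sketch (volume name is illustrative):

```yaml
# Before: hostPath /var/run/docker.sock consumed by the Docker CLI.
# After: the containerd socket, consumed by crictl or nerdctl instead.
volumes:
- name: runtime-sock
  hostPath:
    path: /run/containerd/containerd.sock
    type: Socket
```

The deeper lesson was for build tooling: daemonless builders like Kaniko and Buildah avoid mounting any runtime socket at all, which is the safer pattern going forward.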

Other 1.24 highlights:
Beta APIs disabled by default: New beta features would no longer be enabled automatically. This reversed a long-standing policy that had caused too many production clusters to accidentally pick up unstable features
gRPC probes stable: Liveness and readiness probes could now use gRPC health checks natively — no more writing HTTP wrapper endpoints for gRPC services
Non-graceful node shutdown alpha: Handle the case where the node disappears without the kubelet getting to gracefully terminate pods — stateful workloads on node failure
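The native gRPC probe support from the list above replaces the HTTP-wrapper workaround with a first-class probe definition — a sketch, with port and timings as illustrative values:

```yaml
# Native gRPC liveness probe (stable in 1.24+): the kubelet calls the
# standard grpc.health.v1.Health/Check service on the given port.
livenessProbe:
  grpc:
    port: 9090
  initialDelaySeconds: 5
  periodSeconds: 10
```

The service under test must implement the standard gRPC health checking protocol; before 1.24, teams typically shipped a sidecar or an HTTP shim just to answer probes.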


Kubernetes 1.25 — PSP Removed (August 2022)

PodSecurityPolicy was deleted in 1.25. Every cluster that was still using PSP had to migrate to Pod Security Admission (or OPA/Gatekeeper or Kyverno) before upgrading.

Pod Security Admission was GA in 1.25, ready to take over:

# Enforce restricted policy on a namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.25

# Test a pod against the policy without enforcing
kubectl label namespace staging \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted

The dry-run modes (warn, audit) were critical for migration: you could enable them on namespaces and watch what would have been rejected before switching to enforce mode.

The real migration challenge was existing workloads running as root, with privileged security contexts, or with hostPath mounts. The restricted policy rejected all of these. Production applications that had been running for years under permissive PSP policies now failed validation.
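Bringing such a workload into compliance usually meant adding an explicit container security context — a sketch of the fields the restricted profile requires:

```yaml
# Minimum container securityContext to pass the restricted Pod Security level
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```

Images built to run as root still fail at runtime even with this manifest — the fix often had to reach back into the Dockerfile (USER directive, file ownership), not just the pod spec.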

Also in 1.25:
Ephemeral containers stable: Attach a debug container to a running pod without restarting it

# Debug a running pod with no shell
kubectl debug -it nginx-pod --image=busybox:latest --target=nginx
  • CSI ephemeral volumes stable
  • cgroups v2 (unified hierarchy) support stable: Enables memory QoS, improved resource accounting

Kubernetes 1.26 — Dynamic Resource Allocation, Storage (December 2022)

1.26 focused on the scheduler and storage:
Dynamic Resource Allocation alpha: A generalization of the device plugin API — allows requesting complex resources (GPUs, FPGAs, network adapters) with scheduling constraints. The foundation for AI/ML workload scheduling on heterogeneous hardware
CrossNamespaceVolumeDataSource alpha: Clone a PVC across namespaces — enables namespace-based data isolation while sharing data sets
Pod scheduling readiness alpha: A pod can declare that it’s not ready to be scheduled until external conditions are met (data pre-loading complete, license validated, etc.)
Removal of in-tree cloud provider code (beta, continued): A long-running effort to move cloud-provider-specific code out of the core Kubernetes binary
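Pod scheduling readiness from the list above is expressed through scheduling gates — a sketch, with the gate name and image as illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  # The scheduler ignores this pod until every gate is removed by an
  # external controller (e.g. once data pre-loading completes)
  schedulingGates:
  - name: example.com/data-preload
  containers:
  - name: worker
    image: my-trainer:latest
```

Before gates existed, the workaround was an unsatisfiable nodeSelector that a controller patched away later — scheduling gates make the intent explicit and visible in pod status.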

The Dynamic Resource Allocation feature deserves emphasis: it’s the mechanism that makes Kubernetes a serious platform for GPU scheduling in AI/ML workloads. Device plugins (the prior mechanism) had limitations — a pod either got a GPU or it didn’t. DRA allows richer resource semantics: this pod needs two GPUs on the same PCIe bus, or this pod needs a specific GPU model.


eBPF Reshapes Kubernetes Networking

The most significant architectural shift in Kubernetes networking during 2022–2023 wasn’t a Kubernetes release feature. It was the adoption of eBPF-based CNI solutions — primarily Cilium — as the default networking layer in major managed Kubernetes offerings.

The iptables problem: kube-proxy has been using iptables rules to implement Service routing since Kubernetes 1.0. Every Service adds iptables rules to every node. At 10,000 services, the iptables rule table on each node has hundreds of thousands of rules. Traversing these rules on every packet is O(n). Updating them requires locking and flushing. At scale, iptables becomes a bottleneck.

The eBPF solution: Cilium replaces kube-proxy entirely, implementing Service routing using eBPF maps — hash tables in kernel memory. Service lookup is O(1). Rule updates don’t require locking. Network policy enforcement happens in the kernel, before packets even reach the application.

# Check if Cilium is running in kube-proxy replacement mode
cilium status | grep "KubeProxy replacement"
# KubeProxy replacement:    True

# eBPF-based service map — inspect directly
cilium service list
# ID   Frontend          Service Type   Backend
# 1    10.96.0.1:443     ClusterIP      10.0.0.5:6443
# 2    10.96.0.10:53     ClusterIP      10.0.1.2:53, 10.0.1.3:53

Network policy enforcement: Cilium’s NetworkPolicy implementation enforces rules at the eBPF layer — packets that would be dropped by policy are dropped before they ever leave the kernel, before they touch the pod’s network stack. This is both faster and more secure than userspace enforcement.

Hubble: Cilium’s observability layer — built on the same eBPF probes — provides real-time network flow visibility, HTTP layer observability (which service called which endpoint, response codes), and DNS query logging without any application changes.

Major adoption milestones:
– GKE’s default CNI became Cilium (Dataplane V2) in 2021
– Amazon EKS added Cilium support
– Azure AKS enabled Cilium-based networking
– Google’s Autopilot clusters use Cilium exclusively


Kubernetes 1.27 — Graceful Failure, In-Place Resize Alpha (April 2023)

  • In-Place Pod Vertical Scaling alpha: Change the CPU and memory resources of a running container without restarting the pod. For databases, JVM-based applications, and anything with warm caches, live resizing is a significant operational improvement
# Resize a container's CPU without restart
kubectl patch pod database-pod --type='json' \
  -p='[{"op": "replace", "path": "/spec/containers/0/resources/requests/cpu", "value": "2"}]'
  • SeccompDefault stable: Enable the default seccomp profile (RuntimeDefault) cluster-wide — a meaningful reduction in the default syscall attack surface for all pods
  • Mutable scheduling directives for Jobs stable: Change node affinity and tolerations of pending (not yet running) Job pods
  • ReadWriteOncePod PersistentVolume access mode beta: A volume can only be mounted by a single pod at a time — the correct semantic for databases with file-level locking requirements (the mode reached GA in 1.29)
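Opting a claim into the single-pod mode is a one-line change. A minimal sketch (the claim name and size are illustrative, and the backing CSI driver must support the mode):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data            # illustrative name
spec:
  accessModes:
  - ReadWriteOncePod             # at most one pod may mount this volume at a time
  resources:
    requests:
      storage: 50Gi
```

Compared to ReadWriteOnce (which restricts to one node, not one pod), this closes the gap where two pods co-scheduled on the same node could both mount the volume.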

The 1.5 Million Lines Removed: Cloud Provider Code Migration

One of the largest ongoing engineering efforts in Kubernetes 1.26–1.31 was the removal of in-tree cloud provider code. Every major cloud provider (AWS, Azure, GCP, OpenStack, vSphere) had code compiled directly into the Kubernetes control plane binaries.

The result: the Kubernetes API server and controller manager binaries contained code for AWS EBS volumes, GCE persistent disks, Azure managed disks, OpenStack Cinder — regardless of which cloud you were running on.

The migration moved this code to external Cloud Controller Managers (CCM) — separate processes that communicate with the API server like any other controller:

Before: kube-controller-manager (monolithic, includes all cloud providers)
After:  kube-controller-manager (generic) + cloud-controller-manager (cloud-specific, external)

By 1.31, approximately 1.5 million lines of code had been removed from the core binaries, reducing binary sizes by approximately 40%. This is the largest refactor in Kubernetes history.


Gateway API: Replacing Ingress (2022–2023)

The Ingress API, which graduated to stable in 1.19, has fundamental limitations:
– No support for TCP/UDP routing (HTTP only)
– No traffic splitting between multiple backends
– No header-based routing
– Vendor-specific features implemented via annotations (not portable)
– No RBAC granularity within a single Ingress resource

Gateway API (kubernetes-sigs/gateway-api) was designed as the successor, with a role-based model:

GatewayClass  → Managed by infrastructure provider (cluster admin)
Gateway       → Managed by cluster operators
HTTPRoute     → Managed by application developers
# Gateway — cluster operator configures the load balancer
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: tls-cert

---
# HTTPRoute — application team configures routing
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: production-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/v2
    backendRefs:
    - name: api-v2-service
      port: 8080
      weight: 90
    - name: api-v3-canary
      port: 8080
      weight: 10

Gateway API reached GA (v1.0) in October 2023, with the core HTTPRoute, Gateway, and GatewayClass resources graduating to stable.


Key Takeaways

  • Dockershim removal in 1.24 completed the CRI migration that started in 1.5 — the Kubernetes runtime interface is now clean, with containerd and CRI-O as the standard runtimes
  • PSP removal in 1.25 forced a migration that should have happened years earlier; Pod Security Admission’s simplicity is a feature, not a limitation
  • eBPF-based networking (Cilium, Dataplane V2) is now the default in GKE and increasingly in EKS and AKS — O(1) service routing and kernel-level policy enforcement replace the iptables approach that dated to Kubernetes 1.0
  • Dynamic Resource Allocation (1.26 alpha) is the foundation for AI/ML GPU scheduling — more capable than device plugins and designed for heterogeneous hardware requests
  • Gateway API reaching GA replaced the annotation-driven, non-portable Ingress API with a role-oriented, extensible routing API
  • The cloud provider code removal (1.5M lines) is the largest refactor in Kubernetes history, a prerequisite for a maintainable, leaner core

What’s Next

← EP05: Security Hardens | EP07: Platform Engineering Era →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Security Hardens: Supply Chain, Pod Security, and the API Cleanup (2020–2022)

Reading Time: 6 minutes


Introduction

The 2020–2022 period redefined what “secure Kubernetes” meant. A global pandemic moved workloads to cloud-native infrastructure faster than security practices could follow. SolarWinds happened. Log4Shell happened. The software supply chain became a crisis.

At the same time, the Kubernetes project was doing something it had been reluctant to do: removing APIs and features, including PodSecurityPolicy — the primary security primitive that most enterprise clusters depended on. The replacement was simpler, but the migration was not.


Kubernetes 1.19 — LTS Behavior, Ingress Stable (August 2020)

1.19 extended the support window to one year (from nine months). This was an acknowledgment that enterprise organizations couldn’t upgrade four times per year — a common complaint from operations teams.

  • Ingress graduated to stable: networking.k8s.io/v1 — after years as a beta resource, Ingress finally had a stable API
  • Immutable ConfigMaps and Secrets to beta: Configuration protection becomes broadly available
  • EndpointSlices enabled by default: kube-proxy now consumes EndpointSlices instead of the single large Endpoints object that caused control plane stress at scale (10,000+ endpoints for a single service); sharding the pod-to-service mappings avoids that bottleneck. The API itself graduated to GA in 1.21
  • Structured logging (alpha): Machine-parseable log output from Kubernetes control plane components — a prerequisite for reliable SIEM integration
# EndpointSlice: distributed representation of service endpoints
kubectl get endpointslices -n production -l kubernetes.io/service-name=api-service
NAME                  ADDRESSTYPE   PORTS   ENDPOINTS                                   AGE
api-service-abc12     IPv4          8080    10.0.1.5,10.0.1.6,10.0.1.7 + 47 more...   2d
api-service-def34     IPv4          8080    10.0.2.1,10.0.2.2,10.0.2.3 + 47 more...   2d

Kubernetes 1.20 — Dockershim Deprecated (December 2020)

The announcement in 1.20 that dockershim was deprecated caused more panic than any previous Kubernetes deprecation. Many read it as “Kubernetes is dropping Docker support”, and the public-relations fallout required the Kubernetes blog to publish a dedicated clarification post (“Don’t Panic: Kubernetes and Docker”).

The reality: Docker-built images continued to work on Kubernetes. What was being removed was the code in the kubelet that talked directly to Docker’s daemon using a non-standard interface, rather than through the Container Runtime Interface (CRI). Docker images conform to the OCI (Open Container Initiative) image specification — they run on any CRI-compliant runtime.

The migration path:
containerd: The runtime that Docker itself used internally. Moving to containerd meant removing the Docker layer entirely — the kubelet talks directly to containerd via CRI
CRI-O: An OCI-focused runtime designed specifically for Kubernetes, minimal and purpose-built

# Before (Docker socket): kubelet → dockershim → Docker daemon → containerd → runc
# After (direct CRI):     kubelet → containerd → runc
#                    or:  kubelet → CRI-O → runc

# Check runtime in use on a node
kubectl get node worker-1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# containerd://1.6.4

Also in 1.20:
API Priority and Fairness beta: Rate-limit API server requests by priority — prevents a runaway controller from starving other API clients
kubectl debug to beta: Attach ephemeral debug containers to running pods for live troubleshooting
Volume snapshot stable
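API Priority and Fairness is configured with FlowSchema and PriorityLevelConfiguration objects. A hedged sketch of the beta API, routing one noisy controller’s requests into a low-priority lane (the schema name, service account, and namespace are illustrative):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: throttle-batch-controller     # illustrative
spec:
  priorityLevelConfiguration:
    name: workload-low                # one of the built-in priority levels
  matchingPrecedence: 1000            # lower values are matched first
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: batch-controller
        namespace: batch-system
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```

Requests matching this schema share the concurrency budget of the workload-low level, so a runaway controller can no longer starve kubectl users or other clients.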


The SolarWinds Context (December 2020)

The SolarWinds supply chain attack, disclosed in December 2020, didn’t directly target Kubernetes. But it accelerated an existing conversation in the cloud-native community: if the build pipeline is compromised, signed binaries mean nothing. If the image registry is compromised, admission control on image names means nothing.

The attack catalyzed work on several fronts:
Sigstore: An open-source project (Google, Red Hat, Purdue University) for signing and verifying software artifacts including container images
SLSA (Supply chain Levels for Software Artifacts): A framework for incrementally improving supply chain security, from basic build provenance to hermetic builds with verified dependencies
SBOM (Software Bill of Materials): A machine-readable inventory of software components in an image — required by US Executive Order 14028 (May 2021) for software sold to the federal government


Kubernetes 1.21 — PodSecurityPolicy Deprecation (April 2021)

PodSecurityPolicy was deprecated in 1.21, with removal scheduled for 1.25. The deprecation was contentious — PSP was the only built-in mechanism for enforcing pod security constraints, and every security-conscious cluster depended on it, despite its many flaws.

The replacement approach: Pod Security Standards — three predefined security profiles:

– Privileged: no restrictions (system-level workloads, trusted components)
– Baseline: prevents known privilege escalations (general application workloads)
– Restricted: hardened, follows current best practices (high-security workloads)

Other 1.21 highlights:
CronJobs stable
Immutable ConfigMaps and Secrets stable
Graceful node shutdown beta: The kubelet gracefully terminates pods when a node shuts down (not just when the kubelet stops)
PodDisruptionBudget stable


Kubernetes 1.22 — The Great API Removal (August 2021)

1.22 was the most disruptive Kubernetes release for operations teams since 1.0. Several long-lived beta APIs were removed:

Removed API → Replacement (used by)
– networking.k8s.io/v1beta1 Ingress → networking.k8s.io/v1 (every Ingress resource)
– apiextensions.k8s.io/v1beta1 CustomResourceDefinition → apiextensions.k8s.io/v1 (every CRD definition)
– rbac.authorization.k8s.io/v1beta1 → rbac.authorization.k8s.io/v1 (RBAC Roles and bindings)
– admissionregistration.k8s.io/v1beta1 → admissionregistration.k8s.io/v1 (admission webhook configurations)

(The batch/v1beta1 CronJob API survived this round; it was removed later, in 1.25.)

Teams with Helm charts, Terraform modules, and CI/CD pipelines built against beta API versions had to update their manifests. This was the moment that finally drove home the message: beta APIs in Kubernetes are not stable — they will be removed.
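For Ingress, the update was not always a pure apiVersion bump — the v1 API restructured the backend field and made pathType mandatory. What the migration typically looked like:

```yaml
# Before (networking.k8s.io/v1beta1, removed in 1.22):
#   backend:
#     serviceName: api
#     servicePort: 8080
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix          # required in v1
        backend:
          service:                # nested service/port replaces serviceName/servicePort
            name: api
            port:
              number: 8080
```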

Also in 1.22:
Server-Side Apply stable: Apply semantics moved server-side — field ownership tracking, conflict detection, and merge strategies are handled by the API server rather than client-side kubectl
Memory manager beta: Better NUMA-aware memory allocation for latency-sensitive workloads
Bound Service Account Token Volumes stable: Time-limited, audience-bound tokens for pods — replacing the long-lived, cluster-wide service account tokens that were a persistent security concern

# Bound service account token — expires, audience-restricted
# Projected volume mounts a time-limited token (default 1h expiry)
volumes:
- name: token
  projected:
    sources:
    - serviceAccountToken:
        audience: api
        expirationSeconds: 3600
        path: token

The bound token change was significant from a security perspective: previously, a service account token extracted from a pod would be valid indefinitely, for any audience. Projected tokens expire and are tied to a specific audience.


Pod Security Admission (Kubernetes 1.22, GA in 1.25)

The replacement for PodSecurityPolicy was Pod Security Admission — an admission controller built into the API server (no webhook required) that enforces the three Pod Security Standards at the namespace level:

# Namespace-level security enforcement
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.25

The three modes:
enforce: Reject pods that violate the policy
audit: Allow the pod but add an audit annotation
warn: Allow the pod and send a warning to the client

Pod Security Admission is deliberately simpler than PSP. It does less — it enforces three fixed profiles, not arbitrary rules. For arbitrary policy, you still need OPA/Gatekeeper or Kyverno. But the simplicity means it works reliably, with no authorization edge cases.


Kubernetes 1.23 — Dual-Stack Stable, HPA v2 Stable (December 2021)

  • IPv4/IPv6 dual-stack stable: Pods and Services can have both IPv4 and IPv6 addresses — critical for organizations running mixed-stack networks or migrating from IPv4 to IPv6
  • HPA v2 stable: Horizontal Pod Autoscaler with support for multiple metrics (CPU, memory, custom metrics from Prometheus, external metrics). Scale on Prometheus metrics, not just CPU:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 1000m
  • FlexVolume deprecated (in favor of CSI): Another step in the driver out-of-tree migration

The Log4Shell Moment (December 2021)

Log4Shell (CVE-2021-44228) hit on December 9, 2021. The vulnerability allowed unauthenticated remote code execution in any Java application using Log4j 2.x. The blast radius was enormous — Log4j was in everything.

For Kubernetes operators, Log4Shell crystallized several operational realities:

Inventory problem: Do you know which of your pods is running a Java application? Do you know which version of Log4j it includes? Without an SBOM pipeline and admission-time image scanning, you probably don’t have a reliable answer.

Patch velocity problem: Once you know which images are vulnerable, how quickly can you rebuild and redeploy? Organizations with GitOps pipelines and image update automation (Flux’s image reflector, ArgoCD Image Updater) could respond in hours. Organizations without this infrastructure measured response time in days.

Runtime detection problem: Can you detect exploitation attempts in real time? Falco rules for Log4Shell JNDI lookup patterns were available within hours of disclosure — but only organizations already running Falco could use them.

Log4Shell made the case for supply chain security, image scanning, SBOM generation, and runtime detection tooling more effectively than any conference talk.


Sigstore and the Supply Chain Response

In 2021, Sigstore reached a point where its tooling — cosign (image signing), rekor (transparency log), fulcio (keyless signing via OIDC) — was production-ready.

The keyless signing model was significant: instead of managing long-lived signing keys (which themselves become a supply chain risk), fulcio issues short-lived certificates tied to an OIDC identity (a GitHub Actions workflow, a GitLab CI job). The signature proves that a specific workflow built the image.

# Sign an image as part of CI (keyless, OIDC-based)
cosign sign --yes ghcr.io/org/app:v1.0.0

# Verify before deploying
cosign verify \
  --certificate-identity-regexp "https://github.com/org/app/.github/workflows/build.yml" \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/org/app:v1.0.0

Policy engines (OPA/Gatekeeper, Kyverno) could be configured to reject pods using unsigned or unverified images at admission time — closing the loop from build provenance to runtime enforcement.
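A sketch of what that enforcement can look like with Kyverno’s verifyImages rule — the policy name, image pattern, and workflow identity are illustrative, and the exact schema depends on the Kyverno version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images       # illustrative
spec:
  validationFailureAction: Enforce  # reject non-compliant pods at admission
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences: ["ghcr.io/org/*"]
      attestors:
      - entries:
        - keyless:                  # matches cosign's keyless OIDC signatures
            issuer: https://token.actions.githubusercontent.com
            subject: "https://github.com/org/app/.github/workflows/build.yml@*"
```

With this in place, only images signed by the named CI workflow are admitted — the build-provenance guarantee from cosign becomes a runtime gate.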


Key Takeaways

  • Dockershim deprecation in 1.20 was about removing the non-standard interface, not about dropping Docker image compatibility — containers built with Docker run on containerd or CRI-O without changes
  • The API removals in 1.22 were operationally painful but necessary — beta APIs in Kubernetes are not production-stable commitments
  • Pod Security Admission (PSP’s replacement) trades power for reliability — three fixed profiles enforced at the namespace level, built into the API server, no authorization edge cases
  • SolarWinds and Log4Shell made supply chain security a board-level concern; Sigstore, SBOM, and admission-time image verification moved from “nice to have” to operational requirements
  • Bound service account tokens (1.22 stable) addressed a persistent security gap: pod tokens that expire and are audience-restricted rather than long-lived cluster-wide credentials

What’s Next

← EP04: The Operator Era | EP06: The Runtime Reckoning →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

The Operator Era: Stateful Workloads, Service Mesh, and the Cloud-Native Stack (2018–2020)

Reading Time: 6 minutes


Introduction

By 2018, Kubernetes had won the orchestration market. The question was no longer “which orchestrator?” — it was “how do we run complex workloads on it, and how do we do it safely?”

The 2018–2020 period is defined by three parallel tracks: the Operator pattern maturing into a serious engineering discipline, the service mesh debate consuming enormous community energy, and the security model evolving from “trust everything in the cluster” toward something resembling defense-in-depth.


The OperatorHub Era

The Operator pattern, introduced by CoreOS engineers in 2016, reached critical mass in 2018–2019. In November 2018, Red Hat launched OperatorHub.io — a registry for Kubernetes Operators covering databases (PostgreSQL, MongoDB, CockroachDB), messaging (Kafka, RabbitMQ), monitoring (Prometheus), and more.

The Operator SDK (Red Hat, 2018) gave teams a framework for building Operators in Go, Ansible, or Helm — lowering the barrier from “you need to write a Kubernetes controller from scratch” to “fill in the reconciliation logic.”

The maturity model for Operators was codified into five levels:

– Level 1 (Basic Install): automated deployment
– Level 2 (Seamless Upgrades): patch and minor version upgrades
– Level 3 (Full Lifecycle): backup, failure recovery
– Level 4 (Deep Insights): metrics, alerts, log processing
– Level 5 (Auto Pilot): horizontal/vertical scaling, automatic configuration tuning

Most production Operators in 2019 were at Level 1–2. Getting to Level 3+ required encoding significant domain knowledge — the kind that previously lived in a senior database administrator’s head.


Kubernetes 1.11 — CoreDNS Default, Load Balancing Stable (June 2018)

  • CoreDNS graduated to GA and became the default DNS provider for kubeadm-built clusters, replacing kube-dns. CoreDNS is plugin-based: you can extend it for custom DNS resolution logic (split DNS, external name resolution, DNS-based service discovery for non-Kubernetes services)
  • IPVS-based kube-proxy stable: IPVS (IP Virtual Server) graduated as a load balancing mode for Services, enabling O(1) service routing instead of O(n) iptables rule traversal, which is critical at scale (iptables remained the default mode)
  • TLS bootstrapping stable: Kubelet automatic certificate rotation — kubelets no longer needed manual certificate management

The IPVS kube-proxy mode is a good example of a performance improvement that also has security implications. iptables rules degrade linearly with rule count; at 10,000+ services, iptables becomes a performance and debuggability problem. IPVS uses a hash table — O(1) lookups regardless of service count.
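Opting into IPVS is a kube-proxy configuration change. A minimal sketch of the relevant fields:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"            # default is "iptables"
ipvs:
  scheduler: "rr"       # round-robin; lc (least connection) and others are available
```

The node also needs the IPVS kernel modules loaded (ip_vs and related); kube-proxy falls back to iptables mode if they are missing.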


Kubernetes 1.12 — 1.13: Amazon EKS, Runtime Security (September–December 2018)

Amazon EKS Goes GA (June 2018)

Amazon EKS became generally available in June 2018. This was significant not just for AWS customers but for the entire ecosystem: EKS’s launch meant every major cloud provider now had a production-grade managed Kubernetes offering.

EKS’s initial release was deliberately limited — managed control plane, self-managed worker nodes. This contrasted with GKE’s more automated approach, and the community noticed. GKE had been running managed Kubernetes longer, and it showed in feature completeness.

1.12 (September 2018)

  • RuntimeClass alpha: A mechanism to specify which container runtime to use for a pod — containerd, gVisor, Kata Containers. The foundation for confidential computing workloads where you want hardware-isolated containers
  • RBAC delegation: Service accounts could now grant RBAC permissions they themselves held — enabling Operators to manage RBAC for the applications they deploy
  • Volume snapshot alpha: Create point-in-time snapshots of PersistentVolumes — the beginning of Kubernetes-native backup primitives
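RuntimeClass selection eventually settled into the shape below — shown with the node.k8s.io/v1 schema that graduated later (in 1.20); the 1.12 alpha used node.k8s.io/v1alpha1, and the handler name depends on how the node’s CRI runtime is configured:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc              # CRI handler configured on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app       # illustrative
spec:
  runtimeClassName: gvisor  # run this pod under gVisor instead of runc
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
```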

1.13 (December 2018)

  • kubeadm graduates to GA: The cluster bootstrapping tool was now stable and recommended for production
  • CoreDNS stable
  • CSI stable: Storage drivers could be shipped entirely out of tree

Kubernetes 1.14 — Windows Containers Go Stable (March 2019)

Windows Server container support graduated to stable in 1.14. For the first time, Kubernetes clusters could run Windows workloads as first-class citizens — .NET Framework applications, IIS, SQL Server containers alongside Linux-based microservices.

The implementation required significant work: Windows containers have different networking models, different filesystem semantics, and different process models than Linux containers. Making them a first-class Kubernetes citizen meant handling all of those differences in the node components.

Also in 1.14:
PersistentVolume and StorageClass improvements
kubectl improvements: kubectl diff — show what would change before applying a manifest


The PodSecurityPolicy Problem

PodSecurityPolicy (PSP) was alpha in Kubernetes 1.3, beta in 1.8, and would remain in beta until it was deprecated in 1.21. It was simultaneously the most important security primitive in Kubernetes and the most broken.

PSP let administrators define what a pod was allowed to do:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

The problem: the admission mechanism was confusing, the UX was hostile, and the authorization model (who could use which PSP) led to privilege escalation paths that were non-obvious. Many teams either disabled PSP entirely or created a permissive policy that made it functionally useless.

The community would spend years working toward a replacement. In 2021 it was deprecated; in 1.25 (2022) it was removed. The replacement — Pod Security Admission — is discussed in EP05.


Kubernetes 1.15 — 1.17: Custom Resource Maturity (2019)

1.15 (June 2019)

  • CRDs continue maturing: Structural schemas, pruning of unknown fields — making CRDs behave more like first-class API types
  • Kustomize integrated into kubectl: Template-free Kubernetes configuration customization. Where Helm uses Go templates, Kustomize uses overlays — a base configuration plus environment-specific patches
# kustomization.yaml — base + production overlay
bases:
  - ../../base
patches:
  - deployment-replicas.yaml
  - resource-limits.yaml
configMapGenerator:
  - name: app-config
    literals:
      - ENV=production

1.16 (September 2019)

  • CRDs graduate to GA (apiextensions.k8s.io/v1)
  • Admission webhooks stable: Validating and mutating webhooks that intercept every API request. This is the foundation for OPA/Gatekeeper, Kyverno, and all policy-as-code enforcement in Kubernetes

The admission webhook framework’s graduation to stable in 1.16 was more significant than it appeared. It meant that any security policy engine — OPA/Gatekeeper, Kyverno, Styra, etc. — could now enforce policies on any Kubernetes resource creation or modification, using a stable, documented API.

  • Removal of several deprecated beta APIs: extensions/v1beta1 Deployments, DaemonSets, ReplicaSets — a preview of the more aggressive API cleanup that would come in 1.22

1.17 (December 2019)

  • Volume snapshots beta
  • Cloud Provider labels stable

OPA/Gatekeeper: Policy as Code Enters the Mainstream

Open Policy Agent (OPA) + Gatekeeper emerged as the policy engine of choice for Kubernetes in 2019. Gatekeeper uses the admission webhook framework to intercept API requests and evaluate them against Rego policies:

# Deny containers running as root
package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  container.securityContext.runAsUser == 0
  msg := sprintf("Container %v must not run as root", [container.name])
}

The OPA/Gatekeeper model represented a shift in security thinking: instead of configuring security at the cluster level, you codify security policy in a language (Rego) and enforce it uniformly across all admission requests. Policies can be tested, versioned, and reviewed like code.
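Testing those policies is part of the appeal. A minimal unit test sketch for the deny rule above, runnable with `opa test .` (the input document mimics an AdmissionReview request):

```rego
package kubernetes.admission

# Expect the deny rule to fire for a pod whose container runs as UID 0
test_denies_root_container {
  count(deny) > 0 with input as {
    "request": {
      "kind": {"kind": "Pod"},
      "object": {"spec": {"containers": [
        {"name": "app", "securityContext": {"runAsUser": 0}}
      ]}}
    }
  }
}
```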


Kubernetes 1.18 — Topology-Aware Routing, Immutability (March 2020)

  • Topology-aware service routing alpha: Route service traffic to endpoints in the same zone/node as the caller — reducing cross-zone data transfer costs and latency
  • Immutable ConfigMaps and Secrets alpha: Mark a ConfigMap or Secret as immutable — the API server rejects updates, preventing accidental mutation of configuration that applications have already loaded
  • IngressClass: A mechanism to specify which Ingress controller should handle an Ingress resource — enabling multiple ingress controllers in the same cluster
# Immutable secret — once set, cannot be changed
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
immutable: true
data:
  password: dGhpcyBpcyBhIHRlc3Q=

The Falco Adoption Wave

Falco, created by Sysdig and later donated to the CNCF, became the standard tool for Kubernetes runtime security in this period. Falco uses eBPF probes or kernel modules to monitor syscalls and generate alerts based on rules:

# Falco rule: detect shell spawned in a container
- rule: Terminal shell in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and container and
    shell_procs and proc.tty != 0
  output: >
    A shell was spawned in a container
    (user=%user.name container=%container.name
     shell=%proc.name parent=%proc.pname)
  priority: WARNING

Falco addressed the gap that PodSecurityPolicy couldn’t: admission-time policy prevents known-bad configurations from running, but it can’t detect a compromise that happens at runtime — a shell spawned by an exploited web application, for example.


The Service Mesh Exhaustion

By 2019, the service mesh landscape was producing more overhead than value for many teams. Istio’s operational complexity — its control plane components, its sidecar injection model, its frequent breaking changes between versions — burned teams that adopted it early.

The community questions were real: do you actually need mTLS between every service in your cluster? Is the operational cost of a service mesh worth the security benefit for every organization?

Linkerd 2.x (Buoyant) positioned itself as the lightweight alternative — simpler to operate, less configuration surface, Rust-based proxy instead of Envoy. For teams that wanted the security benefit (mTLS) without the complexity cost, Linkerd 2.x was often the better choice.

The honest answer in 2019–2020: service meshes were the right architecture for organizations with hundreds of services and dedicated platform teams. For most organizations, they were complexity that outpaced the threat model.


Key Takeaways

  • The Operator pattern matured from a pattern into an engineering discipline with tooling (Operator SDK), a registry (OperatorHub), and a capability maturity model
  • EKS going GA completed the managed Kubernetes trifecta — every major cloud provider was now committed
  • CRDs graduating to stable in 1.16 was the foundation for everything built on Kubernetes extensibility — Operators, policy engines, GitOps tools
  • Admission webhooks graduating to stable enabled the policy-as-code ecosystem (OPA/Gatekeeper, Kyverno) — the only viable alternative to PSP’s broken model
  • Falco established runtime security as a distinct discipline from admission-time policy enforcement
  • Service mesh adoption was real but the complexity cost was frequently underestimated; many teams that adopted Istio in 2018–2019 spent 2019–2020 managing it

What’s Next

← EP03: Enterprise Awakening | EP05: Security Hardens →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com

Enterprise Awakening: RBAC, CRDs, Cloud Providers, and Helm Goes Mainstream (2016–2018)

Reading Time: 6 minutes


Introduction

By the end of 2016, engineers were running Kubernetes in production. Not as an experiment — in production, handling real traffic. And that’s where the real gaps became visible.

The 2016–2018 period is the era when Kubernetes grew up. RBAC went stable. CRDs replaced the fragile ThirdPartyResource hack. The major cloud providers launched managed services. Helm became the standard for packaging. And the security posture, which had been an afterthought in the Borg-derived model, started getting serious attention.


Kubernetes 1.6 — The RBAC Milestone (March 2017)

Kubernetes 1.6 is the release that made enterprise Kubernetes possible. The headline feature: RBAC (Role-Based Access Control) promoted to beta, enabled by default.

Before RBAC, Kubernetes had attribute-based access control (ABAC) — a flat policy file on the API server that required a restart to change. It worked, but it was operationally painful and offered no granularity at the namespace level.

RBAC introduced four objects:
Role: A set of permissions scoped to a namespace
ClusterRole: A set of permissions cluster-wide or reusable across namespaces
RoleBinding: Assigns a Role to a user/group/service account in a namespace
ClusterRoleBinding: Assigns a ClusterRole cluster-wide

# Example: read-only access to pods in the dev namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Also in 1.6:
etcd v3 as default: Better performance, watch semantics, and transaction support
Node Authorization mode: Kubelets can now only access secrets and pods bound to their own node — a critical lateral movement restriction
Audit logging (alpha): API server logs every request — who did what, to which resource, at what time
– Scale: Tested to 5,000 nodes per cluster

The node authorization mode deserves more attention than it typically gets. Before 1.6, a compromised kubelet could read all secrets in the cluster. Node authorization restricted the kubelet to only the secrets it needed for pods scheduled on that node. This single change dramatically reduced the blast radius of a node compromise.


Kubernetes 1.7 — Custom Resource Definitions (June 2017)

The most significant architectural decision in Kubernetes history after the initial design: ThirdPartyResources (TPRs) were replaced with CustomResourceDefinitions (CRDs).

TPRs were a fragile mechanism introduced in 1.2 that let users define custom API types. They had serious limitations: no schema validation, no versioning, data loss bugs, and poor upgrade behavior. In 1.7, they were replaced with CRDs.

CRDs are what make the Kubernetes API extension model work. They let you define new resource types that the API server stores and serves, with optional schema validation via OpenAPI v3 schemas, version conversion, and admission webhook integration.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: string
              version:
                type: string
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database

CRDs enabled the entire Operator ecosystem that would define the next phase of Kubernetes. Without stable, schema-validated custom resources, you can’t build reliable controllers on top of them.

Also in 1.7:
Secrets encryption at rest (alpha): Finally, secrets stored in etcd could be encrypted with AES-CBC or AES-GCM
Network Policy promoted to stable: CNI plugins implementing NetworkPolicy could now enforce pod-level ingress/egress rules
API aggregation layer: Extend the Kubernetes API with custom API servers — the foundation for metrics-server and other API extensions


Kubernetes 1.8 — RBAC Goes Stable (September 2017)

RBAC graduated to stable in 1.8. This was the point of no return for enterprise adoption. Security teams could now enforce least-privilege on Kubernetes API access with a documented, stable API.

Key additions:
Storage Classes stable: Dynamic volume provisioning — request a PersistentVolume and have the underlying storage (EBS, GCE PD, NFS) automatically provisioned
Workloads API (apps/v1beta2): Deployments, ReplicaSets, DaemonSets, and StatefulSets all moved under a unified API group, signaling they were heading toward stable

The admission webhook framework — which would become the foundation for policy enforcement tools like OPA/Gatekeeper — was also being refined in this period.


The Cloud Provider Moment (2017–2018)

October 2017: Docker Surrenders

At DockerCon Europe in October 2017, Docker Inc. announced that Docker Enterprise Edition would ship with Kubernetes support alongside Docker Swarm. This was, effectively, Docker Inc. conceding the orchestration market to Kubernetes. Swarm remained available, but the message was clear: Kubernetes was the production standard.

October 2017: Microsoft Previews AKS

Microsoft previewed Azure Kubernetes Service at DockerCon Europe. The managed Kubernetes race was on.

November 2017: Amazon Announces EKS

At AWS re:Invent 2017, Amazon announced Elastic Kubernetes Service. The three major cloud providers — Google (GKE, running since 2014), Microsoft (AKS), and Amazon (EKS) — were all committed to managed Kubernetes.

For enterprise buyers, this was the signal they needed. Kubernetes was no longer a bet on an experimental technology — it was the supported, managed offering from every major cloud provider.


Kubernetes 1.9 — Workloads API Stable (December 2017)

The Workloads API (apps/v1) went stable in 1.9. This matters because it locked in the API contract for Deployments, ReplicaSets, DaemonSets, and StatefulSets. Infrastructure built on these APIs would not break on upgrades.

# apps/v1 Deployment — the stable form that operators rely on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

Also in 1.9:
Windows container support moved to beta — actual Windows Server 2016 nodes in a cluster
CoreDNS available as an alternative to kube-dns: A more extensible, plugin-based DNS server that would replace kube-dns as the default in 1.11


Kubernetes 1.10 — Storage, Auth, and Scale (March 2018)

1.10 continued the enterprise hardening:
CSI (Container Storage Interface) beta: A standardized interface between Kubernetes and storage providers. Before CSI, storage drivers were compiled into the kubelet binary. CSI moved them out-of-tree, allowing storage vendors to ship their own drivers without waiting for a Kubernetes release
External credential providers (alpha): Authenticate against external systems (cloud IAM, HashiCorp Vault) for kubeconfig credentials
Node problem detector stable: Detect and report node-level problems (kernel deadlocks, corrupted file systems) as Kubernetes events and node conditions

The CSI transition was one of the most important infrastructure decisions of this period. It decoupled storage driver development from the Kubernetes release cycle — a necessary step for cloud providers to ship storage integrations rapidly and independently.


The Istio Announcement and Service Mesh Wars (May 2017)

Google and IBM announced Istio in May 2017 — a service mesh that layered mTLS, traffic management, and observability on top of existing Kubernetes deployments without changing application code.

Istio’s architecture: sidecar proxies (Envoy) injected into every pod, managed by a control plane. Every service-to-service call passes through the sidecar, enabling:
– Mutual TLS between services (zero-trust networking at the service layer)
– Fine-grained traffic control (canary releases, circuit breaking, retries)
– Distributed tracing and metrics

Linkerd (from Buoyant) had been working on the same problem since 2016. The two projects would compete for the “service mesh standard” throughout 2017–2019.

The service mesh conversation was fundamentally a security architecture conversation: how do you enforce mutual authentication and encryption between services in a Kubernetes cluster without requiring application developers to implement it?


CoreOS Acquisition and the Operator Pattern (2018)

In January 2018, Red Hat acquired CoreOS for $250 million. CoreOS had contributed two things that would permanently shape Kubernetes:

1. The Operator Pattern (introduced by CoreOS engineers Brandon Philips and Josh Wood in 2016): An Operator is a custom controller that uses CRDs to manage the lifecycle of complex, stateful applications. The etcd Operator (CoreOS’s own) was the first — it automated etcd cluster creation, scaling, backup, and failure recovery. The pattern generalized: a Prometheus Operator, a PostgreSQL Operator, a Kafka Operator.

The Operator pattern is the answer to the question “how do you encode operational knowledge into software?” A human operator knows how to deploy, scale, backup, and recover a database. An Operator codifies that knowledge into a controller loop.

# Operator pattern: watch CRD → reconcile → manage application
CRD (EtcdCluster) → Operator Controller watches → creates/updates Pods, Services, Snapshots

2. etcd: The distributed key-value store that backs the Kubernetes control plane. CoreOS built and maintained etcd. Red Hat acquiring CoreOS meant that the company maintaining Kubernetes’s most critical dependency (after the kernel) was now inside the Red Hat/IBM orbit.


Helm 2 and the Charts Ecosystem

By 2017–2018, Helm had become the de facto package manager for Kubernetes. The public Helm chart repository hosted hundreds of charts — databases (PostgreSQL, MySQL, Redis), monitoring (Prometheus, Grafana), ingress controllers (nginx), CI/CD tools (Jenkins, GitLab Runner).

Helm 2 introduced Tiller — a server-side component that managed release state in the cluster. Tiller became the most criticized security decision in the Kubernetes ecosystem: Tiller ran with cluster-admin privileges by default, meaning any user who could reach Tiller’s gRPC endpoint could do anything in the cluster.

Security teams hated Tiller. The Helm team addressed it in Helm 3 (2019) by removing Tiller entirely and storing release state as Kubernetes Secrets instead.


Key Takeaways

  • RBAC going stable in 1.8 was the single most important security event in early Kubernetes history — it gave enterprises the access control model they needed for production
  • CRDs replacing TPRs in 1.7 enabled the entire Operator ecosystem that would define the next phase of Kubernetes
  • Docker Inc.’s October 2017 announcement that it would support Kubernetes in Docker EE effectively ended the container orchestration wars
  • The three major cloud providers (GKE, AKS, EKS) all standardizing on managed Kubernetes drove enterprise adoption faster than any feature announcement could
  • The Operator pattern — Kubernetes controllers that encode operational knowledge — emerged from CoreOS and became the standard model for managing complex stateful applications
  • Helm filled a real gap but Tiller’s cluster-admin model was a security debt the community had to repay in Helm 3

What’s Next

← EP02: The Container Wars | EP04: The Operator Era →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com