The Operator Era: Stateful Workloads, Service Mesh, and the Cloud-Native Stack (2018–2020)

Reading Time: 6 minutes


CISSP Domain Mapping

Domain 3 — Security Architecture: PodSecurityPolicy enters wide adoption; OPA/Gatekeeper as policy engine
Domain 4 — Communication Security: Service mesh mTLS (Istio, Linkerd) for zero-trust pod-to-pod communication
Domain 7 — Security Operations: Falco for runtime anomaly detection; Prometheus-based alerting on cluster health

Introduction

By 2018, Kubernetes had won the orchestration market. The question was no longer “which orchestrator?” — it was “how do we run complex workloads on it, and how do we do it safely?”

The 2018–2020 period is defined by three parallel tracks: the Operator pattern maturing into a serious engineering discipline, the service mesh debate consuming enormous community energy, and the security model evolving from “trust everything in the cluster” toward something resembling defense-in-depth.


The OperatorHub Era

The Operator pattern, introduced by CoreOS engineers in 2016, reached critical mass in 2018–2019. In February 2019, Red Hat, together with AWS, Google Cloud, and Microsoft, launched OperatorHub.io: a registry for Kubernetes Operators covering databases (PostgreSQL, MongoDB, CockroachDB), messaging (Kafka, RabbitMQ), monitoring (Prometheus), and more.

The Operator SDK (Red Hat, 2018) gave teams a framework for building Operators in Go, Ansible, or Helm — lowering the barrier from “you need to write a Kubernetes controller from scratch” to “fill in the reconciliation logic.”
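Concretely, an Operator pairs a custom resource type with a controller that reconciles it. A minimal sketch of the custom resource side, using an illustrative PostgresCluster type (all names and fields here are hypothetical, not from any specific Operator):

```yaml
# Hypothetical CRD an Operator would register (v1beta1 was current in 2018–2019)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: postgresclusters.example.com
spec:
  group: example.com
  names:
    kind: PostgresCluster
    plural: postgresclusters
    singular: postgrescluster
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
---
# A custom resource instance; the Operator's reconcile loop reads this spec
# and drives the cluster toward it (StatefulSets, Services, backup CronJobs)
apiVersion: example.com/v1alpha1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3
  version: "11"
  backup:
    schedule: "0 2 * * *"
```

The reconciliation logic the Operator SDK asks you to fill in is exactly the code that turns the `spec` above into running workloads, and keeps them converged.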

The maturity model for Operators was codified into five levels:

  • Level 1 — Basic Install: automated deployment
  • Level 2 — Seamless Upgrades: patch and minor version upgrades
  • Level 3 — Full Lifecycle: backup, failure recovery
  • Level 4 — Deep Insights: metrics, alerts, log processing
  • Level 5 — Auto Pilot: horizontal/vertical scaling, auto-config tuning

Most production Operators in 2019 were at Level 1–2. Getting to Level 3+ required encoding significant domain knowledge — the kind that previously lived in a senior database administrator’s head.


Kubernetes 1.11 — CoreDNS GA, IPVS Load Balancing Stable (June 2018)

  • CoreDNS graduated to general availability and became the default DNS provider for kubeadm-installed clusters (it replaced kube-dns as the cluster-wide default in 1.13). CoreDNS is plugin-based — you can extend it for custom DNS resolution logic (split DNS, external name resolution, DNS-based service discovery for non-Kubernetes services)
  • IPVS-based kube-proxy stable: A load balancing mode for Services based on IPVS (IP Virtual Server) graduated to stable as an alternative to the default iptables mode, enabling O(1) service routing instead of O(n) iptables rule traversal — critical at scale
  • TLS bootstrapping stable: Kubelet automatic certificate rotation — kubelets no longer needed manual certificate management
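The plugin model shows up directly in CoreDNS's Corefile, which lives in a ConfigMap in kube-system. A sketch, assuming an illustrative `consul.local` stub domain and resolver address that are not part of any default install:

```yaml
# CoreDNS configuration as a Corefile inside the kube-system ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf   # everything else goes upstream
        cache 30
    }
    # Split DNS: send a non-Kubernetes zone to an external resolver
    consul.local:53 {
        forward . 10.0.0.10
    }
```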

The IPVS kube-proxy mode is a good example of a performance improvement that also has security implications. iptables rules degrade linearly with rule count; at 10,000+ services, iptables becomes a performance and debuggability problem. IPVS uses a hash table — O(1) lookups regardless of service count.
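On kubeadm-managed clusters, the proxy mode is selected through the kube-proxy configuration object. A minimal sketch (the round-robin scheduler choice is illustrative; IPVS kernel modules must be present on each node):

```yaml
# Fragment of the kube-proxy ConfigMap: switch from iptables to IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS also offers least-connection and others
```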


Kubernetes 1.12 — 1.13: Amazon EKS, Runtime Security (September–December 2018)

Amazon EKS Goes GA (June 2018)

Amazon EKS became generally available in June 2018. This was significant not just for AWS customers but for the entire ecosystem: EKS’s launch meant every major cloud provider now had a production-grade managed Kubernetes offering.

EKS’s initial release was deliberately limited — managed control plane, self-managed worker nodes. This contrasted with GKE’s more automated approach, and the community noticed. GKE had been running managed Kubernetes longer, and it showed in feature completeness.

1.12 (September 2018)

  • RuntimeClass alpha: A mechanism to specify which container runtime to use for a pod — containerd, gVisor, Kata Containers. The foundation for confidential computing workloads where you want hardware-isolated containers
  • RBAC delegation: Service accounts could now grant RBAC permissions they themselves held — enabling Operators to manage RBAC for the applications they deploy
  • Volume snapshot alpha: Create point-in-time snapshots of PersistentVolumes — the beginning of Kubernetes-native backup primitives
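RuntimeClass can be sketched as it looked once the API reached beta (1.14). This assumes nodes whose containerd is configured with a runsc (gVisor) handler; the handler name is whatever the node runtime config defines:

```yaml
# Register a runtime handler as a named RuntimeClass
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # must match a runtime configured on the node
---
# Opt a pod into the sandboxed runtime
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx:1.17
```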

1.13 (December 2018)

  • kubeadm graduates to GA: The cluster bootstrapping tool was now stable and recommended for production
  • CoreDNS becomes the default DNS server for all clusters, completing the replacement of kube-dns
  • CSI stable: Storage drivers could be shipped entirely out of tree
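With CSI stable, storage vendors ship their own provisioners and clusters consume them through ordinary StorageClasses. A sketch using the AWS EBS CSI driver's documented provisioner name (the class name and parameters are illustrative):

```yaml
# StorageClass backed by an out-of-tree CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com   # the CSI driver, shipped independently of Kubernetes
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # delay provisioning until a pod is scheduled
```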

Kubernetes 1.14 — Windows Containers Go Stable (March 2019)

Windows Server container support graduated to stable in 1.14. For the first time, Kubernetes clusters could run Windows workloads as first-class citizens — .NET Framework applications, IIS, SQL Server containers alongside Linux-based microservices.

The implementation required significant work: Windows containers have different networking models, different filesystem semantics, and different process models than Linux containers. Making them a first-class Kubernetes citizen meant handling all of those differences in the node components.

Also in 1.14:

  • PersistentVolume and StorageClass improvements
  • kubectl improvements: kubectl diff — show what would change before applying a manifest


The PodSecurityPolicy Problem

PodSecurityPolicy (PSP) was alpha in Kubernetes 1.3, beta in 1.8, and would remain in beta until it was deprecated in 1.21. It was simultaneously the most important security primitive in Kubernetes and the most broken.

PSP let administrators define what a pod was allowed to do:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

The problem: the admission mechanism was confusing, the UX was hostile, and the authorization model (who could use which PSP) led to privilege escalation paths that were non-obvious. Many teams either disabled PSP entirely or created a permissive policy that made it functionally useless.
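That authorization model worked through RBAC: a pod was admitted under a PSP only if its service account (or the requesting user) held the "use" verb on that specific policy. A sketch of the binding for the restricted policy above, with illustrative namespace and role names:

```yaml
# Grant the "use" verb on one named PSP...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]
    verbs: ["use"]
---
# ...and bind it to the service accounts whose pods it should govern
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-restricted-default
  namespace: production
subjects:
  - kind: ServiceAccount
    name: default
    namespace: production
roleRef:
  kind: ClusterRole
  name: psp-restricted
  apiGroup: rbac.authorization.k8s.io
```

When multiple PSPs were usable, the admission controller picked one by its own ordering rules; this indirection between policy, RBAC, and requester is where most of the surprise escalation paths lived.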

The community would spend years working toward a replacement. PSP was deprecated in 1.21 (2021) and removed in 1.25 (2022). The replacement — Pod Security Admission — is discussed in EP05.


Kubernetes 1.15 — 1.17: Custom Resource Maturity (2019)

1.15 (June 2019)

  • CRDs continue maturing: Structural schemas, pruning of unknown fields — making CRDs behave more like first-class API types
  • Kustomize in kubectl (integrated since kubectl 1.14 via kubectl apply -k, maturing through this period): Template-free Kubernetes configuration customization. Where Helm uses Go templates, Kustomize uses overlays — a base configuration plus environment-specific patches
# kustomization.yaml — base + production overlay
bases:
  - ../../base
patchesStrategicMerge:
  - deployment-replicas.yaml
  - resource-limits.yaml
configMapGenerator:
  - name: app-config
    literals:
      - ENV=production
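The overlay's patch files are themselves ordinary partial manifests; only the fields they name are merged over the base. A sketch of what a replica-count patch might contain (the Deployment name is illustrative):

```yaml
# deployment-replicas.yaml — strategic-merge patch: bump replicas in production
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app   # must match the base Deployment's name
spec:
  replicas: 5
```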

1.16 (September 2019)

  • CRDs graduate to GA (apiextensions.k8s.io/v1, replacing apiextensions.k8s.io/v1beta1)
  • Admission webhooks stable: Validating and mutating webhooks that intercept every API request. This is the foundation for OPA/Gatekeeper, Kyverno, and all policy-as-code enforcement in Kubernetes

The admission webhook framework’s graduation to stable in 1.16 was more significant than it appeared. It meant that any security policy engine — OPA/Gatekeeper, Kyverno, Styra, etc. — could now enforce policies on any Kubernetes resource creation or modification, using a stable, documented API.
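The registration object such engines create can be sketched as follows; the webhook name, service coordinates, and CA bundle are placeholders for whatever the policy engine deploys:

```yaml
# Register a validating webhook: the API server calls this service
# on every matching CREATE/UPDATE before persisting the object
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-checks
webhooks:
  - name: validate.policy.example.com
    clientConfig:
      service:
        name: policy-webhook      # in-cluster service backing the engine
        namespace: policy-system
        path: /validate
      # caBundle: <base64-encoded CA certificate for the webhook's TLS cert>
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "deployments"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail   # fail closed: reject requests if the webhook is down
```

The failurePolicy choice is itself a security decision: fail closed and an outage in the policy engine blocks deployments; fail open and an outage silently disables enforcement.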

  • Removal of several deprecated beta APIs: extensions/v1beta1 Deployments, DaemonSets, ReplicaSets — a preview of the more aggressive API cleanup that would come in 1.22

1.17 (December 2019)

  • Volume snapshots beta
  • Cloud Provider labels stable
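The beta snapshot API can be sketched as follows, assuming a CSI driver with snapshot support and an illustrative VolumeSnapshotClass name:

```yaml
# Point-in-time snapshot of a PVC through the CSI snapshot API (beta in 1.17)
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: db-backup-20200101
spec:
  volumeSnapshotClassName: csi-snapclass   # maps to a snapshot-capable CSI driver
  source:
    persistentVolumeClaimName: db-data     # the PVC to snapshot
```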

OPA/Gatekeeper: Policy as Code Enters the Mainstream

Open Policy Agent (OPA) + Gatekeeper emerged as the policy engine of choice for Kubernetes in 2019. Gatekeeper uses the admission webhook framework to intercept API requests and evaluate them against Rego policies, packaging rules like the following (shown here in plain OPA form) into reusable templates:

# Deny containers running as root
package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  container.securityContext.runAsUser == 0
  msg := sprintf("Container %v must not run as root", [container.name])
}

The OPA/Gatekeeper model represented a shift in security thinking: instead of configuring security at the cluster level, you codify security policy in a language (Rego) and enforce it uniformly across all admission requests. Policies can be tested, versioned, and reviewed like code.
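In practice Gatekeeper does not consume bare deny rules: it wraps the Rego in a ConstraintTemplate, which generates a CRD, and a Constraint instance then scopes enforcement. A sketch equivalent to the rule above (the K8sDenyRoot kind is an illustrative name, not a stock Gatekeeper policy):

```yaml
# ConstraintTemplate: defines the Rego and the Constraint kind it generates
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyroot
spec:
  crd:
    spec:
      names:
        kind: K8sDenyRoot
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyroot
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.runAsUser == 0
          msg := sprintf("Container %v must not run as root", [container.name])
        }
---
# Constraint: an instance of the generated kind, scoped to Pods
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyRoot
metadata:
  name: deny-root-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

The template/constraint split is what makes policies reusable: one team writes and tests the Rego, other teams instantiate it with their own scoping.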


Kubernetes 1.18 — Topology-Aware Routing, Immutability (March 2020)

  • Topology-aware service routing alpha: Route service traffic to endpoints in the same zone/node as the caller — reducing cross-zone data transfer costs and latency
  • Immutable ConfigMaps and Secrets alpha: Mark a ConfigMap or Secret as immutable — the API server rejects updates, preventing accidental mutation of configuration that applications have already loaded
  • IngressClass: A mechanism to specify which Ingress controller should handle an Ingress resource — enabling multiple ingress controllers in the same cluster
# Immutable secret — once set, cannot be changed
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
immutable: true
data:
  password: dGhpcyBpcyBhIHRlc3Q=
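The IngressClass mechanism from the list above can be sketched with the nginx ingress controller's documented controller string; the host and backend service names are illustrative:

```yaml
# IngressClass (new in 1.18): names a controller implementation
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
# An Ingress selects its controller by class name instead of the old
# kubernetes.io/ingress.class annotation
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
```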

The Falco Adoption Wave

Falco (created at Sysdig and donated to the CNCF as a sandbox project in 2018) became the standard tool for Kubernetes runtime security in this period. Falco uses eBPF probes or kernel modules to monitor syscalls and generate alerts based on rules:

# Falco rule: detect shell spawned in a container
- rule: Terminal shell in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and container and
    shell_procs and proc.tty != 0
  output: >
    A shell was spawned in a container
    (user=%user.name container=%container.name
     shell=%proc.name parent=%proc.pname)
  priority: WARNING

Falco addressed the gap that PodSecurityPolicy couldn’t: admission-time policy prevents known-bad configurations from running, but it can’t detect a compromise that happens at runtime — a shell spawned by an exploited web application, for example.


The Service Mesh Exhaustion

By 2019, the service mesh landscape was producing more overhead than value for many teams. Istio’s operational complexity — its control plane components, its sidecar injection model, its frequent breaking changes between versions — burned teams that adopted it early.

The community questions were real: do you actually need mTLS between every service in your cluster? Is the operational cost of a service mesh worth the security benefit for every organization?

Linkerd 2.x (Buoyant) positioned itself as the lightweight alternative — simpler to operate, less configuration surface, Rust-based proxy instead of Envoy. For teams that wanted the security benefit (mTLS) without the complexity cost, Linkerd 2.x was often the better choice.

The honest answer in 2019–2020: service meshes were the right architecture for organizations with hundreds of services and dedicated platform teams. For most organizations, they were complexity that outpaced the threat model.


Key Takeaways

  • The Operator pattern matured from a pattern into an engineering discipline with tooling (Operator SDK), a registry (OperatorHub), and a capability maturity model
  • EKS going GA completed the managed Kubernetes trifecta — every major cloud provider was now committed
  • CRDs graduating to stable in 1.16 was the foundation for everything built on Kubernetes extensibility — Operators, policy engines, GitOps tools
  • Admission webhooks graduating to stable enabled the policy-as-code ecosystem (OPA/Gatekeeper, Kyverno) — the only viable alternative to PSP’s broken model
  • Falco established runtime security as a distinct discipline from admission-time policy enforcement
  • Service mesh adoption was real, but the complexity cost was frequently underestimated; many teams that adopted Istio in 2018–2019 spent 2019–2020 managing it

What’s Next

← EP03: Enterprise Awakening | EP05: Security Hardens →

Series: Kubernetes: From Borg to Platform Engineering | linuxcent.com
