Meta Description: Understand how Kubernetes RBAC and AWS IAM interact in EKS — map the two-layer access model and debug permission failures across both control planes.
TL;DR
- Kubernetes RBAC and cloud IAM are separate authorization layers — strong cloud IAM with weak Kubernetes RBAC is still a vulnerable cluster
- `cluster-admin` ClusterRoleBindings are the first thing to audit — a compromised pod with cluster-admin controls the entire cluster
- Disable `automountServiceAccountToken` on pods that don’t call the Kubernetes API — most application pods don’t need it mounted
- Use OIDC for human access instead of X.509 client certificates — client certs cannot be revoked without rotating the CA
- Bind groups from IdP, not individual usernames — revocation propagates automatically when someone leaves
- A ServiceAccount that can `create pods` or `create rolebindings` is a privilege escalation path: the same class of risk as `iam:PassRole`
The Big Picture
TWO AUTHORIZATION LAYERS — NEITHER COMPENSATES FOR THE OTHER
┌─────────────────────────────────────────────────────────────────┐
│ CLOUD IAM LAYER (AWS IAM / GCP IAM / Azure RBAC) │
│ Controls: S3, DynamoDB, Lambda, RDS, cloud services │
│ Human: federated identity from IdP (SAML / OIDC) │
│ Machine: IRSA annotation → IAM role / GKE WI / AKS WI │
│ Audit: CloudTrail, GCP Audit Logs, Azure Monitor │
└─────────────────────────────────────────────────────────────────┘
↕ separate systems — no inheritance in either direction
┌─────────────────────────────────────────────────────────────────┐
│ KUBERNETES RBAC LAYER (within the cluster) │
│ Controls: pods, secrets, deployments, configmaps, namespaces │
│ Human: OIDC groups → ClusterRoleBinding (or RoleBinding) │
│ Machine: ServiceAccount → Role / ClusterRole │
│ Audit: kube-apiserver audit log │
└─────────────────────────────────────────────────────────────────┘
Attack path: exploit app pod → SA has cluster-admin → own the cluster
Audit finding: cluster-admin on app SA, regardless of cloud IAM posture
Introduction
I spent a long time in Kubernetes environments thinking cloud IAM and Kubernetes RBAC were related in a way that meant securing one partially covered the other. They don’t. They’re separate authorization systems that happen to share infrastructure.
The moment this crystallized for me: I was auditing an EKS cluster for a fintech company. Their AWS IAM posture was actually quite good — least privilege roles, no wildcard policies, SCPs in place at the org level. I was about to give them a clean bill of health when I ran one command:
kubectl get clusterrolebindings -o json | \
jq '.items[] | select(.roleRef.name=="cluster-admin") | {name:.metadata.name, subjects:.subjects}'
The output showed five ClusterRoleBindings to cluster-admin. Two of them bound it to service accounts in production namespaces. One of those service accounts was used by an application that processed customer transactions.
cluster-admin in Kubernetes is the equivalent of AdministratorAccess in AWS. An attacker who compromises a pod running as that service account doesn’t just have access to the application’s data. They have control of the entire cluster: reading every secret in every namespace, deploying arbitrary workloads, modifying RBAC bindings to create persistence.
None of this showed up in the AWS IAM audit. AWS IAM and Kubernetes RBAC are separate systems. Securing one tells you nothing about the other.
Kubernetes RBAC Architecture
Kubernetes RBAC works with four object types:
| Object | Scope | What It Does |
|---|---|---|
| Role | Single namespace | Defines permissions within one namespace |
| ClusterRole | Cluster-wide | Permissions across all namespaces, or for non-namespaced resources |
| RoleBinding | Single namespace | Binds a Role (or ClusterRole) to subjects, scoped to one namespace |
| ClusterRoleBinding | Cluster-wide | Binds a ClusterRole to subjects with cluster-wide scope |
Subjects — the identities that receive the binding — are:
– User: an external identity (Kubernetes has no native user objects; users come from the authenticator)
– Group: a group of external identities
– ServiceAccount: a Kubernetes-native machine identity, namespaced
The scoping matters. A ClusterRole defines what permissions exist. A RoleBinding applies that ClusterRole within a single namespace. A ClusterRoleBinding applies it everywhere. The same permissions, dramatically different blast radius.
Roles and ClusterRoles
# Role: read pods and their logs — scoped to the default namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] # "" = core API group (pods, secrets, configmaps, etc.)
resources: ["pods", "pods/log"]
verbs: ["get", "list", "watch"]
# ClusterRole: manage Deployments across all namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: deployment-manager
rules:
- apiGroups: ["apps"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
The verbs map to HTTP methods against the Kubernetes API: get reads a specific resource, list returns a collection, watch streams changes, create/update/patch/delete are mutations.
One that consistently surprises people: `list` on secrets returns the full Secret objects, data included — not just names and metadata. If a service account needs to check whether a secret exists, grant `get` on the specific secret name. Avoid `list` on the secrets resource.
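If a workload does need read access to one secret, the safer shape is a Role restricted with `resourceNames`. A minimal sketch, assuming a secret named `app-db-credentials` in the `production` namespace:

```yaml
# Hypothetical example: read access to exactly one named secret.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: read-db-secret
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-db-credentials"]  # only this object
  verbs: ["get"]                         # no "list" — resourceNames cannot restrict list requests
```

Note the design constraint: the API server cannot enforce `resourceNames` on `list` (the name isn't known at authorization time), which is one more reason to avoid granting `list` on secrets at all.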
The Wildcard Risk
# This is effectively cluster-admin in the default namespace — avoid
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
Any * in RBAC rules is an audit finding. In practice I find wildcards most often in:
– Operator and controller service accounts (understandable, but worth reviewing)
– “Temporary” RBAC that became permanent
– Developer tooling given cluster-admin “because it was easier”
Run this to find all ClusterRoles with wildcard verbs:
kubectl get clusterroles -o json | \
jq '.items[] | select(any(.rules[]?; .verbs[]? == "*")) | .metadata.name'
Bindings — Connecting Identities to Roles
# RoleBinding: alice can read pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: alice-pod-reader
namespace: default
subjects:
- kind: User
name: [email protected]
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
# ClusterRoleBinding: Prometheus can read cluster-wide (monitoring use case)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus-cluster-reader
subjects:
- kind: ServiceAccount
name: prometheus
namespace: monitoring
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
An important pattern: a RoleBinding can reference a ClusterRole. This lets you define a role once at the cluster level (the ClusterRole) and bind it within specific namespaces through RoleBindings. The permissions are still scoped to the namespace where the RoleBinding lives. This is the right pattern for shared role definitions — define the permission set once, instantiate it with appropriate scope.
Default to RoleBinding over ClusterRoleBinding for namespace-scoped work. ClusterRoleBinding should be reserved for genuinely cluster-wide operations: monitoring agents, network plugins, cluster operators, security tooling.
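A sketch of that pattern, reusing the built-in `view` ClusterRole and binding it to a hypothetical IdP group (`oidc:qa-team` assumes the groups prefix from the apiserver OIDC config) within a single namespace:

```yaml
# RoleBinding referencing a ClusterRole: permissions defined once cluster-wide,
# granted only within the staging namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-team-view
  namespace: staging
subjects:
- kind: Group
  name: oidc:qa-team          # hypothetical IdP group; prefix depends on --oidc-groups-prefix
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole            # defined once at cluster level...
  name: view                   # ...built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```

The subjects hold `view` permissions in `staging` only; the same ClusterRole can be re-bound in other namespaces without duplicating the rule set.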
Service Accounts — The Machine Identity in Kubernetes
Every pod in Kubernetes runs as a service account. If you don’t specify one, it uses the default service account in the pod’s namespace.
The default service account is where many RBAC misconfigurations accumulate. When someone creates a RoleBinding without thinking about which SA to use, they often bind the permission to default. Now every pod in that namespace that doesn’t explicitly set a service account — including pods deployed by developers who aren’t thinking about RBAC — inherits that binding.
# Create a dedicated SA for each application
kubectl create serviceaccount app-backend -n production
# Check what any SA can currently do — use this in every audit
kubectl auth can-i --list --as=system:serviceaccount:production:app-backend -n production
# Check a specific action
kubectl auth can-i get secrets \
--as=system:serviceaccount:production:app-backend -n production
kubectl auth can-i create pods \
--as=system:serviceaccount:production:app-backend -n production
Disable Auto-Mounting the SA Token
By default, Kubernetes mounts the service account token into every pod at /var/run/secrets/kubernetes.io/serviceaccount/token. A pod that doesn’t need to call the Kubernetes API doesn’t need this token. Having it mounted increases the blast radius if the pod is compromised — the token can be used to call the K8s API with whatever RBAC permissions the SA has.
# Disable at the pod level
apiVersion: v1
kind: Pod
spec:
automountServiceAccountToken: false
serviceAccountName: app-backend
containers:
- name: app
image: my-app:latest
# Or at the service account level (applies to all pods using this SA)
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-backend
namespace: production
automountServiceAccountToken: false
For most application pods — anything that isn’t a Kubernetes operator, controller, or management tool — the K8s API token is unnecessary. Disable it.
Human Access to Kubernetes — Get Off Client Certificates
Kubernetes doesn’t manage human users natively. Authentication is delegated to an external mechanism. The most common approaches:
| Method | Notes |
|---|---|
| X.509 client certificates | Common for initial cluster setup; credentials are embedded in kubeconfig; cannot be revoked without rotating the CA |
| Static bearer tokens | Long-lived; avoid |
| OIDC via external IdP | Preferred for human access — supports SSO, MFA, and revocation via IdP |
| Webhook auth | Flexible, requires custom infrastructure |
X.509 certificates are the bootstrap pattern. Every managed Kubernetes offering generates an admin kubeconfig with a client certificate. The problem: you can’t revoke individual certificates without rotating the CA. If you’re giving human engineers access via client certificates, someone leaving doesn’t actually lose cluster access until the certificate expires.
OIDC is the right model. Configure the kube-apiserver to accept JWTs from your IdP, bind RBAC permissions to groups from the IdP, and revocation becomes “remove from IdP group” rather than “hope the certificate expires soon”:
# kube-apiserver flags for OIDC (managed clusters configure this via provider settings)
--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=my-cluster-client-id
--oidc-username-claim=email
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
# User's kubeconfig — uses an exec plugin to fetch an OIDC token
users:
- name: alice
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: kubectl-oidc-login
args:
- get-token
- --oidc-issuer-url=https://dex.company.com
- --oidc-client-id=kubernetes
With managed clusters:
# EKS: add IAM role as a cluster access entry (replaces the aws-auth ConfigMap)
aws eks create-access-entry \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/DevTeamRole \
--type STANDARD
aws eks associate-access-policy \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/DevTeamRole \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
--access-scope type=namespace,namespaces=production,staging
# GKE: get credentials; IAM roles map to cluster permissions
gcloud container clusters get-credentials my-cluster --region us-central1
# roles/container.developer → edit permissions
# But: use ClusterRoleBindings for fine-grained control rather than relying on GCP IAM roles
# AKS: bind Entra ID groups to Kubernetes RBAC
az aks get-credentials --name my-aks --resource-group rg-prod
kubectl create clusterrolebinding dev-team-view \
--clusterrole=view \
--group=ENTRA_GROUP_OBJECT_ID
Cloud IAM + Kubernetes RBAC: The Integration Points
EKS Pod Identity / IRSA (revisited)
The annotation on the Kubernetes ServiceAccount is the bridge:
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-backend
namespace: production
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/AppBackendRole
Kubernetes RBAC controls what the pod can do inside the cluster. The IAM role controls what the pod can do in AWS. Both must be explicitly granted; neither inherits from the other.
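The AWS-side half of that bridge is the IAM role's trust policy, which pins the role to one ServiceAccount in one namespace. A hedged sketch; the account ID, region, and OIDC provider ID are placeholders for your cluster's values:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:production:app-backend",
        "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com"
      }
    }
  }]
}
```

The `sub` condition is what makes the grant namespace- and SA-specific; a trust policy without it would let any pod in the cluster assume the role.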
GKE Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-backend
namespace: production
annotations:
iam.gke.io/gcp-service-account: [email protected]
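The annotation is only half the GKE link: the GCP service account must also permit the Kubernetes SA to impersonate it via a `roles/iam.workloadIdentityUser` binding. A sketch with placeholder project and account names:

```shell
# Hypothetical values: project-id, app-sa, and the [namespace/KSA] pair
# must match the annotated ServiceAccount above.
gcloud iam service-accounts add-iam-policy-binding \
  app-sa@project-id.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:project-id.svc.id.goog[production/app-backend]"
```

Without this binding the annotation is inert — the pod's token exchange is rejected on the GCP side.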
AKS Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-backend
namespace: production
annotations:
azure.workload.identity/client-id: "MANAGED_IDENTITY_CLIENT_ID"
---
apiVersion: v1
kind: Pod
metadata:
labels:
azure.workload.identity/use: "true"
spec:
serviceAccountName: app-backend
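As with GKE, the AKS annotation needs a cloud-side counterpart: a federated credential on the managed identity that trusts the cluster's OIDC issuer for this specific ServiceAccount. A sketch with placeholder identity and resource group names:

```shell
# Hypothetical values: my-managed-identity and rg-prod are placeholders;
# AKS_OIDC_ISSUER comes from `az aks show --query oidcIssuerProfile.issuerUrl`.
az identity federated-credential create \
  --name app-backend-fed \
  --identity-name my-managed-identity \
  --resource-group rg-prod \
  --issuer "$AKS_OIDC_ISSUER" \
  --subject system:serviceaccount:production:app-backend
```

The `--subject` value must match the namespace and SA name exactly, which is what scopes the managed identity to that one workload.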
RBAC Audit — What to Check First
# Start here: who has cluster-admin?
kubectl get clusterrolebindings -o json | \
jq '.items[] | select(.roleRef.name=="cluster-admin") |
{binding: .metadata.name, subjects: .subjects}'
# cluster-admin should bind to almost nobody — review every result
# Find ClusterRoles with wildcard permissions
kubectl get clusterroles -o json | \
jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'
# What can the default SA do in each namespace?
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
echo "=== $ns ==="
  kubectl auth can-i --list --as=system:serviceaccount:${ns}:default -n ${ns} 2>/dev/null | head -10
done
# What can a specific SA do?
kubectl auth can-i --list \
--as=system:serviceaccount:production:app-backend \
-n production
# Check whether an SA can escalate — key risk indicators
kubectl auth can-i get secrets -n production \
--as=system:serviceaccount:production:app-backend
kubectl auth can-i create pods -n production \
--as=system:serviceaccount:production:app-backend
kubectl auth can-i create rolebindings -n production \
--as=system:serviceaccount:production:app-backend
Creating pods and creating rolebindings are privilege escalation primitives. A service account that can create pods can run a pod with a different, more powerful SA. A service account that can create rolebindings can grant itself more permissions.
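The first of those is worth making concrete. A hypothetical sketch of the manifest an attacker holding only `create pods` in a namespace could submit, assuming a more privileged SA named `deployer` exists there:

```yaml
# The attacker never needs the deployer SA's credentials directly:
# scheduling a pod as that SA hands them its token.
apiVersion: v1
kind: Pod
metadata:
  name: innocuous-looking-job
  namespace: production
spec:
  serviceAccountName: deployer          # borrow the stronger identity
  containers:
  - name: shell
    image: alpine:latest
    # Dump the auto-mounted token, then call the API with deployer's permissions
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```

This is why `create pods` should be treated with the same suspicion as `iam:PassRole`: it converts one identity's permissions into any identity schedulable in that namespace.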
Useful Tools
# rbac-tool — visualize and analyze RBAC (install: kubectl krew install rbac-tool)
kubectl rbac-tool viz # generate a graph of all bindings
kubectl rbac-tool who-can get secrets -n production
kubectl rbac-tool lookup [email protected]
# rakkess — access matrix for a subject
kubectl rakkess --sa production:app-backend
# audit2rbac — generate minimal RBAC from audit logs
audit2rbac --filename /var/log/kubernetes/audit.log \
--serviceaccount production:app-backend
Common RBAC Misconfigurations
| Misconfiguration | Risk | Fix |
|---|---|---|
| `cluster-admin` bound to application SA | Full cluster takeover from compromised pod | Minimal ClusterRole; scope to namespace where possible |
| `list` or wildcard on secrets | Read all secrets in scope — includes credentials, API keys | Grant `get` on specific named secrets only |
| `default` SA with non-trivial permissions | Every pod in the namespace inherits the permission | Bind permissions to dedicated SAs; `automountServiceAccountToken: false` on `default` |
| ClusterRoleBinding for namespace-scoped work | Namespace work with cluster-wide permission | Always prefer RoleBinding; ClusterRoleBinding only for genuinely cluster-wide needs |
| Binding users by username string | Hard to revoke; doesn’t sync with IdP | Bind groups from IdP; revocation propagates through group membership |
| SA can `create pods` or `create rolebindings` | Privilege escalation path | Audit and remove these from non-privileged SAs |
Framework Alignment
| Framework | Reference | What It Covers Here |
|---|---|---|
| CISSP | Domain 5 — Identity and Access Management | Kubernetes RBAC operates as a full IAM system at the platform layer, independent of cloud IAM |
| CISSP | Domain 3 — Security Architecture | Two independent authorization layers (cloud + K8s) must each be designed and audited — one does not compensate for the other |
| ISO 27001:2022 | 5.15 Access control | Kubernetes RBAC Roles, ClusterRoles, and bindings implement access control within the container platform |
| ISO 27001:2022 | 5.18 Access rights | Service account provisioning, OIDC-based human access, and workload identity integration with cloud IAM |
| ISO 27001:2022 | 8.2 Privileged access rights | cluster-admin and wildcard RBAC bindings represent the highest-privilege grants in Kubernetes |
| SOC 2 | CC6.1 | Kubernetes RBAC is the access control mechanism for the container platform layer in CC6.1 |
| SOC 2 | CC6.3 | Binding revocation, SA token disabling, and OIDC group-based access removal satisfy CC6.3 requirements |
Key Takeaways
- Kubernetes RBAC and cloud IAM are separate authorization layers — both must be secured; strong cloud IAM with weak K8s RBAC is still a vulnerable cluster
- `cluster-admin` bindings are the first thing to audit in any cluster — the blast radius of a compromised pod with cluster-admin is the entire cluster
- Disable `automountServiceAccountToken` on service accounts and pods that don’t call the Kubernetes API — most application pods don’t need it
- Use OIDC for human access rather than client certificates; revocation via IdP is instant and reliable
- Bind groups from IdP rather than individual usernames; revocation propagates automatically when someone leaves
- A service account that can `create pods` or `create rolebindings` is a privilege escalation path — audit for these in every namespace
What’s Next
EP12 is the capstone: Zero Trust IAM — how all the concepts in this series come together into an architecture that assumes nothing is implicitly trusted, verifies everything explicitly, and limits blast radius through least privilege enforced at every layer.
Next: Zero trust access in the cloud
Get EP12 in your inbox when it publishes → linuxcent.com/subscribe