Introduction
I once inherited a GCP environment where the previous team had taken what they thought was a shortcut. They had a folder called Production with twelve projects in it. Rather than grant developers access to each project individually, they bound roles/editor at the folder level. One binding, twelve projects, all covered. Fast.
When I audited what roles/editor on that folder actually meant, I found it gave every developer in that binding write access to Cloud SQL databases they’d never heard of, BigQuery datasets from other teams, Pub/Sub topics in shared services, and Cloud Storage buckets that held data exports. Not because anyone intended that. Because permissions in GCP flow downward through the hierarchy, and a broad role at a high level means a broad role everywhere below it.
The developer who made that binding understood “Editor means edit access.” They didn’t think through what “edit access at the folder level” means across twelve projects. This is the GCP IAM trap that catches teams coming from AWS: the hierarchy feels like an organizational convenience feature, not an access control mechanism. It’s both.
This episode is about understanding GCP IAM through that lens — not as a system for granting access, but as a system where every grant propagates in ways you need to trace before you commit.
The Resource Hierarchy — Not Just Org Structure
GCP’s resource hierarchy is the backbone of its IAM model:
Organization (e.g., company.com)
└── Folder (e.g., Production, Development, Shared-Services)
└── Folder (nested, optional — up to 10 levels)
└── Project (unit of resource ownership and billing)
└── Resource (GCE instance, GCS bucket, Cloud SQL, BigQuery, etc.)
The critical rule: IAM bindings at any level inherit downward to every node below.
Org IAM binding:
alice@company.com → roles/viewer (org-level)
↓ inherited by
Folder: Production
↓ inherited by
Project: prod-web-app
↓ inherited by
GCS bucket "prod-assets"
Result: alice can list and read resources across the ENTIRE org,
across every folder, every project, every resource.
Even if none of those resources have a direct binding for alice.
roles/viewer at the org level sounds benign — it’s just read access. But read access to everything in the organization, including infrastructure configurations, customer data exports in GCS, BigQuery analytics, Cloud SQL connection details, and Kubernetes cluster configs. Not benign.
Before making any binding above the project level, trace it down. Ask: what does this role grant, and at every project and resource below this folder, am I comfortable with that?
# Understand your org structure before making changes
gcloud organizations list
gcloud resource-manager folders list --organization=ORG_ID
gcloud projects list --filter="parent.id=FOLDER_ID"
# See all existing bindings at the org level — do this regularly
gcloud organizations get-iam-policy ORG_ID --format=json | jq '.bindings[]'
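To make the inheritance rule tangible without touching a live org, here's a local sketch — assuming only jq is available, with fabricated policy files shaped like `get-iam-policy` output (members and roles are hypothetical) — that unions bindings from the three levels the way inheritance effectively does:

```shell
# Fabricated policies mimicking get-iam-policy output at three hierarchy levels
cat > /tmp/org.json <<'EOF'
{"bindings":[{"role":"roles/viewer","members":["user:alice@company.com"]}]}
EOF
cat > /tmp/folder.json <<'EOF'
{"bindings":[{"role":"roles/editor","members":["group:devs@company.com"]}]}
EOF
cat > /tmp/project.json <<'EOF'
{"bindings":[{"role":"roles/storage.objectAdmin","members":["serviceAccount:etl@p.iam.gserviceaccount.com"]}]}
EOF

# A resource in the project is governed by the UNION of all three policies —
# this is what "bindings inherit downward" means in practice.
jq -s '{effectiveBindings: map(.bindings) | add}' /tmp/org.json /tmp/folder.json /tmp/project.json
```

The point of the exercise: when you audit access to a project, its own IAM policy is only one of the inputs — every ancestor's policy is part of the effective answer.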
Member Types — Who Can Hold a Binding
GCP uses the term member (being renamed to principal) for the identity in a binding:
| Member Type | Format | Notes |
|---|---|---|
| Google Account | user:alice@company.com | Individual Google/Workspace account |
| Service Account | serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com | Machine identity |
| Google Group | group:team@company.com | Workspace group |
| Workspace Domain | domain:company.com | All users in a Workspace domain |
| All Authenticated | allAuthenticatedUsers | Any authenticated Google identity — extremely broad |
| All Users | allUsers | Anonymous + authenticated — public access |
| Workload Identity | principal://iam.googleapis.com/... | External workloads via WIF |
The ones that have caused data exposure incidents: allAuthenticatedUsers and allUsers. Any GCS bucket or GCP resource bound to allAuthenticatedUsers is accessible to any of the ~3 billion Google accounts in existence. I have seen production customer data exposed this way — a developer testing a public CDN pattern applied the binding to the wrong bucket.
Audit for these regularly:
# Find any project-level binding with allUsers or allAuthenticatedUsers
gcloud projects get-iam-policy my-project --format=json \
| jq '.bindings[] | select(.members[] | contains("allUsers") or contains("allAuthenticatedUsers"))'
# Check all GCS buckets in a project for public access
gsutil iam get gs://BUCKET_NAME \
| grep -E "(allUsers|allAuthenticatedUsers)"
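One refinement worth knowing: substring matching with `contains()` is imprecise; an exact match on the member string is safer and never over-matches. A local sketch against a fabricated policy file (members are hypothetical):

```shell
# Fabricated policy: one public binding, one ordinary one
cat > /tmp/audit-sample.json <<'EOF'
{"bindings":[
  {"role":"roles/storage.objectViewer","members":["allUsers"]},
  {"role":"roles/viewer","members":["user:dana@company.com"]}
]}
EOF
# Exact-match the two public principals instead of substring matching
jq '.bindings[] | select(any(.members[]; . == "allUsers" or . == "allAuthenticatedUsers"))' /tmp/audit-sample.json
```

The same filter works unchanged on real `get-iam-policy --format=json` output.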
Role Types — Choose the Right Granularity
Basic (Primitive) Roles — Don’t Use in Production
roles/viewer → read access to most resources across the entire project
roles/editor → read + write to most resources
roles/owner → full access including IAM management
These are legacy roles from before GCP had service-specific roles. roles/editor is particularly dangerous because it grants write access across almost every GCP service in the project. Use it in production and you have no meaningful separation of duties between your services.
I’ve seen roles/editor granted to a data pipeline service account because “it needed access to BigQuery, Cloud Storage, and Pub/Sub.” All three of those services have predefined roles — three specific bindings would have covered the need. Instead, one broad role also granted access to Cloud SQL, Kubernetes, Secret Manager, and Compute Engine — none of which the pipeline needed.
Predefined Roles — The Default Correct Choice
Service-specific roles managed and updated by Google. For most use cases, these are the right choice:
# Find predefined roles for Cloud Storage
gcloud iam roles list --filter="name:roles/storage" --format="table(name,title)"
# roles/storage.objectViewer — read and list objects (no bucket-level control)
# roles/storage.objectCreator — create objects, cannot read or delete
# roles/storage.objectAdmin — full object control
# roles/storage.admin — full bucket + object control (much broader)
# See exactly what permissions a predefined role includes
gcloud iam roles describe roles/storage.objectViewer
The distinction between roles/storage.objectViewer and roles/storage.admin is the difference between “can read objects” and “can read objects, create objects, delete objects, and modify bucket IAM policies.” Use the narrowest role that covers the actual need.
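A quick way to see what “narrowest role” buys you is to diff permission lists. A sketch with fabricated, abbreviated lists — in practice you’d feed it the real output of `gcloud iam roles describe` for each role:

```shell
# Abbreviated, fabricated permission lists for illustration only
printf '%s\n' storage.objects.get storage.objects.list | sort > /tmp/objectViewer.txt
printf '%s\n' storage.objects.get storage.objects.list \
              storage.objects.delete storage.buckets.setIamPolicy | sort > /tmp/storageAdmin.txt

# Lines unique to the second file = what the broader role over-grants
comm -13 /tmp/objectViewer.txt /tmp/storageAdmin.txt
```

`comm` requires sorted input, hence the `sort` on both lists; the output here is exactly the extra ground the broader role covers.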
Custom Roles — When Predefined Is Still Too Broad
When you need finer control than any predefined role offers, create a custom role:
cat > custom-log-reader.yaml << 'EOF'
title: "Log Reader"
description: "Read application logs from Cloud Logging — nothing else"
stage: "GA"
includedPermissions:
- logging.logEntries.list
- logging.logs.list
- logging.logMetrics.get
- logging.logMetrics.list
EOF
# Create at project level (available within one project)
gcloud iam roles create LogReader \
--project=my-project \
--file=custom-log-reader.yaml
# Or at org level (reusable across projects in the org)
gcloud iam roles create LogReader \
--organization=ORG_ID \
--file=custom-log-reader.yaml
# Grant the custom role
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:SA_NAME@my-project.iam.gserviceaccount.com" \
--role="projects/my-project/roles/LogReader"
Custom roles have an operational overhead: when Google adds new permissions to a service, predefined roles are updated automatically. Custom roles are not — you have to update them manually. For roles like “Log Reader” that are unlikely to need new permissions, this isn’t a concern. For roles like “App Admin” that span many services, it becomes a maintenance burden.
IAM Policy Bindings — How Access Is Actually Granted
The mechanism for granting access in GCP is adding a binding to a resource’s IAM policy. A binding is: member + role + (optional condition).
# Grant a role on a project (all resources in the project inherit this)
gcloud projects add-iam-policy-binding my-project \
--member="user:[email protected]" \
--role="roles/storage.objectViewer"
# Grant on a specific GCS bucket (narrower — only this bucket)
gcloud storage buckets add-iam-policy-binding gs://prod-assets \
--member="serviceAccount:SA_NAME@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"
# Grant on a specific BigQuery table (illustrative table name; dataset-level
# sharing is managed via the dataset's access entries instead)
bq add-iam-policy-binding \
--member="group:analytics-team@company.com" \
--role="roles/bigquery.dataViewer" \
my-project:analytics_dataset.events
# View the current IAM policy on a project
gcloud projects get-iam-policy my-project --format=json
# View a specific resource's policy
gcloud storage buckets get-iam-policy gs://prod-assets
The choice between project-level and resource-level binding has real consequences. A binding on the GCS bucket affects only that bucket. A binding at the project level affects the bucket AND every other resource in the project. Default to the most specific scope available. Only move up the hierarchy when the alternative is an unmanageable number of bindings.
Conditional Bindings — Time-Limited and Context-Scoped Access
Conditions scope when a binding applies. They use CEL (Common Expression Language):
# Temporary access for a contractor — automatically expires
gcloud projects add-iam-policy-binding my-project \
--member="user:contractor@example.com" \
--role="roles/storage.objectViewer" \
--condition="expression=request.time < timestamp('2026-06-30T00:00:00Z'),title=Contractor access Q2 2026"
# Access only from the corporate network, via an Access Context Manager access
# level (IAM conditions can't test raw client IPs directly; POLICY_NUM and
# corp_network are placeholders, and enforcement depends on the service)
gcloud projects add-iam-policy-binding my-project \
--member="user:analyst@company.com" \
--role="roles/bigquery.admin" \
--condition="expression='accessPolicies/POLICY_NUM/accessLevels/corp_network' in request.auth.access_levels,title=Corp network only"
Temporary access that automatically expires is one of the most practical applications of conditional bindings. Instead of “I’ll grant access and remember to remove it,” you set an expiry and it removes itself. The cognitive overhead of tracking temporary grants doesn’t disappear — you still need to know the grant exists — but the risk of it outliving its purpose drops significantly.
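Expired bindings don’t remove themselves from the policy document — they just stop matching — so it’s worth sweeping for lapsed conditions periodically. A sketch against a fabricated policy (in practice, pipe in `get-iam-policy --format=json`); ISO-8601 UTC timestamps compare correctly as plain strings, which keeps the check simple:

```shell
cat > /tmp/cond-policy.json <<'EOF'
{"bindings":[
  {"role":"roles/storage.objectViewer","members":["user:contractor@example.com"],
   "condition":{"title":"Old contractor access",
                "expression":"request.time < timestamp('2024-01-01T00:00:00Z')"}}
]}
EOF
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Pull each embedded expiry timestamp; lexical compare works for ISO-8601 UTC
jq -r '.bindings[] | .condition.expression? // empty' /tmp/cond-policy.json \
  | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:]{8}Z' \
  | while read -r expiry; do
      [ "$expiry" \< "$now" ] && echo "EXPIRED: $expiry"
    done
# → EXPIRED: 2024-01-01T00:00:00Z
```

Lapsed bindings flagged this way can be deleted outright — they grant nothing and only add audit noise.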
Service Accounts — GCP’s Machine Identity
Service accounts are the machine identity in GCP. They should be used for every workload that needs to call GCP APIs — GCE instances, GKE pods, Cloud Functions, Cloud Run services.
# Create a service account
gcloud iam service-accounts create app-backend \
--display-name="App Backend Service Account" \
--project=my-project
SA_EMAIL="app-backend@my-project.iam.gserviceaccount.com"
# Grant it the specific role it needs — on the specific resource it needs
gcloud storage buckets add-iam-policy-binding gs://app-assets \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/storage.objectViewer"
# Attach to a GCE instance
gcloud compute instances create my-vm \
--service-account="${SA_EMAIL}" \
--scopes="cloud-platform" \
--zone=us-central1-a
From inside the VM, Application Default Credentials (ADC) handles authentication automatically:
# From the VM — ADC uses the attached SA without any credential configuration
gcloud auth application-default print-access-token
# Or via the metadata server directly
curl -H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
Service Account Keys — The Antipattern to Avoid
A service account key is a JSON file containing a private key. It’s long-lived, it doesn’t expire automatically, and if it leaks it gives an attacker persistent access as that service account until someone discovers and revokes it.
# Creating a key — only if there is genuinely no alternative
gcloud iam service-accounts keys create key.json --iam-account="${SA_EMAIL}"
# This generates a long-lived credential. It will exist until explicitly deleted.
# List all active keys — do this in every audit
gcloud iam service-accounts keys list --iam-account="${SA_EMAIL}"
# Delete a key
gcloud iam service-accounts keys delete KEY_ID --iam-account="${SA_EMAIL}"
In the GCP environment I mentioned earlier — the one with roles/editor at the folder level — I also found 23 service account key files downloaded across the team’s laptops over 18 months. Several were for accounts that no longer existed. Nobody had a complete list of which keys were still valid and where they were stored. That’s not a hypothetical attack surface: that’s a breach waiting for a laptop to be stolen.
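Building that complete list is tractable offline. A sketch that triages key inventory from `keys list --format=json` output — the sample JSON below is fabricated, but validAfterTime and keyType are the real field names:

```shell
cat > /tmp/keys.json <<'EOF'
[{"name":"projects/p/serviceAccounts/sa/keys/abc123",
  "validAfterTime":"2021-03-01T00:00:00Z","keyType":"USER_MANAGED"},
 {"name":"projects/p/serviceAccounts/sa/keys/def456",
  "validAfterTime":"2026-01-01T00:00:00Z","keyType":"SYSTEM_MANAGED"}]
EOF
# User-managed keys older than the cutoff are rotation/deletion candidates;
# system-managed keys are rotated by Google and can be ignored here.
cutoff="2025-01-01T00:00:00Z"
jq -r --arg cutoff "$cutoff" \
  '.[] | select(.keyType == "USER_MANAGED" and .validAfterTime < $cutoff) | .name' /tmp/keys.json
```

Run this per service account, and you have the inventory nobody on that team had.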
Never create service account keys when:
– Code runs on GCE/GKE/Cloud Run/Cloud Functions — use the attached service account and ADC
– Code runs in GitHub Actions — use Workload Identity Federation
– Code runs on-premises with Kubernetes — use Workload Identity Federation with OIDC
Service Account Impersonation — The Right Alternative to Keys
Instead of downloading a key, grant a user or service account permission to impersonate the service account. They generate a short-lived token, not a permanent credential:
# Allow alice to impersonate the service account
gcloud iam service-accounts add-iam-policy-binding "${SA_EMAIL}" \
--member="user:alice@company.com" \
--role="roles/iam.serviceAccountTokenCreator"
# Alice generates a token for the SA — no key file, short-lived
gcloud auth print-access-token --impersonate-service-account="${SA_EMAIL}"
# Or set impersonation as the default for subsequent gcloud commands
gcloud config set auth/impersonate_service_account "${SA_EMAIL}"
gcloud storage ls gs://app-assets # runs as the SA
This is the right model for humans who need to act as service accounts for debugging or deployment: impersonate, use, done. The token expires. No file to manage.
Workload Identity Federation — Credentials Eliminated
The cleanest solution for any workload running outside GCP that needs to call GCP APIs: Workload Identity Federation. The external workload authenticates with its native identity (a GitHub Actions OIDC JWT, an AWS IAM role, a Kubernetes service account token), exchanges it for a short-lived GCP access token, and never handles a service account key.
# Create a Workload Identity Pool
gcloud iam workload-identity-pools create "github-actions-pool" \
--project=my-project \
--location=global \
--display-name="GitHub Actions WIF Pool"
# Create a provider (GitHub OIDC)
gcloud iam workload-identity-pools providers create-oidc "github-provider" \
--project=my-project \
--location=global \
--workload-identity-pool="github-actions-pool" \
--issuer-uri="https://token.actions.githubusercontent.com" \
--attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
--attribute-condition="assertion.repository_owner == 'my-org'"
# Allow a specific GitHub repo to impersonate the SA
gcloud iam service-accounts add-iam-policy-binding "${SA_EMAIL}" \
--role="roles/iam.workloadIdentityUser" \
--member="principalSet://iam.googleapis.com/projects/PROJECT_NUM/locations/global/workloadIdentityPools/github-actions-pool/attribute.repository/my-org/my-repo"
GitHub Actions workflow — no key files, no secrets stored in GitHub:
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # required for OIDC token request
      contents: read
    steps:
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: "projects/PROJECT_NUM/locations/global/workloadIdentityPools/github-actions-pool/providers/github-provider"
          service_account: "app-backend@my-project.iam.gserviceaccount.com"
      - run: gcloud storage cp dist/ gs://app-assets/ --recursive
The OIDC JWT from GitHub is presented to GCP, which verifies it against GitHub’s public keys, checks the attribute mapping and condition (only the specified repo can use this), and issues a short-lived GCP access token. The credential exists for the duration of the job and is then gone.
IAM Deny Policies — Org-Wide Guardrails
GCP added standalone deny policies, separate from allow bindings. A matching deny overrides any grant, which makes them useful as org-wide guardrails:
cat > deny-iam-escalation.json << 'EOF'
{
"displayName": "Deny IAM escalation permissions to non-admins",
"rules": [{
"denyRule": {
"deniedPrincipals": ["principalSet://goog/group/developers@company.com"],
"deniedPermissions": [
"iam.googleapis.com/roles.create",
"iam.googleapis.com/roles.update",
"iam.googleapis.com/serviceAccounts.actAs"
]
}
}]
}
EOF
gcloud iam policies create deny-iam-escalation-policy \
--attachment-point="cloudresourcemanager.googleapis.com/projects/my-project" \
--kind=denypolicies \
--policy-file=deny-iam-escalation.json
iam.serviceAccounts.actAs is worth calling out specifically. It’s the GCP equivalent of AWS’s iam:PassRole — it allows an identity to make a service act as a specified service account. If a developer can call actAs on a high-privileged service account, they can launch a GCE instance using that service account and then operate with its permissions. Same privilege escalation pattern as iam:PassRole, different name. Deny it for anyone who doesn’t explicitly need it.
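Auditing who can actAs is complicated by the fact that the permission rides inside roles rather than appearing in bindings directly. A sketch that flags bindings for roles that include iam.serviceAccounts.actAs — the role list is an illustrative subset (roles/iam.serviceAccountUser carries it, and the basic editor/owner roles do too), and the policy file is fabricated:

```shell
cat > /tmp/proj-policy.json <<'EOF'
{"bindings":[
  {"role":"roles/iam.serviceAccountUser","members":["user:dev@company.com"]},
  {"role":"roles/logging.viewer","members":["user:auditor@company.com"]}
]}
EOF
# Flag bindings whose role is known to carry iam.serviceAccounts.actAs
# (illustrative subset — extend the list for your environment)
jq '.bindings[] | select(.role == "roles/iam.serviceAccountUser"
                      or .role == "roles/editor"
                      or .role == "roles/owner")' /tmp/proj-policy.json
```

For a definitive answer, check each candidate role with `gcloud iam roles describe` and grep its includedPermissions for serviceAccounts.actAs.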
Framework Alignment
| Framework | Reference | What It Covers Here |
|---|---|---|
| CISSP | Domain 5 — Identity and Access Management | GCP’s hierarchical model and service account patterns are the primary IAM constructs for GCP environments |
| CISSP | Domain 3 — Security Architecture | Resource hierarchy design determines access inheritance — architectural decisions with direct security implications |
| ISO 27001:2022 | 5.15 Access control | GCP IAM bindings are the technical implementation of access control policy in GCP environments |
| ISO 27001:2022 | 5.18 Access rights | Service account provisioning, conditional bindings with expiry, and workload identity federation |
| ISO 27001:2022 | 8.2 Privileged access rights | Folder/org-level bindings and basic roles represent the highest-risk privilege grants in GCP |
| SOC 2 | CC6.1 | IAM bindings and Workload Identity Federation address machine identity controls for CC6.1 |
| SOC 2 | CC6.3 | Conditional bindings with time-bound expiry directly satisfy access removal requirements |
Key Takeaways
- GCP IAM is hierarchical — bindings inherit downward; a binding at org or folder level has much larger scope than it appears
- Basic roles (viewer/editor/owner) are too coarse for production; use predefined or custom roles and grant at the narrowest scope
- Service account keys are a long-lived credential antipattern; use ADC on GCP infrastructure, impersonation for humans, and Workload Identity Federation for external workloads
- allAuthenticatedUsers and allUsers bindings expose resources to the internet — audit for these in every environment
- iam.serviceAccounts.actAs is a privilege escalation vector — treat it like iam:PassRole
- Conditional bindings with expiry dates are better than “I’ll remember to remove this later”
What’s Next
EP06 covers Azure RBAC and Entra ID — the most directory-centric of the three models, where Active Directory’s 25 years of enterprise history shapes both the strengths and the complexity of Azure’s access control.