SAML vs OIDC vs OAuth2: Which Protocol Handles Which Identity Problem

Reading Time: 6 minutes

The Identity Stack, Episode 10
EP09: Active Directory · EP10 · EP11: Identity Providers → …

Focus Keyphrase: SAML vs OIDC explained
Search Intent: Investigational
Meta Description: SAML, OAuth2, and OIDC solve different problems and are often confused. Here’s what each protocol does, when to use it, and how a browser SSO login actually works. (163 chars)


TL;DR

  • SAML 2.0 is a federation protocol for browser-based SSO — an IdP issues a signed XML assertion that a Service Provider trusts; designed for enterprise applications
  • OAuth2 is an authorization delegation protocol, not authentication — it lets an application act on your behalf without knowing your password; the access token says what, not who
  • OIDC (OpenID Connect) = OAuth2 + an identity layer — adds the id_token (a JWT containing who you are) on top of OAuth2’s access_token (what you can do)
  • SAML vs OIDC: SAML is XML, enterprise-native, stateful; OIDC is JSON/JWT, API-native, stateless — new applications almost always use OIDC
  • The id_token is a JWT — decode it at jwt.io and read every claim — it tells you exactly what the IdP asserts about the user
  • The browser SSO flow is three redirects: user → SP → IdP (authenticate) → SP (consume assertion)

The Problem: LDAP and Kerberos Don’t Cross the Internet

EP09 showed how authentication works inside a corporate network. LDAP and Kerberos both assume network proximity to the directory server — firewall-friendly ports don’t help when the authentication protocol requires a direct connection to the KDC or directory.

Internal network: works
  Browser → intranet app → LDAP/Kerberos → AD DC (all on 10.0.0.0/8)

Internet: breaks
  Browser → SaaS app (AWS) → LDAP/Kerberos → AD DC (on-prem behind firewall)
  ✗ KDC not reachable across NAT
  ✗ LDAP not exposed to internet (shouldn't be)
  ✗ Every SaaS app can't have its own LDAP connection to your DC

SAML was created in 2002 to solve this (SAML 2.0, the version in use today, followed in 2005); OIDC arrived in 2014. Both let identity assertions travel over HTTPS — the one protocol that crosses every firewall.


SAML 2.0: Enterprise Browser SSO

SAML 2.0 has three actors: the User, the Identity Provider (IdP), and the Service Provider (SP).

1. User visits SP (e.g., Salesforce)
   SP: "I don't know this user — send them to the IdP"
   ↓  HTTP redirect with SAMLRequest (base64-encoded AuthnRequest)

2. User arrives at IdP (e.g., Okta, AD FS, Entra ID)
   IdP: "Prove who you are" → user enters credentials
   IdP: generates a signed SAML Assertion (XML)
   ↓  HTTP POST to SP's Assertion Consumer Service (ACS) URL

3. SP receives the SAMLResponse
   SP: verifies the signature using IdP's public key
   SP: extracts user attributes from the Assertion
   SP: creates a session — user is logged in
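
The SAMLRequest in step 1 uses the redirect binding: the AuthnRequest XML is DEFLATE-compressed, base64-encoded, then URL-encoded into the query string. A round-trip sketch for debugging (the sample request here is a hypothetical stub, far shorter than a real one):

```shell
# Hypothetical, minimal AuthnRequest stand-in
req='<samlp:AuthnRequest ID="_abc123"/>'

# Encode the way the SP does: raw DEFLATE (no zlib header), then base64
enc=$(printf '%s' "$req" | python3 -c '
import sys, base64, zlib
c = zlib.compressobj(9, zlib.DEFLATED, -15)          # -15 = raw DEFLATE
data = c.compress(sys.stdin.buffer.read()) + c.flush()
print(base64.b64encode(data).decode())')

# Decode it back -- the half you actually use on a captured login URL
# (URL-decode the query parameter first if you copied it from the browser)
printf '%s' "$enc" | python3 -c '
import sys, base64, zlib
print(zlib.decompress(base64.b64decode(sys.stdin.read()), -15).decode())'
```

The POST binding, used for the SAMLResponse in step 2, is plain base64 without the DEFLATE step.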

The SAML Assertion is an XML document signed by the IdP. It contains:

<saml:Assertion>
  <saml:Issuer>https://idp.corp.com</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      [email protected]
    </saml:NameID>
  </saml:Subject>
  <saml:Conditions
    NotBefore="2026-04-27T01:00:00Z"
    NotOnOrAfter="2026-04-27T01:05:00Z">  ← short-lived: replay protection
  </saml:Conditions>
  <saml:AttributeStatement>
    <saml:Attribute Name="email">
      <saml:AttributeValue>[email protected]</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="groups">
      <saml:AttributeValue>engineers</saml:AttributeValue>
      <saml:AttributeValue>sre-team</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>

The SP trusts the assertion because it’s signed with the IdP’s private key, and the SP has the IdP’s public certificate configured. No direct connection between SP and IdP needed during authentication — only the browser carries the assertion.
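
One way to check an assertion's signature offline is the xmlsec1 CLI. A sketch with placeholder file names (idp.pem is the IdP signing cert from your SP configuration and response.xml the decoded SAMLResponse; real documents usually need the --id-attr hint so the signature reference resolves):

```shell
# Placeholder files, so the fallback message fires until you supply real ones
msg=$(xmlsec1 --verify --pubkey-cert-pem idp.pem \
              --id-attr:ID Assertion response.xml 2>&1 \
      || echo 'verification skipped: supply the real cert and response files')
echo "$msg"
```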

SP-initiated vs IdP-initiated:
– SP-initiated: user visits the SP, gets redirected to IdP, authenticates, redirected back — the common flow
– IdP-initiated: user starts at the IdP (e.g., company portal), clicks an app, IdP sends assertion directly — simpler but no SP-generated RequestID, so the SP can’t verify the request was expected (a security concern)


OAuth2: Authorization Delegation (Not Authentication)

This distinction is important and consistently confused: OAuth2 is for authorization, not authentication.

OAuth2 solves: “I want to let GitHub Actions post to my Slack without giving GitHub my Slack password.”

Resource Owner (you)  → grants permission to →  Client (GitHub Actions)
                                                        │
                                                        │ access_token
                                                        ▼
                                               Resource Server (Slack API)
                                               "this token can post messages"

The access_token answers “what can this client do?” not “who is this user?” A resource server receiving an access token knows the token is valid and what scopes it carries — it does not necessarily know which human authorized it.

The four OAuth2 grant types:

Grant                Use case
Authorization Code   Web apps (server-side) — most secure, recommended
PKCE (+ Auth Code)   Native/SPA apps — Auth Code without client secret
Client Credentials   Machine-to-machine (no user) — service accounts
Device Code          Devices without browsers (smart TVs, CLIs)

The Implicit grant (tokens in URL fragment) is deprecated. Don’t use it.
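
A Client Credentials request is a single form-encoded POST to the token endpoint. A sketch with a hypothetical IdP URL and client names (substitute your own; per RFC 6749, the secret may also go in an HTTP Basic auth header instead of the body):

```shell
# Placeholder endpoint and client -- this exact call fails until you
# point it at your IdP's real /token URL
curl -s --max-time 5 https://idp.corp.com/token \
  -d grant_type=client_credentials \
  -d client_id=ci-pipeline \
  -d client_secret="$CLIENT_SECRET" \
  -d scope=deploy:write \
  || echo '(no IdP behind the placeholder URL)'
# Success response shape (RFC 6749):
# { "access_token": "eyJ...", "token_type": "Bearer", "expires_in": 3600 }
```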


OIDC: OAuth2 + Who You Are

OpenID Connect adds an identity layer to OAuth2 through the id_token — a JWT that the IdP signs, containing claims about the authenticated user.

Authorization Code flow with OIDC:

1. Client redirects user to IdP:
   GET /authorize?
     response_type=code
     &client_id=myapp
     &scope=openid email profile    ← "openid" scope triggers OIDC
     &redirect_uri=https://app.com/callback
     &state=random-nonce            ← CSRF token (not the same thing as the OIDC nonce parameter)

2. IdP authenticates user, returns:
   GET /callback?code=AUTH_CODE&state=random-nonce

3. Client exchanges code for tokens:
   POST /token
   grant_type=authorization_code&code=AUTH_CODE...

4. IdP returns:
   {
     "access_token": "eyJ...",    ← what the user authorized
     "id_token": "eyJ...",        ← who the user is (JWT)
     "token_type": "Bearer",
     "expires_in": 3600
   }

The id_token decoded:

{
  "iss": "https://idp.corp.com",          ← issuer (the IdP)
  "sub": "user-guid-12345",               ← subject (stable user identifier)
  "aud": "myapp",                          ← audience (your client_id)
  "exp": 1745730000,                       ← expiry (Unix timestamp)
  "iat": 1745726400,                       ← issued at
  "email": "[email protected]",
  "name": "Vamshi Krishna",
  "groups": ["engineers", "sre-team"]     ← custom claims from IdP
}
# Decode any JWT payload at the command line (no verification — for debugging only)
# Plain `base64 -d` chokes on base64url without padding, so restore it first:
python3 -c 'import base64, json, sys
p = sys.argv[1].split(".")[1]
print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))' "eyJ..."

# Or: jwt.io — paste the token, read every claim

sub is the stable user identifier. Email addresses change. Names change. The sub claim is the IdP’s internal identifier for the user — use it as the primary key when storing user data. Never store email as the primary key.
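
The takeaway above, sketched as a table design (sqlite for illustration; the table and column names are mine):

```shell
# Hypothetical users table; the point is which column is the PRIMARY KEY
db=$(mktemp)
sqlite3 "$db" <<'EOF'
CREATE TABLE users (
  sub   TEXT PRIMARY KEY,   -- stable IdP identifier: the real key
  email TEXT,               -- mutable attribute, never the key
  name  TEXT                -- mutable attribute
);
INSERT INTO users VALUES ('user-guid-12345', '[email protected]', 'Vamshi Krishna');
-- an email change is an UPDATE, not a new identity:
UPDATE users SET email = '[email protected]' WHERE sub = 'user-guid-12345';
EOF
out=$(sqlite3 "$db" "SELECT sub, email FROM users;")
echo "$out"   # user-guid-12345|[email protected]
rm -f "$db"
```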


SAML vs OIDC: When to Use Which

                      SAML 2.0                            OIDC
Format                XML                                 JSON / JWT
Transport             HTTP POST (browser only)            HTTP redirect + JSON API
Age                   2002                                2014
Enterprise adoption   Very high (AD FS, Okta, Entra ID)   Very high (newer apps)
API-friendly          No                                  Yes
Mobile apps           No                                  Yes
Complexity            High (XML, schemas, signatures)     Medium (JWT, JSON)
Single Logout         Specified (rarely works well)       Optional, inconsistent

Use SAML when: You’re integrating with an enterprise SaaS that only supports SAML (Salesforce classic, legacy HR systems), or your IdP team mandates it.

Use OIDC when: You’re building a new application, integrating with a modern IdP, or need API-based token validation. OIDC is the default for everything new.

Use OAuth2 (Client Credentials) when: Service-to-service authentication with no user — your CI/CD pipeline authenticating to an API, your microservice calling another microservice.


A Complete Browser SSO Flow (OIDC)

1. User visits https://app.corp.com (not logged in)
   App: no session → redirect to IdP

2. GET https://idp.corp.com/authorize?
        response_type=code
        &client_id=app-corp
        &scope=openid email
        &redirect_uri=https://app.corp.com/callback
        &state=abc123
        &nonce=xyz789

3. IdP: user is not authenticated → show login form
   User: enters [email protected] + password
   (or: IdP sees existing session cookie → skip login)

4. IdP: authentication success
   Redirect: GET https://app.corp.com/callback?code=AUTH_CODE&state=abc123

5. App (server-side): validate state=abc123 (CSRF protection)
   POST https://idp.corp.com/token
     grant_type=authorization_code
     &code=AUTH_CODE
     &client_id=app-corp
     &client_secret=SECRET
     &redirect_uri=https://app.corp.com/callback

6. IdP responds:
   { "id_token": "JWT...", "access_token": "JWT...", "expires_in": 3600 }

7. App: validate id_token signature (using IdP's JWKS endpoint)
   App: extract sub, email, groups from id_token
   App: create session for [email protected]
   App: redirect user to original destination

Step 7 is where most bugs live. The app must validate: signature (using IdP’s public keys from /.well-known/jwks.json), iss (matches the expected IdP), aud (matches the client_id), exp (not expired), and nonce (matches what was sent in step 2). Skip any of these and you have an authentication bypass.
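
The non-signature checks can be sketched in a few lines (the values match the hypothetical flow above; a real implementation would use a JWT library that also verifies the signature against the JWKS keys):

```shell
result=$(python3 - <<'EOF'
import json, time

# Decoded payload of the hypothetical id_token from step 6
claims = json.loads("""{"iss": "https://idp.corp.com", "sub": "user-guid-12345",
    "aud": "app-corp", "exp": 9999999999, "iat": 1745726400, "nonce": "xyz789"}""")

expected = {"iss": "https://idp.corp.com",  # the IdP we redirected to
            "aud": "app-corp",              # our client_id
            "nonce": "xyz789"}              # the value we sent in step 2
for claim, want in expected.items():
    assert claims.get(claim) == want, f"{claim} mismatch: reject token"
assert claims["exp"] > time.time(), "token expired: reject"
print("claims OK")
EOF
)
echo "$result"   # claims OK
```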


⚠ Common Misconceptions

“OAuth2 is for login.” OAuth2 is for authorization delegation. It can be used as a login mechanism only when OIDC (the openid scope + id_token) is added on top. “Login with Google” uses OIDC, not bare OAuth2.

“JWTs are encrypted.” By default, JWTs are signed (JWS), not encrypted. The header and payload are base64url-encoded — anyone can decode them. Encryption (JWE) is a separate, less commonly used spec. Never put secrets in a JWT payload assuming it’s private.

“SAML Single Logout works reliably.” SAML SLO is specified but inconsistently implemented. Many SPs ignore SLO requests or don’t propagate them correctly. Don’t depend on SLO for security — session revocation requires additional mechanisms (short-lived tokens, token introspection, session registries).


Framework Alignment

CISSP Domain 5: Identity and Access Management
  SAML, OAuth2, and OIDC are the three protocols that enable federated identity and SSO — understanding which does what is foundational to modern IAM design
CISSP Domain 4: Communications and Network Security
  JWT validation (signature, claims, expiry) is a network security control — failing to validate any claim is an authentication bypass vulnerability
CISSP Domain 3: Security Architecture and Engineering
  The choice of SAML vs OIDC is an architectural decision that affects every application integration, mobile support, and API design

Key Takeaways

  • SAML 2.0: XML-based browser SSO — three redirects, signed assertion, enterprise legacy apps
  • OAuth2: authorization delegation — access tokens grant scopes, not identity
  • OIDC: OAuth2 + id_token — adds who the user is on top of what they can do
  • sub is the stable user identifier in OIDC — never use email as a primary key
  • JWT validation must check: signature, iss, aud, exp, nonce — missing any is a security bypass
  • New applications: OIDC. Legacy enterprise SaaS: SAML. Service-to-service: OAuth2 Client Credentials

What’s Next

EP10 covered the protocols. EP11 covers the systems that implement them — the identity providers: what Okta, Entra ID, Keycloak, and AD FS actually do, how they federate with each other, and how SCIM handles user provisioning separately from authentication.

Next: Identity Providers Explained: On-Prem, Cloud, SCIM, and Federation

Get EP11 in your inbox when it publishes → linuxcent.com/subscribe

How Kerberos Works: Tickets, KDC, and Why Enterprises Use It With LDAP

Reading Time: 7 minutes

The Identity Stack, Episode 5
EP01 · EP02 · EP03 · EP04: SSSD · EP05 · EP06: OpenLDAP → …

Focus Keyphrase: how Kerberos works
Search Intent: Informational
Meta Description: How Kerberos works: the KDC, ticket-granting tickets, and the three-step flow that lets enterprises authenticate without sending passwords on the wire. (157 chars)


TL;DR

  • Kerberos is a network authentication protocol — it proves identity without sending passwords over the network, using time-limited cryptographic tickets
  • Three actors: the client, the KDC (Key Distribution Center), and the service — the KDC issues tickets; clients use tickets to authenticate to services
  • The ticket flow: AS-REQ (get a TGT) → TGS-REQ (exchange TGT for a service ticket) → AP-REQ (present service ticket to the target service)
  • A TGT (Ticket-Granting Ticket) is a session credential — it lets you request service tickets without re-entering your password for the lifetime of the ticket (default 10 hours)
  • LDAP + Kerberos together: LDAP stores identity (who you are), Kerberos authenticates it (proves you are who you say you are) — Active Directory is exactly this combination
  • kinit, klist, kdestroy are the hands-on tools — run them and read the ticket output

The Big Picture: Three Actors, Three Steps

         1. AS-REQ / AS-REP
Client ◄────────────────────► AS (Authentication Server)
  │                                     │
  │    (part of KDC)                    │
  │                                     ▼
  │         2. TGS-REQ / TGS-REP   TGS (Ticket-Granting Server)
  ├───────────────────────────────────►│
  │         (part of KDC)              │
  │                                    │
  │    3. AP-REQ / AP-REP              │
  └─────────────────────────────► Service (SSH, LDAP, NFS, HTTP...)

KDC = AS + TGS (usually the same process, same machine)

EP04 mentioned Kerberos tickets and clock skew requirements without explaining the protocol. This episode explains why Kerberos was invented, what a ticket actually is, and how the three-step flow works — so that when SSSD says “KDC unreachable” or kinit fails with “pre-authentication required,” you know exactly what’s happening.


The Problem Kerberos Was Built to Solve

MIT’s Project Athena started in 1983 — a campus-wide computing initiative giving students access to thousands of workstations. The problem: how do you authenticate a student at workstation 847 to a file server across campus without sending their password over the network?

In 1988, Steve Miller and Clifford Neuman published Kerberos version 4. The core insight: a trusted third party (the KDC) can issue cryptographic proof that a user has authenticated, and that proof can be presented to any service on the network without the service ever seeing the user’s password.

The password never leaves the client machine after the initial authentication. Every subsequent authentication — to a different service, to the same service again — uses a ticket. The KDC knows both the client and the service. The client and service only need to trust the KDC.


Keys, Tickets, and Sessions

Before the protocol, the primitives:

Long-term keys — derived from passwords. When you set a password in Kerberos, it’s hashed into a key stored per principal in the KDC database (on AD, on each account object in the directory; on MIT Kerberos, in /var/lib/krb5kdc/principal). The krbtgt account holds the separate key used to encrypt TGTs. The client derives the same key from the password at authentication time. Neither side ever sends the raw password.

Session keys — temporary symmetric keys created by the KDC for a specific session. They’re valid for the ticket’s lifetime. After the ticket expires, the session key is useless.

Tickets — encrypted blobs issued by the KDC. A ticket contains the session key, the client identity, the expiry time, and optional flags. It’s encrypted with the target service’s long-term key — only the service can decrypt it. The client carries the ticket but can’t read the contents.


The Three-Step Flow

Step 1: AS-REQ / AS-REP — Getting a TGT

Client                        KDC (AS component)
  │                                │
  │── AS-REQ ──────────────────────►
  │   {username, timestamp}         │
  │   (timestamp encrypted with     │
  │    client's long-term key)       │
  │                                 │
  │   KDC verifies: decrypts        │
  │   timestamp with stored key.    │
  │   If valid → issues TGT         │
  │                                 │
  ◄── AS-REP ──────────────────────│
      {session_key_enc_with_client, │
       TGT_enc_with_krbtgt_key}     │

The client decrypts the session key using its long-term key (derived from the password). The TGT is encrypted with the KDC’s own key (krbtgt) — the client can’t read it, but carries it.

This is the step that requires the password. After this, the TGT is what the client uses for everything else.

Step 2: TGS-REQ / TGS-REP — Getting a Service Ticket

Client                        KDC (TGS component)
  │                                │
  │── TGS-REQ ─────────────────────►
  │   {TGT, authenticator,         │
  │    target_service_name}        │
  │   (authenticator encrypted      │
  │    with TGT session key)        │
  │                                 │
  │   KDC: decrypts TGT,           │
  │   verifies authenticator,       │
  │   issues service ticket         │
  │                                 │
  ◄── TGS-REP ────────────────────│
      {service_session_key_enc,    │
       service_ticket_enc_with_    │
       service_long_term_key}      │

No password involved. The client proves its identity by presenting the TGT (which only the KDC can issue) and an authenticator (a timestamp encrypted with the TGT’s session key, proving the client holds the session key without revealing it).

Step 3: AP-REQ / AP-REP — Authenticating to the Service

Client                        Service (sshd, LDAP, NFS...)
  │                                │
  │── AP-REQ ──────────────────────►
  │   {service_ticket,             │
  │    authenticator_enc_with_      │
  │    service_session_key}        │
  │                                 │
  │   Service: decrypts ticket      │
  │   with its long-term key,       │
  │   verifies authenticator        │
  │                                 │
  ◄── AP-REP (optional) ───────────│
      {mutual authentication}       │

The service decrypts the ticket using its own key. It extracts the client identity and session key. It verifies the authenticator. No communication with the KDC is required: the service trusts the ticket because only the KDC could have encrypted it with the service’s long-term key.


Why Clock Skew Matters

Every Kerberos authenticator contains a timestamp. The service rejects authenticators older than 5 minutes (by default) — this prevents replay attacks where an attacker captures an authenticator and replays it later.

This is why clock skew over 5 minutes breaks Kerberos authentication entirely. If your machine’s clock drifts 6 minutes from the KDC, every authenticator you generate is rejected as too old or too far in the future. No tickets. No AD logins. No SSSD authentication.

# Check time sync status
timedatectl status
chronyc tracking        # if using chrony
ntpq -p                 # if using ntpd

# If clock is off: force a sync
chronyc makestep        # immediate step correction (chrony)

Hands-On: kinit, klist, kdestroy

# Get a TGT (will prompt for password)
kinit [email protected]

# Show current tickets
klist
# Credentials cache: FILE:/tmp/krb5cc_1001
# Principal: [email protected]
#
# Valid starting     Expires            Service principal
# 04/27/26 01:00:00  04/27/26 11:00:00  krbtgt/[email protected]
#   renew until 05/04/26 01:00:00

# Show encryption types used (the -e flag)
klist -e
# 04/27/26 01:00:00  04/27/26 11:00:00  krbtgt/[email protected]
#         Etype: aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96

# Get a service ticket for a specific service
kvno host/[email protected]
# host/[email protected]: kvno = 3

# Show the flags set on each ticket (the -f flag)
klist -f
# Flags: F=forwardable, f=forwarded, P=proxiable, p=proxy, D=postdated,
#        d=postdated, R=renewable, I=initial, i=invalid, H=hardware auth

# Destroy all tickets
kdestroy

The Valid starting and Expires fields are the ticket lifetime. After expiry, you need to re-authenticate (or renew the ticket if it’s within the renew until window). The renew until date is when even renewal stops working.
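
Inside that window, kinit -R refreshes the TGT with no password prompt (it needs a renewable ticket already in the cache, so on a machine without one the fallback message fires):

```shell
# No password prompt: the KDC reissues the TGT based on the renewable flag
msg=$(kinit -R 2>&1 || echo 'renewal skipped: no renewable TGT in this cache')
echo "$msg"
klist 2>/dev/null | head -5 || true   # Expires column now shows a fresh window
```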


/etc/krb5.conf

[libdefaults]
    default_realm = CORP.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true         # find KDCs via DNS SRV records
    ticket_lifetime = 10h
    renew_lifetime = 7d
    forwardable = true            # tickets can be forwarded to remote hosts (needed for SSH forwarding)
    rdns = false

[realms]
    CORP.COM = {
        kdc = dc01.corp.com
        kdc = dc02.corp.com       # failover KDC
        admin_server = dc01.corp.com
    }

[domain_realm]
    .corp.com = CORP.COM
    corp.com = CORP.COM

With dns_lookup_kdc = true, Kerberos finds KDCs by querying DNS SRV records (_kerberos._tcp.corp.com). AD sets these up automatically. On MIT Kerberos, you add them manually. DNS-based discovery is the recommended approach for AD environments — it picks up new DCs automatically.
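
What that discovery does under the hood is a DNS SRV query. A sketch against the hypothetical corp.com realm from above (run it with your real domain to see your DCs):

```shell
realm=CORP.COM
srv="_kerberos._tcp.$(printf '%s' "$realm" | tr 'A-Z' 'a-z')"
echo "query: $srv"
# dig ships in dnsutils/bind-utils; harmless no-op where it's absent
dig +short SRV "$srv" 2>/dev/null || true
# A real AD domain answers with records like:
#   0 100 88 dc01.corp.com.
```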


Kerberos + LDAP: Why Enterprises Run Both

LDAP and Kerberos solve different problems and are almost always deployed together:

LDAP answers:  "Who is vamshi? What groups is he in? What's his home directory?"
Kerberos answers: "Is this really vamshi? Prove it without sending a password."

Active Directory is exactly this combination — the directory is LDAP-based, the authentication is Kerberos. When a Linux machine joins an AD domain via realm join or adcli, it gets:
– LDAP access to the AD directory (for NSS: user and group lookups)
– A Kerberos principal registered in AD (for PAM: ticket-based authentication)
– A machine account (the machine’s identity in the directory)

When you SSH into an AD-joined Linux machine:
1. SSSD issues a Kerberos AS-REQ for the user’s TGT
2. SSSD uses the TGT to get a service ticket for the machine’s own host/ principal (proving the KDC is genuine, not spoofed)
3. Authentication is verified via the service ticket — no LDAP Bind with a password
4. SSSD does an LDAP Search to get POSIX attributes (UID, GID, home dir)

Password-based LDAP Bind is the fallback when Kerberos isn’t available. Kerberos is the default on AD-joined systems — and it’s more secure because the password never leaves the client.


⚠ Common Misconceptions

“Kerberos sends your password to the KDC.” It doesn’t. The client derives a key from the password locally and uses that key to encrypt a timestamp (the pre-authentication data). The KDC verifies the timestamp using the stored key. The raw password never travels.

“Kerberos is an authorization protocol.” Kerberos authenticates — it proves who you are. Authorization (what you can do) is a separate decision, usually handled by ACLs on the service or directory group membership.

“Once you have a TGT, you’re authenticated to everything.” A TGT only proves your identity to the KDC. Each service requires a separate service ticket. The TGT is what lets you get those service tickets without re-entering your password.

“Kerberos requires AD.” MIT Kerberos 5 is a standalone implementation. FreeIPA (EP08) runs MIT Kerberos. Heimdal is another implementation. AD uses a Microsoft-extended version of Kerberos 5, but the core protocol is the same RFC.


Framework Alignment

CISSP Domain 5: Identity and Access Management
  Kerberos is the de facto enterprise authentication protocol — SSO, delegation, and service account authentication all depend on it
CISSP Domain 4: Communications and Network Security
  Kerberos prevents credential sniffing and replay attacks — two of the core network authentication threat categories
CISSP Domain 3: Security Architecture and Engineering
  The KDC is a critical single point of trust — its availability, key management, and account (krbtgt) rotation are architectural security decisions

Key Takeaways

  • Kerberos is a ticket-based protocol — the password is used once to get a TGT; from then on, tickets prove identity without the password
  • The three-step flow: get a TGT from the AS, exchange it for a service ticket at the TGS, present the service ticket to the target service
  • Clock skew over 5 minutes breaks Kerberos — time synchronization is a hard dependency
  • LDAP stores identity; Kerberos authenticates it — Active Directory is exactly this combination, and so is FreeIPA
  • klist -e shows the encryption types in use — aes256-cts-hmac-sha1-96 is what you want to see; arcfour-hmac (RC4) is legacy and should be disabled

What’s Next

EP05 covered Kerberos as a protocol. EP06 goes hands-on: building a real LDAP directory with OpenLDAP, configuring replication, and understanding how the server-side components — slapd, the MDB backend, SyncRepl — fit together.

Next: OpenLDAP Setup and Replication: Running Your Own Directory

Get EP06 in your inbox when it publishes → linuxcent.com/subscribe

SSSD: The Caching Daemon That Powers Every Enterprise Linux Login

Reading Time: 7 minutes

The Identity Stack, Episode 4
EP01: What Is LDAP · EP02: LDAP Internals · EP03: LDAP Auth on Linux · EP04 · EP05: Kerberos → …

Focus Keyphrase: SSSD Linux
Search Intent: Informational
Meta Description: SSSD powers every enterprise Linux login. Here’s the architecture, the sssd.conf knobs that matter, and how to debug it when it breaks. (137 chars)


TL;DR

  • SSSD (System Security Services Daemon) is the caching and brokering layer between Linux and directory services — it handles LDAP, Kerberos, and AD so PAM and NSS don’t have to
  • Architecture: three tiers — responders (answer PAM/NSS queries), providers (talk to AD/LDAP/Kerberos), and a shared cache (LDB database on disk)
  • Credential caching means offline logins work — a user who authenticated yesterday can log in today even if the domain controller is unreachable
  • Key config: sssd.conf — the [domain] section is where almost all tuning happens
  • Debugging toolkit: sssctl, sss_cache, id, getent, journalctl -u sssd
  • The most common failure modes are: SSSD not running, stale cache, misconfigured ldap_search_base, and clock skew breaking Kerberos

The Big Picture: SSSD as the Identity Broker

PAM (pam_sss)         NSS (sss module)
      │                      │
      └──────────┬───────────┘
                 ▼
          SSSD Responders
          ┌────────────────────────────────────┐
          │  PAM responder   NSS responder      │
          │  (auth, account, (passwd, group,    │
          │   session)        shadow lookups)   │
          └────────────┬───────────────────────┘
                       │  shared cache (LDB)
                       ▼
          SSSD Providers
          ┌────────────────────────────────────┐
          │  identity provider  auth provider   │
          │  (user/group attrs) (credentials)   │
          └────────────┬───────────────────────┘
                       │
          ┌────────────┼────────────┐
          ▼            ▼            ▼
       LDAP          Kerberos    Local files
    (AD / OpenLDAP)  (KDC / AD)

EP03 showed that SSSD sits between PAM and LDAP. This episode goes inside it — the architecture, the config, and how to tell exactly what it’s doing on any given login attempt.


Why SSSD Exists

The problem before SSSD: nss_ldap and pam_ldap made direct LDAP connections for every query. No caching, no connection pooling, no failover, no offline support. On a system that makes dozens of getpwuid() calls per second (every ls -l, every process spawn), this meant dozens of LDAP roundtrips per second hitting the domain controller.

SSSD solved this with a single daemon that:
– Maintains a persistent connection pool to the directory
– Caches identity and credential data in an LDB (LDAP-like) database on disk
– Handles failover across multiple directory servers
– Satisfies PAM and NSS queries from cache when the directory is unreachable

The credential cache is the key insight. When you authenticate successfully, SSSD stores a hash of your credentials locally. If the domain controller is unreachable on your next login — network outage, laptop offline, VPN not connected — SSSD can verify your credentials against the local cache. You log in. You never knew the DC was down.


SSSD Architecture

SSSD is a set of cooperating processes sharing a cache:

Monitor — the parent process. Starts and restarts all other SSSD processes. If a responder or provider crashes, the monitor restarts it.

Responders — answer queries from PAM and NSS. Each responder handles a specific interface:
sssd_nss — answers getpwnam(), getpwuid(), getgrnam(), initgroups() calls
sssd_pam — handles PAM authentication, account checks, and session management
sssd_autofs, sssd_ssh, sssd_sudo — optional responders for specific services

Providers — the backend processes that talk to the actual directory:
– Each domain gets its own provider process (sssd_be[domain_name])
– The provider connects to LDAP/Kerberos/AD, fetches data, and writes it to the shared cache
– If the provider crashes or loses connectivity, responders fall back to serving from cache

Cache — LDB files in /var/lib/sss/db/. One database per configured domain, plus a cache for negative results (lookups that returned “not found”). The cache is an LDAP-like directory stored on disk — SSSD uses the same hierarchical structure for local storage as the remote directory uses.

# See the cache files
ls -la /var/lib/sss/db/
# cache_corp.com.ldb         ← user/group data for domain corp.com
# ccache_corp.com            ← Kerberos credential cache
# timestamps_corp.com.ldb   ← when entries were last refreshed
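
The cache files are LDB databases, so they can be inspected directly with ldbsearch from the ldb-tools package (read-only poking; don't edit them while sssd is running). The path and filter below reuse the hypothetical corp.com examples above and only return data on a machine with that domain configured:

```shell
msg=$(ldbsearch -H /var/lib/sss/db/cache_corp.com.ldb '(name=vamshi)' 2>&1 \
      || echo 'skipped: no such cache here (or ldb-tools not installed)')
echo "$msg"
```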

sssd.conf: The Config That Matters

/etc/sssd/sssd.conf has a [sssd] section (global) and one [domain/name] section per directory. The domain section is where almost all tuning happens.

[sssd]
services = nss, pam, sudo
domains = corp.com
config_file_version = 2

[domain/corp.com]
# What type of directory this is
id_provider = ad               # or: ldap, ipa, files
auth_provider = ad             # or: ldap, krb5, none
access_provider = ad           # controls who can log in

# The AD/LDAP server (can be a list for failover)
ad_domain = corp.com
ad_server = dc01.corp.com, dc02.corp.com

# Where to look for users and groups
ldap_search_base = dc=corp,dc=com

# Cache behavior
cache_credentials = true       # enable offline login
entry_cache_timeout = 5400     # how long before re-querying (seconds)
offline_credentials_expiration = 1  # days cached credentials stay valid offline

# What uid/gid range belongs to this domain (prevents UID conflicts)
ldap_id_mapping = true         # auto-map AD SIDs to UIDs (no uidNumber needed)
# OR for classical POSIX LDAP:
# ldap_id_mapping = false      # use uidNumber/gidNumber from directory

# Restrict logins to specific AD groups
# access_provider = simple
# simple_allow_groups = linux-admins, sre-team

# Home directory and shell defaults
override_homedir = /home/%u    # forces this pattern for every user
default_shell = /bin/bash
fallback_homedir = /home/%u    # only consulted when the directory has no home attribute (moot while override_homedir is set)

# Enumerate all users (expensive on large dirs — disable unless needed)
enumerate = false

The two most commonly wrong settings:

ldap_search_base — if this doesn’t include the OU where your users live, SSSD won’t find them. On AD, the default searches the entire domain, which is usually correct. On OpenLDAP, you may need ou=people,dc=corp,dc=com.

ldap_id_mapping — on AD, users typically don’t have uidNumber attributes. Setting ldap_id_mapping = true tells SSSD to derive a UID from the user’s SID algorithmically. This produces consistent UIDs across machines. Setting it to false requires actual uidNumber attributes in the directory.


Credential Caching and Offline Logins

The cache is what separates SSSD from a simple proxy. When cache_credentials = true:

  1. On successful authentication, SSSD stores a hash of the credential in the LDB cache
  2. On the next authentication attempt, SSSD first tries the domain controller
  3. If the DC is unreachable, SSSD falls back to the local credential hash
  4. If the hash matches, login succeeds — even with no network

The credential hash is not the cleartext password — it’s a salted hash stored in /var/lib/sss/db/cache_corp.com.ldb. The security model is the same as /etc/shadow: someone with root access to the machine can access the hashes.

offline_credentials_expiration controls how long cached credentials stay valid when the DC is unreachable. 0 means forever (not recommended for high-security environments). 1 means one day — after 24 hours offline, even cached credentials expire and the user must authenticate online.
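The whole decision, try the DC first, fall back to the cache, honor the expiration window, fits in a few lines. This is a sketch: PBKDF2 stands in for SSSD's real on-disk hash format, and dc_check stands in for the actual LDAP/Kerberos exchange.

```python
import hashlib, hmac, os, time

SALT = os.urandom(16)
EXPIRY = 1 * 86400  # models offline_credentials_expiration = 1 (one day)

def cred_hash(password):
    return hashlib.pbkdf2_hmac("sha512", password.encode(), SALT, 10_000)

cache = {}  # user -> (credential hash, time of last successful online auth)

def authenticate(user, password, dc_reachable, dc_check, now=None):
    now = time.time() if now is None else now
    if dc_reachable:
        if dc_check(user, password):        # real auth at the DC (LDAP/Kerberos)
            cache[user] = (cred_hash(password), now)   # cache on success
            return True
        return False
    if user not in cache:                   # DC down, nothing cached: deny
        return False
    stored, last_online = cache[user]
    if now - last_online > EXPIRY:          # cached credential has expired
        return False
    return hmac.compare_digest(stored, cred_hash(password))

dc = lambda u, p: p == "hunter2"            # the DC accepts only this password
assert authenticate("vamshi", "hunter2", True, dc)     # online: auth + cache
assert authenticate("vamshi", "hunter2", False, dc)    # offline: cache hit
assert not authenticate("vamshi", "wrong", False, dc)  # offline: wrong password
assert not authenticate("vamshi", "hunter2", False, dc, now=time.time() + 2 * 86400)
```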


The Debugging Toolkit

# 1. Is SSSD running?
systemctl status sssd
pgrep -a sssd    # shows all SSSD processes (monitor + responders + providers)

# 2. Domain connectivity status
sssctl domain-status corp.com
# Domain: corp.com
# Active servers:
#   LDAP: dc01.corp.com
#   KDC: dc01.corp.com
# Discovered servers:
#   LDAP: dc01.corp.com, dc02.corp.com

# 3. Can SSSD find a specific user?
sssctl user-checks vamshi
# user: vamshi
# user name: [email protected]
# POSIX attributes: UID=1001, GID=1001, ...
# Authentication: success (uses actual PAM auth stack)

# 4. What does NSS see?
getent passwd vamshi          # full passwd entry
id vamshi                     # uid, gid, groups

# 5. Flush stale cache entries
sss_cache -u vamshi           # invalidate one user
sss_cache -G engineers        # invalidate one group
sss_cache -E                  # invalidate everything (nuclear option)

# 6. Live logs
journalctl -u sssd -f         # tail all SSSD logs
# Then attempt login in another terminal — watch the auth flow in real time

# 7. Increase log verbosity temporarily
sssctl config-check            # validate sssd.conf syntax
# Edit sssd.conf: add debug_level = 6 under [domain/corp.com]
systemctl restart sssd
journalctl -u sssd -f          # now shows LDAP queries, cache hits/misses

The single most useful command is sssctl user-checks <username>. It runs the full NSS + PAM auth stack internally and prints what SSSD would do on a real login — without creating a session or touching the running system.


Breaking SSSD (and What Each Failure Looks Like)

SSSD not running:

ssh vamshi@server
# Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
# getent passwd vamshi → (empty)
# Fix: systemctl start sssd

Stale cache after AD password change:

# User changed password in AD, the DC is unreachable, and SSSD still has the old hash cached
ssh vamshi@server  # old password accepted (wrong!) — offline cache hit with stale hash

# Fix: sss_cache -u vamshi, then attempt login again

Clock skew > 5 minutes (breaks Kerberos):

journalctl -u sssd | grep -i "clock skew\|KDC\|kinit"
# sssd_be[corp.com]: Kerberos authentication failed: Clock skew too great
# Fix: systemctl restart chronyd (or ntpd), verify time sync

ldap_search_base wrong:

getent passwd vamshi  # empty, but user exists in AD
sssctl user-checks vamshi  # "User not found"
# Check: ldap_search_base must include the OU containing users
# Test: ldapsearch -x -H ldap://dc -b "ou=engineers,dc=corp,dc=com" "(uid=vamshi)"

⚠ Common Misconceptions

“Restarting SSSD logs everyone out.” Restarting SSSD doesn’t affect existing authenticated sessions. Active shell sessions, running processes — all unaffected. Only new authentication attempts are disrupted during the restart window, which takes a few seconds.

“sss_cache -E fixes everything.” Flushing the entire cache forces SSSD to re-fetch all entries from the domain controller on the next lookup. On a system with many users or enumeration enabled, this can cause a brief spike in LDAP traffic and slow lookups. Use targeted flushes (-u username, -G group) when possible.

“debug_level should always be high.” SSSD at debug_level = 9 logs every LDAP packet. On a production system with active logins, this generates gigabytes of logs quickly. Set it temporarily for debugging, then remove it and restart.


Framework Alignment

  • CISSP Domain 5 (Identity and Access Management): SSSD is the runtime implementation of enterprise identity integration on Linux — understanding its caching model, failover behavior, and credential storage is foundational to IAM operations
  • CISSP Domain 3 (Security Architecture and Engineering): the credential cache design (/var/lib/sss/db/) creates a local credential store with specific security properties — architects need to understand the offline login trade-off
  • CISSP Domain 7 (Security Operations): SSSD is a critical security service — monitoring it, understanding its failure modes, and knowing how to recover it quickly are operational security skills

Key Takeaways

  • SSSD is a three-tier system: responders (serve PAM/NSS), providers (talk to AD/LDAP), and a shared LDB cache — each tier is independently restartable
  • Credential caching enables offline logins — the security trade-off is a local hash store in /var/lib/sss/db/
  • sssctl user-checks is the first tool to reach for when a login fails — it simulates the full auth flow and shows exactly where it breaks
  • ldap_id_mapping = true is the right choice for AD environments without POSIX attributes; false requires actual uidNumber/gidNumber in the directory
  • Clock skew over 5 minutes silently breaks Kerberos authentication — time sync is a hard dependency

What’s Next

EP04 showed SSSD’s role as the caching and brokering layer. What it referenced repeatedly — “Kerberos ticket”, “KDC”, “GSSAPI” — is the authentication protocol that sits underneath AD-joined Linux logins. SSSD uses Kerberos to authenticate. LDAP carries the identity data. EP05 explains how Kerberos works.

Next: How Kerberos Works: Tickets, KDC, and Why Enterprises Use It With LDAP

Get EP05 in your inbox when it publishes → linuxcent.com/subscribe

How LDAP Authentication Works on Linux: PAM, NSS, and the Login Stack

Reading Time: 9 minutes

The Identity Stack, Episode 3
EP01: What Is LDAP → EP02: LDAP Internals → EP03 → EP04: SSSD → …

Focus Keyphrase: LDAP authentication Linux
Search Intent: Informational
Meta Description: Trace a Linux SSH login through PAM, NSS, and LDAP step by step — and understand why LDAP alone is not an authentication protocol. (144 chars)


TL;DR

  • LDAP is a directory protocol — it stores identity information and can verify a password via Bind, but authentication on Linux runs through PAM, not directly through LDAP
  • NSS (/etc/nsswitch.conf) answers “who is this user?” — it resolves UIDs, group memberships, and home directories by querying LDAP (or the local files, or SSSD)
  • PAM (/etc/pam.d/) answers “are they allowed in?” — it enforces authentication, account validity, session setup, and password policy
  • pam_ldap (the old way) opened a direct LDAP connection on every login — fragile, no caching, broken when the LDAP server was unreachable
  • pam_sss (the modern way) delegates to SSSD, which caches credentials and handles failover — SSSD is the layer between Linux and the directory
  • Tracing a single SSH login: sshd → PAM → pam_sss → SSSD → LDAP Bind + Search → session created

The Big Picture: One SSH Login, Four Layers

You type: ssh [email protected]

  sshd
    │
    ▼
  PAM  (/etc/pam.d/sshd)          ← "Is this user allowed in?"
    │
    ├── pam_sss    (auth)          ← sends credentials to SSSD
    ├── pam_sss    (account)       ← checks account not expired/locked
    ├── pam_sss    (session)       ← logs the session open/close
    └── pam_mkhomedir (session)    ← creates /home/vamshi if it doesn't exist
    │
    ▼
  SSSD  (/etc/sssd/sssd.conf)     ← "Let me check the directory"
    │
    ├── NSS responder              ← answers getent, id, getpwnam
    └── LDAP/Kerberos provider     ← talks to the actual directory
    │
    ▼
  LDAP Server (AD / OpenLDAP)
    │
    ├── Bind: uid=vamshi + password (or Kerberos ticket)
    └── Search: posixAccount attrs for uid=vamshi
    │
    ▼
  Linux session created
  UID=1001, GID=1001, HOME=/home/vamshi, SHELL=/bin/bash

EP02 showed what the directory contains and what travels on the wire. What it left open is how Linux uses that to grant a login — and why LDAP is not, by itself, an authentication protocol.


Why LDAP Is Not an Authentication Protocol

This is the confusion that trips people most. LDAP can verify a password — the Bind operation does exactly that. But authentication on Linux means something broader: checking credentials, checking account validity, enforcing password policy, setting up a session, creating a home directory. LDAP handles one piece of that. PAM handles the rest.

More precisely: LDAP doesn’t know what a Linux session is. It doesn’t know about /etc/pam.d/. It doesn’t enforce login hours, account expiry, or concurrent session limits. It returns directory entries and verifies binds. The intelligence about what to do with those results lives in the Linux authentication stack.

When you run ssh vamshi@server, the OS doesn’t open an LDAP connection and ask “can this user log in?” It calls PAM. PAM consults its configuration, and PAM decides whether to call LDAP (directly or via SSSD), whether to check the shadow file, whether to enforce MFA. LDAP is one possible backend. It’s not the gatekeeper.


NSS: The Traffic Controller

Before PAM runs, Linux needs to know if the user exists at all. That’s NSS’s job.

/etc/nsswitch.conf is a routing table for name resolution. It tells the OS where to look when something asks “who is UID 1001?” or “what groups is vamshi in?”:

# /etc/nsswitch.conf

passwd:     files sss        ← user lookups: check /etc/passwd first, then SSSD
group:      files sss        ← group lookups: check /etc/group first, then SSSD
shadow:     files sss        ← shadow password lookups
hosts:      files dns        ← hostname lookups (not identity-related)
netgroup:   sss              ← NIS netgroups from SSSD only
automount:  sss              ← autofs maps from SSSD

Every call to getpwnam(), getpwuid(), getgrnam(), getgrgid() in any process — including sshd — goes through NSS. The entries in nsswitch.conf control which backends are tried in order.

With passwd: files sss, a lookup for user vamshi:
1. Checks /etc/passwd — not found (vamshi is a domain user, not in local files)
2. Queries SSSD — SSSD checks its cache, or queries LDAP, and returns the posixAccount attributes

Without the sss entry in passwd:, domain users don’t exist on the system — getent passwd vamshi returns nothing, id vamshi fails, SSH login never gets to PAM’s authentication step.

# Verify NSS is routing to SSSD correctly
getent passwd vamshi
# vamshi:*:1001:1001:Vamshi K:/home/vamshi:/bin/bash

# If this returns nothing, NSS isn't reaching SSSD
# Check: systemctl status sssd && grep passwd /etc/nsswitch.conf

# See what groups the user is in (NSS group lookup)
id vamshi
# uid=1001(vamshi) gid=1001(engineers) groups=1001(engineers),1002(ops)
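The same NSS routing is visible from any program, not just getent. Python's pwd and grp modules, for example, wrap the libc getpwnam()/getgrgid() calls, so their lookups follow /etc/nsswitch.conf too:

```python
import pwd, grp

# pwd.getpwnam() wraps libc getpwnam(3); the lookup is routed through NSS
# exactly as getent's is: /etc/passwd first, then SSSD if "sss" appears on
# the passwd: line. "root" is used here because it always exists in local
# files; a domain user would resolve through the same path.
entry = pwd.getpwnam("root")
print(entry.pw_uid, entry.pw_dir, entry.pw_shell)

# Group resolution follows the group: line of nsswitch.conf the same way.
print(grp.getgrgid(entry.pw_gid).gr_name)
```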

PAM: The Real Gatekeeper

PAM (Pluggable Authentication Modules) is the framework that lets Linux swap authentication backends without recompiling anything. Every service that needs to authenticate users — sshd, sudo, login, su, gdm — has a PAM configuration file in /etc/pam.d/.

Each PAM config defines four stacks:

auth        ← verify credentials (password, key, MFA)
account     ← check if the account is valid (not expired, not locked, login hours)
password    ← password change policy
session     ← set up/tear down the session (home dir, limits, logging)

A typical /etc/pam.d/sshd on a system joined to AD via SSSD:

# /etc/pam.d/sshd

# auth stack — verify the user's credentials
auth    required      pam_sepermit.so
auth    substack      password-auth   ← usually includes pam_sss.so

# account stack — check account validity
account required      pam_nologin.so
account include       password-auth

# password stack — handle password changes
password include      password-auth

# session stack — set up the session
session required      pam_selinux.so close
session required      pam_loginuid.so
session optional      pam_keyinit.so force revoke
session include       password-auth
session optional      pam_motd.so
session optional      pam_mkhomedir.so skel=/etc/skel/ umask=0077
session required      pam_selinux.so open

The include and substack directives pull in shared stacks from other files (like /etc/pam.d/password-auth). On a system with SSSD, password-auth contains:

auth    required      pam_env.so
auth    sufficient    pam_sss.so      ← try SSSD first
auth    required      pam_deny.so     ← if pam_sss fails, deny

account required      pam_unix.so
account sufficient    pam_localuser.so
account sufficient    pam_sss.so      ← SSSD account check
account required      pam_permit.so

session optional      pam_sss.so      ← SSSD session tracking

The sufficient flag means: if this module succeeds, stop checking this stack and consider it passed. required means: this must pass (but continue checking other modules and report failure at the end). requisite means: if this fails, stop immediately.


PAM Control Flags at a Glance

required   — must succeed; failure reported after remaining modules run
requisite  — must succeed; failure reported immediately, stack stops
sufficient — if success, stop stack (ignore remaining); failure continues
optional   — result ignored unless it's the only module in the stack

This matters for debugging. If pam_sss.so is sufficient and SSSD is down, PAM falls through to pam_deny.so — login denied. If it were optional, the login would proceed to the next module. The control flag is the policy decision.
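The four flags can be modeled in a few lines. This is a simplified sketch of the evaluation rules above (it ignores optional's only-module special case and PAM's full return-code matrix), not libpam's actual implementation:

```python
def run_stack(stack):
    """Evaluate a list of (control_flag, module_result) pairs."""
    required_failed = False
    for flag, ok in stack:
        if flag == "requisite" and not ok:
            return False                 # fail immediately, stop the stack
        if flag == "required" and not ok:
            required_failed = True       # remember the failure, keep running
        if flag == "sufficient" and ok and not required_failed:
            return True                  # success short-circuits the stack
        # "optional": result ignored in this sketch
    return not required_failed

# The password-auth stack above: pam_sss is sufficient, pam_deny is required.
sssd_up   = [("sufficient", True),  ("required", False)]  # pam_deny never reached
sssd_down = [("sufficient", False), ("required", False)]  # falls through to pam_deny
assert run_stack(sssd_up) is True
assert run_stack(sssd_down) is False
```

Swap pam_sss's flag to optional in the model and the SSSD-down case still fails only because pam_deny runs; the control flag, not the module, is the policy.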


The Old Way: pam_ldap

Before SSSD, Linux systems used pam_ldap and nss_ldap directly:

# Old /etc/pam.d/common-auth (Ubuntu pre-SSSD era)
auth    sufficient    pam_ldap.so    ← direct LDAP connection per login
auth    required      pam_unix.so nullok_secure

# Old /etc/nsswitch.conf
passwd: files ldap    ← nss_ldap for user lookups
group:  files ldap

pam_ldap opened a fresh LDAP connection on every login attempt. No caching. If the LDAP server was unreachable for 3 seconds, the login hung for 3 seconds — sometimes much longer. If the LDAP server was down, all domain logins failed immediately. Previously logged-in users with active sessions were fine; new logins simply didn’t work.

nss_ldap had the same problem for NSS lookups: every getpwnam() call hit the LDAP server directly. On a busy system with many processes doing user lookups, this meant hundreds of LDAP queries per second, no connection reuse, and no way to survive a brief network blip.

The problems were structural:
– No credential caching — offline logins impossible
– No connection pooling — LDAP server saw one connection per login attempt
– No failover logic — one LDAP server down meant all logins down
– Slow timeouts that blocked login sessions

SSSD was built to fix all of this.


The Modern Way: pam_sss + SSSD

pam_sss doesn’t talk to LDAP directly. It’s a thin client that passes authentication requests to SSSD over a Unix domain socket. SSSD manages the LDAP connection, the credential cache, and the failover logic.

sshd  →  PAM (pam_sss)  →  SSSD (Unix socket)  →  LDAP server
                                   │
                                   └── credential cache
                                       (survives brief LDAP outages)

When pam_sss sends a credential to SSSD:
1. SSSD checks its in-memory cache — if the credential hash matches a recent successful auth, it can satisfy the request without hitting LDAP
2. If not cached (or cache expired), SSSD sends a Bind to the LDAP server
3. On success, SSSD caches the result and returns success to pam_sss
4. pam_sss returns PAM_SUCCESS, and the auth stack continues

The credential cache is what enables offline logins. If the LDAP server is unreachable and the user has authenticated successfully within the offline window (enabled via cache_credentials = true, bounded by offline_credentials_expiration in sssd.conf), SSSD satisfies the auth from cache and the login succeeds. The user never knows the LDAP server was down.


Tracing a Full SSH Login

Here’s every step of an SSH login for a domain user, in order:

1.  sshd accepts the TCP connection
2.  sshd calls PAM: pam_start("sshd", "vamshi", ...)

3.  PAM auth stack runs pam_sss:
      pam_sss sends credentials to SSSD via /var/lib/sss/pipes/pam

4.  SSSD auth provider:
      a. Check credential cache — miss (first login)
      b. Resolve user: NSS lookup for uid=vamshi
         → SSSD LDAP provider searches dc=corp,dc=com for (uid=vamshi)
         → Returns: uidNumber=1001, gidNumber=1001, homeDirectory=/home/vamshi
      c. Authenticate: LDAP Simple Bind as uid=vamshi,ou=engineers,dc=corp,dc=com
         → Server returns: success
      d. Cache the credential hash + POSIX attrs

5.  SSSD returns PAM_SUCCESS to pam_sss

6.  PAM account stack runs pam_sss:
      SSSD checks: account not expired, not locked, login permitted
      → PAM_ACCT_MGMT success

7.  PAM session stack:
      pam_loginuid sets /proc/self/loginuid = 1001
      pam_mkhomedir creates /home/vamshi if missing
      pam_sss opens session (records in SSSD session tracking)

8.  sshd creates the shell, sets environment:
      USER=vamshi, HOME=/home/vamshi, SHELL=/bin/bash, LOGNAME=vamshi

9.  Shell prompt appears

Steps 4b and 4c are the only two LDAP operations in the entire login flow: one Search to resolve the user’s attributes, one Bind to verify the password. Everything else is PAM and SSSD.
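The nine steps compress into a toy pipeline. Every name below is illustrative (a real directory never stores cleartext passwords, and these are not real APIs); the point is that the directory is touched exactly twice:

```python
DIRECTORY = {   # toy stand-in for the LDAP server's view of the user
    "vamshi": {"uidNumber": 1001, "gidNumber": 1001,
               "homeDirectory": "/home/vamshi", "password": "hunter2"},
}

def ldap_search(user):                      # step 4b: the one Search
    return DIRECTORY.get(user)

def ldap_bind(user, password):              # step 4c: the one Bind
    entry = DIRECTORY.get(user)
    return entry is not None and entry["password"] == password

def ssh_login(user, password):
    attrs = ldap_search(user)               # resolve POSIX attributes
    if attrs is None or not ldap_bind(user, password):
        return None                         # PAM would report a denial here
    return {"USER": user, "HOME": attrs["homeDirectory"],
            "UID": attrs["uidNumber"]}      # steps 8-9: session environment

assert ssh_login("vamshi", "hunter2") == {"USER": "vamshi",
                                          "HOME": "/home/vamshi", "UID": 1001}
assert ssh_login("vamshi", "wrong") is None
```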


Debugging the Stack

When a login fails, the failure could be in any layer. Work top-down:

# 1. Does NSS resolve the user at all?
getent passwd vamshi
# If empty: NSS isn't reaching SSSD, or SSSD isn't finding the user in LDAP

# 2. Is SSSD running and healthy?
systemctl status sssd
sssctl domain-status corp.com      # shows SSSD's view of domain connectivity

# 3. What does SSSD think about the user?
sssctl user-checks vamshi          # runs auth + account checks internally
id vamshi                          # forces NSS resolution and shows group memberships

# 4. What does SSSD's log say?
journalctl -u sssd -f              # tail SSSD logs live, then attempt login

# 5. Can you reach the LDAP server at all?
ldapsearch -x -H ldap://dc.corp.com \
  -D "cn=svc-ldap,ou=services,dc=corp,dc=com" \
  -w "password" \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)" dn

# 6. Force a cache flush if entries are stale
sss_cache -u vamshi                # invalidate this user's cache entry
sss_cache -G engineers             # invalidate a group

The sssctl user-checks command is the single most useful diagnostic — it simulates the full PAM auth + account check flow without actually creating a session, and prints exactly what SSSD would do on a real login attempt.


⚠ Common Misconceptions

“If ldapsearch works, SSH login should work.” Not necessarily. ldapsearch tests the LDAP layer. An SSH login requires NSS to resolve the user, PAM to authenticate, SSSD to be running and configured correctly, and pam_mkhomedir to create the home directory if it’s the first login. Any of these can fail independently.

“pam_ldap and pam_sss do the same thing.” They have the same job (authenticate via LDAP) but completely different architectures. pam_ldap is a direct-connect, no-cache module. pam_sss is a client of SSSD, which provides caching, connection pooling, failover, and offline support. On any modern system, you want pam_sss.

“nsswitch.conf order doesn’t matter much.” It matters exactly as much as the order suggests. passwd: files sss means local /etc/passwd is always checked first — if a domain username collides with a local user, the local account wins. This is the intended behavior (local accounts should always be reachable), but it means you’ll never override a local account with a directory entry.
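The first-hit-wins rule is easy to model. A sketch of backend ordering for passwd: files sss, with a hypothetical colliding root entry in the directory:

```python
def resolve(user, backends):
    """Try NSS backends strictly in nsswitch.conf order; first hit wins."""
    for name, lookup in backends:
        entry = lookup(user)
        if entry is not None:
            return name, entry     # later backends are never consulted
    return None

files = {"root": 0}                         # local /etc/passwd
sss   = {"vamshi": 1001, "root": 999999}    # hypothetical colliding directory entry
backends = [("files", files.get), ("sss", sss.get)]   # passwd: files sss

assert resolve("root", backends) == ("files", 0)      # local account shadows the directory
assert resolve("vamshi", backends) == ("sss", 1001)   # domain user falls through to SSSD
```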

“SSSD cache = security risk.” The cache stores a credential hash, not the cleartext password. An attacker with access to the SSSD cache database (/var/lib/sss/db/) would see hashed credentials — the same situation as /etc/shadow. The real concern is whether offline authentication is appropriate for your security posture; it can be disabled entirely with cache_credentials = false. (Note that offline_credentials_expiration = 0 does the opposite: it makes cached credentials valid indefinitely.)


Framework Alignment

  • CISSP Domain 5 (Identity and Access Management): PAM is the enforcement layer for authentication policy on Linux — understanding its stack is foundational to any Linux IAM deployment
  • CISSP Domain 3 (Security Architecture and Engineering): the separation between NSS (resolution) and PAM (authentication) is an architectural boundary — misunderstanding it leads to misconfigured systems where account checks are bypassed
  • CISSP Domain 4 (Communications and Network Security): pam_ldap vs pam_sss affects whether credentials travel over a direct LDAP connection (one socket per login, no TLS guarantee) or through SSSD’s managed, pooled connection

Key Takeaways

  • LDAP alone is not an authentication protocol for Linux — authentication flows through PAM, and LDAP is one of PAM’s possible backends
  • NSS (/etc/nsswitch.conf) resolves user identity (who is UID 1001?); PAM enforces it (are they allowed in?)
  • pam_ldap talks to LDAP directly — no cache, no failover, login blocked when LDAP is unreachable
  • pam_sss delegates to SSSD — credential caching, connection pooling, offline login, and failover are all built in
  • A full SSH login touches LDAP exactly twice: one Search for POSIX attributes, one Bind to verify the password
  • When login fails, debug top-down: NSS resolution → SSSD status → LDAP reachability → PAM config

What’s Next

EP03 showed how authentication reaches LDAP — through PAM, through SSSD, through a Bind. What it assumed is that SSSD is healthy and the LDAP server is reachable. The moment either goes wrong, the behavior depends entirely on how SSSD is configured — its cache TTLs, its failover order, its offline credential policy.

EP04 goes inside SSSD: the architecture, the sssd.conf knobs that matter, how to read the logs, and how to break it intentionally and fix it.

Next: SSSD: The Caching Daemon That Powers Every Enterprise Linux Login

Get EP04 in your inbox when it publishes → linuxcent.com/subscribe

LDAP Internals: The Directory Tree, Schema, and What Travels on the Wire

Reading Time: 12 minutes

The Identity Stack, Episode 2
EP01: What Is LDAP → EP02 → EP03: LDAP Authentication on Linux → …

Focus Keyphrase: LDAP internals
Search Intent: Informational
Meta Description: Understand LDAP internals: the Directory Information Tree, DN syntax, object classes, schema, and the BER bytes that travel when you run ldapsearch. (150 chars)


TL;DR

  • The Directory Information Tree (DIT) is the hierarchical database LDAP stores — every entry lives at a unique path described by its Distinguished Name (DN)
  • Object classes define what attributes an entry is allowed or required to have — posixAccount adds UID, GID, and home directory; inetOrgPerson adds email and display name
  • Schema is the rulebook: which attribute types exist across the entire directory, what syntax each follows, and which object classes require or permit them
  • An LDAP Search sends four things: a base DN, a scope (base/one/sub), a filter like (uid=vamshi), and a list of attributes to return — the server traverses the tree and returns LDIF
  • Every LDAP message on the wire is BER-encoded (Basic Encoding Rules, a subset of ASN.1) — a compact binary format, not text
  • ldapsearch output is LDIF (LDAP Data Interchange Format) — the human-readable representation of what the BER payload carried

The Big Picture: From ldapsearch to Directory Entry

ldapsearch -x -H ldap://dc.corp.com -b "dc=corp,dc=com" "(uid=vamshi)" cn mail uidNumber
     │
     │  TCP port 389 (or 636 for LDAPS)
     │  BER-encoded SearchRequest
     ▼
┌─────────────────────────────────────────────────┐
│  LDAP Server (AD / OpenLDAP / 389-DS / FreeIPA)  │
│                                                   │
│  Directory Information Tree                       │
│                                                   │
│  dc=corp,dc=com                    ← search base  │
│    └── ou=engineers                ← scope: sub   │
│          ├── uid=alice                            │
│          └── uid=vamshi  ← filter match           │
│                cn: vamshi                         │
│                mail: [email protected]              │
│                uidNumber: 1001                    │
└─────────────────────────────────────────────────┘
     │
     │  BER-encoded SearchResultEntry
     ▼
# LDIF output on your terminal
dn: uid=vamshi,ou=engineers,dc=corp,dc=com
cn: vamshi
mail: [email protected]
uidNumber: 1001

LDAP internals are the mechanics between the command you type and the directory entry you get back. EP01 explained why LDAP was invented. This episode explains what it actually does when you run it.


The Directory Information Tree

EP01 introduced the DIT as a concept inherited from X.500. Here’s what it actually looks like inside a directory.

Every LDAP directory has a root — the base DN — from which all entries descend. For a company called Corp with a domain corp.com, the base is typically dc=corp,dc=com. Below that, the tree branches into organizational units, and below those, individual entries for people, groups, services, and anything else the directory administrator decided to model.

dc=corp,dc=com                          ← domain root (base DN)
│
├── ou=people                           ← organizational unit: people
│     ├── uid=alice                     ← user entry
│     ├── uid=vamshi
│     └── uid=bob
│
├── ou=groups                           ← organizational unit: groups
│     ├── cn=engineers
│     └── cn=ops
│
├── ou=services                         ← organizational unit: service accounts
│     ├── cn=jenkins
│     └── cn=gitlab-runner
│
└── ou=hosts                            ← organizational unit: machines
      ├── cn=web01.corp.com
      └── cn=db01.corp.com

This hierarchy is not a file system and not a relational database. It is specifically optimized for reads — the query “give me everything about this user” is the operation the protocol is built around. Writes are infrequent. Reads are constant.

Every entry in the tree has exactly one parent. There are no cross-links between branches, no foreign keys. The tree is the structure. An entry’s position in the tree is what defines it.


Distinguished Names: Reading the Path

The Distinguished Name (DN) is how you address any entry in the directory. It reads right-to-left, from the leaf to the root, with each component separated by a comma.

uid=vamshi,ou=engineers,dc=corp,dc=com

Reading right-to-left:
  dc=corp,dc=com       ← domain: corp.com
  ou=engineers         ← organizational unit: engineers
  uid=vamshi           ← this specific entry: user "vamshi"

Each component of a DN — uid=vamshi, ou=engineers, dc=corp — is a Relative Distinguished Name (RDN). The RDN is the attribute-value pair that uniquely identifies the entry within its parent container. Two users in the same ou=engineers cannot both have uid=vamshi — that would create two entries with identical DNs, which the directory won’t allow.

Common RDN attribute types and what they mean:

Attribute   Stands for            Typical use
dc          Domain Component      Domain name segments (dc=corp,dc=com = corp.com)
ou          Organizational Unit   Container for grouping entries
cn          Common Name           Groups, service accounts, human-readable name
uid         User ID               Linux username — the standard RDN for user entries
o           Organization          Top-level org containers (less common in modern setups)

When your Linux system calls getent passwd vamshi, SSSD translates that into an LDAP Search for an entry where uid=vamshi somewhere under the configured base DN. The full DN comes back with the result, but what your system cares about are the attributes inside it.
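A few lines of code make the right-to-left structure concrete. This is a naive split: real DN parsing must also handle escaped commas (\,) and multi-valued RDNs, which this sketch ignores.

```python
def parse_dn(dn):
    """Naively split a DN into (attribute, value) RDN pairs.
    Escaped commas and multi-valued RDNs are not handled."""
    rdns = []
    for component in dn.split(","):
        attr, _, value = component.partition("=")
        rdns.append((attr.strip(), value.strip()))
    return rdns

rdns = parse_dn("uid=vamshi,ou=engineers,dc=corp,dc=com")
assert rdns[0] == ("uid", "vamshi")   # the leftmost RDN names the entry itself
# Reading right-to-left gives the path from the root:
assert [v for a, v in rdns if a == "dc"] == ["corp", "com"]
```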


Object Classes and Schema

Every entry in the directory has an objectClass attribute — usually with several values. Object classes define which attributes the entry is allowed or required to have.

# A typical user entry's object classes
dn: uid=vamshi,ou=engineers,dc=corp,dc=com
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount

Each object class contributes a set of attributes — some required (MUST), some optional (MAY):

objectClass: posixAccount
  MUST: cn, uid, uidNumber, gidNumber, homeDirectory
  MAY:  userPassword, loginShell, gecos, description

objectClass: inetOrgPerson
  MUST: sn (surname), cn
  MAY:  mail, telephoneNumber, displayName, jpegPhoto, ...

objectClass: shadowAccount
  MUST: uid
  MAY:  shadowLastChange, shadowMin, shadowMax, shadowWarning, ...

When Linux authenticates a user via LDAP, it needs the posixAccount attributes: uidNumber (the numeric UID), gidNumber, homeDirectory, and loginShell. Without posixAccount, the user entry exists in the directory but can’t be used for Linux logins — getent passwd will return nothing.
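A MUST check is mechanical enough to sketch. The attribute sets below are transcribed from the class definitions above, not queried from a live directory:

```python
# MUST/MAY sets transcribed from the object class definitions above.
SCHEMA = {
    "posixAccount": {
        "must": {"cn", "uid", "uidNumber", "gidNumber", "homeDirectory"},
        "may":  {"userPassword", "loginShell", "gecos", "description"},
    },
    "inetOrgPerson": {
        "must": {"sn", "cn"},
        "may":  {"mail", "telephoneNumber", "displayName", "jpegPhoto"},
    },
}

def missing_must(entry):
    """Union the MUST sets of the entry's object classes, report what's absent."""
    required = set()
    for oc in entry.get("objectClass", []):
        required |= SCHEMA.get(oc, {}).get("must", set())
    return required - set(entry)

entry = {
    "objectClass": ["inetOrgPerson", "posixAccount"],
    "cn": "vamshi", "sn": "K", "uid": "vamshi",
    "uidNumber": "1001", "gidNumber": "1001",
}
# homeDirectory is missing, so this entry can't yet serve Linux logins.
assert missing_must(entry) == {"homeDirectory"}
```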

Groups in LDAP use their own object class:

objectClass: groupOfNames
  MUST: cn, member
  MAY:  description, owner, ...

# A group entry looks like this:
dn: cn=engineers,ou=groups,dc=corp,dc=com
objectClass: groupOfNames
cn: engineers
member: uid=vamshi,ou=engineers,dc=corp,dc=com
member: uid=alice,ou=engineers,dc=corp,dc=com

groupOfNames stores members as full DNs — which is why the SSSD group search filter is (member=uid=vamshi,ou=...) rather than (member=vamshi). The directory stores the exact path to each member entry. posixGroup is the alternative: it stores memberUid as a bare username string instead of a DN. Active Directory’s group object class follows the same DN-valued member model as groupOfNames; pure POSIX environments often use posixGroup.

Object classes are grouped into three kinds:

Structural — defines what the entry fundamentally is. Every entry must have exactly one structural class. inetOrgPerson is structural.

Auxiliary — adds additional attributes to an existing entry. posixAccount and shadowAccount are auxiliary (note the AUXILIARY keyword in the schema definition). You can stack multiple auxiliary classes on a single entry.

Abstract — base classes that other classes inherit from. top is the root abstract class that every entry implicitly has. You never add top to an entry; it’s always there.

Schema: The Directory’s Type System

Schema is the global rulebook for the entire directory. It defines:

  • Attribute type definitions — what each attribute is named, what syntax it uses (a string? an integer? a binary blob?), whether it’s case-sensitive, whether multiple values are allowed
  • Object class definitions — which attributes each class requires or permits
  • Matching rules — how equality comparisons work for each attribute type

The schema is stored in the directory itself, under a special entry at cn=schema,cn=config (OpenLDAP) or cn=Schema,cn=Configuration (Active Directory). You can query it:

# View the schema for the posixAccount object class
ldapsearch -x -H ldap://your-dc \
  -b "cn=schema,cn=config" \
  "(objectClass=olcObjectClasses)" \
  olcObjectClasses | grep -A 10 "posixAccount"

# Output:
# olcObjectClasses: ( 1.3.6.1.1.1.2.0
#   NAME 'posixAccount'
#   DESC 'Abstraction of an account with POSIX attributes'
#   SUP top
#   AUXILIARY
#   MUST ( cn $ uid $ uidNumber $ gidNumber $ homeDirectory )
#   MAY ( userPassword $ loginShell $ gecos $ description ) )

That OID (1.3.6.1.1.1.2.0) is the globally unique identifier for the posixAccount object class. Every object class and attribute type in every LDAP directory on the planet has a unique OID assigned by an authority. This is how schema interoperability works across different directory implementations — OpenLDAP, Active Directory, and 389-DS can all understand each other’s posixAccount entries because they share the same OID.


LDAP Operations: What Actually Runs

LDAP v3 defines ten operations. Day-to-day authentication uses two: Bind and Search.

LDAP Operation Set
──────────────────
Bind        ← authenticate (prove identity)
Unbind      ← close the session
Search      ← query the directory
Add         ← create a new entry
Modify      ← change attributes on an existing entry
Delete      ← remove an entry
ModifyDN    ← rename or move an entry
Compare     ← test if an attribute has a specific value
Abandon     ← cancel an outstanding operation
Extended    ← protocol extensions (e.g. StartTLS, Password Modify)

Bind: Proving Who You Are

Before any authenticated operation, the client sends a Bind request. There are two types:

Simple Bind — the client sends its DN and password in the clear (or over TLS). This is what -x in ldapsearch means: simple authentication.

# Simple bind as a service account
ldapsearch -x \
  -D "cn=svc-ldap-reader,ou=services,dc=corp,dc=com" \
  -w "service-account-password" \
  -H ldap://dc.corp.com \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)"

SASL Bind — the client uses an authentication mechanism registered with SASL (Simple Authentication and Security Layer). Kerberos (via the GSSAPI mechanism) is the most common. EP05 covers Kerberos in detail.

# SASL bind using Kerberos (after kinit)
ldapsearch -Y GSSAPI \
  -H ldap://dc.corp.com \
  -b "dc=corp,dc=com" \
  "(uid=vamshi)"

An anonymous Bind (no DN, no password) is also valid for directories configured to allow anonymous reads. Many public LDAP directories (and some internal ones, misconfigured) allow this.

Search: The Core Operation

A Search request has five required parameters:

baseObject   — where in the DIT to start (e.g., "dc=corp,dc=com")
scope        — how deep to look
               base    = only the base entry itself
               one     = one level below base (immediate children)
               sub     = entire subtree below base (most common)
derefAliases — how to handle alias entries (usually derefAlways)
filter       — what to match (e.g., "(uid=vamshi)")
attributes   — which attributes to return (empty = return all)

When SSSD authenticates a user login, it runs exactly two Search operations:

Search 1 — find the user's entry
  base:       dc=corp,dc=com
  scope:      sub
  filter:     (uid=vamshi)
  attributes: dn, uid, uidNumber, gidNumber, homeDirectory, loginShell

Search 2 — find the user's group memberships
  base:       dc=corp,dc=com
  scope:      sub
  filter:     (member=uid=vamshi,ou=engineers,dc=corp,dc=com)
  attributes: dn, cn, gidNumber

The first search locates the user entry and retrieves the POSIX attributes. The second finds all group entries that contain the user’s DN as a member. These two queries are the complete basis for a Linux login over LDAP.

Search Filters

LDAP filters follow a prefix (Polish notation) syntax. Every filter is wrapped in parentheses:

# Simple equality
(uid=vamshi)

# Presence — entry has this attribute at all
(mail=*)

# Substring match
(cn=vam*)

# Comparison
(uidNumber>=1000)

# Logical AND — both conditions must match
(&(objectClass=posixAccount)(uid=vamshi))

# Logical OR — either condition matches
(|(uid=vamshi)(mail=vamshi@corp.com))

# Logical NOT
(!(uid=guest))

# Combined — posixAccount entries with UID >= 1000 and no disabled flag
(&(objectClass=posixAccount)(uidNumber>=1000)(!(pwdAccountLockedTime=*)))

The & and | operators take any number of operands. Filter syntax looks strange the first time but is unambiguous and compact — which matters when you’re encoding it into BER for the wire.
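One practical gap in the examples above: user-supplied values must be escaped before they go into a filter, or characters like `*` and parentheses change the query's meaning. A minimal sketch (helper name hypothetical; escape sequences per RFC 4515):

```shell
# Escape a value before interpolating it into an LDAP filter (RFC 4515).
# Backslash must be escaped first, or the later substitutions get re-escaped.
ldap_filter_escape() {
  printf '%s' "$1" | sed -e 's/\\/\\5c/g' \
                         -e 's/\*/\\2a/g' \
                         -e 's/(/\\28/g' \
                         -e 's/)/\\29/g'
}

ldap_filter_escape 'vam*hi'   # prints vam\2ahi
```

An unescaped `*` turns an equality match into a substring match — the classic LDAP injection primitive.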


What Actually Travels on the Wire

Every LDAP message is encoded in BER (Basic Encoding Rules), a binary encoding defined for ASN.1. LDAP is not a text protocol.

When you run ldapsearch, the tool constructs a BER-encoded SearchRequest message and sends it over TCP. The server responds with one or more SearchResultEntry messages (one per matching entry), followed by a SearchResultDone. All of these are BER.

BER uses a type-length-value (TLV) encoding:

Tag byte(s)    — what type of data this is
Length byte(s) — how many bytes of data follow
Value byte(s)  — the actual data

A minimal LDAP SearchRequest for ldapsearch -x -b "dc=corp,dc=com" "(uid=vamshi)" uid looks like this on the wire:

30 3a          ← SEQUENCE (LDAPMessage)
  02 01 01     ← INTEGER 1 (messageID = 1)
  63 35        ← [APPLICATION 3] SearchRequest
    04 0e      ← OCTET STRING: baseObject
      64 63 3d  ← "dc=corp,dc=com" (14 bytes)
      63 6f 72
      70 2c 64
      63 3d 63
      6f 6d
    0a 01 02   ← ENUMERATED: scope = wholeSubtree (2)
    0a 01 03   ← ENUMERATED: derefAliases = derefAlways (3)
    02 01 00   ← INTEGER: sizeLimit = 0 (unlimited)
    02 01 00   ← INTEGER: timeLimit = 0 (unlimited)
    01 01 00   ← BOOLEAN: typesOnly = false
    a3 0d      ← [3] equalityMatch filter
      04 03 75 69 64   ← attributeDesc: "uid"
      04 06 76 61 6d   ← assertionValue: "vamshi"
             73 68 69
    30 05      ← SEQUENCE: AttributeDescriptionList
      04 03 75 69 64   ← "uid"

You don’t need to read BER by hand in practice. But knowing it’s binary — not HTTP, not JSON, not plain text — explains some things:

  • Why tcpdump port 389 shows binary output you can’t read directly
  • Why LDAP on port 389 looks different in Wireshark than HTTP traffic
  • Why ldapsearch output (LDIF) is a transformation of the wire data, not the wire data itself

To see the wire protocol in action:

# Run ldapsearch with debug output (level 1 = protocol tracing)
ldapsearch -d 1 -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w readonly \
  "(uid=tesla)" cn

# You'll see output like:
# ldap_connect_to_host: TCP ldap.forumsys.com:389
# ldap_new_connection 1 1 0
# ldap_connect_to_host: Trying ldap.forumsys.com:389
# ldap_pvt_connect: fd: 5 tm: -1 async: 0
# TLS: can't connect.
# ldap_open_defconn: successful
# ber_scanf fmt ({it) ber:     ← BER decoding of the response
# ber_scanf fmt ({) ber:
# ber_scanf fmt (W) ber:
# ...

The ber_scanf lines are the BER decoder working through the server’s response. Each line represents one TLV element being read off the wire.


Reading ldapsearch Output: Every Field

ldapsearch output is LDIF (LDAP Data Interchange Format), defined in RFC 2849. It’s the standard text serialization of LDAP entries.

ldapsearch -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w readonly \
  "(uid=tesla)" \
  cn mail uid uidNumber objectClass

Output, annotated:

# extended LDIF
#
# LDAPv3                              ← protocol version confirmed
# base <dc=example,dc=com> with scope subtree
# filter: (uid=tesla)                 ← your search filter echoed back
# requesting: cn mail uid uidNumber objectClass
#

# tesla, example.com                 ← comment: CN, base DN
dn: uid=tesla,dc=example,dc=com      ← Distinguished Name — full path in the tree
objectClass: inetOrgPerson           ← structural class: person with org attrs
objectClass: organizationalPerson    ← superclass of inetOrgPerson: adds telephoneNumber etc.
objectClass: person                  ← superclass: defines cn and sn (surname)
objectClass: top                     ← the abstract root every entry has implicitly
cn: Tesla                            ← common name (MUST, inherited from person)
mail: tesla@ldap.forumsys.com        ← email (from inetOrgPerson MAY)
uid: tesla                           ← userid (from inetOrgPerson MAY)

# search result
search: 2                            ← messageID of the SearchResultDone
result: 0 Success                    ← 0 = no error; 32 = no such object; 49 = invalid credentials

# numResponses: 2                    ← 1 result entry + 1 SearchResultDone
# numEntries: 1

The result: line is the one to watch when debugging. LDAP result codes:

Code  Meaning                  What it tells you
0     Success                  Query ran, results returned (or no results found — check numEntries)
32    No Such Object           Base DN doesn’t exist in this directory
49    Invalid Credentials      Bind failed — wrong DN, wrong password, or account locked
50    Insufficient Access      Your bind DN doesn’t have read permission on these entries
53    Unwilling to Perform     Server refused the operation (e.g., password policy, anonymous bind disabled)
65    Object Class Violation   Add/Modify would violate schema (missing MUST attribute, unrecognized object class)
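These codes come up often enough that a lookup helper can live in your debugging shell. A sketch (helper name hypothetical; the hints paraphrase the table above):

```shell
# Map the numeric code from the "result:" line to the first thing to check
ldap_result_hint() {
  case $1 in
    0)  echo "success: check numEntries to see whether anything matched" ;;
    32) echo "no such object: verify the base DN (-b)" ;;
    49) echo "invalid credentials: verify the bind DN (-D) and password (-w)" ;;
    50) echo "insufficient access: check server ACLs for your bind DN" ;;
    53) echo "unwilling to perform: check server policy (e.g. anonymous bind disabled)" ;;
    65) echo "object class violation: a MUST attribute is missing or a class is unknown" ;;
    *)  echo "code $1: see RFC 4511 appendix A" ;;
  esac
}

ldap_result_hint 49
```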

Ports: 389, 636, and 3268

Port 389   — LDAP (plaintext, or StartTLS in-session upgrade)
Port 636   — LDAPS (LDAP wrapped in TLS from the start)
Port 3268  — Active Directory Global Catalog (plain)
Port 3269  — Active Directory Global Catalog over TLS

Port 389 vs 636: Both carry the same BER-encoded LDAP protocol. The difference is when TLS starts. On 636 (LDAPS), the TLS handshake happens before the first LDAP message. On 389 with StartTLS, the client sends a plaintext ExtendedRequest with OID 1.3.6.1.4.1.1466.20037 to initiate the TLS upgrade, then both sides continue over TLS. In production, use one or the other — never unencrypted port 389. Your credentials transit the wire on every Bind.

Ports 3268/3269 — Active Directory Global Catalog: AD organizes domains into forests. Each domain controller holds the full LDAP tree for its own domain. The Global Catalog is a read-only, partial replica of every domain in the forest — just the most-queried attributes from every object. When an application needs to find a user across domains in the same forest (not just in one domain), it queries the Global Catalog on 3268/3269 instead of a domain-specific DC on 389/636.

Forest: corp.com
  ├── Domain: corp.com       → DC at port 389/636   (full copy of corp.com)
  ├── Domain: emea.corp.com  → DC at port 389/636   (full copy of emea.corp.com)
  └── Global Catalog        → GC at port 3268/3269  (partial copy of ALL domains)

If your SSSD or application is configured to use port 3268 instead of 389, it’s talking to the Global Catalog — useful for forest-wide user lookups, but missing some less-common attributes that aren’t replicated to the GC.


Try It: ldapsearch Against Your Own Directory

If your Linux machine is joined to AD or connected to an LDAP directory, you can run these right now:

# 1. Confirm your SSSD knows where the LDAP server is
grep -E "ldap_uri|ad_domain|krb5_server" /etc/sssd/sssd.conf

# 2. Look up your own user entry
ldapsearch -x \
  -H "$(grep ldap_uri /etc/sssd/sssd.conf | awk -F= '{print $2}' | tr -d ' ')" \
  -b "dc=$(hostname -d | sed 's/\./,dc=/g')" \
  "(uid=$(whoami))" \
  dn objectClass uid uidNumber gidNumber homeDirectory loginShell

# 3. Find the groups you're in
ldapsearch -x \
  -H ldap://your-dc \
  -b "dc=corp,dc=com" \
  "(member=$(ldapsearch -x ... "(uid=$(whoami))" dn | grep ^dn | cut -d' ' -f2-))" \
  cn gidNumber

# 4. Check what object classes your entry has
ldapsearch -x \
  -H ldap://your-dc \
  -b "dc=corp,dc=com" \
  "(uid=$(whoami))" \
  objectClass

On a machine joined to Active Directory, the ldap_uri in sssd.conf is your domain controller’s address. On FreeIPA or OpenLDAP, it’s the directory server. The same ldapsearch commands work against all of them — because they all speak LDAP v3.
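The base-DN derivation buried in step 2 is just a string transform on the DNS domain. Isolated into a helper (name hypothetical) so you can sanity-check it offline:

```shell
# Turn a DNS domain into the dc=...,dc=... base DN convention
# that AD and FreeIPA follow by default
domain_to_base_dn() {
  printf 'dc=%s\n' "$1" | sed 's/\./,dc=/g'
}

domain_to_base_dn corp.com        # → dc=corp,dc=com
domain_to_base_dn emea.corp.com   # → dc=emea,dc=corp,dc=com
```

Note this is a convention, not a rule — a directory's actual base DN is whatever it was created with, so verify against the server's rootDSE if logins fail.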


⚠ Common Misconceptions

“The DN is like a file path.” The analogy holds for reading it, but the DIT is not a file system. Entries don’t inherit permissions from parent containers the way files inherit from directories. Access control in LDAP is defined by ACLs on the server — not by position in the tree.

“LDAP is case-sensitive.” It depends on the attribute. Most string attributes (like cn and mail) use case-insensitive matching by default — (cn=Vamshi) and (cn=vamshi) return the same results. But some attributes (like userPassword and most binary types) are case-sensitive. The schema’s matching rules define this per-attribute.

“You need the full DN to search for a user.” No. The Search operation with a sub scope searches the entire subtree below the base DN. You search with a filter like (uid=vamshi) without knowing the full DN. The DN comes back in the result.

“LDAP accounts and Linux accounts are the same thing.” An LDAP user entry becomes a Linux account only if the entry has a posixAccount object class with the required POSIX attributes (uidNumber, gidNumber, homeDirectory). An LDAP entry without posixAccount can exist in the directory but getent passwd will not return it.

“The objectClass attribute can be changed freely.” Structural object classes cannot be changed after an entry is created — you’d have to delete and recreate the entry. Auxiliary classes can be added or removed. This is why correctly choosing the structural class at entry creation time matters.


Framework Alignment

  • CISSP Domain 5 (Identity and Access Management): DIT structure, DN addressing, object classes, and schema are the data model underpinning every enterprise identity store — understanding them is foundational to managing directory-based IAM
  • CISSP Domain 4 (Communications and Network Security): BER on port 389 is unencrypted; LDAPS (port 636) or StartTLS is required for production — wire-level understanding informs the transport security decision
  • CISSP Domain 3 (Security Architecture and Engineering): schema design and DIT hierarchy are architectural decisions with security consequences — overly permissive schemas enable privilege escalation; flat DITs make access delegation harder

Key Takeaways

  • The DIT is a hierarchical database — every entry has a unique DN that describes its path from leaf to root
  • Object classes define the schema rules for each entry: what attributes are required (MUST) vs optional (MAY), and what the entry fundamentally is
  • For a user to be usable for Linux logins, the directory entry needs the posixAccount object class with uidNumber, gidNumber, and homeDirectory populated
  • An LDAP login is two operations: a Bind (authenticate), then a Search (retrieve POSIX attributes and group memberships)
  • Everything on the wire is BER-encoded binary — ldapsearch output is LDIF, a human-readable transformation of what the wire actually carries
  • LDAP result code 0 means success; 49 means bad credentials; 32 means the base DN doesn’t exist — these are the three you’ll debug most often


Run ldapsearch against your own directory and look at the object classes on your entry. Does it have posixAccount? Does it have shadowAccount? What attributes is your SSSD actually reading on every login — and what does it do when the LDAP server is unreachable? 👇


What’s Next

EP02 showed what’s inside the directory: the tree structure, the schema, the operations, and the wire protocol. What it left open is how Linux actually uses this information to grant a login.

LDAP is not, by itself, an authentication protocol. The Bind operation can verify a password — but that’s a tiny piece of what happens when you SSH into a machine joined to Active Directory. The full login flow runs through PAM, NSS, and SSSD before LDAP ever gets queried. EP03 traces that path.

Next: LDAP Authentication on Linux: PAM, NSS, and the Login Stack

Get EP03 in your inbox when it publishes → linuxcent.com/subscribe

What Is LDAP — and Why It Was Invented to Replace Something Worse

Reading Time: 9 minutes

The Identity Stack, Episode 1
EP01EP02: LDAP Internals → EP03 → …

Focus Keyphrase: what is LDAP
Search Intent: Informational
Meta Description: LDAP solved 1980s authentication chaos and still powers enterprise logins today. Learn what it replaced, how it works, and why it’s still in your stack. (155 chars)


TL;DR

  • LDAP (Lightweight Directory Access Protocol) is a protocol for reading and writing directory information — most commonly, who is allowed to do what
  • It was built in 1993 as a “lightweight” alternative to X.500/DAP, which ran over the full OSI stack and was impossible to deploy on anything but mainframe hardware
  • Before LDAP, every server had its own /etc/passwd — 50 machines meant 50 separate user databases, managed manually
  • NIS (Network Information Service) was the first attempt to centralize this — it worked, then became a cleartext-credentials security liability
  • LDAP v3 (RFC 2251, 1997) is the version still in production today — 27 years of backwards compatibility
  • Everything you use today — Active Directory, Okta, Entra ID — is built on top of, or speaks, LDAP

The Big Picture: 50 Years of “Who Are You?”

1969–1980s   /etc/passwd — per-machine, no network auth
     │        50 servers = 50 user databases, managed manually
     │
     ▼
1984         Sun NIS / Yellow Pages — first centralized directory
     │        broadcast-based, no encryption, flat namespace
     │        Revolutionary for its era. A liability by the 1990s.
     │
     ▼
1988         X.500 / DAP — enterprise-grade directory services
     │        OSI protocol stack. Powerful. Impossible to deploy.
     │        Mainframe-class infrastructure required just to run it.
     │
     ▼
1993         RFC 1487 — LDAP v1
     │        Tim Howes, University of Michigan.
     │        Lightweight. TCP/IP. Actually deployable.
     │
     ▼
1997         RFC 2251 — LDAP v3
     │        SASL authentication. TLS. Controls. Referrals.
     │        The version still in production today.
     │
     ▼
2000s–now    Active Directory, OpenLDAP, 389-DS, FreeIPA
             Okta, Entra ID, Google Workspace
             LDAP DNA in every identity system on the planet.

What is LDAP? It’s the protocol that solved one of the most boring and consequential problems in computing: how do you know who someone is, across machines, at scale, without sending their password in cleartext?


The World Before LDAP

Before you understand why LDAP was invented, you need to feel the problem it solved.

Every Unix machine in the 1970s and 1980s managed its own users. When you created an account on a server, your username, UID, and hashed password went into /etc/passwd on that machine. Another machine had no idea you existed. If you needed access to ten servers, an administrator created ten separate accounts — manually, one by one. When you changed your password, each account had to be updated separately.

For a university with 200 machines and 10,000 students, this was chaos. For a company with offices in three cities, it was a full-time job for multiple sysadmins.

Machine A           Machine B           Machine C
/etc/passwd         /etc/passwd         /etc/passwd
vamshi:x:1001       (vamshi unknown)    vamshi:x:1004
alice:x:1002        alice:x:1001        alice:x:1003
bob:x:1003          bob:x:1002          (bob unknown)

Same people, different UIDs, different machines, no central truth.
File permissions become meaningless when UID 1001 means
different users on different hosts.

For every new hire, an admin SSHed to every machine and ran useradd. When someone left, you hoped whoever ran the offboarding remembered all the machines. Most organizations didn’t know their own attack surface because there was no single place to look.


Sun NIS: The First Attempt at Centralization

Sun Microsystems released NIS (Network Information Service) in 1984, originally called Yellow Pages — a name they had to drop after a trademark dispute with British Telecom. The idea was elegant: one server holds the authoritative /etc/passwd (and /etc/group, /etc/hosts, and a dozen other maps), and client machines query it instead of reading local files.

For the first time, you could create an account once and have it work across your entire network. For a generation of Unix administrators, NIS was liberating.

       NIS Master Server
       /var/yp/passwd.byname
              │
    ┌─────────┼──────────┐
    ▼         ▼          ▼
 Client A   Client B   Client C
 (query NIS — no local /etc/passwd needed)

NIS worked well — until it didn’t. The failure modes were structural:

No encryption. NIS responses were cleartext UDP. An attacker on the same network segment could capture the full password database with a packet sniffer. In 1984, “the network” meant a trusted corporate LAN. By the mid-1990s, it meant ethernet segments that included lab workstations, and the assumptions no longer held.

Broadcast-based discovery. NIS clients found servers by broadcasting on the local network. This worked on a single flat ethernet. It failed completely across routers, across buildings, and across WAN links. Multi-site organizations ended up running separate NIS domains with no connection between them — which partially defeated the purpose.

Flat namespace. NIS had no organizational hierarchy. One domain. Everything flat. You couldn’t have engineering and finance as separate administrative units. You couldn’t delegate user management to a department. One person — usually one overworked sysadmin — managed the whole thing.

UIDs had to match across all machines. If alice was UID 1002 on one server but UID 1001 on another, NFS file ownership became wrong. NIS enforced consistency, but onboarding a new machine into an existing network required manually auditing UID conflicts across the entire directory. Get one wrong and files end up owned by the wrong person.

NIS worked for thousands of installations from 1984 to the mid-1990s. It also ended careers when it failed. What the industry needed was a hierarchical, structured, encrypted, scalable directory service.


X.500 and DAP: The Right Idea, Wrong Protocol

The OSI (Open Systems Interconnection) standards body had an answer: X.500 directory services. X.500 was comprehensive, hierarchical, globally federated. The ITU-T published the standard in 1988, and it looked like exactly what enterprises needed.

X.500 Directory Information Tree (DIT)
              c=US                   ← country
                │
         o=University                ← organization
                │
         ┌──────┴──────┐
     ou=CS           ou=Physics      ← organizational units
         │
     cn=Tim Howes                    ← common name (person)
     telephoneNumber: +1-734-...
      mail: tim@umich.edu

This data model — the hierarchy, the object classes, the distinguished names — is exactly what LDAP inherited. The DIT, the cn=, ou=, dc= notation in every LDAP query you’ve ever read: all of it came from X.500.

The problem was DAP: the Directory Access Protocol that X.500 used to communicate.

DAP ran over the full OSI protocol stack. Not TCP/IP — OSI. Seven layers, all of which required specialized software that in 1988 only mainframe and minicomputer vendors had implemented. A university department wanting to run X.500 needed hardware and software licenses that cost as much as a small car. The vast majority of workstations couldn’t speak OSI at all.

The data model was sound. The transport was impractical.

X.500 / DAP (1988)              LDAP v1 (1993)
──────────────────              ──────────────
Full OSI stack (7 layers)  →    TCP/IP only
Mainframe-class hardware   →    Any Unix box with a TCP stack
$50,000+ deployment cost   →    Free (reference implementation)
Vendor-specific OSI impl.  →    Standard socket API
Zero internet adoption     →    Universities deployed immediately

The Invention: LDAP at the University of Michigan

Tim Howes was at the University of Michigan in the early 1990s. The university was running X.500 for its directory — faculty, staff, student contact information, credentials. The data model was good. The protocol was the problem.

His insight, working with colleagues Wengyik Yeong and Steve Kille: strip X.500 down to what actually needs to function over a TCP/IP connection. Keep the hierarchical data model. Throw away the OSI transport. The result was the Lightweight Directory Access Protocol.

RFC 1487, published July 1993, described LDAP v1. It preserved the X.500 directory information model — the hierarchy, the object classes, the distinguished name format — and mapped it onto a protocol that could run over a simple TCP socket on port 389.

No specialized hardware. No OSI. If you had a Unix machine and TCP/IP, you could run LDAP. By 1993, that meant virtually every workstation and server in every university and most enterprises.

The University of Michigan deployed it immediately. Within two years, organizations across the internet were running the reference implementation.

LDAP v2 (RFC 1777, 1995) cleaned up the protocol. LDAP v3 (RFC 2251, 1997) is the version in production today — adding SASL authentication (which enables Kerberos integration), TLS support, referrals for federated directories, and extensible controls for server-side operations. The RFC that standardized the internet’s primary identity protocol is 27 years old and still running.


What LDAP Actually Is

LDAP is a client-server protocol for reading and writing a directory — a structured, hierarchical database optimized for reads.

Every entry in the directory has a Distinguished Name (DN) that describes its position in the hierarchy, and a set of attributes defined by its object classes. A person entry looks like this:

dn: cn=vamshi,ou=engineers,dc=linuxcent,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: vamshi
uid: vamshi
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/vamshi
loginShell: /bin/bash
mail: vamshi@linuxcent.com

The DN reads right-to-left: domain linuxcent.com (dc=linuxcent,dc=com) → organizational unit engineers → common name vamshi. Every entry in the directory has a unique path through the tree — there’s no ambiguity about which vamshi you mean.
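That leaf-to-root structure is easy to see by splitting the DN on its commas (a naive split: it ignores commas escaped inside attribute values):

```shell
# Print a DN one RDN per line, leaf first
dn="cn=vamshi,ou=engineers,dc=linuxcent,dc=com"
printf '%s\n' "$dn" | tr ',' '\n'
# cn=vamshi
# ou=engineers
# dc=linuxcent
# dc=com
```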

LDAP v3 defines ten operations: Bind (authenticate), Unbind, Search, Add, Modify, Delete, ModifyDN (rename), Compare, Abandon, and Extended. Most of what a Linux authentication system does with LDAP reduces to two: Bind (prove you are who you say you are) and Search (tell me everything you know about this user).

When your Linux machine authenticates an SSH login against LDAP:

1. User types password
2. PAM calls pam_sss (or pam_ldap on older systems)
3. SSSD issues a Bind to the LDAP server: "I am cn=vamshi, and here is my credential"
4. LDAP server verifies the bind → success or failure
5. SSSD issues a Search: "give me the posixAccount attributes for uid=vamshi"
6. LDAP returns uidNumber, gidNumber, homeDirectory, loginShell
7. PAM creates the session with those attributes

The entire login flow is two LDAP operations: one Bind, one Search.


Try It Right Now

You don’t need to set up an LDAP server to run your first query. There’s a public test LDAP directory at ldap.forumsys.com:

# Query a public LDAP server — no setup required
ldapsearch -x \
  -H ldap://ldap.forumsys.com \
  -b "dc=example,dc=com" \
  -D "cn=read-only-admin,dc=example,dc=com" \
  -w readonly \
  "(objectClass=inetOrgPerson)" \
  cn mail uid

# What you get back (abbreviated):
# dn: uid=tesla,dc=example,dc=com
# cn: Tesla
# mail: tesla@ldap.forumsys.com
# uid: tesla
#
# dn: uid=einstein,dc=example,dc=com
# cn: Albert Einstein
# mail: einstein@ldap.forumsys.com
# uid: einstein

Decode what you just ran:

  • -x — simple authentication (username/password bind, not Kerberos/SASL)
  • -H ldap://ldap.forumsys.com — the LDAP server URI, port 389
  • -b "dc=example,dc=com" — the base DN, the top of the subtree to search
  • -D "cn=read-only-admin,dc=example,dc=com" — the bind DN (who you’re authenticating as)
  • -w readonly — the bind password
  • "(objectClass=inetOrgPerson)" — the search filter: return entries that are people
  • cn mail uid — the attributes to return (default returns all)

That’s a live LDAP query returning real directory entries from a server running RFC 2251 — the same protocol Tim Howes designed in 1993.

On your own Linux system, if you’re joined to AD or LDAP, you can query it the same way with your domain credentials.


Why It Never Went Away

LDAP v3 was finalized in 1997. In 2024, it’s still the protocol every enterprise directory speaks. Why?

Because it became the lingua franca of enterprise identity before any replacement existed. Every application that needs to authenticate users — VPN concentrators, mail servers, network switches, web applications, HR systems — implemented LDAP support. Every directory service Microsoft, Red Hat, Sun, and Novell shipped stored data in an LDAP-accessible tree.

When Microsoft built Active Directory (first shipped with Windows 2000), they built it on top of LDAP + Kerberos. When your Linux machine joins an AD domain, it speaks LDAP to enumerate users and groups, and Kerberos to verify credentials. When Okta or Entra ID syncs with your on-premises directory, it uses LDAP Sync (or a modern protocol that maps directly to LDAP semantics).

The protocol is old. The ecosystem built on top of it is so deep that replacing LDAP would mean simultaneously replacing every enterprise application that depends on it. Nobody has done that. Nobody has had to.

What happened instead is the stack got taller. LDAP at the bottom, Kerberos for network authentication, SSSD as the local caching daemon, PAM as the Linux integration layer, SAML and OIDC at the top for web-based federation. The directory is still LDAP. The interfaces above it evolved.

That full stack — from the directory at the bottom to Zero Trust at the top — is what this series covers.


⚠ Common Misconceptions

“LDAP is an authentication protocol.” LDAP is a directory protocol. It stores identity information and can verify credentials (via Bind). Authentication in modern stacks is typically Kerberos or OIDC — LDAP provides the directory backing it.

“LDAP is obsolete.” LDAP is the storage layer for Active Directory, OpenLDAP, 389-DS, FreeIPA, and every enterprise IdP’s on-premises sync. It is ubiquitous. What’s changed is the interface layer above it.

“You need Active Directory to run LDAP.” Active Directory uses LDAP. OpenLDAP, 389-DS, FreeIPA, and Apache Directory Server are all standalone LDAP implementations. You can run a directory without Microsoft.

“LDAP and LDAPS are different protocols.” LDAP is the protocol. LDAPS is LDAP over TLS on port 636. StartTLS is LDAP on port 389 with an in-session upgrade to TLS. Same protocol, different transport security.


Framework Alignment

  • CISSP Domain 5 (Identity and Access Management): LDAP is the foundational directory protocol for centralized identity stores — the base layer of every enterprise IAM stack
  • CISSP Domain 4 (Communications and Network Security): port 389 (LDAP), 636 (LDAPS), 3268/3269 (AD Global Catalog) — transport security decisions affect every directory deployment
  • CISSP Domain 3 (Security Architecture and Engineering): DIT hierarchy, schema design, replication topology — directory structure is an architectural security decision
  • NIST SP 800-63B: LDAP as a credential service provider (CSP) backing enterprise authenticators

Key Takeaways

  • LDAP was invented to solve a real, painful problem: the authentication chaos that NIS couldn’t fix and X.500/DAP was too expensive to deploy
  • It inherited the right thing from X.500 (the hierarchical data model) and replaced the right thing (the impractical OSI transport with TCP/IP)
  • NIS was the predecessor that worked until it didn’t — its failure modes (no encryption, flat namespace, broadcast discovery) are exactly what LDAP was designed to fix
  • LDAP v3 (RFC 2251, 1997) is still the production standard — 27 years later
  • Active Directory, OpenLDAP, FreeIPA, Okta, Entra ID — every enterprise identity system either runs LDAP or speaks it
  • The full authentication stack is deeper than LDAP: the next 12 episodes peel it apart layer by layer

What’s Next

EP01 stayed at the design level — the problem, the predecessor failures, the invention, the data model.

EP02 goes inside the wire. The DIT structure, DN syntax, object classes, schema, and the BER-encoded bytes that actually travel from the server to your authentication daemon. Run ldapsearch against your own directory and read every line of what comes back.

Next: LDAP Internals: The Directory Tree, Schema, and What Travels on the Wire

Get EP02 in your inbox when it publishes → linuxcent.com/subscribe

Authentication vs Authorization: AWS AccessDenied Explained

Reading Time: 10 minutes

Meta Description: Understand the difference between authentication vs authorization — and debug AWS AccessDenied errors by knowing whether to fix the credential or the policy.


What Is Cloud IAM → Authentication vs Authorization → IAM Roles vs Policies → AWS IAM Deep Dive → GCP Resource Hierarchy IAM → Azure RBAC Scopes


TL;DR

  • Authentication asks are you who you claim to be? Authorization asks are you allowed to do this? — two separate gates, two separate failure modes
  • AWS AccessDenied is an authorization failure — the identity authenticated fine; fix the policy, not the credentials
  • Prefer short-lived credentials (STS temporary tokens, Managed Identities) over long-lived access keys — the difference is the blast radius window
  • MFA strengthens authentication; it does nothing for authorization — a hijacked session with broad permissions is just as dangerous with or without MFA on the original login
  • HTTP 401 = authentication failure; HTTP 403 = authorization failure — the code tells you which gate to debug
  • Both layers must enforce least privilege independently — application-layer authorization is not a substitute for tight cloud IAM

The Big Picture

Every API call in the cloud passes through two gates before it executes. Most engineers know the first one. The second is where most security failures live.

  THE TWO GATES — every cloud API call passes through both, in order

  ┌──────────────────────────────────────────────────────────────────┐
  │  GATE 1 — AUTHENTICATION                                         │
  │  "Are you who you claim to be?"                                  │
  │                                                                  │
  │  IAM user     →  Access Key + Secret (long-lived, rotatable)    │
  │  IAM role     →  Temporary STS token (expires automatically)    │
  │  Human        →  Password + MFA via console or IdP              │
  │  Service      →  Instance profile / Managed Identity / OIDC     │
  │                                                                  │
  │  Passes → move to Gate 2                                        │
  │  Fails  → stopped here, HTTP 401                                │
  └──────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
  ┌──────────────────────────────────────────────────────────────────┐
  │  GATE 2 — AUTHORIZATION                                          │
  │  "Are you allowed to do what you're trying to do?"               │
  │                                                                  │
  │  Evaluated against: identity-based policies · SCPs              │
  │                     resource-based policies · conditions         │
  │                     permissions boundaries · session policies    │
  │                                                                  │
  │  Default answer: DENY (explicit Allow required every time)      │
  │                                                                  │
  │  Passes → request executes                                      │
  │  Fails  → AccessDenied / HTTP 403                               │
  └──────────────────────────────────────────────────────────────────┘

  MFA hardens Gate 1. It has zero effect on Gate 2.
  A hijacked session with a valid token clears Gate 1 automatically.
  Gate 2 is your last line of defense — and the one that's most often misconfigured.

Introduction

The authentication vs authorization distinction is the most commonly confused boundary in cloud security — and the source of most misdirected debugging when an AWS AccessDenied error appears. These are two separate gates, two separate failure modes, and two entirely different fixes.

Early in my career I wrote an API endpoint I was proud of. Token validation. Rejection of unauthenticated requests. I called it “secured” in the code review.

A senior engineer asked one question: “What happens if I take a valid token from a regular user and call your /admin/delete-user endpoint?”

I ran the test. It worked. Any employee — with a perfectly valid, properly issued token — could delete any user account in the system.

The authentication was correct. The authorization didn’t exist.

That gap between proving who you are and proving you’re allowed to do this is where a surprising number of security incidents live. Not just in application code — in cloud IAM too.

I’ve reviewed AWS environments where MFA was enforced on every human account, access keys were rotated quarterly, and yet a Lambda function had s3:* on * because whoever wrote the deployment script reached for AmazonS3FullAccess and moved on.

Gate 1 was solid. Gate 2 was wide open.

This episode draws the boundary cleanly — what each gate is, how each cloud implements it, and the specific failure modes that happen when the two get conflated.


How Authentication Works in Cloud IAM

Authentication answers: are you who you claim to be?

The three factor types

Authentication has not fundamentally changed in decades. What has changed is how cloud platforms implement it.

┌────────────────────┬────────────┬────────────────────────────────────────────┐
│ Factor             │ Type       │ Cloud Examples                             │
├────────────────────┼────────────┼────────────────────────────────────────────┤
│ Something you know │ Knowledge  │ Password, access key secret, PIN           │
│ Something you have │ Possession │ TOTP app, FIDO2 hardware key, smart card   │
│ Something you are  │ Inherence  │ Biometrics — less common in cloud contexts │
└────────────────────┴────────────┴────────────────────────────────────────────┘

MFA requires two distinct factors. A password plus a username is not MFA — both are knowledge factors. A password plus a TOTP code is MFA. Worth stating clearly because I’ve seen internal documentation describe “username and password” as two-factor authentication.

SMS codes count as MFA, but they’re the weakest form. SIM-swapping attacks — convincing a carrier to port your number — have been used to defeat SMS MFA on high-value accounts. If TOTP or FIDO2 hardware keys are available, use them.
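The possession factor in most cloud logins is TOTP. To make the mechanism concrete, here is a minimal sketch of the RFC 6238 algorithm in Python using only the standard library; the base32 secret in the demo line is an arbitrary example value, not anything real:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6, now=None) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if now is None else now) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)     # keep leading zeros

# The code changes every `step` seconds; holding the shared secret is the factor.
print(totp("JBSWY3DPEHPK3PXP"))
```

Note what this makes obvious: the "something you have" is really the shared secret inside the device. That is why a TOTP seed exported from a phone backup is a credential leak, just like an access key.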

How AWS authenticates

AWS has two fundamentally different identity classes:

Human identities authenticate via console (password + optional MFA) or CLI/API (Access Key ID + Secret Access Key). The access key is a long-lived credential with no default expiry. Every .env file with an access key, every git commit that included one, every CI/CD log that printed one — that credential is live until someone explicitly rotates or deletes it.

Machine identities — EC2, Lambda, ECS tasks — authenticate via temporary credentials issued by STS:

# Assume a role — get temporary credentials that expire
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/DevRole \
  --role-session-name alice-session \
  --duration-seconds 3600
# Returns: AccessKeyId + SecretAccessKey + SessionToken
# All three expire together. Nothing to rotate.

# From inside an EC2 instance — credentials arrive automatically via IMDS
# (IMDSv2, the default on new instances, requires a session token first)
TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/MyAppRole
# Returns: AccessKeyId, SecretAccessKey, Token, Expiration
# AWS refreshes these before expiry. The application never sees a rotation event.

The IMDS model is the right one. The application never manages a credential — it appears, it’s used, it expires. If it leaks, it’s usable for hours at most, not years.
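The refresh behavior reduces to one rule that the SDK's credential provider applies on the application's behalf. A sketch of that rule; the five-minute margin here is an illustrative choice, not AWS's documented value:

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expiration: datetime, now=None,
                  margin: timedelta = timedelta(minutes=5)) -> bool:
    """Rule an IMDS-style credential provider applies: fetch fresh credentials
    once the current set is within `margin` of its Expiration timestamp."""
    now = now or datetime.now(timezone.utc)
    return now >= expiration - margin
```

The application never calls anything like this itself; the SDK's credential chain does, which is exactly why there is no rotation event to handle.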


How GCP authenticates

GCP cleanly separates human and machine authentication.

Humans authenticate via Google Account or Workspace (OAuth2). The gcloud CLI handles the flow:

gcloud auth login                        # browser-based OAuth2 for humans
gcloud auth application-default login    # sets up Application Default Credentials for local dev

Machine identities use service accounts, ideally attached to the resource rather than using downloaded key files. Key files are GCP’s equivalent of long-lived AWS access keys — same problems, same risks.

# From inside a GCE VM — ADC uses the attached service account, no key file needed
gcloud auth print-access-token
# Use it: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" ...

How Azure authenticates

Azure’s identity plane is Entra ID (formerly Azure Active Directory). Humans authenticate via Entra ID using OAuth2/OIDC. Machine identities use Managed Identities — Azure handles the entire credential lifecycle, nothing to configure or rotate.

az login                                  # browser-based OAuth2

# Service principal for automation
az login --service-principal \
  -u APP_ID -p CERT_OR_SECRET \
  --tenant TENANT_ID

# From inside an Azure VM — get a token via IMDS, no credentials needed
curl -H 'Metadata: true' \
  'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/'

The credential failure modes that repeat everywhere

In practice, the same patterns appear across all three clouds in every audit:

Leaked credentials — access keys in git commits, .env files, Docker image layers, CI/CD logs. GitHub’s secret scanning finds thousands of these monthly on public repos alone.

Long-lived credentials — an access key from 2019 is still valid in 2026 unless someone explicitly rotated it. I’ve audited accounts where 30% of access keys had never been rotated, some five years old.

Shared credentials — one key used by three services. When you revoke it, three things break. When it leaks, you can’t tell which service was the source.

Credential sprawl — service account keys downloaded for “one quick test” and never deleted. I once found seventeen key files for a single GCP service account, created by different engineers over two years. None rotated. Five belonged to accounts that no longer existed.

The direction of travel in all three clouds is credential-less: workload identity federation, managed identities, instance profiles. We’ll cover this specifically in OIDC Workload Identity: Eliminate Cloud Access Keys Entirely.


How Authorization Evaluates Every API Call

Authorization happens after authentication. The system knows who you are — now it decides what you can do. This decision is enforced through IAM roles vs policies — the building blocks that express what each identity is allowed to do on which resources.

What the evaluation looks like

Every API call triggers an authorization check. You don’t notice when it succeeds. You notice when it fails:

REQUEST:
  Action:    s3:DeleteObject
  Resource:  arn:aws:s3:::prod-backups/2024-01-15.tar.gz
  Principal: arn:aws:iam::123456789012:role/DevEngineerRole
  Context:   { source_ip: "10.0.1.5", mfa: false, time: "14:32 UTC" }

EVALUATION:
  1. Explicit Deny anywhere? → none found
  2. Explicit Allow in any policy? → not granted
  3. Default → DENY

RESULT: AccessDenied

The engineer authenticated successfully. Valid credentials, valid session. But DevEngineerRole has no policy granting s3:DeleteObject on that bucket. Gate 1 passed. Gate 2 denied. They are evaluated independently.

Policy evaluation chains by cloud

AWS — evaluated in layers, explicit Deny wins at any layer:

1. Explicit Deny in any SCP?           → DENY (cannot be overridden anywhere)
2. No SCP Allow?                       → DENY
3. Explicit Deny in identity or resource policy? → DENY
4. Resource-based policy Allow?        → can ALLOW (same account)
5. Permissions boundary — no Allow?    → DENY
6. Session policy — no Allow?          → DENY
7. Identity-based policy Allow?        → ALLOW
Default (nothing granted):             → DENY

The default is always Deny. Every successful authorization is an explicit "Effect": "Allow" somewhere in the chain. This is the opposite of traditional Unix — in the cloud, if you didn’t explicitly grant it, it doesn’t exist.
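The deny-by-default core of this chain fits in a few lines. A toy sketch, deliberately ignoring SCPs, boundaries, resource policies, and condition keys, that implements the two rules that matter most: explicit Deny always wins, and nothing granted means Deny:

```python
from fnmatch import fnmatchcase

def evaluate(statements: list, action: str, resource: str) -> str:
    """Toy AWS-style decision: explicit Deny always wins; default is Deny."""
    decision = "Deny"                                 # implicit deny: nothing granted
    for s in statements:
        # AWS policy patterns use * and ? wildcards, which fnmatch also speaks
        if fnmatchcase(action, s["Action"]) and fnmatchcase(resource, s["Resource"]):
            if s["Effect"] == "Deny":
                return "Deny"                         # explicit Deny cannot be overridden
            decision = "Allow"                        # remember the Allow, keep scanning
    return decision
```

A real evaluation walks every layer in the chain above before reaching the identity policy, but the shape of the decision is the same at each layer.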

GCP — additive, permissions accumulate up the hierarchy:

Permission granted if ANY binding grants it at:
  resource level → project level → folder level → organization level

IAM Deny Policies can override all grants (newer feature).
No binding at any level? → Denied.

Azure RBAC:

1. Explicit Deny Assignment?           → DENY (even Owner can't override)
2. Role Assignment with Allow?         → ALLOW
Default:                               → DENY

Why Confusing Authentication and Authorization Breaks Security

The token-as-authorization antipattern

An application checks for a valid JWT and, if one is present, proceeds. The JWT proves the user authenticated with the IdP; it says nothing about what they’re allowed to do.

from flask import request   # Flask assumed; `app` is the Flask application object

# This is authentication only — anyone with a valid token gets through
@app.route("/admin/delete-user", methods=["POST"])
def delete_user():
    token = request.headers.get("Authorization")
    if verify_token(token):           # asks: is this token real and unexpired?
        delete_user_from_db(...)      # executes for any valid token holder
        return "OK"
    return "Unauthorized", 401

# This separates the two correctly
@app.route("/admin/delete-user", methods=["POST"])
def delete_user():
    token = request.headers.get("Authorization")
    principal = verify_token(token)                    # Gate 1: authentication
    if not has_permission(principal, "users:delete"):  # Gate 2: authorization
        return "Forbidden", 403
    delete_user_from_db(...)
    return "OK"
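The has_permission helper in the corrected version is left undefined above. One minimal way to back it, using a hypothetical static role-to-permission map (a real system would consult the IdP's group claims or a policy engine instead):

```python
# Hypothetical permission store; stands in for whatever your IdP or DB provides.
ROLE_PERMISSIONS = {
    "admin":  {"users:read", "users:delete"},
    "member": {"users:read"},
}

def has_permission(principal: dict, permission: str) -> bool:
    """Gate 2 check: does the principal's role grant this specific permission?"""
    return permission in ROLE_PERMISSIONS.get(principal.get("role"), set())
```

The point is that the lookup key is the specific action, not "is this user logged in" — the same separation the two-gate diagram draws for cloud IAM.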

The short-expiry principle

┌───────────────────────┬─────────────┬───────────────────────────┬─────────────────────────────┐
│ Credential type       │ Provider    │ Typical lifetime          │ Risk                        │
├───────────────────────┼─────────────┼───────────────────────────┼─────────────────────────────┤
│ Access Key + Secret   │ AWS         │ Permanent (until deleted) │ Years of exposure if leaked │
│ STS Temporary Token   │ AWS         │ 15 min – 12 hours         │ Hours at most               │
│ OAuth2 Access Token   │ GCP / Azure │ ~1 hour                   │ Short window                │
│ IMDS Token (VM)       │ All three   │ Minutes                   │ Auto-refreshed by platform  │
└───────────────────────┴─────────────┴───────────────────────────┴─────────────────────────────┘

A credential that expires in an hour has a one-hour exposure window if stolen. A credential that never expires has an unlimited window. This is the operational argument for managed identities and instance profiles, beyond just convenience.

# AWS — configure max session duration at role level
aws iam update-role \
  --role-name MyRole \
  --max-session-duration 3600   # 1 hour max

# GCP — access tokens expire in ~1 hour automatically
gcloud auth print-access-token
# ADC variant: gcloud auth application-default print-access-token

# Azure — token lifetime configurable in Entra ID token policies
az account get-access-token --resource https://management.azure.com/
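The expiry window is visible in the token itself. A sketch that reads the exp claim out of a JWT without verifying the signature — useful only for measuring the remaining exposure window, never for trusting the token:

```python
import base64
import json
import time

def seconds_until_expiry(jwt: str, now=None) -> int:
    """Decode (NOT verify) a JWT payload and report time left on its exp claim."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)      # restore the stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] - (int(time.time()) if now is None else now)
```

Run it against the output of `az account get-access-token` or `gcloud auth print-access-token` and the short-expiry principle stops being abstract: the number is usually under an hour.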

⚠ Production Gotchas

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 1 — "We have MFA, so permissions can be broad"          ║
║                                                                      ║
║  MFA protects Gate 1 only. If a session is hijacked after login    ║
║  (via malware, SSRF, or a stolen session cookie), the attacker has  ║
║  a valid, MFA-authenticated token. Gate 1 is already cleared.       ║
║  Broad permissions in Gate 2 are the full attack surface.           ║
║                                                                      ║
║  Fix: treat Gate 2 (IAM policy) as your primary blast-radius        ║
║  control. MFA buys time. Least privilege limits damage.             ║
╚══════════════════════════════════════════════════════════════════════╝

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 2 — Debugging AccessDenied by rotating credentials      ║
║                                                                      ║
║  AWS AccessDenied is an authorization failure. The identity         ║
║  authenticated successfully — there's no Allow in the policy.       ║
║  Rotating the access key does nothing.                              ║
║                                                                      ║
║  Fix: check the policy chain. Use simulate-principal-policy to      ║
║  confirm where the Allow is missing before touching credentials.    ║
╚══════════════════════════════════════════════════════════════════════╝

╔══════════════════════════════════════════════════════════════════════╗
║  ⚠  GOTCHA 3 — Application-layer authZ with broad cloud IAM        ║
║                                                                      ║
║  "The app controls access" is not a substitute for scoped cloud     ║
║  IAM. An SSRF vulnerability, exposed debug endpoint, or            ║
║  compromised dependency bypasses the application layer entirely.    ║
║  The cloud identity's permissions become the attacker's surface.    ║
║                                                                      ║
║  Fix: both layers enforce least privilege independently.            ║
╚══════════════════════════════════════════════════════════════════════╝

Authentication vs Authorization Audit Checklist

Split your IAM review along the authN/authZ boundary — they’re different problems with different fixes.

Authentication — Gate 1:
– Are there long-lived access keys that could be replaced with STS/Managed Identity?
– Is MFA enforced for all human identities with console or API access?
– Are service account key files present where workload identity is available?
– Are credentials stored in a secrets manager — not in code, .env files, or repos?
– When was each long-lived credential last rotated?

Authorization — Gate 2:
– Does every policy follow least privilege — only the permissions the workload actually uses?
– Are there wildcards (s3:*, "Resource": "*") that could be narrowed?
– Are write, delete, and IAM-modification actions scoped to specific resources?
– Are SCPs or permissions boundaries capping maximum permissions at org or account level?
– When were each role’s permissions last reviewed against actual usage (Access Analyzer)?
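The rotation question in the Gate 1 list can be scripted. A sketch over the AccessKeyMetadata shape returned by `aws iam list-access-keys` — the AccessKeyId and CreateDate field names are real, but the 90-day threshold is a policy choice, not an AWS default:

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys: list, max_age_days: int = 90, now=None) -> list:
    """Return AccessKeyIds of long-lived credentials past the rotation deadline.
    `keys` mimics AccessKeyMetadata entries from `aws iam list-access-keys`."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]
```

Feed it the parsed JSON from the CLI call per user and the audit question becomes a one-line report instead of a spreadsheet exercise.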


Quick Reference

┌────────────────────────────┬──────────────────────────────────────────────────┐
│ Term                       │ What it means                                    │
├────────────────────────────┼──────────────────────────────────────────────────┤
│ Authentication (AuthN)     │ Verifying identity — are you who you claim?      │
│ Authorization (AuthZ)      │ Verifying permission — are you allowed to act?   │
│ MFA                        │ Two distinct factors; strengthens Gate 1 only    │
│ STS (AWS)                  │ Security Token Service — issues temp credentials │
│ Access Key                 │ Long-lived AWS credential; avoid for services    │
│ Instance profile (AWS)     │ Container attaching a role to EC2                │
│ Managed Identity (Azure)   │ Credential-less identity for Azure services      │
│ Service Account (GCP)      │ Machine identity; prefer attached over key file  │
│ HTTP 401                   │ Authentication failure — prove who you are       │
│ HTTP 403 / AccessDenied    │ Authorization failure — fix the policy           │
└────────────────────────────┴──────────────────────────────────────────────────┘

Commands to know:
┌──────────────────────────────────────────────────────────────────────────────┐
│  # AWS — assume a role and get temporary credentials                        │
│  aws sts assume-role --role-arn arn:aws:iam::ACCOUNT:role/ROLE \            │
│    --role-session-name my-session --duration-seconds 3600                   │
│                                                                              │
│  # AWS — simulate a policy to debug AccessDenied before touching anything   │
│  aws iam simulate-principal-policy \                                         │
│    --policy-source-arn arn:aws:iam::ACCOUNT:role/MyRole \                   │
│    --action-names s3:GetObject \                                             │
│    --resource-arns arn:aws:s3:::my-bucket/*                                 │
│                                                                              │
│  # AWS — check what credentials your session is using                       │
│  aws sts get-caller-identity                                                 │
│                                                                              │
│  # GCP — print the current access token (expires in ~1 hour)                │
│  gcloud auth print-access-token                                              │
│                                                                              │
│  # GCP — print Application Default token                                     │
│  gcloud auth application-default print-access-token                         │
│                                                                              │
│  # Azure — get current token for ARM                                         │
│  az account get-access-token --resource https://management.azure.com/       │
│                                                                              │
│  # Azure — check who you're logged in as                                     │
│  az account show                                                             │
└──────────────────────────────────────────────────────────────────────────────┘

Framework Alignment

  • CISSP Domain 5 — Identity and Access Management: AuthN and AuthZ are the two core mechanisms; this episode defines the boundary
  • CISSP Domain 1 — Security & Risk Management: conflating the two creates systematic, measurable risk with different attack surfaces
  • ISO 27001:2022 5.17 (Authentication information): managing credentials and authentication mechanisms across the identity lifecycle
  • ISO 27001:2022 8.5 (Secure authentication): technical controls — MFA, session management, credential policies
  • ISO 27001:2022 5.15 (Access control): policy requirements that depend on cleanly separating identity from permission
  • SOC 2 CC6.1: logical access controls — this episode defines the two-gate model CC6.1 is built on
  • SOC 2 CC6.7: access restrictions enforced at the authorization layer, not just authentication

Key Takeaways

  • Authentication proves identity; authorization proves permission — two gates, two separate failure modes, two separate fixes
  • AWS AccessDenied is a Gate 2 failure — the credential is valid, the policy is missing; fix the policy
  • Short-lived credentials (STS, Managed Identities, instance profiles) reduce the blast radius of a credential compromise from years to hours
  • MFA hardens Gate 1 — it has no effect on what an authenticated identity can do
  • HTTP 401 = Gate 1 failed; HTTP 403 = Gate 2 failed — the status code tells you where to look
  • Application-layer authorization and cloud IAM authorization are independent — both must enforce least privilege

What’s Next

You now know what the two gates are and where failures in each originate. IAM Roles vs Policies: How Cloud Authorization Actually Works goes into the mechanics of Gate 2 — the permissions, policies, and roles that implement authorization in practice, and the structural patterns that keep them from turning into an unmanageable sprawl.

Next: IAM Roles vs Policies: How Cloud Authorization Actually Works

Get the IAM roles vs policies breakdown in your inbox when it publishes → linuxcent.com/subscribe