Secrets Management in CI/CD Pipelines: Patterns, Anti-Patterns, and Vault Integration

Introduction: Why Secrets Are the #1 Cause of CI/CD Compromise

If you examine the root cause of almost every major CI/CD breach in recent years — from the Codecov supply chain attack to the CircleCI security incident — you will find the same culprit: compromised secrets. API keys, cloud credentials, database passwords, signing certificates — these are the skeleton keys that attackers pursue, and CI/CD pipelines are where they concentrate their efforts.

The reason is structural. Pipelines exist in a uniquely dangerous position: they must have access to production credentials to deploy software, yet they are inherently ephemeral, multi-tenant, and exposed to untrusted code. Every pull request, every dependency update, every contributor push triggers pipeline execution — and each run is a potential vector for secret exfiltration.

The challenge is not simply “don’t put secrets in code.” It is much deeper than that. How do you give a short-lived, disposable compute environment access to your most sensitive credentials without those credentials leaking into logs, artifacts, downstream jobs, or the hands of malicious actors? That is the question this guide answers.

We will cover how secrets get exposed, how to inject them safely, how to integrate HashiCorp Vault and cloud-native identity federation, and what anti-patterns to avoid. This is a practitioner’s guide — expect real YAML, real CLI commands, and real architectural decisions.

How Secrets Get Exposed in CI/CD

Before we discuss solutions, we need to understand the threat landscape. Secrets leak from pipelines through several well-documented vectors.

Hardcoded Secrets in Pipeline Configs and IaC

The most basic — and still disturbingly common — leak vector is hardcoded credentials directly in pipeline configuration files or Infrastructure as Code templates. A developer testing a deployment might drop an AWS access key into a .github/workflows/deploy.yml or a Terraform main.tf file, commit it, and forget about it. Even if removed in a subsequent commit, the secret lives forever in Git history.

# NEVER DO THIS — hardcoded credentials in a workflow file
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    steps:
      - run: aws s3 sync ./build s3://my-bucket
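To see why "removed in a subsequent commit" is not enough, the following sketch (with an obviously fake key) shows that a deleted secret remains readable to anyone with clone access:

```shell
# Demonstration: deleting a committed secret does not remove it from history
rm -rf /tmp/leak-demo && mkdir /tmp/leak-demo && cd /tmp/leak-demo
git init -q .
echo 'AWS_SECRET_ACCESS_KEY: FAKEKEY123EXAMPLE' > deploy.yml
git add deploy.yml
git -c user.email=ci@example.com -c user.name=demo commit -qm "add workflow"
git rm -q deploy.yml
git -c user.email=ci@example.com -c user.name=demo commit -qm "remove secret"
# The key is gone from the working tree but still visible in the diff history:
git log --all -p | grep FAKEKEY123EXAMPLE
```

This is why remediation means rotating the credential, not rewriting history and hoping nobody cloned in the meantime.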

Secrets in Environment Variables Printed to Logs

CI platforms typically inject secrets as environment variables. The problem arises when pipeline steps inadvertently print those variables to stdout. A careless env command, a debug printenv, or a verbose tool that dumps its configuration can expose secrets in build logs that are often retained for days or weeks and accessible to all project members.

# Dangerous: this prints ALL environment variables, including secrets
- run: printenv | sort

# Also dangerous: trace-level logging that can dump provider configuration
- run: TF_LOG=TRACE terraform plan
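If you genuinely need to debug the environment, print variable names only, never values:

```shell
# Safer: list variable NAMES only, never their values
printenv | cut -d= -f1 | sort
```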

Secrets Persisted in Build Artifacts or Container Layers

A secret injected during a Docker build might persist in an intermediate layer even after it is deleted in a subsequent RUN instruction. Similarly, build artifacts — JARs, ZIPs, compiled binaries — might embed configuration files containing credentials that were present at build time.

# BAD: The secret persists in the layer created by the COPY instruction
COPY .env /app/.env
RUN /app/setup.sh
RUN rm /app/.env   # Too late — it is still in a previous layer
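The usual fix is a BuildKit secret mount, which exposes the secret to a single RUN instruction and never writes it into any layer (the secret id and paths here are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
# GOOD: the secret is mounted only for the duration of this RUN
RUN --mount=type=secret,id=app_env,target=/app/.env /app/setup.sh
```

The secret is supplied at build time with `docker build --secret id=app_env,src=.env .` and is absent from the final image and every intermediate layer.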

Secrets Accessible to Untrusted PR Workflows

This is one of the most dangerous vectors, particularly in open-source projects. GitHub Actions, for example, does not provide secrets to workflows triggered by pull_request from forks — by design. However, the pull_request_target event does have access to secrets, and if the workflow checks out and executes the PR author’s code, it creates a direct secret exfiltration path.

Overly Broad Secret Scopes

Many organizations configure secrets at the organization or group level when they should be scoped to individual repositories or environments. An org-level secret in GitHub Actions is available to every repository in that organization. If any one of those repositories is compromised — or simply has a misconfigured workflow — all org-level secrets are at risk.
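Where an org-level secret truly is needed, restrict its visibility to the repositories that use it. With the GitHub CLI, for example (secret and repository names are illustrative):

```shell
# Scope an org secret to selected repositories instead of the whole org
gh secret set DEPLOY_KEY --org my-org --visibility selected --repos my-repo
```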

Secrets Injection Patterns

Now that we understand how secrets leak, let us examine how to get them into pipelines safely.

Native Platform Secrets

Every major CI/CD platform provides a built-in secrets management mechanism. GitHub Actions has repository, environment, and organization secrets. GitLab CI has project-level and group-level CI/CD variables with optional masking and protection. These are the simplest starting point.

# GitHub Actions: referencing a repository secret
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        env:
          API_KEY: ${{ secrets.PRODUCTION_API_KEY }}
        run: ./deploy.sh

# GitLab CI: using a masked, protected variable
deploy:
  stage: deploy
  script:
    - echo "Deploying with masked credentials"
    - ./deploy.sh
  variables:
    API_KEY: $PRODUCTION_API_KEY
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Native platform secrets are adequate for many use cases, but they have significant limitations: no dynamic generation, limited audit logging, manual rotation, and no centralized management across multiple platforms.

External Secret Managers

For organizations with mature security requirements, external secret managers — HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault — provide centralized control, audit logging, dynamic secret generation, automatic rotation, and fine-grained access policies. We will dive deep into Vault integration in the next section.

Just-in-Time Injection vs Pre-loaded Secrets

Pre-loaded secrets are configured once and made available to all pipeline runs. This is how most platform-native secrets work. Just-in-time (JIT) injection retrieves secrets at the moment they are needed, often with short TTLs. JIT injection is superior because it reduces the window of exposure, enables dynamic credentials, and provides per-run audit trails.

# JIT injection: fetch the secret only when needed
- name: Get database credentials
  run: |
    # KV v2: the CLI adds the data/ segment to the API path itself
    DB_CREDS=$(vault kv get -format=json secret/myapp/db)
    export DB_USER=$(echo "$DB_CREDS" | jq -r '.data.data.username')
    export DB_PASS=$(echo "$DB_CREDS" | jq -r '.data.data.password')
    ./run-migrations.sh

Masked vs Encrypted Secrets

A common misconception: “masked” does not mean “secure.” When GitHub Actions masks a secret, it performs string replacement in log output. If the secret value is short (e.g., a 4-character token), masking may not activate. If the secret is base64-encoded or transformed in any way, the transformed value will not be masked. Masking is a convenience, not a security boundary. Encrypted-at-rest secrets (which all major platforms provide) protect against platform-side storage compromise but do nothing to prevent runtime exfiltration.
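A quick illustration of the transformation problem (the token value is hypothetical):

```shell
# Masking matches the literal secret string; any transformation defeats it.
# API_KEY stands in for a value the CI platform would inject and mask.
API_KEY="s3cr3t-t0ken"
ENCODED=$(printf '%s' "$API_KEY" | base64)
echo "$ENCODED"   # the masker looks for the original string, so this prints in the clear
```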

Integrating HashiCorp Vault with CI/CD

HashiCorp Vault is the most widely adopted external secrets manager for CI/CD pipelines. It supports multiple authentication methods suitable for automated systems, dynamic secret generation, and fine-grained policies. Here is how to integrate it with the two most common CI/CD platforms.

Vault AppRole Auth for CI Runners

AppRole is Vault’s machine-oriented authentication method. It uses a Role ID (like a username) and a Secret ID (like a password) to authenticate. The Secret ID can be configured for single use and with a TTL, making it suitable for CI runners.

# Enable AppRole auth method
vault auth enable approle

# Create a policy for CI
vault policy write ci-deploy - <<EOF
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
path "database/creds/myapp-role" {
  capabilities = ["read"]
}
EOF

# Create an AppRole with the CI policy
vault write auth/approle/role/ci-deploy \
  token_policies="ci-deploy" \
  token_ttl=15m \
  token_max_ttl=30m \
  secret_id_ttl=10m \
  secret_id_num_uses=1

# Retrieve the Role ID (store in CI platform as a non-sensitive variable)
vault read auth/approle/role/ci-deploy/role-id

# Generate a single-use Secret ID (store in CI platform as a secret)
vault write -f auth/approle/role/ci-deploy/secret-id
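On the runner side, the pipeline exchanges the two values for a short-lived token. A sketch, assuming ROLE_ID and SECRET_ID were injected by the CI platform:

```shell
# Log in with AppRole and capture only the client token
export VAULT_TOKEN=$(vault write -field=token auth/approle/login \
  role_id="$ROLE_ID" secret_id="$SECRET_ID")

# Subsequent commands authenticate with the resulting 15-minute token
vault kv get -field=password secret/myapp/db
```

Because the Secret ID above was created with secret_id_num_uses=1, a replayed login attempt with a captured Secret ID fails.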

Vault JWT/OIDC Auth with GitHub Actions

The modern and preferred approach for GitHub Actions is JWT/OIDC authentication. GitHub Actions can issue an OIDC token for each workflow run, and Vault can validate that token to authenticate the pipeline — eliminating the need to store any Vault credentials in GitHub.

# Configure Vault JWT auth for GitHub Actions
vault auth enable jwt

vault write auth/jwt/config \
  bound_issuer="https://token.actions.githubusercontent.com" \
  oidc_discovery_url="https://token.actions.githubusercontent.com"

# Create a role that binds to a specific repo and branch
vault write auth/jwt/role/github-deploy \
  role_type="jwt" \
  bound_audiences="https://github.com/my-org" \
  bound_claims_type="glob" \
  bound_claims='{"sub": "repo:my-org/my-repo:ref:refs/heads/main"}' \
  user_claim="repository_owner" \
  token_policies="ci-deploy" \
  token_ttl="10m"

Then in your GitHub Actions workflow, use the hashicorp/vault-action:

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Import secrets from Vault
        uses: hashicorp/vault-action@v3
        with:
          url: https://vault.mycompany.com
          method: jwt
          role: github-deploy
          jwtGithubAudience: https://github.com/my-org
          secrets: |
            secret/data/myapp/db username | DB_USER ;
            secret/data/myapp/db password | DB_PASS

      - name: Run deployment
        run: |
          echo "Deploying with fetched credentials"
          ./deploy.sh

Vault JWT Auth with GitLab CI

GitLab CI has native support for Vault integration using id_tokens. GitLab can generate a JWT that Vault validates, similar to the GitHub Actions approach.

# Configure Vault for GitLab JWT auth
vault auth enable -path=gitlab jwt

vault write auth/gitlab/config \
  bound_issuer="https://gitlab.com" \
  jwks_url="https://gitlab.com/-/jwks" \
  supported_algs="RS256"

vault write auth/gitlab/role/gitlab-deploy \
  role_type="jwt" \
  bound_claims='{"project_id": "12345", "ref_protected": "true"}' \
  user_claim="user_email" \
  token_policies="ci-deploy" \
  token_ttl="10m"

And in your .gitlab-ci.yml:

deploy:
  stage: deploy
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.mycompany.com
  secrets:
    DB_USER:
      vault: myapp/db/username@secret
      token: $VAULT_ID_TOKEN
    DB_PASS:
      vault: myapp/db/password@secret
      token: $VAULT_ID_TOKEN
  script:
    - ./deploy.sh

Dynamic Secrets

One of Vault’s most powerful features is dynamic secret generation. Instead of storing static database passwords, Vault can generate short-lived credentials on demand. When the pipeline finishes, the credentials automatically expire.

# Enable the database secrets engine
vault secrets enable database

# Configure a PostgreSQL connection
vault write database/config/myapp-db \
  plugin_name=postgresql-database-plugin \
  connection_url="postgresql://{{username}}:{{password}}@db.mycompany.com:5432/myapp" \
  allowed_roles="myapp-role" \
  username="vault_admin" \
  password="vault_admin_password"

# Create a role that generates credentials with a 1-hour TTL
vault write database/roles/myapp-role \
  db_name=myapp-db \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="2h"

# In your pipeline, fetch dynamic credentials
# vault read database/creds/myapp-role
# Returns a unique username/password pair valid for 1 hour

Dynamic secrets eliminate the problem of credential rotation entirely. Each pipeline run gets its own unique credentials, and compromised credentials expire automatically.

Short-Lived Credentials and Workload Identity

The most significant advancement in CI/CD secrets management in recent years is workload identity federation — the ability for a CI/CD platform to authenticate directly to a cloud provider using its own identity, without any stored credentials.

GitHub Actions OIDC with AWS

GitHub Actions can assume an AWS IAM role directly using OIDC federation. No AWS access keys are stored anywhere.

# First, create an OIDC identity provider in AWS (via Terraform)
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

# Create an IAM role that GitHub Actions can assume
resource "aws_iam_role" "github_actions" {
  name = "github-actions-deploy"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }]
  })
}

# GitHub Actions workflow using OIDC
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-east-1
          role-duration-seconds: 900   # 15 minutes

      - name: Deploy
        run: aws s3 sync ./build s3://my-bucket

GitHub Actions OIDC with GCP

Google Cloud supports the same pattern through Workload Identity Federation.

# Create a Workload Identity Pool and Provider (gcloud CLI)
gcloud iam workload-identity-pools create "github-pool" \
  --project="my-project" \
  --location="global" \
  --display-name="GitHub Actions Pool"

gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --project="my-project" \
  --location="global" \
  --workload-identity-pool="github-pool" \
  --display-name="GitHub Provider" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
  --attribute-condition="assertion.repository_owner == 'my-org'" \
  --issuer-uri="https://token.actions.githubusercontent.com"

# Grant the Workload Identity the ability to impersonate a service account
gcloud iam service-accounts add-iam-policy-binding \
  deploy-sa@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/github-pool/attribute.repository/my-org/my-repo"

# GitHub Actions workflow for GCP
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456/locations/global/workloadIdentityPools/github-pool/providers/github-provider
          service_account: deploy-sa@my-project.iam.gserviceaccount.com

      - name: Deploy to Cloud Run
        run: gcloud run deploy my-service --image=gcr.io/my-project/my-app:latest

GitLab CI OIDC Federation

GitLab CI supports the same OIDC federation pattern with AWS, GCP, and Azure. The configuration is similar — you configure the cloud provider to trust GitLab’s OIDC issuer and bind access to specific project IDs, branches, or environments.

# GitLab CI with AWS OIDC
assume_role:
  stage: deploy
  id_tokens:
    AWS_OIDC_TOKEN:
      aud: https://sts.amazonaws.com
  script:
    - >
      STS_CREDS=$(aws sts assume-role-with-web-identity
      --role-arn arn:aws:iam::123456789012:role/gitlab-deploy
      --role-session-name "gitlab-ci-${CI_PIPELINE_ID}"
      --web-identity-token "${AWS_OIDC_TOKEN}"
      --duration-seconds 900)
    - export AWS_ACCESS_KEY_ID=$(echo "$STS_CREDS" | jq -r '.Credentials.AccessKeyId')
    - export AWS_SECRET_ACCESS_KEY=$(echo "$STS_CREDS" | jq -r '.Credentials.SecretAccessKey')
    - export AWS_SESSION_TOKEN=$(echo "$STS_CREDS" | jq -r '.Credentials.SessionToken')
    - aws s3 sync ./build s3://my-bucket

Why Short-Lived Credentials Win

The advantages of short-lived, federated credentials over stored long-lived secrets are substantial:

  • No secrets to steal. There are no stored credentials to exfiltrate. The pipeline authenticates with a signed JWT that is valid only for that specific run.
  • No rotation needed. Credentials are generated per-run and expire automatically. There is nothing to rotate.
  • Granular scoping. Access can be restricted to specific repositories, branches, environments, and even specific workflow jobs.
  • Full audit trail. Cloud provider logs show exactly which pipeline run accessed which resources, tied to the OIDC claim.
  • Blast radius reduction. Even if a credential is somehow exfiltrated, it expires in minutes, not months.

Anti-Patterns to Avoid

Knowing what not to do is as important as knowing the correct patterns. These anti-patterns are observed regularly in production environments.

Using Personal Access Tokens in CI

Personal access tokens (PATs) tied to individual developer accounts are one of the most common and most dangerous patterns. When a developer leaves the organization, their PAT may continue to work. PATs typically have broad permissions — far more than the pipeline needs. If exfiltrated, the attacker gains access to everything that developer could access.

Instead: Use machine accounts with scoped tokens, or better yet, use GitHub App installation tokens or OIDC federation.
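A sketch of the GitHub App alternative (app ID, key name, and API call are illustrative), using the official actions/create-github-app-token action to mint a short-lived installation token per run:

```yaml
steps:
  - name: Mint an installation token
    uses: actions/create-github-app-token@v1
    id: app-token
    with:
      app-id: ${{ vars.CI_APP_ID }}
      private-key: ${{ secrets.CI_APP_PRIVATE_KEY }}

  - name: Use the token instead of a PAT
    env:
      GH_TOKEN: ${{ steps.app-token.outputs.token }}
    run: gh api repos/my-org/my-repo/releases
```

The installation token expires after an hour and is scoped to the App's permissions, not to any individual's account.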

Sharing Secrets Across Environments

Using the same database password for development, staging, and production — or the same API key for all environments — means that a compromise of your least-secured environment (usually dev) gives attackers access to production. Environment separation is meaningless if the credentials are the same.

Instead: Use environment-scoped secrets. In GitHub Actions, configure deployment environments with their own secret stores. In GitLab, use protected variables scoped to specific environments.
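In GitHub Actions, that scoping looks like binding the job to a deployment environment, so the production secret resolves only from the production environment's store (names illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # secrets come from the 'production' environment, honoring its protection rules
    steps:
      - run: ./deploy.sh
        env:
          API_KEY: ${{ secrets.PRODUCTION_API_KEY }}
```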

Not Rotating Secrets After Exposure

When a secret is accidentally logged, committed to a repository, or exposed in a build artifact, many teams simply delete the log or remove the commit without rotating the credential. This is inadequate. You must assume the secret has been observed and rotate it immediately.

Instead: Treat any exposure as a compromise. Rotate immediately. Automate rotation where possible. Use dynamic secrets to make the problem irrelevant.

Trusting pull_request_target with Secrets

The pull_request_target event in GitHub Actions runs in the context of the base branch, which means it has access to secrets. This is intended for safe operations like labeling PRs. However, if your workflow checks out the PR head ref and runs that code, you have given an external contributor full access to your secrets.

# DANGEROUS: This gives the PR author access to all repository secrets
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # Checking out untrusted code!
      - run: make test  # Running untrusted code with access to secrets!

Instead: Never check out and execute PR code in a pull_request_target workflow. If you need to run tests on PR code with secrets, use a two-workflow approach: run untrusted code in a pull_request workflow (no secrets), then use a separate workflow_run trigger for trusted operations.
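A minimal sketch of that split (workflow and job names are illustrative): the first workflow runs the untrusted code with no secrets; the second runs in a trusted context and never checks out the PR head.

```yaml
# ci.yml — runs the PR author's code; fork PRs get NO secrets here
name: CI
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test

# post-ci.yml — trusted follow-up; has secrets, runs only repo-owned code
name: Post CI
on:
  workflow_run:
    workflows: [CI]
    types: [completed]
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Trusted step; never check out or execute the PR head here"
```

Note that the workflow_run trigger references the first workflow by its name field, not its filename.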

Defense in Depth: A Layered Approach

No single control is sufficient. Effective secrets management requires multiple overlapping layers of defense.

Secret Scanning

Implement scanning at three stages:

  • Pre-commit: Use tools like gitleaks or detect-secrets as pre-commit hooks to prevent secrets from ever entering the repository.
  • In-pipeline: Run secret scanning as a CI step on every pull request. Tools like trufflehog can scan diffs, commit history, and even binary files.
  • Post-commit: Enable GitHub’s built-in secret scanning or GitLab’s secret detection to continuously scan all repository content and alert on findings.

# Pre-commit hook with gitleaks
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.2
    hooks:
      - id: gitleaks

# In-pipeline scanning with trufflehog
- name: Scan for secrets
  run: |
    docker run --rm -v "$PWD:/repo" trufflesecurity/trufflehog:latest \
      git file:///repo --only-verified --fail

Audit Logging for Secret Access

Every access to a secret should be logged. Vault provides detailed audit logs by default. Cloud provider secret managers (AWS Secrets Manager, GCP Secret Manager) integrate with CloudTrail and Cloud Audit Logs respectively. For platform-native secrets, enable the audit log features available in GitHub Enterprise or GitLab Ultimate.

# Enable Vault audit logging
vault audit enable file file_path=/var/log/vault/audit.log

# Each access generates a log entry like:
# {"type": "response", "auth": {"token_type": "service", "policies": ["ci-deploy"]},
#  "request": {"path": "secret/data/myapp/db", "operation": "read"}, ...}

Least Privilege Scoping

Apply the principle of least privilege aggressively:

  • Scope secrets to the specific repository that needs them, not the organization.
  • Use environment-level secrets so that production credentials are only available to workflows deploying to production.
  • Configure branch protection so that only workflows running on protected branches can access production secrets.
  • In Vault, write policies that grant access to the narrowest possible path with read-only capabilities.

# Vault policy: minimal access for a specific microservice's CI
path "secret/data/payments-service/production" {
  capabilities = ["read"]
}

# Deny access to everything else by default (Vault's default behavior)
# No wildcards, no broad paths

Automated Rotation

Static secrets should be rotated on a regular schedule and immediately after any suspected exposure. Automate this process:

  • Use Vault’s dynamic secrets to eliminate the need for rotation entirely.
  • For secrets that must be static (e.g., third-party API keys), use AWS Secrets Manager’s built-in rotation with Lambda functions or similar cloud-native solutions.
  • Implement alerts for secrets that have not been rotated within their expected lifetime.

# AWS Secrets Manager: configure automatic rotation
aws secretsmanager rotate-secret \
  --secret-id myapp/api-key \
  --rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:rotate-api-key \
  --rotation-rules '{"ScheduleExpression": "rate(30 days)"}'

Conclusion: Secrets Management Is Continuous

Secrets management is not a box to check during initial pipeline setup. It is an ongoing discipline that must evolve as your infrastructure grows, as new attack techniques emerge, and as your team changes. The patterns described in this guide — OIDC federation, dynamic secrets, just-in-time injection, least privilege scoping, and layered scanning — represent the current state of the art, but they require continuous attention.

Start by auditing your current pipelines. Identify every stored credential. For each one, ask: can this be replaced with a short-lived credential or workload identity federation? Can this scope be narrowed? Is this secret being logged anywhere? Is there an audit trail for every access?

The organizations that suffer CI/CD breaches are not the ones that never stored a secret — that is impossible. They are the ones that treated secrets management as a one-time configuration task rather than a living security practice. Build the automation, enforce the policies, monitor the access logs, and iterate. Your pipelines will be significantly harder to compromise as a result.