Introduction
Understanding how CI/CD pipelines are attacked is only half the picture. Threat modeling and attack taxonomy give us a map of the battlefield, but without concrete defensive patterns and engineering mitigations, that knowledge remains theoretical. This guide bridges the gap between awareness and action.
The goal is not to build an impenetrable fortress — that does not exist. Instead, we focus on reducing the attack surface, limiting the blast radius when something does go wrong, and making pipelines resilient enough to recover quickly. Every control described here maps back to real-world attack patterns: poisoned pipelines, credential theft, dependency hijacking, and artifact tampering.
We will walk through defenses layer by layer — from source code to runtime — and then cover the detection and incident response capabilities that close the loop. Whether you are securing GitHub Actions, GitLab CI, Jenkins, or any other CI/CD platform, the principles remain the same.
Defense in Depth for CI/CD
No single security control is sufficient to protect a CI/CD pipeline. Attackers are creative, and they will find the gap in any single-layer defense. The only viable strategy is defense in depth: overlapping controls at every stage of the software delivery lifecycle.
Mapping Defenses to OWASP Top 10 CI/CD Security Risks
The OWASP Top 10 CI/CD Security Risks provides a structured framework for understanding what can go wrong. Each risk — from CICD-SEC-1 (Insufficient Flow Control Mechanisms) through CICD-SEC-10 (Insufficient Logging and Visibility) — demands specific mitigations. The defenses in this guide are organized to address these risks systematically.
The Three Pillars: Prevention, Detection, Response
- Prevention: Controls that stop attacks before they succeed — branch protections, minimal permissions, signed artifacts, ephemeral runners.
- Detection: Monitoring and alerting that surfaces anomalies — unexpected pipeline behavior, configuration drift, new dependencies, secret exposure.
- Response: Playbooks and procedures for when defenses fail — credential revocation, blast radius analysis, artifact integrity verification, forensic investigation.
Defense at Each Layer
Think of the CI/CD pipeline as a chain of trust boundaries:
- Source: Where code and configuration enter the pipeline
- Build: Where code is compiled, tested, and packaged
- Artifact: Where build outputs are stored and distributed
- Deploy: Where artifacts reach production infrastructure
- Runtime: Where deployed software executes and is monitored
Each layer has distinct threats and requires distinct defenses. Compromise at one layer should not automatically cascade to the next.
Source Layer Defenses — Protecting Pipeline Inputs
The source layer is where most CI/CD attacks begin. An attacker who can modify code, pipeline definitions, or configuration files controls what the pipeline executes. Source layer defenses ensure that only authorized, reviewed, and verified changes enter the pipeline.
Branch Protection Rules
Branch protection is the first line of defense. At a minimum, main and release branches should enforce:
- Required pull request reviews: No direct pushes to protected branches. All changes go through code review.
- Required status checks: CI must pass before merge. This prevents merging broken or malicious code that bypasses tests.
- No force pushes: Force pushing rewrites history and can be used to remove evidence of malicious commits.
- Require linear history: Prevents merge commits that can obscure malicious changes in complex diffs.
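These settings can be applied programmatically as well as through the UI. A sketch using the branch protection REST API via the `gh` CLI — the repository slug, status-check context, and reviewer count are placeholders to adjust for your organization:

```bash
# Branch protection payload for the main branch.
# NOTE: repo slug, status-check context, and reviewer count are placeholders.
cat > protection.json <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["ci/build"] },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "require_code_owner_reviews": true,
    "required_approving_review_count": 2
  },
  "restrictions": null,
  "allow_force_pushes": false,
  "required_linear_history": true
}
EOF

# Apply it with the GitHub CLI (requires an authenticated gh session).
if command -v gh >/dev/null 2>&1; then
  gh api -X PUT "repos/your-org/your-repo/branches/main/protection" \
    --input protection.json || echo "gh api call failed (expected outside an authenticated session)"
else
  echo "gh CLI not found; protection.json prepared but not applied"
fi
```

Keeping the payload in version control makes branch protection itself reviewable and re-appliable after accidental changes.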
CODEOWNERS for Sensitive Paths
Not all files in a repository carry the same risk. Pipeline definitions, infrastructure-as-code templates, and container configurations are high-value targets. Use CODEOWNERS to require review from specific teams for sensitive paths:
```
# .github/CODEOWNERS

# Pipeline definitions require security team review
.github/workflows/    @org/security-team
.gitlab-ci.yml        @org/security-team
Jenkinsfile           @org/security-team

# Infrastructure as code
terraform/            @org/platform-team @org/security-team
pulumi/               @org/platform-team @org/security-team

# Container definitions
Dockerfile*           @org/security-team
docker-compose*.yml   @org/security-team

# Dependency manifests
package.json          @org/security-team
requirements.txt      @org/security-team
go.sum                @org/security-team
```
Signed Commits and Verification
Commit signing provides cryptographic proof of authorship. Without it, an attacker who compromises a developer’s access token can push commits that appear to come from anyone. Enable commit signature verification on protected branches to ensure every commit is signed with a verified GPG or SSH key.
```bash
# Configure Git to sign commits with an SSH key
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# Verify a commit signature
git verify-commit HEAD
```
PR Review Policies
Code review is a human control, and it needs guardrails:
- No self-approval: The author of a PR should never be able to approve their own changes.
- Required reviewers from the security team for changes to pipeline files, secrets configuration, or deployment manifests.
- Dismiss stale reviews: If new commits are pushed after approval, previous approvals should be dismissed to force re-review.
- Require review from code owners: Pair this with CODEOWNERS to enforce domain-specific review requirements.
Limiting Pipeline Triggers
Not every event should trigger a full pipeline run, especially one that has access to secrets:
- Fork restrictions: PRs from forks should run in a restricted context with no access to repository secrets.
- Contributor permissions: Only collaborators with write access should be able to trigger workflows that access sensitive resources.
- Approval for first-time contributors: Require manual approval before running pipelines for new contributors.
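For GitHub Actions, the first two restrictions can be expressed in the workflow itself (first-contributor approval is a repository setting). A sketch — the workflow name and build command are placeholders:

```yaml
# Untrusted PR validation: the pull_request event runs with a read-only
# GITHUB_TOKEN, and fork PRs get no access to repository secrets.
name: pr-validate
on:
  pull_request:     # NOT pull_request_target
permissions:
  contents: read    # read-only token for the whole workflow
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```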
Build Layer Defenses — Securing the Build Process
The build layer is where code becomes executable. Compromise here means an attacker can inject malicious logic into your artifacts without modifying source code. Build layer defenses focus on isolation, ephemerality, and minimal privilege.
Ephemeral Runners
Persistent CI runners accumulate state: cached credentials, leftover files from previous builds, environment variables that leak between jobs. Ephemeral runners eliminate this class of risk entirely by provisioning a fresh VM or container for every job and destroying it immediately after.
```yaml
# GitHub Actions: self-hosted ephemeral runners with actions-runner-controller
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: ephemeral-runners
spec:
  replicas: 5
  template:
    spec:
      ephemeral: true
      repository: your-org/your-repo
      labels:
        - self-hosted
        - ephemeral
        - linux
      dockerdWithinRunnerContainer: false
      image: ghcr.io/actions/actions-runner:latest
      resources:
        limits:
          cpu: "2"
          memory: "4Gi"
```
Isolated Build Environments
Even with ephemeral runners, builds that share caches or network namespaces can leak information between jobs. Ensure:
- No shared caches between untrusted builds: Cache poisoning is a real attack vector. Isolate caches per branch or per PR.
- Separate runner pools: Production deployment runners should never be shared with PR validation runners.
- Container isolation: Use rootless containers or microVMs (Firecracker, gVisor) for stronger isolation than standard Docker.
Network Restrictions During Builds
A compromised build step with unrestricted network access can exfiltrate secrets to attacker-controlled infrastructure. Restrict outbound network access:
- No outbound internet: The strictest option. All dependencies must come from internal mirrors or pre-cached images.
- Allowlisted domains only: If internet access is necessary, restrict it to known-good registries and package repositories.
- DNS-based filtering: Use DNS policies to block access to unauthorized domains during builds.
```yaml
# Kubernetes NetworkPolicy for CI runner pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ci-runner-egress-restricted
  namespace: ci-runners
spec:
  podSelector:
    matchLabels:
      role: ci-runner
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8  # Internal network only
      ports:
        - protocol: TCP
          port: 443  # HTTPS to internal registries
        - protocol: TCP
          port: 53   # DNS
        - protocol: UDP
          port: 53   # DNS
```
Minimal Build Images
Every tool installed in a build image is a potential attack surface. Strip build images down to the bare minimum:
- Use distroless or Alpine-based images as build bases.
- Remove shells, package managers, and network utilities from production build images where possible.
- Pin image digests, not tags, to prevent tag-based supply chain attacks.
```dockerfile
# Pin by digest, not by tag
FROM golang:1.22@sha256:a3b21c5d8e... AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /app/binary

# Use distroless for the final image
FROM gcr.io/distroless/static-debian12@sha256:f4e8b1c2d9...
COPY --from=builder /app/binary /binary
ENTRYPOINT ["/binary"]
```
Disable Debug Modes in Production Pipelines
Debug logging and verbose output are invaluable during development but dangerous in production pipelines. They can leak secrets, internal paths, and infrastructure details. Ensure that ACTIONS_STEP_DEBUG, CI_DEBUG_TRACE, and equivalent flags are disabled in production pipeline configurations.
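One way to enforce this is a lightweight scan of pipeline definitions in CI or pre-merge tooling. A sketch — the helper name and the exact flag list are assumptions to adapt to your platforms:

```bash
# Scan pipeline definitions for debug flags that can leak secrets into logs.
# Covers GitHub Actions (ACTIONS_STEP_DEBUG, ACTIONS_RUNNER_DEBUG) and
# GitLab CI (CI_DEBUG_TRACE).
check_debug_flags() {
  local dir="$1"
  if grep -rnE 'ACTIONS_STEP_DEBUG|ACTIONS_RUNNER_DEBUG|CI_DEBUG_TRACE' "$dir"; then
    echo "debug flags found: remove them before merging"
    return 1
  fi
  echo "no debug flags found in $dir"
}
```

Run it as `check_debug_flags .github/workflows` in a required status check so debug flags cannot reach protected branches.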
Credential and Identity Defenses — Limiting What Pipelines Can Access
Credentials are the most valuable target in any CI/CD pipeline. An attacker who obtains a cloud access key, a deployment token, or an API secret can pivot far beyond the pipeline itself. Credential defenses focus on minimizing what exists, what can be accessed, and for how long.
Minimal Token Permissions
The default GITHUB_TOKEN in GitHub Actions has broad permissions. Always restrict it to the minimum required:
```yaml
# GitHub Actions: restrict default token permissions
permissions:
  contents: read
  packages: read
  id-token: write  # Only if using OIDC

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: make build

  deploy:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      contents: read
      id-token: write  # For OIDC authentication
    steps:
      - name: Authenticate to cloud
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role
          aws-region: us-east-1
```
OIDC and Workload Identity
Long-lived secrets stored in CI/CD systems are ticking time bombs. Replace them with OIDC-based workload identity federation wherever possible:
- GitHub Actions → AWS: Use `aws-actions/configure-aws-credentials` with OIDC role assumption.
- GitHub Actions → GCP: Use `google-github-actions/auth` with Workload Identity Federation.
- GitHub Actions → Azure: Use `azure/login` with federated credentials.
- GitLab CI → AWS/GCP/Azure: Use GitLab's native OIDC tokens (the `id_tokens` keyword, which replaced the deprecated `CI_JOB_JWT_V2`) with cloud provider federation.
With OIDC, the CI/CD platform issues a short-lived JWT, and the cloud provider exchanges it for temporary credentials. No static secrets are stored anywhere.
Per-Environment, Per-Stage Credentials
A single set of credentials shared across all environments is a catastrophic blast radius. Segment credentials:
- Development, staging, and production should use separate service accounts with separate permissions.
- Build stages should not have access to deployment credentials.
- Test stages should use isolated test infrastructure, not shared environments.
No Secrets in PR/Fork Workflows
Pull requests from forks should never have access to repository secrets. This is a common misconfiguration that enables attackers to exfiltrate secrets by submitting a malicious PR. In GitHub Actions, use pull_request (not pull_request_target) for untrusted code, and never pass secrets to steps that execute PR code.
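When a secrets-bearing trigger is genuinely needed for PR automation (labeling, commenting), the safe pattern is to never check out or execute the PR's code. A sketch using `actions/labeler`, which only needs the API token — the workflow name is a placeholder:

```yaml
# Safe pull_request_target use: this workflow runs with secrets, so it must
# never check out or execute code from the PR branch. It only applies labels.
name: pr-triage
on:
  pull_request_target:
permissions:
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # No actions/checkout of the PR head here -- that would hand the
      # secrets-bearing context to untrusted code.
      - uses: actions/labeler@v5
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```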
Vault Integration with Dynamic Secrets
For credentials that cannot use OIDC (database passwords, API keys for third-party services), use a secrets manager like HashiCorp Vault with dynamic, short-lived secrets:
```bash
# HashiCorp Vault: generate short-lived database credentials
vault read database/creds/ci-readonly
# Returns:
# Key               Value
# ---               -----
# lease_id          database/creds/ci-readonly/abc123
# lease_duration    1h
# username          v-ci-readonly-xyz789
# password          A1B2-C3D4-E5F6-G7H8
```
Dynamic secrets are generated on demand, scoped to the requesting identity, and automatically revoked when they expire. Even if leaked, the window of exposure is measured in minutes, not months.
Audit Logging All Secret Access
Every secret retrieval should generate an audit log entry. If your secrets manager does not log access, you have no way to investigate a compromise. Ensure logs capture: who accessed what, when, from which pipeline run, and which IP address.
Artifact Layer Defenses — Ensuring Output Integrity
Build artifacts — container images, binaries, packages — are the bridge between your pipeline and production. If an attacker can tamper with artifacts after they are built, all upstream defenses become irrelevant. Artifact layer defenses ensure integrity, provenance, and immutability.
Sign All Artifacts with Sigstore/Cosign
Artifact signing provides cryptographic proof that an artifact was produced by your pipeline and has not been modified since. Sigstore’s Cosign makes keyless signing practical:
```bash
# Sign a container image using Cosign (keyless, OIDC-based)
cosign sign --yes ghcr.io/your-org/your-app:v1.2.3@sha256:abc123...

# Verify the signature
cosign verify \
  --certificate-identity=https://github.com/your-org/your-app/.github/workflows/build.yml@refs/heads/main \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  ghcr.io/your-org/your-app:v1.2.3@sha256:abc123...
```
With keyless signing, the signing key is ephemeral and bound to the CI/CD workflow’s OIDC identity. There is no long-lived signing key to steal.
Generate and Store SLSA Provenance
SLSA (Supply-chain Levels for Software Artifacts) provenance records how, where, and by whom an artifact was built. At SLSA Level 3, provenance is generated by the build platform itself and cannot be forged by the build process:
```yaml
# GitHub Actions: generate SLSA provenance for container images.
# The SLSA generator is a reusable workflow, so it is called at the job
# level with `uses:`, not as a step inside another job.
jobs:
  provenance:
    needs: build  # the build job exposes the image digest as an output
    permissions:
      actions: read    # read workflow metadata
      id-token: write  # sign provenance via OIDC
      packages: write  # push the attestation to the registry
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.0.0
    with:
      image: ghcr.io/your-org/your-app
      digest: ${{ needs.build.outputs.digest }}
    secrets:
      registry-username: ${{ github.actor }}
      registry-password: ${{ secrets.GITHUB_TOKEN }}
```
Immutable Artifact Storage
Published artifacts should never be overwritten. If an attacker can replace a published version, they can inject malicious code into every deployment that references that version. Configure your registries for immutability:
- Container registries: Enable tag immutability where the registry supports it (ECR and ACR do natively); where it is unavailable, reference images by digest instead of by tag.
- Package registries: Prevent re-publishing of existing versions.
- Binary storage: Use write-once storage policies (S3 Object Lock, GCS retention policies).
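As a sketch, the first and last of these can be enabled from the AWS CLI — repository and bucket names are placeholders, and the retention period should match your own compliance requirements:

```bash
# ECR: reject pushes that would overwrite an existing tag
aws ecr put-image-tag-mutability \
  --repository-name your-app \
  --image-tag-mutability IMMUTABLE

# S3: write-once artifact bucket using Object Lock (compliance mode, 90 days)
aws s3api put-object-lock-configuration \
  --bucket your-artifact-bucket \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":90}}}'
```

Note that S3 Object Lock must be enabled at bucket creation time; it cannot be retrofitted onto an existing bucket.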
SBOM Generation and Attestation
A Software Bill of Materials (SBOM) lists every component in your artifact. Generating an SBOM at build time and attesting it alongside the artifact creates a verifiable inventory for vulnerability management:
```bash
# Generate an SBOM with Syft and attest it with Cosign
syft ghcr.io/your-org/your-app:v1.2.3 -o spdx-json > sbom.spdx.json
cosign attest --predicate sbom.spdx.json --type spdxjson \
  ghcr.io/your-org/your-app:v1.2.3@sha256:abc123...
```
Admission Controllers for Signature Verification
Signing artifacts is only useful if you verify signatures before deployment. Use Kubernetes admission controllers to enforce this automatically:
```yaml
# Kyverno: require signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/your-org/*"
          attestors:
            - entries:
                - keyless:
                    issuer: "https://token.actions.githubusercontent.com"
                    subject: "https://github.com/your-org/*"
```
With this policy in place, any container image that lacks a valid Cosign signature from your GitHub Actions workflows will be rejected at admission time — before it ever runs in your cluster.
Deployment Layer Defenses — Controlling What Reaches Production
The deployment layer is the last gate before changes reach production. Defenses here ensure that only verified, approved artifacts are deployed, and that the deployment process itself is controlled and auditable.
Required Manual Approvals
For production deployments, automated pipelines should pause and require explicit human approval. This provides a final checkpoint where a human can verify that the change is expected, tested, and authorized.
```yaml
# GitHub Actions: environment with required reviewers
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://your-app.example.com
    steps:
      - name: Deploy to production
        run: ./deploy.sh production
```
In the GitHub repository settings, configure the “production” environment to require approval from designated reviewers before the job proceeds.
GitOps with Pull-Based Deployment
Traditional CI/CD pipelines push to production: the pipeline has credentials to modify production infrastructure. This is a large attack surface. GitOps with pull-based deployment inverts the model:
- The pipeline updates a Git repository with the desired state (image tags, manifests).
- The cluster runs a controller (Flux, ArgoCD) that watches the Git repository and pulls changes.
- The pipeline never has direct access to the cluster. The cluster pulls from Git, and Git is the single source of truth.
```yaml
# Flux: GitRepository and Kustomization for pull-based deployment
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/your-org/app-manifests
  ref:
    branch: main
  secretRef:
    name: git-credentials
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-production
  namespace: flux-system
spec:
  interval: 5m
  path: ./environments/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-manifests
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: your-app
      namespace: production
```
Canary Deployments and Automated Rollback
Even with all upstream defenses, a deployment can introduce issues. Canary deployments limit exposure by rolling out changes to a small percentage of traffic first. If metrics degrade, automated rollback reverts the change before it affects all users.
- Use progressive delivery tools like Flagger, Argo Rollouts, or native cloud provider canary features.
- Define clear success criteria: error rate, latency, saturation metrics.
- Automate rollback triggers — do not rely on humans to notice and react in time.
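A sketch of what this looks like with Flagger: `request-success-rate` and `request-duration` are Flagger's built-in metrics, while the names, port, thresholds, and traffic steps are illustrative values to tune for your service:

```yaml
# Flagger canary with automated rollback
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: your-app
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-app
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 5      # failed checks before automatic rollback
    maxWeight: 50     # cap canary traffic at 50%
    stepWeight: 10    # shift 10% of traffic per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99     # roll back if success rate drops below 99%
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500    # roll back if latency exceeds 500ms
        interval: 1m
```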
Deployment Freezes
During active incidents, maintenance windows, or high-traffic periods, deployments should be frozen. Implement deployment freeze policies that prevent pipeline-initiated deployments during specified windows, and ensure that only designated incident commanders can override the freeze.
Detection and Monitoring — Knowing When Something Is Wrong
Prevention will eventually fail. Detection capabilities determine whether you catch a compromise in minutes or months. CI/CD monitoring is a blind spot for many organizations — their SIEM ingests application and infrastructure logs but ignores pipeline telemetry entirely.
Pipeline Execution Anomaly Detection
Establish baselines for normal pipeline behavior and alert on deviations:
- Unusual run times: A build that normally takes 5 minutes suddenly taking 30 minutes could indicate cryptomining or data exfiltration.
- Unexpected steps: New pipeline steps appearing without corresponding PR changes.
- Off-hours execution: Pipeline runs triggered outside normal working hours by unusual accounts.
- Failed authentication spikes: Multiple failed secret access attempts from a single pipeline run.
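As a minimal illustration of the run-duration baseline, a hypothetical helper that flags the latest run when it exceeds three times the mean of prior runs — the threshold, input format, and helper name are all assumptions; a production version would pull durations from your CI platform's API:

```bash
# Input: one run duration in seconds per line, newest last.
# Assumes at least two runs; flags the latest against the mean of the rest.
flag_anomalous_run() {
  awk '
    { d[NR] = $1; sum += $1 }
    END {
      latest = d[NR]
      baseline = (sum - latest) / (NR - 1)   # mean of all prior runs
      if (latest > 3 * baseline)
        printf "ANOMALY: latest run %ds vs baseline %.0fs\n", latest, baseline
      else
        printf "OK: latest run %ds vs baseline %.0fs\n", latest, baseline
    }'
}
```

For example, `printf '300\n310\n290\n305\n1800\n' | flag_anomalous_run` reports an anomaly against a roughly 300-second baseline.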
Dependency Diff Alerting
New dependencies added in PRs should trigger automated review and alerting. A dependency diff tool can:
- Flag new dependencies added in a PR for manual review.
- Check new dependencies against known-malicious package databases.
- Verify that dependency versions match those in known-good lockfiles.
- Alert on dependencies with very recent publish dates (potential typosquatting).
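A minimal sketch of the first check for `requirements.txt`-style manifests — the helper name and normalization rules are illustrative, not a real tool; lockfile formats like `package-lock.json` need a structured parser instead:

```bash
# List packages present in the PR's manifest but not the base branch's copy:
# candidates for manual review before merge.
new_dependencies() {
  local base="$1" pr="$2"
  # Normalize to bare package names: strip version pins, extras, and comments.
  comm -13 \
    <(sed -e 's/[<>=!~[#].*//' -e 's/[[:space:]]//g' "$base" | grep -v '^$' | sort -u) \
    <(sed -e 's/[<>=!~[#].*//' -e 's/[[:space:]]//g' "$pr"   | grep -v '^$' | sort -u)
}
```

Each name this prints can then be checked against malicious-package databases and publish-date heuristics.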
Secret Scanning
Secrets leak through commits, logs, and artifacts. Layer multiple scanning approaches:
- Pre-commit hooks: Tools like `gitleaks` or `trufflehog` catch secrets before they enter the repository.
- In-pipeline scanning: Scan build outputs and logs for accidentally exposed credentials.
- GitHub secret scanning / GitLab secret detection: Platform-native scanning that covers push events and historical commits.
- Partner program alerts: GitHub’s secret scanning partner program notifies service providers when their tokens are exposed, enabling automatic revocation.
```yaml
# Pre-commit hook with gitleaks
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```
Configuration Drift Monitoring
Pipeline definitions should change through the normal PR process. Monitor for unexpected changes:
- Alert when workflow files, CI configurations, or deployment manifests change outside of approved PRs.
- Track pipeline permission changes over time.
- Detect new pipeline secrets being added without corresponding change requests.
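A minimal sketch of the first alert using file hashes — the helper names are hypothetical, and a real deployment would reconcile against the platform's audit log rather than a local baseline file:

```bash
# Record a baseline of pipeline-definition hashes from a known-good commit,
# then re-check on a schedule to detect out-of-band changes.
baseline_pipeline_hashes() {
  find "$1" -type f \( -name '*.yml' -o -name '*.yaml' \) -print0 |
    sort -z | xargs -0 sha256sum > "$2"
}

check_pipeline_drift() {
  if sha256sum --quiet -c "$1"; then
    echo "no drift detected"
  else
    echo "DRIFT: pipeline definitions changed outside the approved process"
    return 1
  fi
}
```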
SIEM Integration for CI/CD Audit Logs
Forward CI/CD audit logs to your SIEM alongside application and infrastructure logs. Key log sources include:
- GitHub Audit Log (organization and enterprise level)
- GitLab Audit Events
- Jenkins system logs and build logs
- Cloud provider audit logs for pipeline-initiated API calls (CloudTrail, Cloud Audit Logs, Azure Activity Log)
Correlate pipeline activity with cloud infrastructure changes. If a pipeline run coincides with unexpected IAM policy modifications or resource creation, that is a high-priority alert.
Incident Response for CI/CD — When Defenses Fail
When a CI/CD compromise is detected — or suspected — speed matters. The attacker may still have active access, and every minute of delay expands the blast radius. A prepared incident response playbook for CI/CD-specific scenarios is essential.
Immediate Actions: Contain the Compromise
- Revoke compromised credentials immediately. Rotate all secrets that the compromised pipeline had access to. This includes cloud provider credentials, API tokens, database passwords, and the CI/CD platform tokens themselves.
- Disable the compromised pipeline. Prevent further executions until the investigation is complete.
- Quarantine affected runners. If using persistent runners, isolate them from the network for forensic analysis.
Blast Radius Analysis
Determine what the attacker could have accessed:
- Which secrets were available to the compromised job?
- Which cloud resources could those credentials access?
- Which artifacts were produced during the compromised period?
- Which environments were deployed to from the compromised pipeline?
Artifact Integrity Verification
Check whether published artifacts were tampered with:
- Verify signatures on all artifacts published during the compromised window.
- Compare artifact checksums against known-good builds.
- If artifact integrity cannot be verified, rebuild and republish from known-good source commits.
- Notify downstream consumers if potentially compromised artifacts were distributed.
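When builds are reproducible, the comparison against a clean rebuild can be as simple as this sketch — the helper name is illustrative, and non-reproducible builds will need a diff of unpacked contents instead of a byte-for-byte check:

```bash
# Given a rebuild of the artifact from known-good source, check whether the
# published artifact is byte-identical.
verify_against_rebuild() {
  local published="$1" rebuilt="$2"
  if cmp -s "$published" "$rebuilt"; then
    echo "MATCH: published artifact is byte-identical to the clean rebuild"
  else
    echo "MISMATCH: treat the published artifact as compromised"
    return 1
  fi
}
```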
Forensic Investigation
Gather evidence from multiple sources:
- Runner logs: What commands were executed? What network connections were made?
- API audit logs: What API calls did the attacker make using pipeline credentials?
- Git history: Were any commits or branches modified? Check for force pushes or history rewriting.
- Cloud audit logs: What infrastructure changes were made by pipeline service accounts?
Post-Incident Recovery
After containment and investigation, restore secure operations:
- Rotate all secrets that were accessible to the compromised pipeline, even if there is no evidence they were exfiltrated.
- Review and tighten pipeline permissions. The incident likely revealed permission scopes that were broader than necessary.
- Update monitoring rules based on indicators of compromise discovered during the investigation.
- Conduct a blameless post-mortem focused on what systemic changes prevent recurrence.
CI/CD Incident Response Playbook Template
```markdown
## CI/CD Security Incident Playbook

### Phase 1: Detection & Triage (0-15 minutes)
- [ ] Confirm the alert is a true positive
- [ ] Classify severity (P1: active compromise, P2: suspected compromise, P3: policy violation)
- [ ] Notify the incident commander and security team

### Phase 2: Containment (15-60 minutes)
- [ ] Revoke compromised credentials
- [ ] Disable affected pipelines
- [ ] Isolate affected runners
- [ ] Block attacker's access (revoke tokens, disable accounts)

### Phase 3: Investigation (1-24 hours)
- [ ] Collect runner logs, audit logs, git history
- [ ] Determine blast radius (credentials, artifacts, deployments)
- [ ] Identify attack vector (how did the attacker get in?)
- [ ] Check artifact integrity for the compromised period

### Phase 4: Recovery (24-72 hours)
- [ ] Rotate all potentially compromised secrets
- [ ] Rebuild and republish affected artifacts from known-good source
- [ ] Redeploy affected environments from verified artifacts
- [ ] Restore pipeline operations with tightened controls

### Phase 5: Post-Incident (1-2 weeks)
- [ ] Conduct blameless post-mortem
- [ ] Document lessons learned and update this playbook
- [ ] Implement systemic improvements to prevent recurrence
- [ ] Update detection rules based on IOCs discovered
```
Conclusion
CI/CD security is not a checklist you complete once and forget. It is an ongoing engineering practice that evolves with your pipelines, your infrastructure, and the threat landscape. Attackers will continue to target the software supply chain because it offers high leverage — a single compromised pipeline can affect every deployment, every environment, and every customer.
The defenses in this guide are organized by layer, but the most impactful starting points cut across layers:
- Ephemeral runners eliminate entire classes of persistence and state-leakage attacks.
- Minimal permissions (token scoping, OIDC, per-environment credentials) limit what an attacker can do even after gaining pipeline access.
- Signed artifacts with admission control ensure that tampered artifacts cannot reach production.
- Detection and audit logging close the visibility gap that lets compromises go unnoticed for months.
Start with these high-impact controls. Layer additional defenses as your security program matures. And always assume that your pipeline will be targeted — because it will be.
In the next post in this series, we will walk through implementing these defenses in a real-world GitHub Actions pipeline, with a complete working example you can adapt for your own repositories.