Overview
GitHub-hosted runners are shared and ephemeral by default — every job gets a fresh virtual machine that is destroyed after the job completes. Traditional self-hosted runners, by contrast, are persistent and shared across workflow runs. This creates a significant security risk: secrets, tokens, and build artifacts from one job can leak into the next, and a compromised workflow can poison the runner environment for all future jobs.
Actions Runner Controller (ARC) solves this problem. ARC is a Kubernetes-native operator that gives you ephemeral, auto-scaling, container-based self-hosted runners. Each job gets a fresh pod that is destroyed when the job completes — just like GitHub-hosted runners, but running on your own infrastructure with your own tools and network policies.
In this hands-on lab, you will:
- Deploy ARC on a local Kubernetes cluster
- Configure ephemeral runner scale sets
- Demonstrate cross-job isolation (the core security benefit)
- Build custom runner images
- Implement runner group isolation for separation of duties
- Configure autoscaling
- Apply network policies to restrict runner network access
Prerequisites
Before starting this lab, ensure you have the following:
- Kubernetes cluster — kind, minikube, or a cloud-managed cluster (EKS, GKE, AKS)
- Helm 3 — Install from helm.sh
- kubectl — Configured to communicate with your cluster
- GitHub account — With admin access to a repository or organization
- GitHub App or Personal Access Token (PAT) — With repo and admin:org scopes (PAT) or appropriate GitHub App permissions
- Docker — For building custom runner images (Exercise 4)
Environment Setup
We will use kind (Kubernetes in Docker) to create a local cluster. This keeps the lab self-contained and easy to clean up.
Create a kind Cluster
kind create cluster --name arc-lab
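If you plan to try the node selector used in Exercise 5, you can instead create the cluster from a config file that adds a labeled worker node. A minimal sketch (the runner-type label is this lab's own convention, not a kind default):

```yaml
# kind-config.yaml: one control plane plus a worker labeled for deployment runners
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    labels:
      runner-type: deployment
```

Create the cluster with kind create cluster --name arc-lab --config kind-config.yaml.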
Verify the cluster is running:
kubectl cluster-info --context kind-arc-lab
Create a Test GitHub Repository
Create a new repository (e.g., arc-lab-test) in your GitHub account. Add a simple workflow file at .github/workflows/test.yml:
name: ARC Test Workflow
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Hello from GitHub-hosted runner
        run: echo "This runs on a GitHub-hosted runner"
Push this to your repository. We will modify it later to target ARC runners.
Exercise 1: Install ARC with Helm
Actions Runner Controller v2 uses Helm charts to deploy two components: a controller that manages the lifecycle of runner pods, and one or more runner scale sets that register with GitHub and accept jobs.
Step 1: Locate the Helm Charts
ARC v2 charts are published as OCI artifacts on GitHub Container Registry rather than in a traditional Helm repository, so there is no helm repo add step. Helm 3.8 or later can install directly from oci://ghcr.io/actions/actions-runner-controller-charts. You can inspect a chart before installing:
helm show chart oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
Step 2: Configure Authentication
ARC needs to authenticate with the GitHub API. You have two options:
Option A: GitHub App (Recommended for Production)
Create a GitHub App in your organization or account settings:
- Go to Settings → Developer settings → GitHub Apps → New GitHub App
- Set the following permissions:
  - Repository: Actions (read), Administration (read/write), Metadata (read)
  - Organization: Self-hosted runners (read/write)
- Generate a private key and download it
- Install the App on your organization or repository
- Note the App ID and Installation ID
Option B: Personal Access Token (Simpler for Labs)
Create a PAT (classic) with repo and admin:org scopes, or a fine-grained PAT with Actions and Administration permissions. For this lab, we will use a PAT for simplicity.
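Passing the PAT on the Helm command line works, but it ends up in your shell history and in the stored Helm release values. A safer pattern supported by the scale set chart is to store the token in a Kubernetes secret first and reference the secret by name (the secret name arc-github-secret below is this lab's choice):

```shell
# Store the PAT in a secret with the key the chart expects (github_token)
kubectl create namespace arc-runners
kubectl create secret generic arc-github-secret \
  --namespace arc-runners \
  --from-literal=github_token='<PAT>'
```

You can then install the runner scale set with --set githubConfigSecret=arc-github-secret instead of embedding the token in the command.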
Step 3: Install the ARC Controller
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
Verify the controller is running:
kubectl get pods -n arc-systems
You should see output similar to:
NAME READY STATUS RESTARTS AGE
arc-gha-runner-scale-set-controller-xxx 1/1 Running 0 30s
Step 4: Install a Runner Scale Set
Now deploy a runner scale set that registers with your GitHub repository:
helm install arc-runner-set \
  --namespace arc-runners \
  --create-namespace \
  --set githubConfigUrl="https://github.com/<org>/<repo>" \
  --set githubConfigSecret.github_token="<PAT>" \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Replace <org>/<repo> with your repository path and <PAT> with your personal access token.
Verify the runner scale set:
kubectl get pods -n arc-runners
At this point, there may be no runner pods yet — ARC uses a scale-to-zero model, and runner pods are created only when jobs are queued. You will, however, see the scale set's listener pod (which polls GitHub for queued jobs) running in the arc-systems namespace.
Step 5: Verify in GitHub
Navigate to your repository on GitHub: Settings → Actions → Runners. You should see the runner scale set listed with the name arc-runner-set. The status shows it is ready to accept jobs.
Exercise 2: Run a Workflow on ARC Runners
Now update the test workflow to target the ARC runner scale set instead of GitHub-hosted runners.
Step 1: Update the Workflow
Modify .github/workflows/test.yml to use the ARC runner label:
name: ARC Test Workflow
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  test:
    runs-on: arc-runner-set
    steps:
      - name: Hello from ARC runner
        run: |
          echo "This runs on an ephemeral ARC runner!"
          echo "Hostname: $(hostname)"
          echo "Runner OS: $(uname -a)"
      - name: Show environment
        run: env | sort
The key change is runs-on: arc-runner-set — this matches the name of the Helm release for the runner scale set.
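If you want the runs-on label to differ from the Helm release name, the gha-runner-scale-set chart exposes a runnerScaleSetName value (the name below is illustrative):

```yaml
# values.yaml excerpt: override the scale set (and thus runs-on) name
runnerScaleSetName: "ubuntu-ephemeral"
```

Workflows would then target it with runs-on: ubuntu-ephemeral.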
Step 2: Trigger the Workflow
Push the updated workflow file or use the “Run workflow” button (workflow_dispatch) in the GitHub Actions UI.
Step 3: Observe the Runner Pod
Watch the arc-runners namespace while the workflow runs:
kubectl get pods -n arc-runners -w
You will see a pod created for the job:
NAME READY STATUS RESTARTS AGE
arc-runner-set-xxxxx-runner 1/1 Running 0 5s
After the job completes, the pod is terminated and removed:
NAME READY STATUS RESTARTS AGE
arc-runner-set-xxxxx-runner 0/1 Completed 0 45s
Run kubectl get pods -n arc-runners again — the pod is gone. This is the ephemeral model: each job gets a fresh container, and the container is destroyed when the job finishes. There is no state persistence between jobs.
Exercise 3: Demonstrate Ephemeral Security
This exercise demonstrates the core security benefit of ephemeral runners: no cross-job contamination.
Step 1: Create a Workflow That Writes Sensitive Data
Create .github/workflows/ephemeral-test.yml:
name: Ephemeral Security Test
on: workflow_dispatch
jobs:
  write-secret:
    runs-on: arc-runner-set
    steps:
      - name: Write sensitive data
        run: |
          echo "SECRET_API_KEY=sk-prod-abc123xyz" > /tmp/secret-data
          echo "DB_PASSWORD=super-secret-password" >> /tmp/secret-data
          echo "Written sensitive data to /tmp/secret-data"
          cat /tmp/secret-data
  read-secret:
    runs-on: arc-runner-set
    needs: write-secret
    steps:
      - name: Attempt to read previous job data
        run: |
          echo "Checking if /tmp/secret-data exists from previous job..."
          if [ -f /tmp/secret-data ]; then
            echo "SECURITY RISK: Found data from previous job!"
            cat /tmp/secret-data
          else
            echo "SECURE: /tmp/secret-data does not exist."
            echo "Each job gets a fresh container — no cross-job contamination."
          fi
Step 2: Run the Workflow
Trigger the workflow via workflow_dispatch. The first job (write-secret) writes sensitive data to /tmp/secret-data. The second job (read-secret) runs in a new pod and attempts to read that file.
Step 3: Verify the Results
In the GitHub Actions logs, you will see:
- write-secret job: Successfully writes the file and prints the contents
- read-secret job: The file does not exist — output shows
SECURE: /tmp/secret-data does not exist.
Each job ran in a separate, freshly created pod. When the write-secret pod was destroyed, all data — including the sensitive file — was destroyed with it.
Why This Matters
On a persistent self-hosted runner, the /tmp/secret-data file would still be on disk when the second job runs. A malicious workflow in a pull request could read secrets, tokens, or credentials left by previous jobs. With ephemeral runners, this attack vector is eliminated.
Exercise 4: Custom Runner Images
ARC runners use a base container image. For real-world use, you need to customize this image to include your build tools.
Step 1: Create a Custom Dockerfile
Create a Dockerfile for your custom runner:
FROM ghcr.io/actions/actions-runner:latest
USER root
# Install build tools
RUN apt-get update && apt-get install -y \
      curl \
      wget \
      git \
      jq \
      unzip \
      build-essential \
    && rm -rf /var/lib/apt/lists/*
# Install Go
RUN wget -q https://go.dev/dl/go1.22.4.linux-amd64.tar.gz \
    && tar -C /usr/local -xzf go1.22.4.linux-amd64.tar.gz \
    && rm go1.22.4.linux-amd64.tar.gz
ENV PATH="$PATH:/usr/local/go/bin"
# Install cosign
RUN curl -sSL -o /usr/local/bin/cosign \
      https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64 \
    && chmod +x /usr/local/bin/cosign
# Install Docker (the convenience script installs the engine and CLI;
# the CLI is what Docker-in-Docker workflows need)
RUN curl -fsSL https://get.docker.com | sh
USER runner
Step 2: Build and Push the Image
# Build the image
docker build -t ghcr.io/<org>/custom-runner:latest .
# Authenticate to GitHub Container Registry
echo "<PAT>" | docker login ghcr.io -u <username> --password-stdin
# Push the image
docker push ghcr.io/<org>/custom-runner:latest
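Before wiring the image into ARC, it is worth a quick local smoke test (this assumes Docker is running and the build above succeeded):

```shell
# Each command starts a throwaway container and checks a tool is on PATH
docker run --rm ghcr.io/<org>/custom-runner:latest go version
docker run --rm ghcr.io/<org>/custom-runner:latest cosign version
docker run --rm ghcr.io/<org>/custom-runner:latest docker --version
```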
Step 3: Configure ARC to Use the Custom Image
Create a values file custom-runner-values.yaml:
githubConfigUrl: "https://github.com/<org>/<repo>"
githubConfigSecret:
  github_token: "<PAT>"
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/<org>/custom-runner:latest
        command: ["/home/runner/run.sh"]
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
          limits:
            cpu: "2"
            memory: "2Gi"
Upgrade the runner scale set with the custom image:
helm upgrade arc-runner-set \
  --namespace arc-runners \
  -f custom-runner-values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Step 4: Verify Custom Tools
Create a workflow that uses the custom tools:
name: Custom Runner Tools Test
on: workflow_dispatch
jobs:
  verify-tools:
    runs-on: arc-runner-set
    steps:
      - name: Verify Go
        run: go version
      - name: Verify cosign
        run: cosign version
      - name: Verify Docker CLI
        run: docker --version
Security benefit: By building your own runner image, you control exactly what tools and dependencies are present in the build environment. There are no unexpected binaries, no pre-installed software you did not approve, and you can pin every tool to a specific version. You can also scan the image for vulnerabilities before deploying it.
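As one example of pre-deployment scanning, with Trivy installed (any image scanner works here) you could gate the push on high-severity findings:

```shell
# Fail (exit 1) if HIGH or CRITICAL CVEs are found in the image
trivy image --severity HIGH,CRITICAL --exit-code 1 \
  ghcr.io/<org>/custom-runner:latest
```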
Exercise 5: Runner Group Isolation
Different workflows have different trust levels. Pull request validation should not have access to production secrets. Deployment workflows need secrets but should only run from the main branch. ARC lets you implement this separation by creating distinct runner scale sets with different labels and configurations.
Step 1: Create a PR Validation Runner Scale Set
Create pr-runner-values.yaml:
githubConfigUrl: "https://github.com/<org>/<repo>"
githubConfigSecret:
github_token: "<PAT>"
template:
spec:
containers:
- name: runner
image: ghcr.io/<org>/custom-runner:latest
command: ["/home/runner/run.sh"]
env:
- name: RUNNER_GROUP
value: "pr-validation"
resources:
requests:
cpu: "250m"
memory: "256Mi"
limits:
cpu: "1"
memory: "1Gi"
helm install arc-runner-pr \
  --namespace arc-runners \
  -f pr-runner-values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Step 2: Create a Deployment Runner Scale Set
Create deploy-runner-values.yaml:
githubConfigUrl: "https://github.com/<org>/<repo>"
githubConfigSecret:
github_token: "<PAT>"
template:
spec:
containers:
- name: runner
image: ghcr.io/<org>/custom-runner:latest
command: ["/home/runner/run.sh"]
env:
- name: RUNNER_GROUP
value: "deployment"
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2"
memory: "2Gi"
serviceAccountName: deploy-runner-sa
nodeSelector:
runner-type: deployment
helm install arc-runner-deploy \
  --namespace arc-runners \
  -f deploy-runner-values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Step 3: Configure Workflows for Isolation
Use different runner labels based on the workflow trigger:
name: CI/CD Pipeline
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
jobs:
  validate:
    if: github.event_name == 'pull_request'
    runs-on: arc-runner-pr
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Run linter
        run: make lint
  deploy:
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: arc-runner-deploy
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        run: make deploy
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
This implements separation of duties at the runner level. PR validation jobs run on runners that have no access to deployment secrets or privileged network segments. Deployment jobs run on a separate set of runners that have the necessary credentials and network access, but only trigger on pushes to main.
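The isolation above relies on workflow-level if conditions. If your GitHub plan supports runner groups, you can enforce it server-side as well: create a runner group in the organization settings, restrict which repositories and workflows may target it, and register the scale set into it via the chart's runnerGroup value. A sketch (the group must already exist in the organization):

```yaml
# deploy-runner-values.yaml excerpt: join a pre-created GitHub runner group
runnerGroup: "deployment"
```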
Exercise 6: Autoscaling
ARC natively supports autoscaling. Runner pods are created on demand and destroyed when idle. You can configure minimum and maximum replicas to control cost and responsiveness.
Step 1: Configure Autoscaling Parameters
Create a values file autoscale-values.yaml with the scaling parameters (or add them to your existing values file):
githubConfigUrl: "https://github.com/<org>/<repo>"
githubConfigSecret:
  github_token: "<PAT>"
minRunners: 0
maxRunners: 10
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
helm upgrade arc-runner-set \
  --namespace arc-runners \
  -f autoscale-values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Step 2: Generate Load
Create a workflow that triggers multiple parallel jobs:
name: Autoscale Test
on: workflow_dispatch
jobs:
  parallel-job:
    runs-on: arc-runner-set
    strategy:
      matrix:
        id: [1, 2, 3, 4, 5]
    steps:
      - name: Simulate work
        run: |
          echo "Job ${{ matrix.id }} running on $(hostname)"
          sleep 60
Trigger this workflow and watch the pods scale up:
kubectl get pods -n arc-runners -w
You will see five pods created — one for each matrix job:
NAME READY STATUS RESTARTS AGE
arc-runner-set-abcde-runner 1/1 Running 0 5s
arc-runner-set-fghij-runner 1/1 Running 0 5s
arc-runner-set-klmno-runner 1/1 Running 0 5s
arc-runner-set-pqrst-runner 1/1 Running 0 5s
arc-runner-set-uvwxy-runner 1/1 Running 0 5s
After the jobs complete (60 seconds), all pods are terminated. The namespace returns to zero pods.
Step 3: Configure Scale-Down Delay
For cost optimization, you may want pods to remain warm for a short period after a job completes. This avoids cold-start latency for bursty workloads. ARC’s scale-to-zero behavior is the default and most secure option. If you need warm runners, keep the window short (under 5 minutes) and ensure ephemeral mode is still enforced.
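The gha-runner-scale-set chart does not expose an explicit scale-down delay; the closest equivalent is a warm pool via minRunners. Idle runners stay registered and waiting for jobs, but each still runs at most one job before being replaced, so the ephemeral guarantee holds. A sketch:

```yaml
# autoscale-values.yaml excerpt: keep two warm runners for bursty workloads
minRunners: 2
maxRunners: 10
```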
Exercise 7: Network Policies for Runners
Kubernetes NetworkPolicies let you restrict the network access of runner pods. This is a critical defense against data exfiltration from compromised builds.
Step 1: Create a NetworkPolicy
Save the following NetworkPolicy as runner-network-policy.yaml. It restricts egress for all pods in the arc-runners namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: runner-egress-policy
  namespace: arc-runners
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow GitHub API and Actions services (example CIDRs; see note below)
    - to:
        - ipBlock:
            cidr: 140.82.112.0/20
        - ipBlock:
            cidr: 143.55.64.0/20
        - ipBlock:
            cidr: 185.199.108.0/22
      ports:
        - protocol: TCP
          port: 443
    # Allow your container registry (ghcr.io is served from GitHub's ranges
    # above; other registries need their own entries)
    # Allow your artifact storage (replace with your CIDR)
    # - to:
    #     - ipBlock:
    #         cidr: 10.0.0.0/8
    #   ports:
    #     - protocol: TCP
    #       port: 443
kubectl apply -f runner-network-policy.yaml
Note: GitHub publishes its IP ranges at https://api.github.com/meta. Use the actions and api ranges. The CIDRs above are examples — check the current ranges and update accordingly.
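To turn the meta response into ipBlock entries, you can script the transformation. The sketch below runs against a canned sample file so it works offline; in practice, replace meta-sample.json with the output of curl -s https://api.github.com/meta:

```shell
# Write a small sample with the same shape as the real meta response
# (real responses contain many more ranges)
cat > meta-sample.json <<'EOF'
{"actions": ["140.82.112.0/20", "143.55.64.0/20"]}
EOF

# Emit one ipBlock entry per CIDR, indented to paste into the egress rule
jq -r '.actions[] | "        - ipBlock:\n            cidr: \(.)"' meta-sample.json
```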
Step 2: Test the NetworkPolicy
Create a workflow that attempts to reach an external URL:
name: Network Policy Test
on: workflow_dispatch
jobs:
  test-network:
    runs-on: arc-runner-set
    steps:
      - name: Test GitHub API (should work)
        run: curl -s -o /dev/null -w "%{http_code}" https://api.github.com
      - name: Test external URL (should be blocked)
        run: |
          if curl -s --connect-timeout 5 https://evil-exfiltration-server.example.com; then
            echo "FAIL: External access was allowed"
            exit 1
          else
            echo "PASS: External access was blocked by NetworkPolicy"
          fi
When you run this workflow:
- The GitHub API request succeeds (HTTP 200) because the NetworkPolicy allows traffic to GitHub’s IP ranges.
- The external URL request times out and fails because it is not in the allowed egress list.
This prevents a compromised build from exfiltrating source code, secrets, or build artifacts to an attacker-controlled server. Even if a malicious dependency runs arbitrary code during the build, it cannot phone home.
Cleanup
Remove all resources created during this lab:
# Delete Helm releases
helm uninstall arc-runner-set -n arc-runners
helm uninstall arc-runner-pr -n arc-runners
helm uninstall arc-runner-deploy -n arc-runners
helm uninstall arc -n arc-systems
# Delete namespaces
kubectl delete namespace arc-runners
kubectl delete namespace arc-systems
# Delete the kind cluster
kind delete cluster --name arc-lab
If you created a GitHub App for this lab, you can delete it from Settings → Developer settings → GitHub Apps. Revoke any PATs you created.
Key Takeaways
- Ephemeral runners eliminate cross-job contamination. Each job gets a fresh container — secrets, tokens, and build artifacts are destroyed when the job completes.
- ARC provides self-hosted runner benefits without the security risks. You get custom tools, private network access, and cost control while maintaining the ephemeral security model.
- Custom runner images give you full control over the build environment. Pin tool versions, scan for vulnerabilities, and eliminate supply chain risk from pre-installed software.
- Runner group isolation implements separation of duties. PR validation and deployment workflows run on separate runner sets with different privileges and network access.
- Network policies are a critical layer of defense. Restricting runner egress prevents data exfiltration even if a build step is compromised.
- Scale-to-zero autoscaling reduces cost and attack surface. Runner pods exist only for the duration of a job — there is no persistent infrastructure to maintain or secure.
Next Steps
Continue strengthening your CI/CD security posture with these related guides:
- Securing GitHub Actions Runners — Deep dive into runner security best practices, token management, and monitoring for both GitHub-hosted and self-hosted runners.
- Separation of Duties and Least Privilege in CI/CD Pipelines — Comprehensive guide to implementing least-privilege principles across your entire CI/CD pipeline, from source control to production deployment.