{"id":543,"date":"2026-02-22T12:16:54","date_gmt":"2026-02-22T11:16:54","guid":{"rendered":"https:\/\/secure-pipelines.com\/?p=543"},"modified":"2026-03-24T12:59:05","modified_gmt":"2026-03-24T11:59:05","slug":"lab-ephemeral-self-hosted-runners-actions-runner-controller","status":"publish","type":"post","link":"https:\/\/secure-pipelines.com\/fr\/ci-cd-security\/lab-ephemeral-self-hosted-runners-actions-runner-controller\/","title":{"rendered":"Lab: Ephemeral Self-Hosted Runners with Actions Runner Controller"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>GitHub-hosted runners are ephemeral by default \u2014 every job gets a fresh virtual machine that is destroyed after the job completes. Self-hosted runners, on the other hand, are persistent and shared across workflow runs. This creates a significant security risk: secrets, tokens, and build artifacts from one job can leak into the next. A compromised workflow can poison the runner environment for all future jobs.<\/p>\n<p><strong>Actions Runner Controller (ARC)<\/strong> solves this problem. ARC is a Kubernetes-native operator that gives you ephemeral, auto-scaling, container-based self-hosted runners. 
Each job gets a fresh pod that is destroyed when the job completes \u2014 just like GitHub-hosted runners, but running on your own infrastructure with your own tools and network policies.<\/p>\n<p>In this hands-on lab, you will:<\/p>\n<ul>\n<li>Deploy ARC on a local Kubernetes cluster<\/li>\n<li>Configure ephemeral runner scale sets<\/li>\n<li>Demonstrate cross-job isolation (the core security benefit)<\/li>\n<li>Build custom runner images<\/li>\n<li>Implement runner group isolation for separation of duties<\/li>\n<li>Configure autoscaling<\/li>\n<li>Apply network policies to restrict runner network access<\/li>\n<\/ul>\n<h2>Prerequisites<\/h2>\n<p>Before starting this lab, ensure you have the following:<\/p>\n<ul>\n<li><strong>Kubernetes cluster<\/strong> \u2014 <a href=\"https:\/\/kind.sigs.k8s.io\/\" target=\"_blank\" rel=\"noopener\">kind<\/a>, <a href=\"https:\/\/minikube.sigs.k8s.io\/\" target=\"_blank\" rel=\"noopener\">minikube<\/a>, or a cloud-managed cluster (EKS, GKE, AKS)<\/li>\n<li><strong>Helm 3<\/strong> \u2014 Install from <a href=\"https:\/\/helm.sh\/docs\/intro\/install\/\" target=\"_blank\" rel=\"noopener\">helm.sh<\/a><\/li>\n<li><strong>kubectl<\/strong> \u2014 Configured to communicate with your cluster<\/li>\n<li><strong>GitHub account<\/strong> \u2014 With admin access to a repository or organization<\/li>\n<li><strong>GitHub App or Personal Access Token (PAT)<\/strong> \u2014 With <code>repo<\/code> and <code>admin:org<\/code> scopes (PAT) or appropriate GitHub App permissions<\/li>\n<li><strong>Docker<\/strong> \u2014 For building custom runner images (Exercise 4)<\/li>\n<\/ul>\n<h2>Environment Setup<\/h2>\n<p>We will use <strong>kind<\/strong> (Kubernetes in Docker) to create a local cluster. 
This keeps the lab self-contained and easy to clean up.<\/p>\n<h3>Create a kind Cluster<\/h3>\n<pre><code>kind create cluster --name arc-lab<\/code><\/pre>\n<p>Verify the cluster is running:<\/p>\n<pre><code>kubectl cluster-info --context kind-arc-lab<\/code><\/pre>\n<h3>Create a Test GitHub Repository<\/h3>\n<p>Create a new repository (e.g., <code>arc-lab-test<\/code>) in your GitHub account. Add a simple workflow file at <code>.github\/workflows\/test.yml<\/code>:<\/p>\n<pre><code>name: ARC Test Workflow\non:\n  push:\n    branches: [main]\n  workflow_dispatch:\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Hello from GitHub-hosted runner\n        run: echo \"This runs on a GitHub-hosted runner\"<\/code><\/pre>\n<p>Push this to your repository. We will modify it later to target ARC runners.<\/p>\n<h2>Exercise 1: Install ARC with Helm<\/h2>\n<p>Actions Runner Controller v2 uses Helm charts to deploy two components: a <strong>controller<\/strong> that manages the lifecycle of runner pods, and one or more <strong>runner scale sets<\/strong> that register with GitHub and accept jobs.<\/p>\n<h3>Step 1: Locate the Helm Charts<\/h3>\n<p>The v2 charts (<code>gha-runner-scale-set-controller<\/code> and <code>gha-runner-scale-set<\/code>) are published as OCI artifacts on GitHub Container Registry rather than in a classic Helm repository, so there is no <code>helm repo add<\/code> step. Helm 3.8 or later can pull them directly from <code>oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts<\/code>.<\/p>\n<h3>Step 2: Configure Authentication<\/h3>\n<p>ARC needs to authenticate with the GitHub API. 
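<\/p>\n<p>Whichever option you choose, the chart ultimately stores the credential in a Kubernetes secret in the runner namespace. Instead of passing it inline with <code>--set<\/code>, you can pre-create that secret and reference it by name with <code>--set githubConfigSecret=arc-github-app<\/code>. A sketch for the GitHub App case (the secret name, IDs, and key below are placeholders):<\/p>\n<pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n  name: arc-github-app\n  namespace: arc-runners\nstringData:\n  github_app_id: \"123456\"\n  github_app_installation_id: \"654321\"\n  github_app_private_key: |\n    -----BEGIN RSA PRIVATE KEY-----\n    ...\n    -----END RSA PRIVATE KEY-----<\/code><\/pre>\n<p>This also keeps the token out of your shell history and out of the stored Helm release values.<\/p>\n<p>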
You have two options:<\/p>\n<p><strong>Option A: GitHub App (Recommended for Production)<\/strong><\/p>\n<p>Create a GitHub App in your organization or account settings:<\/p>\n<ol>\n<li>Go to <strong>Settings \u2192 Developer settings \u2192 GitHub Apps \u2192 New GitHub App<\/strong><\/li>\n<li>Set the following permissions:\n<ul>\n<li>Repository: <code>Actions<\/code> (read), <code>Administration<\/code> (read\/write), <code>Metadata<\/code> (read)<\/li>\n<li>Organization: <code>Self-hosted runners<\/code> (read\/write)<\/li>\n<\/ul>\n<\/li>\n<li>Generate a private key and download it<\/li>\n<li>Install the App on your organization or repository<\/li>\n<li>Note the App ID and Installation ID<\/li>\n<\/ol>\n<p><strong>Option B: Personal Access Token (Simpler for Labs)<\/strong><\/p>\n<p>Create a PAT (classic) with <code>repo<\/code> and <code>admin:org<\/code> scopes, or a fine-grained PAT with Actions and Administration permissions. For this lab, we will use a PAT for simplicity.<\/p>\n<h3>Step 3: Install the ARC Controller<\/h3>\n<pre><code>helm install arc \\\n  --namespace arc-systems \\\n  --create-namespace \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set-controller<\/code><\/pre>\n<p>Verify the controller is running:<\/p>\n<pre><code>kubectl get pods -n arc-systems<\/code><\/pre>\n<p>You should see output similar to:<\/p>\n<pre><code>NAME                                     READY   STATUS    RESTARTS   AGE\narc-gha-runner-scale-set-controller-xxx  1\/1     Running   0          30s<\/code><\/pre>\n<h3>Step 4: Install a Runner Scale Set<\/h3>\n<p>Now deploy a runner scale set that registers with your GitHub repository:<\/p>\n<pre><code>helm install arc-runner-set \\\n  --namespace arc-runners \\\n  --create-namespace \\\n  --set githubConfigUrl=\"https:\/\/github.com\/&lt;org&gt;\/&lt;repo&gt;\" \\\n  --set githubConfigSecret.github_token=\"&lt;PAT&gt;\" \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set<\/code><\/pre>\n<p>Replace 
<code>&lt;org&gt;\/&lt;repo&gt;<\/code> with your repository path and <code>&lt;PAT&gt;<\/code> with your personal access token.<\/p>\n<p>Verify the runner scale set:<\/p>\n<pre><code>kubectl get pods -n arc-runners<\/code><\/pre>\n<p>At this point, there may be no runner pods yet \u2014 ARC uses a scale-to-zero model. Pods are created only when jobs are queued.<\/p>\n<h3>Step 5: Verify in GitHub<\/h3>\n<p>Navigate to your repository on GitHub: <strong>Settings \u2192 Actions \u2192 Runners<\/strong>. You should see the runner scale set listed with the name <code>arc-runner-set<\/code>. The status shows it is ready to accept jobs.<\/p>\n<h2>Exercise 2: Run a Workflow on ARC Runners<\/h2>\n<p>Now update the test workflow to target the ARC runner scale set instead of GitHub-hosted runners.<\/p>\n<h3>Step 1: Update the Workflow<\/h3>\n<p>Modify <code>.github\/workflows\/test.yml<\/code> to use the ARC runner label:<\/p>\n<pre><code>name: ARC Test Workflow\non:\n  push:\n    branches: [main]\n  workflow_dispatch:\n\njobs:\n  test:\n    runs-on: arc-runner-set\n    steps:\n      - name: Hello from ARC runner\n        run: |\n          echo \"This runs on an ephemeral ARC runner!\"\n          echo \"Hostname: $(hostname)\"\n          echo \"Runner OS: $(uname -a)\"\n      - name: Show environment\n        run: env | sort<\/code><\/pre>\n<p>The key change is <code>runs-on: arc-runner-set<\/code> \u2014 this matches the name of the Helm release for the runner scale set.<\/p>\n<h3>Step 2: Trigger the Workflow<\/h3>\n<p>Push the updated workflow file or use the \u201cRun workflow\u201d button (workflow_dispatch) in the GitHub Actions UI.<\/p>\n<h3>Step 3: Observe the Runner Pod<\/h3>\n<p>Watch the <code>arc-runners<\/code> namespace while the workflow runs:<\/p>\n<pre><code>kubectl get pods -n arc-runners -w<\/code><\/pre>\n<p>You will see a pod created for the job:<\/p>\n<pre><code>NAME                          READY   STATUS    RESTARTS   
AGE\narc-runner-set-xxxxx-runner   1\/1     Running   0          5s<\/code><\/pre>\n<p>After the job completes, the pod is terminated and removed:<\/p>\n<pre><code>NAME                          READY   STATUS      RESTARTS   AGE\narc-runner-set-xxxxx-runner   0\/1     Completed   0          45s<\/code><\/pre>\n<p>Run <code>kubectl get pods -n arc-runners<\/code> again \u2014 the pod is gone. This is the ephemeral model: each job gets a fresh container, and the container is destroyed when the job finishes. There is no state persistence between jobs.<\/p>\n<h2>Exercise 3: Demonstrate Ephemeral Security<\/h2>\n<p>This exercise demonstrates the core security benefit of ephemeral runners: <strong>no cross-job contamination<\/strong>.<\/p>\n<h3>Step 1: Create a Workflow That Writes Sensitive Data<\/h3>\n<p>Create <code>.github\/workflows\/ephemeral-test.yml<\/code>:<\/p>\n<pre><code>name: Ephemeral Security Test\non: workflow_dispatch\n\njobs:\n  write-secret:\n    runs-on: arc-runner-set\n    steps:\n      - name: Write sensitive data\n        run: |\n          echo \"SECRET_API_KEY=sk-prod-abc123xyz\" &gt; \/tmp\/secret-data\n          echo \"DB_PASSWORD=super-secret-password\" &gt;&gt; \/tmp\/secret-data\n          echo \"Written sensitive data to \/tmp\/secret-data\"\n          cat \/tmp\/secret-data\n\n  read-secret:\n    runs-on: arc-runner-set\n    needs: write-secret\n    steps:\n      - name: Attempt to read previous job data\n        run: |\n          echo \"Checking if \/tmp\/secret-data exists from previous job...\"\n          if [ -f \/tmp\/secret-data ]; then\n            echo \"SECURITY RISK: Found data from previous job!\"\n            cat \/tmp\/secret-data\n          else\n            echo \"SECURE: \/tmp\/secret-data does not exist.\"\n            echo \"Each job gets a fresh container \u2014 no cross-job contamination.\"\n          fi<\/code><\/pre>\n<h3>Step 2: Run the Workflow<\/h3>\n<p>Trigger the workflow via <code>workflow_dispatch<\/code>. 
The first job (<code>write-secret<\/code>) writes sensitive data to <code>\/tmp\/secret-data<\/code>. The second job (<code>read-secret<\/code>) runs in a new pod and attempts to read that file.<\/p>\n<h3>Step 3: Verify the Results<\/h3>\n<p>In the GitHub Actions logs, you will see:<\/p>\n<ul>\n<li><strong>write-secret job:<\/strong> Successfully writes the file and prints the contents<\/li>\n<li><strong>read-secret job:<\/strong> The file does not exist \u2014 output shows <code>SECURE: \/tmp\/secret-data does not exist.<\/code><\/li>\n<\/ul>\n<p>Each job ran in a separate, freshly created pod. When the <code>write-secret<\/code> pod was destroyed, all data \u2014 including the sensitive file \u2014 was destroyed with it.<\/p>\n<h3>Why This Matters<\/h3>\n<p>On a <strong>persistent self-hosted runner<\/strong>, the <code>\/tmp\/secret-data<\/code> file would still be on disk when the second job runs. A malicious workflow in a pull request could read secrets, tokens, or credentials left by previous jobs. With ephemeral runners, this attack vector is eliminated.<\/p>\n<h2>Exercise 4: Custom Runner Images<\/h2>\n<p>ARC runners use a base container image. 
For real-world use, you need to customize this image to include your build tools.<\/p>\n<h3>Step 1: Create a Custom Dockerfile<\/h3>\n<p>Create a <code>Dockerfile<\/code> for your custom runner:<\/p>\n<pre><code>FROM ghcr.io\/actions\/actions-runner:latest\n\nUSER root\n\n# Install build tools\nRUN apt-get update &amp;&amp; apt-get install -y \\\n    curl \\\n    wget \\\n    git \\\n    jq \\\n    unzip \\\n    build-essential \\\n    &amp;&amp; rm -rf \/var\/lib\/apt\/lists\/*\n\n# Install Go\nRUN wget -q https:\/\/go.dev\/dl\/go1.22.4.linux-amd64.tar.gz \\\n    &amp;&amp; tar -C \/usr\/local -xzf go1.22.4.linux-amd64.tar.gz \\\n    &amp;&amp; rm go1.22.4.linux-amd64.tar.gz\nENV PATH=\"$PATH:\/usr\/local\/go\/bin\"\n\n# Install cosign\nRUN curl -sSL -o \/usr\/local\/bin\/cosign \\\n    https:\/\/github.com\/sigstore\/cosign\/releases\/latest\/download\/cosign-linux-amd64 \\\n    &amp;&amp; chmod +x \/usr\/local\/bin\/cosign\n\n# Install Docker Engine and CLI via the convenience script\n# (needed for Docker-based workflows)\nRUN curl -fsSL https:\/\/get.docker.com | sh\n\nUSER runner<\/code><\/pre>\n<h3>Step 2: Build and Push the Image<\/h3>\n<pre><code># Build the image\ndocker build -t ghcr.io\/&lt;org&gt;\/custom-runner:latest .\n\n# Authenticate to GitHub Container Registry\necho \"&lt;PAT&gt;\" | docker login ghcr.io -u &lt;username&gt; --password-stdin\n\n# Push the image\ndocker push ghcr.io\/&lt;org&gt;\/custom-runner:latest<\/code><\/pre>\n<h3>Step 3: Configure ARC to Use the Custom Image<\/h3>\n<p>Create a values file <code>custom-runner-values.yaml<\/code>:<\/p>\n<pre><code>githubConfigUrl: \"https:\/\/github.com\/&lt;org&gt;\/&lt;repo&gt;\"\ngithubConfigSecret:\n  github_token: \"&lt;PAT&gt;\"\n\ntemplate:\n  spec:\n    containers:\n      - name: runner\n        image: ghcr.io\/&lt;org&gt;\/custom-runner:latest\n        command: [\"\/home\/runner\/run.sh\"]\n        resources:\n          requests:\n            cpu: \"500m\"\n            memory: \"512Mi\"\n          limits:\n            cpu: \"2\"\n            memory: \"2Gi\"<\/code><\/pre>\n<p>Upgrade the runner scale set with the custom image:<\/p>\n<pre><code>helm upgrade arc-runner-set \\\n  --namespace arc-runners \\\n  -f custom-runner-values.yaml \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set<\/code><\/pre>\n<h3>Step 4: Verify Custom Tools<\/h3>\n<p>Create a workflow that uses the custom tools:<\/p>\n<pre><code>name: Custom Runner Tools Test\non: workflow_dispatch\n\njobs:\n  verify-tools:\n    runs-on: arc-runner-set\n    steps:\n      - name: Verify Go\n        run: go version\n      - name: Verify cosign\n        run: cosign version\n      - name: Verify Docker CLI\n        run: docker --version<\/code><\/pre>\n<p><strong>Security benefit:<\/strong> By building your own runner image, you control exactly what tools and dependencies are present in the build environment. There are no unexpected binaries, no pre-installed software you did not approve, and you can pin every tool to a specific version. You can also scan the image for vulnerabilities before deploying it.<\/p>\n<h2>Exercise 5: Runner Group Isolation<\/h2>\n<p>Different workflows have different trust levels. Pull request validation should not have access to production secrets. Deployment workflows need secrets but should only run from the main branch. 
ARC lets you implement this separation by creating distinct runner scale sets with different labels and configurations.<\/p>\n<h3>Step 1: Create a PR Validation Runner Scale Set<\/h3>\n<p>Create <code>pr-runner-values.yaml<\/code>:<\/p>\n<pre><code>githubConfigUrl: \"https:\/\/github.com\/&lt;org&gt;\/&lt;repo&gt;\"\ngithubConfigSecret:\n  github_token: \"&lt;PAT&gt;\"\n\ntemplate:\n  spec:\n    containers:\n      - name: runner\n        image: ghcr.io\/&lt;org&gt;\/custom-runner:latest\n        command: [\"\/home\/runner\/run.sh\"]\n        env:\n          - name: RUNNER_GROUP\n            value: \"pr-validation\"\n        resources:\n          requests:\n            cpu: \"250m\"\n            memory: \"256Mi\"\n          limits:\n            cpu: \"1\"\n            memory: \"1Gi\"<\/code><\/pre>\n<pre><code>helm install arc-runner-pr \\\n  --namespace arc-runners \\\n  -f pr-runner-values.yaml \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set<\/code><\/pre>\n<h3>Step 2: Create a Deployment Runner Scale Set<\/h3>\n<p>Create <code>deploy-runner-values.yaml<\/code>:<\/p>\n<pre><code>githubConfigUrl: \"https:\/\/github.com\/&lt;org&gt;\/&lt;repo&gt;\"\ngithubConfigSecret:\n  github_token: \"&lt;PAT&gt;\"\n\ntemplate:\n  spec:\n    containers:\n      - name: runner\n        image: ghcr.io\/&lt;org&gt;\/custom-runner:latest\n        command: [\"\/home\/runner\/run.sh\"]\n        env:\n          - name: RUNNER_GROUP\n            value: \"deployment\"\n        resources:\n          requests:\n            cpu: \"500m\"\n            memory: \"512Mi\"\n          limits:\n            cpu: \"2\"\n            memory: \"2Gi\"\n    serviceAccountName: deploy-runner-sa\n    nodeSelector:\n      runner-type: deployment<\/code><\/pre>\n<pre><code>helm install arc-runner-deploy \\\n  --namespace arc-runners \\\n  -f deploy-runner-values.yaml \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set<\/code><\/pre>\n<h3>Step 3: Configure Workflows for Isolation<\/h3>\n<p>Use different 
runner labels based on the workflow trigger:<\/p>\n<pre><code>name: CI\/CD Pipeline\non:\n  pull_request:\n    branches: [main]\n  push:\n    branches: [main]\n\njobs:\n  validate:\n    if: github.event_name == 'pull_request'\n    runs-on: arc-runner-pr\n    steps:\n      - uses: actions\/checkout@v4\n      - name: Run tests\n        run: make test\n      - name: Run linter\n        run: make lint\n\n  deploy:\n    if: github.ref == 'refs\/heads\/main' &amp;&amp; github.event_name == 'push'\n    runs-on: arc-runner-deploy\n    steps:\n      - uses: actions\/checkout@v4\n      - name: Deploy to production\n        run: make deploy\n        env:\n          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}<\/code><\/pre>\n<p>This implements <strong>separation of duties at the runner level<\/strong>. PR validation jobs run on runners that have no access to deployment secrets or privileged network segments. Deployment jobs run on a separate set of runners that have the necessary credentials and network access, but only trigger on pushes to main.<\/p>\n<h2>Exercise 6: Autoscaling<\/h2>\n<p>ARC natively supports autoscaling. Runner pods are created on demand and destroyed when idle. 
You can configure minimum and maximum replicas to control cost and responsiveness.<\/p>\n<h3>Step 1: Configure Autoscaling Parameters<\/h3>\n<p>Create a values file <code>autoscale-values.yaml<\/code> with scaling parameters:<\/p>\n<pre><code>githubConfigUrl: \"https:\/\/github.com\/&lt;org&gt;\/&lt;repo&gt;\"\ngithubConfigSecret:\n  github_token: \"&lt;PAT&gt;\"\n\nminRunners: 0\nmaxRunners: 10\n\ntemplate:\n  spec:\n    containers:\n      - name: runner\n        image: ghcr.io\/actions\/actions-runner:latest\n        command: [\"\/home\/runner\/run.sh\"]<\/code><\/pre>\n<pre><code>helm upgrade arc-runner-set \\\n  --namespace arc-runners \\\n  -f autoscale-values.yaml \\\n  oci:\/\/ghcr.io\/actions\/actions-runner-controller-charts\/gha-runner-scale-set<\/code><\/pre>\n<h3>Step 2: Generate Load<\/h3>\n<p>Create a workflow that triggers multiple parallel jobs:<\/p>\n<pre><code>name: Autoscale Test\non: workflow_dispatch\n\njobs:\n  parallel-job:\n    runs-on: arc-runner-set\n    strategy:\n      matrix:\n        id: [1, 2, 3, 4, 5]\n    steps:\n      - name: Simulate work\n        run: |\n          echo \"Job ${{ matrix.id }} running on $(hostname)\"\n          sleep 60<\/code><\/pre>\n<p>Trigger this workflow and watch the pods scale up:<\/p>\n<pre><code>kubectl get pods -n arc-runners -w<\/code><\/pre>\n<p>You will see five pods created \u2014 one for each matrix job:<\/p>\n<pre><code>NAME                              READY   STATUS    RESTARTS   AGE\narc-runner-set-abcde-runner       1\/1     Running   0          5s\narc-runner-set-fghij-runner       1\/1     Running   0          5s\narc-runner-set-klmno-runner       1\/1     Running   0          5s\narc-runner-set-pqrst-runner       1\/1     Running   0          5s\narc-runner-set-uvwxy-runner       1\/1     Running   0          5s<\/code><\/pre>\n<p>After the jobs complete (60 seconds), all pods are terminated. 
The namespace returns to zero pods.<\/p>\n<h3>Step 3: Configure Warm Runners (Optional)<\/h3>\n<p>For cost optimization, you may want a few runners to stay warm so bursty workloads avoid pod cold-start latency. In ARC v2 this is done by raising <code>minRunners<\/code> above zero; there is no separate scale-down delay setting. Scale-to-zero (<code>minRunners: 0<\/code>) remains the default and most secure option. Warm runners are still ephemeral: each pod runs exactly one job and is replaced afterwards, so a small warm pool does not reintroduce cross-job contamination.<\/p>\n<h2>Exercise 7: Network Policies for Runners<\/h2>\n<p>Kubernetes NetworkPolicies let you restrict the network access of runner pods. This is a critical defense against data exfiltration from compromised builds.<\/p>\n<h3>Step 1: Create a NetworkPolicy<\/h3>\n<p>Apply the following NetworkPolicy to the <code>arc-runners<\/code> namespace:<\/p>\n<pre><code>apiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: runner-egress-policy\n  namespace: arc-runners\nspec:\n  podSelector: {}\n  policyTypes:\n    - Egress\n  egress:\n    # Allow DNS resolution\n    - to:\n        - namespaceSelector: {}\n      ports:\n        - protocol: UDP\n          port: 53\n        - protocol: TCP\n          port: 53\n    # Allow GitHub API and Actions services\n    - to:\n        - ipBlock:\n            cidr: 140.82.112.0\/20\n        - ipBlock:\n            cidr: 143.55.64.0\/20\n        - ipBlock:\n            cidr: 185.199.108.0\/22\n        - ipBlock:\n            cidr: 4.0.0.0\/8\n      ports:\n        - protocol: TCP\n          port: 443\n    # Allow your container registry (example: ghcr.io)\n    - to:\n        - ipBlock:\n            cidr: 140.82.112.0\/20\n      ports:\n        - protocol: TCP\n          port: 443\n    # Allow your artifact storage (replace with your CIDR)\n    # - to:\n    #     - ipBlock:\n    #         cidr: 10.0.0.0\/8\n    #   ports:\n    #     - protocol: TCP\n    #       port: 443<\/code><\/pre>\n<pre><code>kubectl apply -f runner-network-policy.yaml<\/code><\/pre>\n<p><strong>Note:<\/strong> GitHub publishes 
its IP ranges at <a href=\"https:\/\/api.github.com\/meta\" target=\"_blank\" rel=\"noopener\">https:\/\/api.github.com\/meta<\/a>. Use the <code>actions<\/code> and <code>api<\/code> ranges. The CIDRs above are examples \u2014 check the current ranges and update accordingly.<\/p>\n<h3>Step 2: Test the NetworkPolicy<\/h3>\n<p>Create a workflow that attempts to reach an external URL:<\/p>\n<pre><code>name: Network Policy Test\non: workflow_dispatch\n\njobs:\n  test-network:\n    runs-on: arc-runner-set\n    steps:\n      - name: Test GitHub API (should work)\n        run: curl -s -o \/dev\/null -w \"%{http_code}\" https:\/\/api.github.com\n\n      - name: Test external URL (should be blocked)\n        run: |\n          if curl -s --connect-timeout 5 https:\/\/evil-exfiltration-server.example.com; then\n            echo \"FAIL: External access was allowed\"\n            exit 1\n          else\n            echo \"PASS: External access was blocked by NetworkPolicy\"\n          fi<\/code><\/pre>\n<p>When you run this workflow:<\/p>\n<ul>\n<li>The GitHub API request succeeds (HTTP 200) because the NetworkPolicy allows traffic to GitHub&rsquo;s IP ranges.<\/li>\n<li>The external URL request times out and fails because it is not in the allowed egress list.<\/li>\n<\/ul>\n<p>This prevents a compromised build from exfiltrating source code, secrets, or build artifacts to an attacker-controlled server. 
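<\/p>\n<p>Rather than hand-maintaining those CIDRs, you can generate the <code>ipBlock<\/code> entries from GitHub&rsquo;s meta endpoint. A sketch using <code>curl<\/code> and <code>jq<\/code> that keeps only the IPv4 ranges from the <code>actions<\/code> and <code>api<\/code> lists; review the output before pasting it into your policy file:<\/p>\n<pre><code>curl -s https:\/\/api.github.com\/meta \\\n  | jq -r '(.actions + .api) | unique | .[]\n           | select(contains(\":\") | not)\n           | \"        - ipBlock:\\n            cidr: \\(.)\"'<\/code><\/pre>\n<p>Re-run this periodically and re-apply the policy, because GitHub&rsquo;s published ranges change over time.<\/p>\n<p>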
Even if a malicious dependency runs arbitrary code during the build, it cannot phone home.<\/p>\n<h2>Cleanup<\/h2>\n<p>Remove all resources created during this lab:<\/p>\n<pre><code># Delete Helm releases\nhelm uninstall arc-runner-set -n arc-runners\nhelm uninstall arc-runner-pr -n arc-runners\nhelm uninstall arc-runner-deploy -n arc-runners\nhelm uninstall arc -n arc-systems\n\n# Delete namespaces\nkubectl delete namespace arc-runners\nkubectl delete namespace arc-systems\n\n# Delete the kind cluster\nkind delete cluster --name arc-lab<\/code><\/pre>\n<p>If you created a GitHub App for this lab, you can delete it from <strong>Settings \u2192 Developer settings \u2192 GitHub Apps<\/strong>. Revoke any PATs you created.<\/p>\n<h2>Key Takeaways<\/h2>\n<ul>\n<li><strong>Ephemeral runners eliminate cross-job contamination.<\/strong> Each job gets a fresh container \u2014 secrets, tokens, and build artifacts are destroyed when the job completes.<\/li>\n<li><strong>ARC provides self-hosted runner benefits without the security risks.<\/strong> You get custom tools, private network access, and cost control while maintaining the ephemeral security model.<\/li>\n<li><strong>Custom runner images give you full control over the build environment.<\/strong> Pin tool versions, scan for vulnerabilities, and eliminate supply chain risk from pre-installed software.<\/li>\n<li><strong>Runner group isolation implements separation of duties.<\/strong> PR validation and deployment workflows run on separate runner sets with different privileges and network access.<\/li>\n<li><strong>Network policies are a critical layer of defense.<\/strong> Restricting runner egress prevents data exfiltration even if a build step is compromised.<\/li>\n<li><strong>Scale-to-zero autoscaling reduces cost and attack surface.<\/strong> Runner pods exist only for the duration of a job \u2014 there is no persistent infrastructure to maintain or secure.<\/li>\n<\/ul>\n<h2>Next Steps<\/h2>\n<p>Continue 
strengthening your CI\/CD security posture with these related guides:<\/p>\n<ul>\n<li><a href=\"https:\/\/secure-pipelines.com\/fr\/non-categorise\/securing-github-actions-runners\/\">Securing GitHub Actions Runners<\/a> \u2014 Deep dive into runner security best practices, token management, and monitoring for both GitHub-hosted and self-hosted runners.<\/li>\n<li><a href=\"https:\/\/secure-pipelines.com\/fr\/ci-cd-security\/separation-of-duties-least-privilege-ci-cd-pipelines\/\">Separation of Duties and Least Privilege in CI\/CD Pipelines<\/a> \u2014 Comprehensive guide to implementing least-privilege principles across your entire CI\/CD pipeline, from source control to production deployment.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>GitHub-hosted runners are ephemeral by default \u2014 every job gets a fresh virtual machine that is destroyed after the job completes. Self-hosted runners, on the other hand, are persistent and shared across workflow runs. 
This creates a significant security risk: secrets, tokens, and build artifacts from one job can leak into the &#8230; <a title=\"Lab: Ephemeral Self-Hosted Runners with Actions Runner Controller\" class=\"read-more\" href=\"https:\/\/secure-pipelines.com\/fr\/ci-cd-security\/lab-ephemeral-self-hosted-runners-actions-runner-controller\/\" aria-label=\"Read more about Lab: Ephemeral Self-Hosted Runners with Actions Runner Controller\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[49,52],"tags":[],"post_folder":[],"class_list":["post-543","post","type-post","status-publish","format-standard","hentry","category-ci-cd-security","category-github-actions"],"_links":{"self":[{"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/posts\/543","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/comments?post=543"}],"version-history":[{"count":2,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/posts\/543\/revisions"}],"predecessor-version":[{"id":568,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/posts\/543\/revisions\/568"}],"wp:attachment":[{"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/media?parent=543"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/categories?post=543"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/tags?post=543"},{"taxonomy":"post_folder","embeddable"
:true,"href":"https:\/\/secure-pipelines.com\/fr\/wp-json\/wp\/v2\/post_folder?post=543"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}