{"id":788,"date":"2026-03-01T20:21:31","date_gmt":"2026-03-01T19:21:31","guid":{"rendered":"https:\/\/secure-pipelines.com\/uncategorized\/network-filesystem-restrictions-ci-cd-build-environments\/"},"modified":"2026-03-25T09:55:28","modified_gmt":"2026-03-25T08:55:28","slug":"network-filesystem-restrictions-ci-cd-build-environments","status":"publish","type":"post","link":"https:\/\/secure-pipelines.com\/ar\/ci-cd-security\/network-filesystem-restrictions-ci-cd-build-environments\/","title":{"rendered":"\u0642\u064a\u0648\u062f \u0627\u0644\u0634\u0628\u0643\u0629 \u0648\u0646\u0638\u0627\u0645 \u0627\u0644\u0645\u0644\u0641\u0627\u062a \u0644\u0628\u064a\u0626\u0627\u062a \u0628\u0646\u0627\u0621 CI\/CD"},"content":{"rendered":"\n<p>CI\/CD pipelines are among the most privileged workloads in any organization. They pull source code, download dependencies, access secrets, and push artifacts to production registries. Yet in many environments, the build processes behind these pipelines run with unrestricted network access and full filesystem permissions \u2014 a combination that represents one of the most exploitable gaps in modern software delivery.<\/p>\n\n<p>When a build environment can reach any IP address and write to any path on disk, a single compromised dependency or malicious pull request can exfiltrate secrets, tamper with artifacts, or establish persistent backdoors. This guide covers practical techniques for locking down network and filesystem access in CI\/CD build environments, from Kubernetes NetworkPolicies to hermetic build systems.<\/p>\n\n<h2 class=\"wp-block-heading\">Why Unrestricted Build Environments Are Dangerous<\/h2>\n\n<p>Before diving into solutions, it is worth understanding the specific threats that unrestricted build environments create. 
These risks are not theoretical \u2014 they have been exploited in real-world supply chain attacks.<\/p>\n\n<h3 class=\"wp-block-heading\">Data Exfiltration<\/h3>\n\n<p>Build environments frequently have access to secrets: API keys, registry credentials, signing keys, and deployment tokens. If a build process has unrestricted outbound network access, a compromised dependency can send those secrets to an attacker-controlled server. This can happen through a malicious <code>postinstall<\/code> script in an npm package, a compromised PyPI dependency, or even a crafted Makefile target. Without network restrictions, there is no barrier between the secret and the attacker&#8217;s endpoint.<\/p>\n\n<h3 class=\"wp-block-heading\">Supply Chain Attacks<\/h3>\n\n<p>An attacker who can execute arbitrary code during a build can modify the output artifacts. If the filesystem is writable without restriction, compiled binaries, container images, or deployment manifests can be tampered with after the legitimate build step but before the artifact is pushed. This is the essence of many supply chain attacks \u2014 the source code looks clean, but the delivered artifact is poisoned.<\/p>\n\n<h3 class=\"wp-block-heading\">Lateral Movement<\/h3>\n\n<p>Build environments that share a network with other infrastructure (databases, internal APIs, cloud metadata services) provide an attacker with a pivot point. A compromised build job can scan internal networks, access cloud instance metadata endpoints (like <code>169.254.169.254<\/code>), and escalate from a CI\/CD context into broader infrastructure access.<\/p>\n\n<h2 class=\"wp-block-heading\">Network Restrictions<\/h2>\n\n<p>The most impactful control you can implement is restricting outbound network access from build environments. 
Builds need to pull dependencies and push artifacts \u2014 but they rarely need unrestricted internet access.<\/p>\n\n<h3 class=\"wp-block-heading\">Kubernetes NetworkPolicy for Runner Pods<\/h3>\n\n<p>If you run CI\/CD runners on Kubernetes (for example, using Actions Runner Controller or the GitLab Kubernetes executor), NetworkPolicy resources give you fine-grained control over pod-level network access. A well-designed policy denies all egress by default and then allows only the specific endpoints the build needs.<\/p>\n\n<pre><code>apiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: ci-runner-netpol\n  namespace: ci-runners\nspec:\n  podSelector:\n    matchLabels:\n      app: ci-runner\n  policyTypes:\n    - Egress\n  egress:\n    # Allow DNS resolution\n    - to:\n        - namespaceSelector: {}\n      ports:\n        - protocol: UDP\n          port: 53\n        - protocol: TCP\n          port: 53\n    # Allow access to container registry\n    - to:\n        - ipBlock:\n            cidr: 10.0.50.0\/24\n      ports:\n        - protocol: TCP\n          port: 443\n    # Allow access to artifact storage\n    - to:\n        - ipBlock:\n            cidr: 10.0.60.0\/24\n      ports:\n        - protocol: TCP\n          port: 443\n    # Deny everything else by omission<\/code><\/pre>\n\n<p>This policy allows the runner pods to resolve DNS, reach the container registry, and access artifact storage \u2014 nothing else. Every other outbound connection is dropped. 
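<\/p>\n\n<p>Note that the policy above only selects pods labeled <code>app: ci-runner<\/code>; a pod created without that label is not restricted at all. A namespace-wide default-deny policy closes that gap, with the allow rules above acting as exceptions to it. A minimal sketch:<\/p>\n\n<pre><code>apiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: default-deny-egress\n  namespace: ci-runners\nspec:\n  # Empty selector: applies to every pod in the namespace\n  podSelector: {}\n  policyTypes:\n    - Egress\n  # No egress rules listed: all outbound traffic is denied\n  # unless another policy explicitly allows it<\/code><\/pre>\n\n<p>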
If you are using a CNI plugin that supports NetworkPolicy (Calico, Cilium, or Weave Net), this takes effect immediately when applied.<\/p>\n\n<p>For more granular control, Cilium&#8217;s <code>CiliumNetworkPolicy<\/code> supports DNS-based rules, allowing you to specify domain names rather than IP blocks:<\/p>\n\n<pre><code>apiVersion: cilium.io\/v2\nkind: CiliumNetworkPolicy\nmetadata:\n  name: ci-runner-cilium-policy\n  namespace: ci-runners\nspec:\n  endpointSelector:\n    matchLabels:\n      app: ci-runner\n  egress:\n    - toEndpoints:\n        - matchLabels:\n            io.kubernetes.pod.namespace: kube-system\n            k8s-app: kube-dns\n      toPorts:\n        - ports:\n            - port: \"53\"\n              protocol: ANY\n          # DNS visibility is required for the toFQDNs rules below to work\n          rules:\n            dns:\n              - matchPattern: \"*\"\n    - toFQDNs:\n        - matchName: \"ghcr.io\"\n        - matchName: \"registry.npmjs.org\"\n        - matchName: \"pypi.org\"\n      toPorts:\n        - ports:\n            - port: \"443\"\n              protocol: TCP<\/code><\/pre>\n\n<h3 class=\"wp-block-heading\">Docker --network=none<\/h3>\n\n<p>For Docker-based build steps that should not need any network access (compilation, static analysis, unit tests), you can remove network access entirely by running the container with <code>--network=none<\/code>:<\/p>\n\n<pre><code>docker run --network=none \\\n  --rm \\\n  -v \"$(pwd)\/src:\/workspace:ro\" \\\n  -v \"$(pwd)\/output:\/output\" \\\n  my-build-image:latest \\\n  make build<\/code><\/pre>\n\n<p>With <code>--network=none<\/code>, the container gets only a loopback interface and no external connectivity. This is the strongest network isolation you can achieve for a build step. 
The key is to structure your pipeline so that dependency fetching happens in one stage (with limited network access) and the actual build happens in a separate, network-less stage.<\/p>\n\n<h3 class=\"wp-block-heading\">Firewall Rules for Self-Hosted Runners<\/h3>\n\n<p>If you use self-hosted runners on VMs rather than containers, host-level firewall rules provide equivalent protection. On Linux, <code>iptables<\/code> or <code>nftables<\/code> rules can restrict outbound traffic from the user account that runs CI jobs:<\/p>\n\n<pre><code># Allow DNS\niptables -A OUTPUT -m owner --uid-owner ci-runner -p udp --dport 53 -j ACCEPT\niptables -A OUTPUT -m owner --uid-owner ci-runner -p tcp --dport 53 -j ACCEPT\n\n# Allow HTTPS to specific registries\niptables -A OUTPUT -m owner --uid-owner ci-runner -p tcp --dport 443 \\\n  -d registry.example.com -j ACCEPT\niptables -A OUTPUT -m owner --uid-owner ci-runner -p tcp --dport 443 \\\n  -d ghcr.io -j ACCEPT\n\n# Deny all other outbound traffic from the CI runner\niptables -A OUTPUT -m owner --uid-owner ci-runner -j DROP<\/code><\/pre>\n\n<p>This approach works well when you run the CI agent under a dedicated user account and need to allow the host system itself to maintain broader connectivity for management and updates. One caveat: <code>iptables<\/code> resolves a hostname such as <code>ghcr.io<\/code> only once, when the rule is inserted. If the registry&#8217;s IP addresses change, re-apply the rules, or maintain an <code>ipset<\/code> of allowed addresses instead.<\/p>\n\n<h3 class=\"wp-block-heading\">Allowlisting Registries and APIs<\/h3>\n\n<p>Regardless of the enforcement mechanism, the principle is the same: default-deny outbound, then allowlist only what the build actually needs. A typical allowlist includes the package registry (npm, PyPI, Maven Central), the container registry (Docker Hub, GHCR, ECR), the CI\/CD platform&#8217;s API (for status updates and artifact uploads), and possibly a proxy or mirror that you control. Everything else should be blocked. 
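<\/p>\n\n<p>If you funnel dependency traffic through an internal mirror, the client-side change is usually just configuration. For example (the mirror URLs here are placeholders):<\/p>\n\n<pre><code># .npmrc: send all npm registry traffic to the internal mirror\nregistry=https:\/\/npm-mirror.internal.example.com\/\n\n# pip.conf: the same idea for Python packages\n[global]\nindex-url = https:\/\/pypi-mirror.internal.example.com\/simple\/<\/code><\/pre>\n\n<p>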
Use an internal proxy or mirror for dependencies whenever possible \u2014 it reduces the allowlist to a single endpoint and gives you caching and audit logging for free.<\/p>\n\n<h2 class=\"wp-block-heading\">Filesystem Restrictions<\/h2>\n\n<p>Network restrictions prevent data from leaving the build environment. Filesystem restrictions prevent unauthorized modifications within it. Together, they form a strong defense-in-depth posture.<\/p>\n\n<h3 class=\"wp-block-heading\">Read-Only Root Filesystem<\/h3>\n\n<p>Running build containers with a read-only root filesystem prevents any process from modifying the base image. This blocks a class of attacks where malicious code modifies system binaries, installs backdoors, or alters build tool configurations at the system level.<\/p>\n\n<p>In Docker, use the <code>--read-only<\/code> flag:<\/p>\n\n<pre><code>docker run --read-only \\\n  --tmpfs \/tmp:rw,noexec,nosuid,size=512m \\\n  --tmpfs \/workspace\/build:rw,size=2g \\\n  -v \"$(pwd)\/src:\/workspace\/src:ro\" \\\n  my-build-image:latest \\\n  make build<\/code><\/pre>\n\n<p>In Kubernetes, set the security context on the pod spec:<\/p>\n\n<pre><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: ci-build-pod\nspec:\n  containers:\n    - name: build\n      image: my-build-image:latest\n      securityContext:\n        readOnlyRootFilesystem: true\n        runAsNonRoot: true\n        allowPrivilegeEscalation: false\n      volumeMounts:\n        - name: build-tmp\n          mountPath: \/tmp\n        - name: build-output\n          mountPath: \/workspace\/build\n        - name: source\n          mountPath: \/workspace\/src\n          readOnly: true\n  volumes:\n    - name: build-tmp\n      emptyDir:\n        medium: Memory\n        sizeLimit: 512Mi\n    - name: build-output\n      emptyDir:\n        sizeLimit: 2Gi\n    - name: source\n      configMap:\n        name: source-code<\/code><\/pre>\n\n<h3 class=\"wp-block-heading\">tmpfs for Build Artifacts<\/h3>\n\n<p>When the 
root filesystem is read-only, builds need writable space for temporary files, caches, and output artifacts. Use <code>tmpfs<\/code> mounts (backed by RAM) or <code>emptyDir<\/code> volumes (in Kubernetes) for these paths. This has the added benefit that all build artifacts are automatically cleaned up when the container exits \u2014 no stale data persists between builds.<\/p>\n\n<p>Mount <code>tmpfs<\/code> with restrictive options whenever possible: <code>noexec<\/code> prevents execution of binaries written to temp directories (blocking a common attack vector), <code>nosuid<\/code> prevents SUID bit attacks, and <code>size<\/code> limits prevent a runaway build from exhausting host memory.<\/p>\n\n<h3 class=\"wp-block-heading\">Preventing Writes to Sensitive Paths<\/h3>\n\n<p>Beyond the root filesystem, specific paths deserve extra protection. Mount the source code as read-only to prevent the build from modifying its own inputs. Ensure <code>\/etc<\/code>, <code>\/usr<\/code>, and <code>\/var<\/code> are not writable. If the build needs to write to a home directory (for tool configuration), provide a dedicated writable mount rather than making the entire home directory writable. Block access to Docker sockets, Kubernetes service account tokens, and cloud credential files by not mounting them into build containers at all.<\/p>\n\n<h2 class=\"wp-block-heading\">Hermetic Builds<\/h2>\n\n<p>The gold standard for build environment security is the hermetic build: a build that has no network access at all and uses only explicitly declared, pre-fetched inputs. Hermetic builds eliminate entire classes of supply chain attacks because the build process cannot download code that was not explicitly specified and verified.<\/p>\n\n<h3 class=\"wp-block-heading\">The Hermetic Build Pattern<\/h3>\n\n<p>A hermetic build pipeline typically has two phases. 
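<\/p>\n\n<p>The pattern hinges on one invariant: nothing reaches the build phase unless it matches a pinned hash. That gate takes only a few lines of shell; the lockfile name and format here are illustrative:<\/p>\n\n<pre><code># Verify every pre-fetched input against its pinned SHA-256\n# before the network-less build phase is allowed to start.\nset -eu\nlockfile=\"deps.lock\"   # lines of the form: &lt;sha256&gt;  &lt;path&gt;\nif sha256sum --check --quiet \"$lockfile\"; then\n  echo \"all inputs verified\"\nelse\n  echo \"checksum mismatch: refusing to build\" >&2\n  exit 1\nfi<\/code><\/pre>\n\n<p>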
In the first phase (the resolve\/fetch phase), dependencies are downloaded from approved sources, their checksums are verified against a lockfile, and they are stored in a local cache or vendored directory. This phase requires limited network access. In the second phase (the build phase), the actual compilation or assembly happens with zero network access. All inputs come from the local filesystem \u2014 source code and the pre-fetched dependencies.<\/p>\n\n<pre><code># Phase 1: Fetch dependencies (limited network)\ndocker run --network=ci-restricted \\\n  -v \"$(pwd):\/workspace\" \\\n  my-build-image:latest \\\n  sh -c \"cd \/workspace && npm ci --ignore-scripts\"\n\n# Phase 2: Build (no network)\ndocker run --network=none \\\n  --read-only \\\n  --tmpfs \/tmp:rw,noexec,size=512m \\\n  -v \"$(pwd):\/workspace:ro\" \\\n  -v \"$(pwd)\/dist:\/dist\" \\\n  my-build-image:latest \\\n  sh -c \"cd \/workspace && npm run build && cp -r build\/* \/dist\/\"<\/code><\/pre>\n\n<h3 class=\"wp-block-heading\">Bazel and Hermetic Builds<\/h3>\n\n<p>Bazel is designed around hermeticity. With <code>--sandbox_default_allow_network=false<\/code>, Bazel blocks network access during build actions by default. Dependencies are declared in <code>WORKSPACE<\/code> or <code>MODULE.bazel<\/code> files with explicit SHA-256 hashes, and Bazel fetches them in a separate phase before the build begins. If a dependency does not match its declared hash, the build fails.<\/p>\n\n<pre><code># In .bazelrc\nbuild --sandbox_default_allow_network=false\nbuild --incompatible_strict_action_env\nfetch --repository_cache=\/shared\/bazel-cache\/repos<\/code><\/pre>\n\n<p>This makes Bazel builds reproducible and resistant to dependency confusion attacks. Every input is content-addressed and verified.<\/p>\n\n<h3 class=\"wp-block-heading\">Nix and Reproducible Builds<\/h3>\n\n<p>Nix takes a similar approach. 
Every build derivation specifies its inputs by content hash, and Nix&#8217;s build sandbox blocks network access by default. The <code>nix-build<\/code> command fetches all sources into the Nix store (verifying hashes), then runs the build in an isolated environment with no network and a minimal filesystem. This guarantees that builds are reproducible \u2014 the same inputs always produce the same output.<\/p>\n\n<h2 class=\"wp-block-heading\">Practical Implementation<\/h2>\n\n<p>Let us look at how to implement these restrictions in specific CI\/CD platforms.<\/p>\n\n<h3 class=\"wp-block-heading\">GitHub Actions with Actions Runner Controller (ARC) + NetworkPolicy<\/h3>\n\n<p>If you use <a href=\"\/ci-cd-security\/lab-ephemeral-self-hosted-runners-actions-runner-controller\/\">Actions Runner Controller<\/a> to run GitHub Actions on Kubernetes, you can apply NetworkPolicies directly to the runner pods. ARC creates pods with predictable labels, making them easy to target with policies.<\/p>\n\n<pre><code>apiVersion: actions.summerwind.dev\/v1alpha1\nkind: RunnerDeployment\nmetadata:\n  name: secure-runner\n  namespace: ci-runners\nspec:\n  replicas: 3\n  template:\n    metadata:\n      labels:\n        app: ci-runner\n        security-tier: restricted\n    spec:\n      containers:\n        - name: runner\n          securityContext:\n            readOnlyRootFilesystem: true\n            runAsNonRoot: true\n            allowPrivilegeEscalation: false\n            capabilities:\n              drop:\n                - ALL\n          volumeMounts:\n            - name: work\n              mountPath: \/runner\/_work\n            - name: tmp\n              mountPath: \/tmp\n      volumes:\n        - name: work\n          emptyDir:\n            sizeLimit: 10Gi\n        - name: tmp\n          emptyDir:\n            medium: Memory\n            sizeLimit: 1Gi\n---\napiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: secure-runner-netpol\n  namespace: 
ci-runners\nspec:\n  podSelector:\n    matchLabels:\n      app: ci-runner\n  policyTypes:\n    - Egress\n    - Ingress\n  ingress: []\n  egress:\n    - to:\n        - namespaceSelector: {}\n      ports:\n        - protocol: UDP\n          port: 53\n    - to:\n        - ipBlock:\n            cidr: 0.0.0.0\/0\n      ports:\n        - protocol: TCP\n          port: 443<\/code><\/pre>\n\n<p>This configuration denies all ingress traffic (runners should not accept inbound connections) and limits egress to DNS and HTTPS. For production use, replace the <code>0.0.0.0\/0<\/code> CIDR with specific IP ranges for GitHub&#8217;s API, your container registry, and your artifact store.<\/p>\n\n<h3 class=\"wp-block-heading\">GitLab CI with Runner Configuration<\/h3>\n\n<p>GitLab&#8217;s Kubernetes executor supports security context configuration in the runner&#8217;s <code>config.toml<\/code>. You can set read-only filesystem and other restrictions directly:<\/p>\n\n<pre><code># config.toml for GitLab Runner (Kubernetes executor)\n[[runners]]\n  name = \"secure-k8s-runner\"\n  executor = \"kubernetes\"\n  [runners.kubernetes]\n    namespace = \"ci-runners\"\n    image = \"alpine:latest\"\n    privileged = false\n    allow_privilege_escalation = false\n    [runners.kubernetes.pod_security_context]\n      run_as_non_root = true\n      run_as_user = 1000\n    [runners.kubernetes.build_container_security_context]\n      read_only_root_filesystem = true\n      allow_privilege_escalation = false\n      [runners.kubernetes.build_container_security_context.capabilities]\n        drop = [\"ALL\"]\n    [runners.kubernetes.volumes]\n      [[runners.kubernetes.volumes.empty_dir]]\n        name = \"build-tmp\"\n        mount_path = \"\/tmp\"\n        medium = \"Memory\"\n        size_limit = \"512Mi\"\n      [[runners.kubernetes.volumes.empty_dir]]\n        name = \"build-workspace\"\n        mount_path = \"\/builds\"\n        size_limit = \"5Gi\"<\/code><\/pre>\n\n<p>Combine this with a 
NetworkPolicy applied to the <code>ci-runners<\/code> namespace and you have both filesystem and network restrictions in place.<\/p>\n\n<h3 class=\"wp-block-heading\">Docker-in-Docker Restrictions<\/h3>\n\n<p>Docker-in-Docker (DinD) is commonly used for building container images in CI. It is also one of the riskiest patterns because it typically requires privileged mode. If you must use DinD, apply these restrictions:<\/p>\n\n<pre><code># Use rootless DinD instead of privileged mode\nservices:\n  dind:\n    image: docker:24-dind-rootless\n    environment:\n      - DOCKER_TLS_CERTDIR=\/certs\n    volumes:\n      - dind-certs:\/certs\/client\n      - dind-data:\/var\/lib\/docker\n\n# When running builds inside DinD, pass network and filesystem restrictions\ndocker --host tcp:\/\/dind:2376 --tlsverify \\\n  run --network=none --read-only \\\n  --tmpfs \/tmp:rw,noexec,size=256m \\\n  --security-opt=no-new-privileges \\\n  my-build-image:latest make build<\/code><\/pre>\n\n<p>Better yet, replace DinD with tools that do not need a Docker daemon at all. <code>kaniko<\/code>, <code>buildah<\/code>, and <code>ko<\/code> can build container images without privileged access, and they work well with read-only filesystems and restricted networks.<\/p>\n\n<h2 class=\"wp-block-heading\">Monitoring and Auditing<\/h2>\n\n<p>Restrictions are only useful if you know when they are being tested or bypassed. Monitoring completes the security picture.<\/p>\n\n<h3 class=\"wp-block-heading\">Detecting Unexpected Network Connections<\/h3>\n\n<p>Use Cilium&#8217;s Hubble, Calico&#8217;s flow logs, or Falco to detect network connections that your policy should have blocked (or connections to unusual destinations on allowed ports). 
Set up alerts for any DNS queries to domains not in your allowlist, outbound connections to non-standard ports, connections to known-bad IP ranges, and any egress traffic from pods that should have <code>--network=none<\/code>.<\/p>\n\n<pre><code># Falco rule: detect unexpected outbound connections from CI runners\n- rule: CI Runner Unexpected Outbound Connection\n  desc: Detect network connections from CI runner pods to non-approved destinations\n  condition: >\n    evt.type in (connect, sendto) and\n    container and\n    k8s.ns.name = \"ci-runners\" and\n    not (fd.sip in (approved_registry_ips) or fd.sport = 53)\n  output: >\n    Unexpected outbound connection from CI runner\n    (command=%proc.cmdline connection=%fd.name container=%container.name\n    pod=%k8s.pod.name namespace=%k8s.ns.name)\n  priority: WARNING\n  tags: [network, ci-cd, supply-chain]<\/code><\/pre>\n\n<h3 class=\"wp-block-heading\">Auditing Filesystem Access<\/h3>\n\n<p>Monitor filesystem writes in build containers to detect unexpected modifications. Linux&#8217;s <code>auditd<\/code> can watch specific paths, and Falco can detect writes to sensitive locations. Key paths to monitor include <code>\/etc<\/code> and <code>\/usr<\/code> (should never be written in a build), the Docker socket path, Kubernetes service account token paths, and any path containing credentials or signing keys.<\/p>\n\n<p>If you use read-only root filesystems, any write attempt to a protected path generates an error \u2014 log these errors and alert on them. They indicate either a misconfigured build or a potential attack.<\/p>\n\n<h2 class=\"wp-block-heading\">Trade-offs and Developer Experience<\/h2>\n\n<p>Strict network and filesystem restrictions inevitably create friction. Understanding and managing the trade-offs is critical to successful adoption.<\/p>\n\n<h3 class=\"wp-block-heading\">Build Speed<\/h3>\n\n<p>Hermetic builds require all dependencies to be pre-fetched, which adds a pipeline stage. 
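<\/p>\n\n<p>In Bazel, for example, sharing that stage&#8217;s outputs across runs is a two-line configuration change (the cache endpoint below is a placeholder):<\/p>\n\n<pre><code># .bazelrc: reuse fetched repositories and build outputs across CI runs\nbuild --remote_cache=grpcs:\/\/bazel-cache.internal.example.com:443\nfetch --repository_cache=\/shared\/bazel-cache\/repos<\/code><\/pre>\n\n<p>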
However, this also means dependencies can be aggressively cached. In practice, many teams find that hermetic builds are actually faster because the cache hit rate is much higher when dependency resolution is deterministic. Use a shared cache (a Bazel remote cache, a Nix binary cache, or a simple HTTP cache for vendored dependencies) to amortize the cost across builds.<\/p>\n\n<h3 class=\"wp-block-heading\">Developer Experience<\/h3>\n\n<p>Developers will encounter failures when builds try to access blocked network endpoints or write to read-only paths. Good error messages are essential. Wrap your build steps in scripts that catch permission errors and network failures, then output actionable messages explaining why the access was blocked and how to fix the issue (usually by adding a dependency to the lockfile or changing the output path).<\/p>\n\n<p>Consider implementing a graduated rollout: start with monitoring mode (log violations but do not block), then move to enforcement. This gives teams time to update their build configurations without breaking every pipeline at once.<\/p>\n\n<h3 class=\"wp-block-heading\">Debugging<\/h3>\n\n<p>Debugging build failures in a restricted environment is harder when you cannot install additional tools or reach external services. Provide a &#8220;debug mode&#8221; that relaxes restrictions for a specific, manually triggered pipeline run (never for automated runs on the main branch). Log that debug mode was used and who triggered it. Never allow debug mode to bypass restrictions on production artifact builds.<\/p>\n\n<h2 class=\"wp-block-heading\">Putting It All Together<\/h2>\n\n<p>Here is a summary of the layered approach to securing CI\/CD build environments:<\/p>\n\n<p><strong>Layer 1 \u2014 Network restrictions:<\/strong> Default-deny egress with allowlists for registries and APIs. 
Use Kubernetes NetworkPolicy, Docker <code>--network=none<\/code>, or host-level firewall rules depending on your runner infrastructure.<\/p>\n\n<p><strong>Layer 2 \u2014 Filesystem restrictions:<\/strong> Read-only root filesystem, tmpfs for writable paths with size limits and noexec, source code mounted read-only.<\/p>\n\n<p><strong>Layer 3 \u2014 Hermetic builds:<\/strong> Separate dependency resolution from building. Run the build phase with zero network access and only pre-fetched, hash-verified inputs.<\/p>\n\n<p><strong>Layer 4 \u2014 Monitoring:<\/strong> Detect and alert on policy violations, unexpected connections, and filesystem modification attempts.<\/p>\n\n<p>No single layer is sufficient on its own. Network restrictions without filesystem controls still allow artifact tampering. Filesystem restrictions without network controls still allow exfiltration. Hermetic builds without monitoring leave you blind to attack attempts. The layers reinforce each other.<\/p>\n\n<h2 class=\"wp-block-heading\">Related Guides<\/h2>\n\n<p>For more on securing your CI\/CD pipeline, see these related guides:<\/p>\n\n<ul class=\"wp-block-list\">\n<li><a href=\"\/ci-cd-security\/build-integrity-reproducible-builds-ci-cd\/\">Build Integrity and Reproducible Builds in CI\/CD<\/a> \u2014 covers SLSA compliance, reproducible build verification, and artifact provenance.<\/li>\n\n\n\n<li><a href=\"\/ci-cd-security\/lab-ephemeral-self-hosted-runners-actions-runner-controller\/\">Lab: Ephemeral Self-Hosted Runners with Actions Runner Controller<\/a> \u2014 hands-on guide to deploying ARC on Kubernetes with ephemeral, single-use runner pods.<\/li>\n<\/ul>\n\n<p>Start with network restrictions \u2014 they offer the highest impact for the lowest implementation effort. Then add filesystem restrictions and work toward hermetic builds as your pipeline maturity increases. 
Every layer you add makes supply chain attacks meaningfully harder to execute.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>CI\/CD pipelines are among the most privileged workloads in any organization. They pull source code, download dependencies, access secrets, and push artifacts to production registries. Yet in many environments, the build processes behind these pipelines run with unrestricted network access and full filesystem permissions \u2014 a combination that represents one of the most exploitable gaps &#8230; <a title=\"\u0642\u064a\u0648\u062f \u0627\u0644\u0634\u0628\u0643\u0629 \u0648\u0646\u0638\u0627\u0645 \u0627\u0644\u0645\u0644\u0641\u0627\u062a \u0644\u0628\u064a\u0626\u0627\u062a \u0628\u0646\u0627\u0621 CI\/CD\" class=\"read-more\" href=\"https:\/\/secure-pipelines.com\/ar\/ci-cd-security\/network-filesystem-restrictions-ci-cd-build-environments\/\" aria-label=\"Read more about \u0642\u064a\u0648\u062f \u0627\u0644\u0634\u0628\u0643\u0629 \u0648\u0646\u0638\u0627\u0645 \u0627\u0644\u0645\u0644\u0641\u0627\u062a \u0644\u0628\u064a\u0626\u0627\u062a \u0628\u0646\u0627\u0621 CI\/CD\">\u0627\u0642\u0631\u0623 
\u0627\u0644\u0645\u0632\u064a\u062f<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26,28],"tags":[],"post_folder":[],"class_list":["post-788","post","type-post","status-publish","format-standard","hentry","category-ci-cd-security","category-pipeline-hardening"],"_links":{"self":[{"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/posts\/788","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/comments?post=788"}],"version-history":[{"count":1,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/posts\/788\/revisions"}],"predecessor-version":[{"id":794,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/posts\/788\/revisions\/794"}],"wp:attachment":[{"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/media?parent=788"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/categories?post=788"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/tags?post=788"},{"taxonomy":"post_folder","embeddable":true,"href":"https:\/\/secure-pipelines.com\/ar\/wp-json\/wp\/v2\/post_folder?post=788"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}