TL;DR

If you run self-hosted GitHub Actions runners, every actions/cache step is round-tripping to GitHub’s cloud storage. For a homelab cluster with local runners, that means cache restores travel from GitHub’s CDN to your runner, through your ISP, and back – even though the runner is 10 feet from your NAS. I deployed falcondev-oss/github-actions-cache-server as a Kubernetes deployment, pointed it at NFS storage on my NAS, set one environment variable on my runners, and flushed all the GitHub-hosted caches. Zero workflow changes required.

The Problem

My k3s cluster runs 8 self-hosted ARC (Actions Runner Controller) runners. These handle CI/CD for ~20 repos: Docker builds, Terraform validation, Ansible linting, Python tests, Node.js builds. Many of these workflows use actions/cache for npm modules, pip packages, and Docker layer caches.

The irony: my runners are pods in the same cluster as everything else, connected via 10GbE backbone to a NAS with 5Gbps aggregate throughput. But every cache save and restore was going out to GitHub’s servers and back. Cache restores for a 500MB node_modules directory were taking 30-60 seconds. On a local network with NFS, that same restore takes under 5 seconds.

The Solution

The github-actions-cache-server project is a drop-in replacement for GitHub’s cache API. It implements the same REST endpoints that actions/cache expects, stores everything on local disk, and handles cache eviction. The runner just needs one environment variable – ACTIONS_CACHE_URL – to redirect all cache operations to the local server.
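
If you run standalone runners rather than ARC, the same redirect works: the runner picks up extra environment variables from the .env file in its install directory. A minimal sketch, with a hypothetical hostname, token, and install path:

# append to the runner's .env file (host, token, and path are examples)
echo 'ACTIONS_CACHE_URL=http://cache.lan:3000/actions-cache-token/' >> ~/actions-runner/.env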

Architecture

Workflow step (actions/cache)
    |
    v
Runner pod (ACTIONS_CACHE_URL env var)
    |
    v
actions-cache Service (ClusterIP :3000)
    |
    v
actions-cache Pod (filesystem storage)
    |
    v
NFS PersistentVolume
    |
    v
NAS DXP4800 (/volume1/actions-cache)

All traffic stays inside the cluster network. Cache saves and restores never leave the LAN.

Deployment

The cache server runs as a single-replica Deployment in the same namespace as the runners (arc-runner-system). Storage is a 50Gi NFS volume on the NAS, following the same pattern I use for monitoring, Harbor, and Gitea storage.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: actions-cache
  namespace: arc-runner-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: actions-cache
  template:
    metadata:
      labels:
        app: actions-cache
    spec:
      containers:
        - name: actions-cache
          image: ghcr.io/falcondev-oss/github-actions-cache-server:latest
          ports:
            - containerPort: 3000  # the server's default listen port
          env:
            - name: API_BASE_URL
              value: "http://actions-cache.arc-runner-system.svc.cluster.local:3000"
            - name: URL_ACCESS_TOKEN
              value: "actions-cache-token"  # must match the token path segment in the runners' ACTIONS_CACHE_URL
            - name: CLEANUP_OLDER_THAN
              value: "168"  # 7 days in hours
            - name: CLEANUP_SCHEDULE
              value: "0 3 * * *"
            - name: STORAGE_DRIVER
              value: "filesystem"
            - name: STORAGE_PATH
              value: "/data"
          volumeMounts:
            - name: cache-data
              mountPath: /data
      volumes:
        - name: cache-data
          persistentVolumeClaim:
            claimName: actions-cache-nfs-pvc
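
For reference, the ClusterIP Service in front of the pod is unremarkable. A minimal sketch, assuming the app: actions-cache label from the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: actions-cache
  namespace: arc-runner-system
spec:
  type: ClusterIP
  selector:
    app: actions-cache
  ports:
    - port: 3000
      targetPort: 3000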

The NFS PersistentVolume uses the same mount options as all my other NAS volumes:

mountOptions:
  - hard
  - intr
  - rsize=1048576
  - wsize=1048576
  - nfsvers=3

Connecting the Runners

The only change to the runner deployment is adding the ACTIONS_CACHE_URL environment variable:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: k3s-runner
spec:
  template:
    spec:
      env:
        - name: ACTIONS_CACHE_URL
          # token path segment matches URL_ACCESS_TOKEN; the trailing slash matters
          value: "http://actions-cache.arc-runner-system.svc.cluster.local:3000/actions-cache-token/"

That is it. Every actions/cache step in every workflow across every repo now hits the local server. No workflow YAML changes, no per-repo configuration, no action version bumps.
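
For example, a typical cache step like this one keeps working as-is, now resolving against the local server:

- name: Cache npm modules
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}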

Cache Expiration

The CLEANUP_OLDER_THAN setting handles cache eviction. I set it to 168 hours (7 days), with cleanup running daily at 3am via CLEANUP_SCHEDULE. This matches GitHub’s own 7-day eviction policy for inactive caches, so workflows behave identically – they just run faster.

Flushing GitHub-Hosted Caches

After deploying the local server, I flushed all existing GitHub-hosted caches across the org. This was 14 caches across 6 repos:

# enumerate every repo in the org, then delete each of its caches by ID
for repo in $(gh api /orgs/my-org/repos --paginate -q '.[].full_name'); do
  while IFS= read -r id; do
    [ -z "$id" ] && continue  # skip repos with no caches
    gh api -X DELETE "repos/$repo/actions/caches/$id"
  done < <(gh api "repos/$repo/actions/caches" -q '.actions_caches[].id')
done

The next workflow run for each repo has a one-time cache miss, then everything rebuilds into the local NAS-backed store.
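
You can watch the local store repopulate as workflows run (assuming the container image includes du):

kubectl -n arc-runner-system exec deploy/actions-cache -- du -sh /data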

NAS Storage Setup

The NFS share setup follows the same pattern as my other NAS-backed services (monitoring, Harbor, Gitea):

  1. Create the directory on the NAS
  2. Export it via NFS with no_root_squash and async for the k8s subnet
  3. Create a static NFS PersistentVolume in Kubernetes
  4. Bind it with a PVC in the runner namespace (steps 2-4 are sketched below)
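
A minimal sketch of steps 2-4, assuming the NAS answers at 192.168.1.50 and the pod subnet is 10.42.0.0/16 (both placeholders, substitute your own):

# step 2: /etc/exports entry on the NAS
#   /volume1/actions-cache 10.42.0.0/16(rw,async,no_root_squash)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: actions-cache-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - intr
    - rsize=1048576
    - wsize=1048576
    - nfsvers=3
  nfs:
    server: 192.168.1.50
    path: /volume1/actions-cache
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: actions-cache-nfs-pvc
  namespace: arc-runner-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # static binding, no dynamic provisioner
  volumeName: actions-cache-nfs-pv
  resources:
    requests:
      storage: 50Gi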

50Gi is generous for action caches. Most of mine are npm and pip caches in the 200-800MB range. The 7-day eviction keeps it from growing unbounded.

Results

The server came up healthy after two quick config fixes: the default port is 3000, not 8080, and API_BASE_URL is a required env var. Cache operations now complete in seconds instead of minutes, and I am no longer burning GitHub’s cache storage quota – which matters if you are on a free or team plan with limited cache storage.
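
If yours does not come up cleanly, the pod logs are the fastest way to spot config mistakes like those:

kubectl -n arc-runner-system logs deploy/actions-cache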

The best part: zero workflow changes. The ACTIONS_CACHE_URL env var is a documented feature of the GitHub Actions runner. Any workflow using actions/cache (or actions/setup-node with caching, or actions/setup-python with caching) automatically uses the local server. It is completely transparent.

When This Makes Sense

This setup makes sense if:

  • You run self-hosted runners (ARC, standalone, whatever)
  • Your runners are on a local network with fast storage available
  • You want to keep CI artifacts off GitHub’s infrastructure
  • You are hitting GitHub’s 10GB cache storage limit

It does not make sense if your runners are cloud-hosted close to GitHub’s infrastructure, or if you only have a handful of small caches that restore in a few seconds anyway.

For a homelab with a NAS and a local Kubernetes cluster, it is a no-brainer. Twenty minutes of setup, zero ongoing maintenance, and every CI run benefits.