TL;DR
I run 3-5 parallel Claude Code sessions against the same homelab. One tab mid-refactor, one tab doing docs, one tab chasing a bug. They don’t know about each other, so every so often one tab “tidies up” a file another tab is actively editing — and Claude, being a dutiful little overwriter, just clobbers the work. I built a small Go binary that hooks into SessionStart / SessionEnd / PreToolUse, tracks file claims on disk, and injects a warning straight into the LLM’s context window when it’s about to step on another session’s toes. Optional Slack mirror so I can watch the timeline from my phone. MIT, single binary, no runtime deps. Repo: github.com/zolty-mat/claude-session-guard.
The actual problem
Parallel Claude sessions are a multiplier. I can have one session refactoring the signal worker while another is drafting a blog post while another is chasing an alert. The throughput feels unreasonable in the best way — until something breaks in a way that only ever breaks with parallelism.
Here’s the specific failure that made me finally sit down and fix this:
- Tab A is deep in a refactor of `vix-signal-worker/worker.py`. It's halfway through restructuring the alpha rule evaluator. Changes are on disk, uncommitted.
- Tab B, in the same repo, gets asked to "clean up the docstrings across the worker module."
- Tab B reads `worker.py`, applies a `Write` with the new docstrings — and silently overwrites Tab A's in-flight restructure.
- Tab A runs a test; it fails in ways that don't match anything Tab A did. It spends twenty minutes diagnosing its own "mistake."
Tab A was not wrong. Tab B had no idea Tab A existed.
This is a coordination problem, not a model problem. Claude Code’s runtime is sandboxed to a single session — sessions don’t share state, don’t see each other’s tool calls, don’t know they’re siblings. From each session’s perspective it’s alone with the filesystem.
The three options I considered
1. Git worktrees
The obvious answer: give each session its own worktree, never have them touching the same files. parallel-cc does exactly this.
This works great for greenfield parallel features. It does not work when I actually want multiple tabs coordinating on the same thing — like Tab A refactoring code while Tab B is next door writing tests for it. I want them to touch the same files, I just want them to know about it.
2. Claude’s built-in Agent Teams
Anthropic shipped a flock()-based advisory locking system in the native Agent Teams feature (experimental, February 2026). It’s the right direction. But it assumes non-overlapping work ownership — you pre-declare which agent owns which area. In my workflow that ownership boundary doesn’t exist; I’m the one routing tasks in real time, so there’s nothing to pre-declare.
3. Visibility tools
claude-session-manager, Claude’s own /sessions command, the Conductor app. All of them show you what’s running. None of them prevent two things from running into each other.
I wanted something closer to advisory file locking — but at the granularity of “this specific file is being touched by another session right now”, surfaced to the LLM before it writes.
What I actually built
One Go binary. Three hooks. One state directory. Optional Slack mirror.
~/.local/share/claude-session-guard/
├── config.env # SLACK_BOT_TOKEN, SLACK_CHANNEL_ID (both optional)
├── events.log # append-only ops log
└── state/
├── abc12345.json # session A — claims, edit count, cwd, branch
├── def67890.json # session B
└── fed01234.json # session C
The three hooks:
| Event | What it does |
|---|---|
| `SessionStart` | Writes a new state file for the session. Optionally posts a 🟢 thread header to Slack. |
| `PreToolUse` (`Edit\|Write\|NotebookEdit`) | Scans other sessions' state files for a claim on the same `file_path` within the last 10 minutes. If it finds one, returns a `hookSpecificOutput.additionalContext` block — which Claude Code injects straight into the LLM's context before the tool call runs. |
| `SessionEnd` | Removes the state file. Optionally closes the Slack thread with a ✅ and a session summary. |
The load-bearing trick is additionalContext. That’s the hook field that lets you whisper a message to the model in-line with its tool call. So instead of blocking the edit (hard locks are a nightmare with LLMs — they’ll just retry differently), the guard gives Claude a heads-up like:
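Emitting that whisper is just a JSON blob on the hook's stdout. A sketch of the shape, assuming the `hookSpecificOutput.additionalContext` field names as described in this post; check the Claude Code hooks documentation for the authoritative schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hookOutput mirrors the post's description of the PreToolUse hook
// response. Treat the exact field layout as an assumption.
type hookOutput struct {
	HookSpecificOutput struct {
		HookEventName     string `json:"hookEventName"`
		AdditionalContext string `json:"additionalContext"`
	} `json:"hookSpecificOutput"`
}

// warn packages a concurrency message into the hook's stdout format.
func warn(msg string) string {
	var out hookOutput
	out.HookSpecificOutput.HookEventName = "PreToolUse"
	out.HookSpecificOutput.AdditionalContext = msg
	b, _ := json.Marshal(out)
	return string(b)
}

func main() {
	fmt.Println(warn("⚠️ CONCURRENCY WARNING: 1 other session claimed this file 3m ago"))
}
```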
⚠️ CONCURRENCY WARNING: 1 other Claude session(s) recently claimed src/auth.ts:
  a1b2c3d4 in backend-api (cwd /Users/me/code/backend-api) claimed this 3m ago
Consider pausing and coordinating before editing to avoid clobbering their work.
And Claude, 9 times out of 10, stops and surfaces it to me. “Another session is editing this file — do you want me to proceed?” That’s the entire win. Human-in-the-loop only when it matters.
Why a Go binary
Hooks run on every tool call. Latency budget is tight — a 200ms Python cold start is noticeable when you’re watching a fast session. The first version of this was in Python, and Slack posts kept blocking the hook path in ways that made edits feel laggy.
Rewrite to Go:
- Cold start ~5ms on an M3.
- Slack I/O off the hot path — the binary re-invokes itself in `bg-post` mode as a detached child (`Setsid: true`), hands off the payload via a temp file, and the parent returns immediately.
- State writes are atomic — write to `foo.json.PID.tmp`, then `os.Rename()` (POSIX rename-atomic). No half-written JSON files, ever.
- Single binary, no runtime deps. `go build`, `install -m 0755`, done.
That last point matters more than it sounds. I have Mac laptops, Linux VMs, and homelab worker nodes where I occasionally run Claude Code over SSH. go build on each, drop the binary somewhere on PATH, same behavior everywhere.
The stdin-pipe race that almost shipped
Writing the detached-child pattern was the one part I had to iterate on twice. Initial version:
cmd := exec.Command(binPath, "bg-post")
cmd.Stdin = bytes.NewReader(payload) // looks fine, right?
cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}
cmd.Start()
cmd.Process.Release()
This fails about 15% of the time with bg-post parse: unexpected end of JSON input. Why? Because exec.Cmd spawns an internal goroutine to copy from the io.Reader into the child’s stdin pipe. That goroutine runs on the parent process. The parent’s hook returns immediately (that’s the whole point), the Go runtime tears down, and the copy goroutine never finishes. The child wakes up on a closed-then-empty pipe and reads zero bytes.
The fix is to hand the payload to the child via a path the parent doesn’t own:
tmp, _ := os.CreateTemp("", "csg-bg-*.json")
tmp.Write(payload)
tmp.Close()
cmd := exec.Command(binPath, "bg-post", tmp.Name()) // child reads file, then unlinks it
cmd.Stdin = nil
cmd.Start()
cmd.Process.Release()
The child opens the file, reads it, unlinks it. No pipe, no race. This is one of those “obvious in hindsight” mistakes — you never notice when the parent is long-lived, only when it intentionally exits 500µs after spawning the child.
Slack mirror is the fun part
The LLM-facing additionalContext warning is the correctness feature. The Slack thread is the ambient-awareness feature, and honestly it’s the part I use most.
Every session gets its own thread. Green circle while live, grey headstone when GC’d after 24h of no activity, white checkmark when cleanly closed. Each edit posts ✏️ editing vix-signal-worker/worker.py. Conflicts post ⚠️ conflict with 1 other session(s). I can glance at my phone from the couch downstairs and see exactly what my upstairs sessions are doing.
It also exposed a pattern I didn’t know I had: most of my “parallel” sessions are actually just me context-switching at 3-5 minute intervals, not truly concurrent. The conflict rate is low — maybe once or twice a day. But when it hits, it’s expensive enough (twenty minutes of debugging phantom test failures) that the guard has paid for itself a dozen times over already.
Slack is opt-in. If you don’t set SLACK_BOT_TOKEN, the guard runs local-only and the LLM still gets the warnings. It’s a progressive-enhancement design — one binary, two modes.
Some honest limits
- It’s advisory. If you tell Claude “yes, overwrite anyway,” it will. I want this. Hard locks would break workflows where one session is intentionally taking over from another.
- Single machine. State is local to one host. I have a design for a central Postgres sync — per-machine SQLite mirroring nightly to a Postgres in the homelab — but v1 is single-host and that’s fine for now.
- Slack free tier workspace cap. If you edit a lot, you can hit the cumulative message limit. I added a 60s-per-file throttle to mitigate. On paid workspaces or with a dedicated channel it’s a non-issue.
- MCP server ships in v0.1. I wrote "trivial to add" in the draft and then immediately added it. Any session can now call `list_sessions` and get a live view of every sibling before touching a file. I'll write it up properly once I have more runtime data on how Claude actually uses it — but the short version is: it works, and it's kind of eerie watching Claude proactively check on its siblings unprompted.
Grab it
Single command install if you have Go 1.22+:
git clone https://github.com/zolty-mat/claude-session-guard.git
cd claude-session-guard
make install
Wire it into ~/.claude/settings.json per the README. Start a session. claude-session-guard status should list it.
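For orientation, the wiring might look roughly like this — the hook event names match the ones discussed above, but the subcommand names (`session-start`, `pre-tool-use`, `session-end`) are my illustrative guesses; the README is authoritative:

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [ { "type": "command", "command": "claude-session-guard session-start" } ] }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write|NotebookEdit",
        "hooks": [ { "type": "command", "command": "claude-session-guard pre-tool-use" } ]
      }
    ],
    "SessionEnd": [
      { "hooks": [ { "type": "command", "command": "claude-session-guard session-end" } ] }
    ]
  }
}
```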
If you’re running parallel Claude Code sessions against shared code — or you’ve ever had one session silently overwrite another’s work — try it and tell me what breaks. GitHub issues are open. MIT license, PRs welcome.
Second pass: adversarial security review
After publishing the initial version I ran a proper security review — a sub-agent adversarial pass followed by govulncheck against the Go stdlib. Here’s what it found and what I fixed.
File permission issues
State files and the log file were being created with 0o644 — world-readable. On a shared machine that means any other user can list which files your sessions are editing, which repos you’re working in, and which branches. Not a credential leak, but a real privacy issue in a multi-user environment.
Fix: both now created with 0o600. One-line change, but important to get right.
// before
os.WriteFile(tmp, data, 0o644)
// after
os.WriteFile(tmp, data, 0o600)
bg-post accepts arbitrary file paths
bg-post is an internal subcommand the binary calls on itself for detached Slack I/O. It accepts a file path from os.Args[2], reads it, then calls os.Remove() on it. The design is intentional — you can’t use a stdin pipe when the parent exits immediately. But it means a malicious invocation like claude-session-guard bg-post /home/user/important-file would read and then delete an arbitrary file the user owns.
The binary isn’t SUID, so cross-user attacks aren’t possible. But same-user abuse (a script that invokes the binary with a crafted path) was a real attack surface.
Fix: validate the path is inside os.TempDir() before touching it.
// Trailing separator matters: a bare prefix check against "/tmp" would
// also accept "/tmp-evil/...".
tmpRoot := filepath.Clean(os.TempDir()) + string(os.PathSeparator)
if !strings.HasPrefix(filepath.Clean(fname), tmpRoot) {
	logMsg("bg-post: rejected non-temp path: %s", filepath.Base(fname))
	return
}
Config file world-readable warning
The binary reads config.env but can’t enforce its permissions — that’s up to the user and their umask. Instead of silently accepting a 0644 config file, the binary now logs a warning if it detects the file is group- or world-readable:
WARNING: /home/user/.local/share/claude-session-guard/config.env has permissions 644 — run: chmod 600 /home/...
This shows up in events.log, not stdout, so it doesn’t spam the hook output. But if you’re checking logs you’ll see it.
Four Go stdlib CVEs — fixed by bumping the toolchain
govulncheck found four CVEs in go1.26.1:
| CVE | Component | Impact |
|---|---|---|
| GO-2026-4866 | crypto/x509 | Auth bypass via case-sensitive name constraint handling |
| GO-2026-4947 | crypto/x509 | Unexpected work during chain building |
| GO-2026-4946 | crypto/x509 | Inefficient policy validation |
| GO-2026-4870 | crypto/tls | TLS KeyUpdate connection retention |
The auth bypass (GO-2026-4866) is the one that matters — it affects TLS certificate validation in the Slack HTTP client. All four are fixed in go1.26.2. Fix: bumped go.mod from go 1.22 to go 1.26.2, which triggers auto-download of the patched toolchain.
$ govulncheck ./...
No vulnerabilities found.
The lesson here isn’t that the code was insecure — it’s that govulncheck is fast (one command, fifteen seconds) and the stdlib moves faster than you expect. Worth running it before publishing anything that makes outbound TLS connections.
If you liked this, my Amazon affiliate links (tag zoltyblog07-20) are what keep the lights on at zolty.systems. Or don’t — this thing is MIT and I’d rather you run it.
