## TL;DR
I added a standardized environment.instructions.md file to every repository in my workspace. It’s a simple Markdown table of tool versions plus a few workflow snippets. AI assistants pick it up automatically, and they’ve stopped suggesting commands for tools or versions I don’t have. The whole thing took less than an hour to write out and propagate.
## The problem
I run GitHub Copilot heavily across several projects — homelab infrastructure, a blog, a few Python services, and some supporting tooling. The AI context setup (those copilot-instructions.md files I wrote about here) covers what each project does and what its conventions are. What it doesn’t cover is what I’m actually running when I type commands in a terminal.
The gap showed up constantly:

- Copilot would suggest `helm upgrade --install` with flags that exist in Helm v3.x but not v4.x (I run v4.1.1).
- It would generate `kubectl` commands with API versions that my cluster’s API server had already deprecated.
- It would suggest `pip install` when my project uses `pyenv` and a `.venv`, so the package would land in the wrong interpreter.
- It suggested `docker buildx build --platform linux/arm64` for a cluster that only runs amd64 nodes.
None of these are catastrophic — I’d catch them, correct the AI, and move on. But it’s friction. Every correction costs a round trip, and over a full day of development it adds up. The AI was making confident suggestions based on assumed defaults rather than my actual environment.
The deeper issue: each repository had good project-level context, but no repository knew what version of Python or kubectl or Terraform I was actually running on this machine.
## The solution: one file, every repo
I started with a simple idea. If I need the AI to know my environment in every repo, put the environment in every repo. Not in a README, not buried in a comment — in a file that the AI assistant reads as an instruction file before anything else.
In VS Code with GitHub Copilot, that mechanism is the `.github/instructions/` directory. Any `*.instructions.md` file in that directory with valid frontmatter gets picked up as an instruction file for Copilot. The key frontmatter property is `applyTo` — set it to `"**"` and it applies to every file in the workspace.
The result is an `environment.instructions.md` that looks like this:

````markdown
---
applyTo: "**"
---

# Local Development Environment

> Last updated: 2026-03-01

## Tool Versions (macOS arm64 — Apple Silicon)

| Tool | Version | Notes |
|------|---------|-------|
| kubectl | v1.35.2 | Connects to home k3s cluster |
| helm | v4.1.1 | |
| Python | 3.14.3 | |
| pyenv | 2.6.23 | Per-project Python pinning |
| Docker | 29.2.1 | Image builds (amd64 only) |
| Terraform | v1.13.3 | |
| AWS CLI | 2.34.0 | |

## Cluster Identity

- 7-node k3s cluster (amd64 Debian 13 VMs on Proxmox)
- The cluster is amd64-only — do not suggest arm64 builds

## Common Workflows

```bash
# Build for k3s — always amd64, always --provenance=false
docker buildx build --platform linux/amd64 --provenance=false -t myapp .

# Verify kubectl context before applying anything
kubectl config current-context
```
````
That’s the core of it. A version table, a few critical constraints, and the workflows that always trip up AI tools.
## What goes in the file
After iterating across several projects, the file has settled into three sections.
**Tool versions table.** A simple Markdown table with the tool name, the exact version, and any note worth knowing. I include notes like “Image builds (amd64 only)” next to Docker, and “Per-project Python pinning” next to pyenv — those notes carry context that a bare version number doesn’t.

**Environment constraints.** Short, imperative statements about facts the AI must not contradict. The amd64-only cluster constraint is the most important one. I also include notes about which package manager to use for Python, how state is managed in Terraform (S3 backend, never local), and the ECR pull secret expiry timing.

**Common workflows.** A small number of frequently-used command patterns. Not exhaustive — just the ones where getting it wrong is expensive or where the correct form isn’t obvious. For Docker builds, the `--provenance=false` flag matters. For Kubernetes applies, dry-run first. These live as copyable code blocks.
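For the Kubernetes case, the entry in the file is just another copyable block, something like this (the manifest path is a placeholder):

```bash
# Validate against the live API server before mutating anything
kubectl apply --dry-run=server -f manifests/

# Only apply once the dry run is clean
kubectl apply -f manifests/
```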
I deliberately kept each file short. The goal is signal density, not completeness. A file that takes 30 seconds to scan is more likely to stay current than one that aspires to document everything.
## Propagating it across repositories
My workspace includes several independent projects, each its own git repository. Each one already had a .github/instructions/ directory (or I created one). Propagation was straightforward: write one well-considered base file, then adapt it per project where the tool set differs.
Some projects don’t use Helm or kubectl at all — those rows come out. A frontend-heavy project gets Node and npm versions added. A Python-only service doesn’t need the cluster identity section. The shared backbone (OS, architecture, Git, Homebrew, AWS CLI) stays the same across all of them because I’m always working on the same Mac.
The `applyTo: "**"` frontmatter means every file in that workspace pulls the instruction in. When I open a Terraform file, the AI knows I’m on Terraform v1.13.3 and using an S3 backend. When I open a Python file, it knows I’m using `pyenv` and `.venv`. It doesn’t have to guess, and I don’t have to correct it.
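The propagation step itself is scriptable. A minimal sketch, assuming a flat workspace layout with one directory per repo (adapt the paths to your own layout):

```shell
# propagate_env_file BASE ROOT
# Copy the canonical environment.instructions.md into every repo under ROOT,
# creating the instructions directory where it is missing.
# A sketch; the workspace layout is an assumption.
propagate_env_file() {
  base="$1"   # path to the base environment.instructions.md
  root="$2"   # workspace root containing one subdirectory per repo
  for repo in "$root"/*/; do
    mkdir -p "${repo}.github/instructions"
    cp "$base" "${repo}.github/instructions/environment.instructions.md"
  done
}

# Example (hypothetical paths):
# propagate_env_file "$HOME/dotfiles/environment.instructions.md" "$HOME/src"
```

Per-project adaptations (dropping Helm rows, adding Node) still happen by hand afterwards; the script only handles the shared backbone.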
## Results
The measurable change is fewer prompt correction cycles. Before, maybe one in four Copilot suggestions involving tool-specific syntax needed a follow-up “actually, I’m on X version, try again.” Now that number is close to zero for the tool-specific cases covered in the file.
The subtler benefit is that the suggestions feel more grounded. When I ask for a kubectl command, it comes back with valid syntax for my cluster’s API version. When I ask for a Docker build command, --provenance=false is already there. The AI isn’t more capable — it just has the context it needs.
The amd64 constraint has been particularly reliable. I historically wasted time because Docker on Apple Silicon defaults to building for arm64 unless you say otherwise. Now that constraint is in the instruction file for every project that touches Docker, and I haven’t seen a wrong-arch suggestion since.
## Keeping it current
The file has a `> Last updated:` timestamp at the top. That’s the extent of my version control process for it — when I upgrade a tool, I update the date and the version number. It takes about ninety seconds.
I update it reactively: when I notice a suggestion was wrong because a version changed, I update the file immediately and don’t let it drift. So far the file has needed updating about once every two weeks, which is roughly how often I update something significant in the local toolchain.
The bigger risk isn’t version drift — it’s forgetting to add a new tool when I start using one. I’ve missed that a couple of times and gotten suggestions for the wrong package manager. My solution is to add the tool to the file at the same time I install it, before I start using it in any project. That’s easy to remember when it’s a fresh install and you’re context-switching into the project anyway.
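One way to take “remember to update it” out of the loop entirely is to regenerate the table from whatever is actually installed. A rough sketch (the tool names and version commands are illustrative, not from my actual setup):

```shell
# print_tool_row NAME VERSION_CMD
# Emit one Markdown table row if NAME is installed, nothing otherwise,
# so newly installed tools appear as soon as the script is re-run.
print_tool_row() {
  tool="$1"
  version_cmd="$2"
  if command -v "$tool" >/dev/null 2>&1; then
    version=$(eval "$version_cmd" 2>/dev/null | head -n 1)
    printf '| %s | %s | |\n' "$tool" "$version"
  fi
}

# Example usage — regenerate the timestamp and the table body:
# printf '> Last updated: %s\n\n' "$(date +%F)"
# print_tool_row python3   'python3 --version | cut -d" " -f2'
# print_tool_row terraform 'terraform version'
```

The Notes column still needs a human, which is fine — the notes are the part worth thinking about anyway.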
## Lessons learned
- **Put constraints before workflows.** I originally had the version table first, with constraints buried somewhere in the middle. I moved the hard constraints (amd64-only, never local Terraform state) to a dedicated callout section right after the table. Reordering made a visible difference in how reliably the AI honored them.
- **Shorter sections get read.** I drafted a longer “Common Workflows” section with eight or nine examples. The AI treated it as reference material and pulled from it selectively. Cutting it to three or four critical examples made each one more reliably applied.
- **Notes in the table column matter.** “Docker | 29.2.1 | Image builds (amd64 only)” does more work than “Docker | 29.2.1”. The note is what gets incorporated into suggestions, not just the version number.
- **One file per repo, not one shared file.** I considered symlinking a single source-of-truth file across all repos. I rejected it because different projects have different slices of the toolchain, and a single shared file would either be too general to be useful or too long to be scanned. Per-project files with shared boilerplate work better.
## What’s next
The instruction file approach works well inside VS Code. The next thing I want to try is generating a machine-readable version — a devenv.json or similar — that can be consumed by other tools. Not because I need it today, but because the same information (exact tool versions, constraints) is useful in CI pipelines and onboarding scripts, not just AI prompts.
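As a rough sketch of that export, a few lines of shell can already turn “name version” pairs into a flat JSON object. The `devenv.json` shape here is invented, just the simplest thing that could work, not an existing spec:

```shell
# tools_to_json
# Read "name version" lines on stdin and emit a flat JSON object,
# suitable for a first cut at a machine-readable devenv.json.
tools_to_json() {
  printf '{'
  sep=""
  while read -r name version; do
    printf '%s"%s": "%s"' "$sep" "$name" "$version"
    sep=", "
  done
  printf '}\n'
}

# Example:
# printf 'kubectl v1.35.2\nhelm v4.1.1\n' | tools_to_json
# → {"kubectl": "v1.35.2", "helm": "v4.1.1"}
```

A CI job could diff this output against the committed file and fail when the two drift apart.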
I’m also watching whether future Copilot workspace features make the `applyTo: "**"` mechanism redundant or replaceable. For now, it’s the right hook point and it’s working. When something better comes along I’ll migrate.
If you’re running AI assistants across a multi-repo workspace and noticing a pattern of version-mismatch corrections, this is a cheap intervention worth trying. Half an hour upfront, then marginal maintenance after.