Claude Code vs GitHub Copilot

Why I Switched from GitHub Copilot to Claude Code Max

TL;DR GitHub Copilot is more capable than most people give it credit for. I used it heavily – not just for autocomplete, but for multi-file edits, chat-driven debugging, and workspace-aware refactoring. After a year of intensive Copilot usage and a month on the Claude Code Max plan ($100/month, which includes Opus), I moved my primary workflow to Claude Code for infrastructure and backend work. The reason is not that Copilot cannot do these things – it is that Claude Code is faster and I can hand it a task and let it run without babysitting. Copilot still wins for inline code completion in the editor. Claude Code wins when I want to describe a goal and walk away while it executes. ...

March 22, 2026 · 11 min · zolty
AI coding governance framework for engineering teams

Governing AI Coding Tools Across an Engineering Team

TL;DR AI coding tools are now default behavior for most developers, not an experiment. If you manage a team and you haven’t formalized this, you have ungoverned spend, security exposure, and inconsistent behavior happening right now. The fix isn’t to take the tools away — it’s to pick one, pay for it centrally, encode your policies into the AI itself using instruction files and skills, and govern the control folder rather than individual usage. Here’s the framework I’d implement. ...
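One way to picture the "control folder" approach is a repository layout along these lines. This is a hypothetical sketch, not a prescribed standard; the folder and file names follow GitHub Copilot's customization conventions, and the comments are illustrative:

```
.github/
  copilot-instructions.md   # team-wide policy: approved models, data-handling rules
  instructions/             # path-scoped rules, changed only via reviewed PRs
  skills/                   # shared task playbooks the AI can load on demand
```

Governance then reduces to code review on this one folder: a policy change becomes a pull request, not a Slack announcement.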

March 3, 2026 · 10 min · zolty
Two AIs managing a GitHub repository via issues and pull requests

Two AIs, One Codebase: Using Local Copilot to Direct GitHub Copilot via Issues and PRs

TL;DR A 109-day project plan. One day of actual work. Eight hours of active pipeline time. The key was treating planning and implementation as two separate AI-driven phases: spend an evening getting the plan right by routing it through multiple models, then let Claude Sonnet 4.6 implement it autonomously overnight via GitHub Copilot’s cloud agent while you sleep. This is the full playbook — planning phase included.

The Project

This came out of building dnd-multi, a full-stack AI Dungeon Master platform: FastAPI backend, Next.js 15 frontend, a Discord bot, LiveKit voice, and AWS Bedrock integration. Seven feature phases, a plan projected to take until June 19. ...

March 2, 2026 · 11 min · zolty

Reference: DnD Multi — Project Plan (v1.0)

Context: This is the real project plan for dnd-multi, a full-stack AI Dungeon Master platform. It was generated by Claude Opus 4.6 during Phase 0 of the LLM GitHub PR workflow — synthesizing gap analysis from four different models into a structured execution document. Claude Sonnet 4.6 then used this plan overnight to open 24 PRs and ship all seven phases. Personal identifiers have been removed. Technical content is verbatim.

Milestone Timeline

| Milestone | Target Date | Deliverable |
|---|---|---|
| M0 — Platform Stable | 2026-03-13 | All broken deps fixed, migrations applied, Tier 2 lore generating, smoke tests passing |
| M1 — First Playable Session | 2026-04-03 | Turn structure live, player identity in DM prompt, hot phrase + /dm command working |
| M2 — Full Action Flow | 2026-04-24 | Action confirmation, non-active player queue, player votes operational |
| M3 — IC/OOC + Personality | 2026-05-08 | Meta-mode detection, in-character assumption, DM personality tuning deployed |
| M4 — Content & Reporting | 2026-05-22 | Book/media content generation live, /report + /flag system in admin dashboard |
| M5 — Combat Tracker | 2026-06-12 | Live HP tracker UI, [COMBAT:] directives wired to death protection |
| M6 — Feature Complete v1.0 | 2026-06-19 | Spell reference Discord command, shareable campaign invitation links |

Current State Summary

The platform has a solid full-stack foundation with all core systems implemented and deployed to the home k3s cluster. The gap is game experience polish — the AI DM has no awareness of whose turn it is, doesn’t distinguish in-character from out-of-character speech, lacks an action confirmation flow, and has no mechanism for players to report misbehavior. These are the features that make the difference between a tech demo and a playable game. ...

March 2, 2026 · 18 min · zolty
A terminal prompt with tool version numbers lined up neatly

Environment manifests for AI assistants across every repo

TL;DR I added a standardized environment.instructions.md file to every repository in my workspace. It’s a simple Markdown table of tool versions plus a few workflow snippets. AI assistants pick it up automatically, and they’ve stopped suggesting commands for tools or versions I don’t have. The whole thing took less than an hour to write out and propagate.

The problem

I run GitHub Copilot heavily across several projects — homelab infrastructure, a blog, a few Python services, and some supporting tooling. The AI context setup (the copilot-instructions.md files I wrote about in an earlier post) covers what each project does and what its conventions are. What it doesn’t cover is what I’m actually running when I type commands in a terminal. ...
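As a rough illustration of what such a manifest might contain (the tool names, version numbers, and notes here are placeholders, not my actual stack):

```markdown
# environment.instructions.md

| Tool    | Version | Notes                           |
|---------|---------|---------------------------------|
| kubectl | 1.31.x  | cluster CLI, talks to k3s       |
| helm    | 3.16.x  | used for all chart installs     |
| python  | 3.12    | service runtimes, not system pip |
```

Because it is plain Markdown, the same file is equally readable by humans skimming the repo and by any assistant that ingests instruction files.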

March 1, 2026 · 8 min · zolty
GitHub Copilot setup guide with AI skills and memory

Getting Started with GitHub Copilot: What Actually Works

TL;DR A $20/month GitHub Copilot subscription gives you Claude Sonnet 4.6, GPT-4o, and Gemini inside VS Code. Out of the box it’s useful. With a proper instruction setup — a copilot-instructions.md file, path-scoped rules, and skill documents — it becomes something you actually rely on. Most of the posts on this blog were built with this toolchain, mostly in the context of my k3s cluster, but the patterns apply anywhere. This is how I have it set up. ...

March 1, 2026 · 12 min · zolty
AI memory system architecture

Building an AI Memory System: From Blank Slate to 482 Lines of Hard-Won Knowledge

TL;DR The .github/copilot-instructions.md file started as 10 lines of project description and grew into a 99-line “operating system” for AI assistants. Then it split: failure patterns moved into docs/ai-lessons.md (now 482 lines across 20+ categories), and file-type-specific rules moved into .github/instructions/ with applyTo glob patterns. The same template structure was standardized across 5 repositories. This post traces the three generations of AI instruction architecture and shows how every production incident permanently improves AI reliability. ...
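The applyTo mechanism mentioned above follows VS Code's instruction-file format: a Markdown file under .github/instructions/ with a glob pattern in its front matter. A minimal sketch, with the rule text invented purely for illustration:

```markdown
---
applyTo: "**/*.py"
---
All functions carry type hints. Never suggest bare `except:` clauses.
```

The glob keeps file-type rules out of the global instruction file, so Python guidance only spends context when a Python file is actually in play.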

February 26, 2026 · 11 min · zolty
AI context window audit

When Your AI Memory System Eats Its Own Context Window

TL;DR The AI memory system I built three weeks ago started causing the problem it was designed to solve: context window exhaustion. Five generic Claude skills — duplicated identically across all 5 repositories in my workspace — consumed 401KB (~100K tokens) of potential context. The gh-cli skill alone was 40KB per copy, accounting for 42% of all skill content. I ran a full audit, deleted 25 duplicate files, and documented the anti-pattern to prevent recurrence. ...

February 23, 2026 · 6 min · zolty
AI-assisted infrastructure development

AI-Assisted Infrastructure: Claude, Copilot, and the Memory Protocol

TL;DR Two weeks of building a production Kubernetes cluster with AI pair programming. Claude Opus 4.6 handles complex multi-step infrastructure work via the CLI. GitHub Copilot provides inline code completion in VS Code. AWS Bedrock (Nova Micro, Claude Sonnet 4.5) powers runtime AI services inside the cluster. The key discovery: AI tools without persistent memory are dangerous. Every session starts from zero. The same bugs get recreated, the same anti-patterns get suggested, the same cluster-specific constraints get forgotten. The solution is the “Memory Protocol” – a set of documentation files the AI reads before every session and updates after every discovery. ...
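A minimal sketch of what the Memory Protocol file set could look like, using the file names described elsewhere on this blog; the comments summarize the cycle, not a fixed format:

```
.github/copilot-instructions.md   # read before every session: conventions, constraints
docs/ai-lessons.md                # updated after every discovery: failure patterns, fixes
```

The cycle is read-before, update-after: each incident appends a lesson, so the next session starts already knowing about it.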

February 22, 2026 · 9 min · zolty

Affiliate Disclosure: Some links on this site are affiliate links (Amazon Associates, DigitalOcean referral). As an Amazon Associate, I earn from qualifying purchases. This does not affect the price you pay or my editorial independence — I only recommend products and services I personally use and trust.