TL;DR

GitHub Copilot is more capable than most people give it credit for. I used it heavily – not just for autocomplete, but for multi-file edits, chat-driven debugging, and workspace-aware refactoring. After a year of intensive Copilot usage and a month with Claude Code Max ($100/month for the Max plan with Opus), I moved my primary workflow to Claude Code for infrastructure and backend work. The reason is not that Copilot cannot do these things – it is that Claude Code is faster and I can hand it a task and let it run without babysitting. Copilot still wins for inline code completion in the editor. Claude Code wins when I want to describe a goal and walk away while it executes.

What I Was Using Before

My setup for the past year:

  • GitHub Copilot ($19/month) for inline completion, multi-file edits via Copilot Chat, workspace-aware Q&A, and terminal command suggestions
  • Claude Pro ($20/month) for the web chat when I needed to think through architecture or debug complex problems
  • ChatGPT Plus ($20/month) for a second opinion and image generation
  • Manual terminal work for all kubectl, terraform, ansible, and git operations

That is $59/month across three subscriptions. Copilot was doing real work – I was using it for more than ghost text. Copilot Chat could explain code, suggest fixes across files, and generate test scaffolding. But the workflow still had a gap: Copilot could suggest changes, but I was the one running commands, checking logs, applying fixes, and iterating. The AI proposed, I executed. Every debugging loop required me to manually shuttle context between the editor, the terminal, and the AI.

What Claude Code Actually Is

Claude Code runs in your terminal (or as a VS Code extension) with direct access to your filesystem, shell, and git. When I say “debug why the digital signage CI pipeline is failing,” it does not ask me to paste the error. It runs the workflow, reads the logs, identifies the Docker Hub rate limit, checks the Dockerfile, and proposes a fix – then edits the file and runs the build again.

The mental model shift is not “dumb autocomplete vs smart agent.” Copilot is genuinely useful and I was getting real value from it. The shift is: Copilot helps you write code faster. Claude Code lets you describe what you want done and then does it – reads files, runs commands, edits code, runs it again, and iterates until it works. I can hand Claude Code a task like “fix the CI pipeline” and come back to a working commit. With Copilot, I was still the one driving every step.

The Skills System

Claude Code reads CLAUDE.md files at the repo root and .github/copilot-instructions.md for project context. But the real power is in the skills system. I have 30+ skill files across my repos that encode domain knowledge:

.claude/skills/
├── k3s-deployment/     # How to deploy to my cluster
├── k3s-debugging/      # Failure patterns and triage steps
├── docker-ecr/         # Build conventions, --provenance=false
├── terraform-infra/    # Proxmox + AWS module patterns
├── grafana-dashboards/ # ConfigMap-based dashboard creation
└── openclaw-operator/  # OpenClaw-specific operations

Each skill file is a prompt that gets injected when the task matches. When I say “deploy a new service,” Claude Code loads k3s-deployment and follows my exact checklist: namespace, deployment, service with the right selector labels, ingress with cert-manager annotation, ServiceMonitor for Prometheus, RBAC for the CI runner. It knows about the service selector trap (the app.kubernetes.io/component: web label that has caused four production incidents) because that is encoded in the skill.
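The post later notes that skills are just markdown files with prompts, so here is a trimmed sketch of what a k3s-deployment skill might contain. The checklist items are taken from this post; the file layout and wording are illustrative, not the actual skill file:

```markdown
# k3s-deployment

## Checklist for any new service
1. Namespace manifest (or reference an existing namespace)
2. Deployment with `strategy: Recreate` — single-replica Longhorn PVCs
   cannot survive a rolling update
3. Service selector must include `app.kubernetes.io/component: web` and
   match the Deployment labels exactly (this mismatch has caused four
   production incidents)
4. Ingress with the cert-manager `letsencrypt-prod` cluster-issuer annotation
5. ServiceMonitor scraping `/metrics` for Prometheus
6. RBAC for the CI runner
```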

Copilot has no real equivalent. Its copilot-instructions.md is a single always-on prompt, and its completions draw mostly on the files you have open — there is no task-scoped loading of domain knowledge. It does not know that my cluster requires the Recreate strategy because I use single-replica Longhorn PVCs. It does not know that MetalLB breaks if you specify metallb.universe.tf/address-pool: default. That knowledge lives in my skill files, and Claude Code reads the relevant one before writing a single line.

Memory Across Sessions

Claude Code has a persistent memory system that survives across conversations. It stores facts about my setup:

  • The cluster is amd64-only (Lima VM removed June 2025)
  • Longhorn is disabled on k3s-agent-4 (aging NVMe on pve4)
  • Prometheus and Grafana must use nfs-monitoring StorageClass, never Longhorn
  • AWS spend is ~$476/month and changes should be flagged

This means I do not re-explain my infrastructure every session. When I start a new conversation and say “add a new CronJob for log aggregation,” Claude Code already knows which namespace conventions I use, which storage classes are available, and which nodes have scheduling constraints.
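To make that concrete, here is a sketch of the kind of CronJob that request would produce. The nfs-monitoring StorageClass constraint is from this post; the namespace, image, and schedule are hypothetical placeholders:

```yaml
# Hypothetical CronJob following the conventions stored in memory.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-aggregation
  namespace: observability        # assumed namespace, for illustration
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: aggregator
              image: harbor.example.local/tools/log-aggregator:latest  # placeholder
              volumeMounts:
                - name: scratch
                  mountPath: /data
          volumes:
            - name: scratch
              persistentVolumeClaim:
                # Backed by nfs-monitoring StorageClass, never Longhorn
                claimName: log-aggregation-pvc
```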

Copilot starts fresh every time. There is no institutional memory.

The Cost Math

Tool                        Monthly Cost    What You Get
GitHub Copilot Individual   $19             Inline completion, chat (GPT-4o)
Claude Pro                  $20             Web chat, 5x Opus usage vs free
Claude Code Max (Opus)      $100            Terminal agent, Opus model, 20x usage
Claude Code Max (Sonnet)    $100            Terminal agent, Sonnet model, higher rate limits

I dropped Claude Pro and ChatGPT Plus when I switched to Claude Code Max. My total AI tooling cost went from $59/month to $100/month – a $41 increase. But the throughput difference is not 1.7x. It is closer to 5-10x for infrastructure tasks.

On March 21, I shipped work across five repositories in a single day: a 13,674-line stock trading platform, Harbor registry migration across 13 CI workflows, API key authentication for digital signage, inventory sell signals for a trading card tracker, and OpenClaw cost optimization. Every commit was co-authored with Claude Code. That volume of work across that many codebases would have taken me a week with copy-paste workflows.

The $41/month pays for itself in the first hour of a serious debugging session.

Where Claude Code Wins

Multi-File Refactoring

“Rename the trade_bot namespace to stock-automation across all manifests, Terraform files, and CI workflows.” Copilot Chat can edit the files it can see, but it cannot sweep an entire repo like this. Claude Code greps the codebase, identifies every reference, edits each file, and shows you the diff.
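Under the hood this is roughly a grep-and-sed loop. A minimal illustration using a scratch directory (GNU sed syntax; the real run touches manifests, Terraform, and workflow files in place):

```shell
# Set up a scratch file standing in for a real manifest.
mkdir -p /tmp/rename-demo/manifests
printf 'namespace: trade_bot\n' > /tmp/rename-demo/manifests/app.yaml

# Find every file referencing the old name and rewrite it in place.
grep -rl 'trade_bot' /tmp/rename-demo \
  | xargs sed -i 's/trade_bot/stock-automation/g'

# Verify nothing was missed.
grep -r 'stock-automation' /tmp/rename-demo
```

The difference is that the agent also reviews each hit for context (a string in a comment is not the same as a namespace field) before applying the edit.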

Infrastructure Debugging

“The Grafana dashboard is not loading data.” Claude Code checks the pod status, reads the ConfigMap, verifies the Prometheus datasource URL, checks the ServiceMonitor selector labels, tails the Grafana logs, and identifies that the datasource was pointing to the wrong Prometheus service name after a Helm upgrade. It fixes the ConfigMap and restarts the pod. Total time: 90 seconds. With Copilot, I would have done every one of those steps manually.

Writing Tests

“Add unit tests for the sell signals engine.” Claude Code reads the source file, understands the five strategy functions, writes tests covering normal cases, edge cases (insufficient data, boundary conditions, null values), mocks the price history provider, and runs the test suite to verify they pass. It wrote 25 tests in one pass. With Copilot, I would write one test at a time with inline completion, manually thinking through each edge case.
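The real engine is not shown in this post, so here is a toy stand-in for one strategy function plus the shape of the edge-case tests that get generated — normal case, insufficient data, and null input:

```python
# Hypothetical stand-in for one of the five strategy functions.
def moving_average_sell_signal(prices, window=3):
    """Sell when the latest price drops below the trailing average."""
    if prices is None or len(prices) < window + 1:
        return False  # insufficient data: no signal rather than an exception
    trailing_avg = sum(prices[-window - 1:-1]) / window
    return prices[-1] < trailing_avg

# The kind of tests Claude Code writes in one pass:
def test_sell_on_drop():
    assert moving_average_sell_signal([10, 10, 10, 5]) is True

def test_no_signal_on_rise():
    assert moving_average_sell_signal([10, 10, 10, 15]) is False

def test_insufficient_data():
    assert moving_average_sell_signal([10, 10]) is False

def test_null_input():
    assert moving_average_sell_signal(None) is False
```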

Kubernetes Manifest Generation

“Deploy a new Flask app called rag-bridge to the polymarket-lab namespace.” Claude Code generates the complete manifest following my conventions: namespace reference, deployment with Recreate strategy, service with app.kubernetes.io/component: web selector, ingress with letsencrypt-prod cluster issuer, ServiceMonitor scraping /metrics, and a CI workflow for building and pushing to Harbor. It knows all of this from the skill files.
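A condensed sketch of the Deployment and Service portion of that output. The labels, strategy, and service/namespace names follow the conventions in this post; the image path and port are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rag-bridge
  namespace: polymarket-lab
  labels:
    app.kubernetes.io/name: rag-bridge
    app.kubernetes.io/component: web
spec:
  replicas: 1
  strategy:
    type: Recreate              # single-replica Longhorn PVCs cannot roll
  selector:
    matchLabels:
      app.kubernetes.io/name: rag-bridge
      app.kubernetes.io/component: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rag-bridge
        app.kubernetes.io/component: web
    spec:
      containers:
        - name: rag-bridge
          image: harbor.example.local/polymarket-lab/rag-bridge:latest  # placeholder
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: rag-bridge
  namespace: polymarket-lab
spec:
  selector:
    app.kubernetes.io/name: rag-bridge
    app.kubernetes.io/component: web   # the selector trap the skill guards against
  ports:
    - port: 80
      targetPort: 5000
```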

Where Copilot Still Wins

Inline Completion Speed

When I am typing Python in VS Code, Copilot’s ghost text appears in milliseconds. It completes function signatures, dictionary keys, f-string interpolations, and boilerplate patterns faster than I can think about them. Claude Code does not do inline completion at all – it is a different interaction model. You describe a task, it executes. There is no ghost text while you type.

I kept Copilot active alongside Claude Code for the first two weeks. Eventually I dropped it because the $19/month was not justified when Claude Code handles the harder tasks and VS Code’s built-in IntelliSense covers basic completion. But I want to be clear: Copilot was earning its keep before Claude Code arrived. I was not just using it for autocomplete – I was using Copilot Chat for multi-file explanations, test generation, and workspace-level questions about my codebase. It is a legitimately powerful tool.

Simple, Single-File Edits

“Add a try/except around this database call.” Copilot handles this instantly with inline completion. Claude Code would read the file, make the edit, and return – correct, but slower for something this trivial. Use the right tool for the scale of the task.
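For reference, the entire edit in question is a few lines. A before/after sketch — `fetch_user`, `DatabaseError`, and the client object are hypothetical stand-ins, not from any real codebase in this post:

```python
import logging

logger = logging.getLogger(__name__)

class DatabaseError(Exception):
    """Stand-in for the driver's real exception type."""

def fetch_user(db, user_id):
    # The requested edit: wrap the call, log, and degrade gracefully
    # instead of letting the exception crash the request handler.
    try:
        return db.get(user_id)
    except DatabaseError:
        logger.exception("lookup failed for user %s", user_id)
        return None
```

Copilot produces exactly this from the ghost-text suggestion, in place, without the round trip of an agent reading the file.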

Lower Friction for Quick Questions

Copilot Chat in the editor sidebar is fast for “what does this function do” or “explain this error.” You highlight code, ask a question, get an answer. Claude Code can do this too, but the interaction model is heavier – it reads the file, thinks, responds. For quick explanations while you are in the middle of editing, Copilot’s in-editor chat is more natural.

What Surprised Me

The Context Window Matters More Than the Model

Claude Code with Opus has a massive context window and it uses it aggressively. It reads your CLAUDE.md, skill files, relevant source files, test files, and git history before generating a response. The quality difference between “AI that read your repo instructions” and “AI that is guessing from a code snippet you pasted” is enormous. Most of the improvement I attribute to Claude Code is not model quality – it is context quality.

The Approval Flow Is the Right Default

Claude Code asks before running destructive commands. It shows you the diff before writing files. It asks before pushing to remote. This felt annoying for the first day and essential by the second. When an AI agent has shell access to your infrastructure, you want a confirmation step before kubectl delete or git push --force. The permission system is configurable – you can auto-approve reads and file edits but require approval for shell commands. I landed on auto-approve for reads and edits, manual approval for anything that touches the cluster or git remote.
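My policy roughly translates to a permissions block in the project settings file. This is a sketch assuming the `.claude/settings.json` format with rule strings — check the current Claude Code docs for the exact rule syntax before copying it:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Edit",
      "Bash(grep:*)",
      "Bash(ls:*)"
    ],
    "ask": [
      "Bash(kubectl:*)",
      "Bash(git push:*)",
      "Bash(terraform:*)"
    ]
  }
}
```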

It Makes Mistakes Differently Than Humans

Claude Code’s failure mode is not “wrong answer.” It is “correct answer to a slightly different question.” It will write a perfect Kubernetes manifest that uses RollingUpdate instead of Recreate because that is the Kubernetes default, ignoring my documented convention. It will add a ServiceMonitor that scrapes the wrong port because it inferred the port from the Deployment instead of reading the application’s metrics configuration. The skill files and CLAUDE.md mitigate this, but you still need to review every diff. It is a junior engineer who reads the docs, not a senior engineer who knows the system.

The Setup That Works

My current configuration:

  1. Claude Code Max ($100/month) as the primary development tool for all multi-file tasks, debugging, deployment, and refactoring
  2. VS Code with Claude Code extension for the IDE integration (inline diff view, accept/reject changes)
  3. CLAUDE.md + skill files in every repo encoding project conventions, anti-patterns, and deployment checklists
  4. Memory system storing user preferences, project state, and operational knowledge across sessions
  5. No Copilot, no ChatGPT Plus – consolidated to a single AI subscription

The key insight: both tools are good at understanding code. The difference is agency. Copilot understands your workspace and gives you great suggestions – but you are still the executor. Claude Code understands your workspace, your conventions, your past failures, and then goes and does the work. For a solo engineer running a homelab with 15+ applications across 7 Kubernetes nodes, the ability to say “fix this” and have it actually fixed – not just suggested – is the difference. It is not that Copilot is bad. It is that Claude Code is faster to hand things off to, and I can let it run with less supervision.

What I Would Change

The Claude Code Max pricing is steep for hobbyists. $100/month is fine if you are shipping production software daily. It is hard to justify if you code on weekends only. I would like to see a usage-based tier – the API pricing model applied to Claude Code, where you pay for what you consume rather than a flat rate.

The skill system could also be shareable. Right now, every user writes their own skill files. A community registry of skills for common platforms (k3s, Terraform, Ansible, Django, Rails) would accelerate onboarding significantly. The skills are just markdown files with prompts – there is no technical barrier to sharing them.

Finally, Claude Code needs better integration with CI/CD. It can run tests locally, but it cannot trigger a GitHub Actions workflow and wait for the result. I end up switching to the terminal to run gh workflow run and check status manually. An MCP server for GitHub Actions would close this loop.

Bottom Line

Copilot is a great tool. I used it intensively for a year and got real value from it – not just autocomplete, but chat-driven debugging, multi-file explanations, and test scaffolding. If your workflow is primarily writing and editing code in VS Code, Copilot is excellent at that.

Claude Code is a different category. The advantage is not intelligence – both tools understand code well. The advantage is speed and autonomy. I can describe a goal, let Claude Code run with it, and come back to a working result. For a solo engineer managing infrastructure across multiple repos, that ability to delegate and let it execute end-to-end is what justifies the $100/month. It is the best money I spend on developer tooling.