Self-Hosted AI on a 24GB GPU: OpenClaw + Ollama Setup Guide for Windows
TL;DR: You have a 24GB VRAM GPU. You want a private, self-hosted AI assistant that rivals ChatGPT – no subscriptions, no data leaving your machine. This guide walks you through setting up Ollama (a local model runtime) and OpenClaw (an AI gateway with a web UI) on Windows using Docker Desktop. I also cover which models actually fit in 24GB, which ones don't despite the marketing, and how to pick models for coding, reasoning, creative writing, and general use. ...
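As a rough sanity check before downloading anything, you can estimate whether a quantized model's weights fit in 24GB. This is a simplified sketch (the `estimate_vram_gb` helper and the flat 2GB overhead allowance for KV cache and runtime buffers are my own assumptions, not figures from Ollama); real usage varies with context length and quantization scheme:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: quantized weight size plus a flat
    allowance for KV cache and runtime buffers (assumption)."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

# A 32B model at ~4.5 bits/weight (Q4_K_M-style quant):
print(round(estimate_vram_gb(32, 4.5), 1))   # ~18.8 GB -- fits in 24GB

# A 70B model at the same quant:
print(round(estimate_vram_gb(70, 4.5), 1))   # ~38.7 GB -- does not fit
```

The takeaway: on 24GB, 4-bit quants top out around the ~30B parameter class; 70B-class models need heavier quantization, CPU offload, or a second GPU.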