TL;DR

Jellyseerr already knows what I have. Radarr and Sonarr already know how to find things. The missing piece was a front door that understood intent instead of requiring me to search for specific titles. I wired Jellyseerr’s REST API to Claude and gave it a system prompt that knows my taste profile. Now I can say “download 100GB of family-friendly anime I might like” and get a queue of requests back. A Kubernetes CronJob runs the same prompt on a schedule so the library grows without me thinking about it.

The Problem with Manual Requests

Jellyseerr is a great interface. It surfaces trending content, shows what’s available, and lets users submit requests that flow automatically into Radarr and Sonarr. But it still requires me to know what I want.

If I want to expand the anime library, I have to think of titles, search for them individually, check if they’re family-appropriate, and submit them one by one. That friction means I only do it when I have a specific title in mind. The library grows reactively, not proactively.

I wanted a system where I could express intent — “more of what I already like, family-friendly, maybe 100GB worth” — and have it execute.

The Architecture

User prompt (natural language)
        ↓
Claude (intent parser + recommender)
    uses: Jellyseerr API (existing content)
          TMDB API (metadata + ratings)
        ↓
Jellyseerr REST API (/api/v1/request)
        ↓
Radarr / Sonarr (search + organize)
        ↓
Media pipeline (seedbox → NAS → Jellyfin)

The Claude layer does two things: it generates recommendations based on the prompt and my taste profile, then filters them against what I already have before submitting requests. Duplicates and already-owned content never hit the queue.

The Taste Profile

The system prompt includes a static profile that I wrote once and rarely touch:

You are a media curator for a homelab Jellyfin server. The primary user (zolty) has the following preferences:

STRONG YES: Family-friendly anime (Ghibli, Makoto Shinkai, slice-of-life), animated films suitable for ages 8+, documentaries (nature, science, space), classic cinema pre-1990, dry British comedy.

STRONG NO: Horror, gore, anything rated R for violence or sexual content, torture sequences, jump-scare heavy content.

NEUTRAL: Live action drama, superhero films, modern comedy.

STORAGE BUDGET: When given a size target (e.g. "100GB"), estimate file sizes based on typical 1080p encode rates (~4-8GB for a film, ~500MB-1GB per TV episode) and fill the budget accordingly. Prefer quality over quantity — fewer excellent picks over many mediocre ones.

OUTPUT: Return a JSON array of objects with fields: title, year, type (movie|tv), reason (one sentence), estimated_size_gb.

This profile gets embedded into every request. Claude uses it to filter recommendations before they ever reach Jellyseerr.
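In practice that means the profile rides along as the system prompt on every call. A minimal sketch of the recommendation step, assuming the official `anthropic` Python SDK; the model name, `max_tokens`, and the abbreviated `TASTE_PROFILE` text are illustrative stand-ins, not the exact values from my script:

```python
# Sketch of the recommendation step. TASTE_PROFILE is the static
# profile from above (abbreviated here); the profile is sent as the
# system prompt so it shapes every response.
import json

TASTE_PROFILE = """You are a media curator for a homelab Jellyfin server.
STRONG YES: family-friendly anime, documentaries, classic cinema pre-1990.
STRONG NO: horror, gore, anything rated R for violence or sexual content.
OUTPUT: Return a JSON array of objects with fields: title, year,
type (movie|tv), reason (one sentence), estimated_size_gb."""

def build_request(user_prompt: str,
                  model: str = "claude-3-haiku-20240307") -> dict:
    """Assemble the kwargs for client.messages.create()."""
    return {
        "model": model,
        "max_tokens": 2048,
        "system": TASTE_PROFILE,
        "messages": [{"role": "user", "content": user_prompt}],
    }

def parse_recommendations(response_text: str) -> list:
    """Claude is instructed to return a bare JSON array; parse it."""
    return json.loads(response_text)

# Usage (requires ANTHROPIC_API_KEY in the environment):
#   import anthropic
#   client = anthropic.Anthropic()
#   msg = client.messages.create(**build_request(
#       "Download approximately 100GB of family-friendly anime."))
#   recs = parse_recommendations(msg.content[0].text)
```
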

The Jellyseerr API

Jellyseerr exposes a REST API. The relevant endpoints:

# Search for a title
GET /api/v1/search?query=My+Neighbor+Totoro

# Check if already in library
GET /api/v1/movie/{tmdbId}  # status field: available | unknown | pending

# Submit a request
POST /api/v1/request
{
  "mediaType": "movie",  # or "tv"
  "mediaId": 12345,      # TMDB ID
  "seasons": "all"       # TV requests only; omit for movies
}

Authentication is an API key from Jellyseerr settings, stored as a Kubernetes secret. The script that wraps these calls is about 120 lines of Python — call Claude, parse the JSON response, check each title against the library, submit the ones that aren’t already there.
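A condensed sketch of that wrapper, using only the standard library. The endpoint paths come from the API above; the `X-Api-Key` header and the env var names are assumptions, so check them against your Jellyseerr instance's settings:

```python
# Thin Jellyseerr client: search for a title, submit a request.
import json
import os
import urllib.parse
import urllib.request

BASE = os.environ.get("JELLYSEERR_URL", "http://jellyseerr.media.svc:5055")
API_KEY = os.environ.get("JELLYSEERR_API_KEY", "")

def build_request_payload(media_type: str, tmdb_id: int) -> dict:
    """Body for POST /api/v1/request; seasons only applies to TV."""
    payload = {"mediaType": media_type, "mediaId": tmdb_id}
    if media_type == "tv":
        payload["seasons"] = "all"
    return payload

def _call(method, path, params=None, body=None):
    """Minimal HTTP helper so search/submit stay one-liners."""
    url = BASE + path
    if params:
        url += "?" + urllib.parse.urlencode(params)
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        url, data=data, method=method,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

def search(title: str) -> list:
    return _call("GET", "/api/v1/search",
                 params={"query": title}).get("results", [])

def submit(media_type: str, tmdb_id: int) -> dict:
    return _call("POST", "/api/v1/request",
                 body=build_request_payload(media_type, tmdb_id))
```
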

The Natural Language Flow

A sample interaction:

Prompt: “I am zolty, you know my preferences. Download approximately 100GB of family-friendly anime that I might like. Prioritize films over series.”

Claude response (abbreviated):

[
  {"title": "The Boy and the Heron", "year": 2023, "type": "movie", "reason": "Latest Miyazaki, Ghibli's most ambitious film in a decade", "estimated_size_gb": 12},
  {"title": "Nausicaä of the Valley of the Wind", "year": 1984, "type": "movie", "reason": "Foundational Miyazaki, perfect for family viewing, likely a gap in the library", "estimated_size_gb": 8},
  {"title": "Wolf Children", "year": 2012, "type": "movie", "reason": "Mamoru Hosoda at his most accessible, excellent for ages 8+", "estimated_size_gb": 9},
  {"title": "A Silent Voice", "year": 2016, "type": "movie", "reason": "Kyoto Animation, deals with bullying and redemption, emotionally resonant without being dark", "estimated_size_gb": 10}
]

The script then checks each title against the Jellyseerr library. Already owned: skip. Not found: submit. The final log shows what was requested and what was skipped, with reasons.
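That skip-or-submit loop reduces to a small piece of pure logic. A sketch, assuming the string status values quoted earlier; `search`, `submit`, and `log` are injected stand-ins for the real HTTP and logging calls:

```python
def decide(status) -> str:
    """Map a library status to an action. Per the API above:
    available/pending means we have it or it's on the way;
    unknown (or no record at all) means it's fair game."""
    if status in ("available", "pending"):
        return "skip"
    return "request"

def curate(recommendations, search, submit, log):
    """Drive the decision for each of Claude's picks."""
    for rec in recommendations:
        results = search(rec["title"])
        if not results:
            # Hallucinated or obscure title (~5% case): log and move on.
            log(f"not found, skipping: {rec['title']}")
            continue
        match = results[0]
        if decide(match.get("status")) == "skip":
            log(f"already in library: {rec['title']}")
        else:
            submit(rec["type"], match["id"])
            log(f"requested: {rec['title']} ({rec['reason']})")
```
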

Scheduled Runs

The interesting part is making this proactive. A Kubernetes CronJob runs the script weekly with a rotating prompt template:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: media-curator
  namespace: media
spec:
  schedule: "0 10 * * 0"  # Sunday 10 AM UTC
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: curator
            image: harbor.k3s.internal.zolty.systems/production/media-curator:latest
            env:
            - name: PROMPT_TEMPLATE
              value: "weekly_family_anime"
            - name: SIZE_TARGET_GB
              value: "50"
            envFrom:
            - secretRef:
                name: media-curator-secrets
          restartPolicy: OnFailure

Prompt templates are stored in a ConfigMap. The weekly family anime template rotates through different intent variations: “something new this season”, “classic films we’re missing”, “a complete series to binge”, etc. This prevents the library from converging on the same recommendations every week.
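A sketch of what that ConfigMap might look like; only the `media-curator` naming and the namespace come from the CronJob above, and the template wording and key names are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: media-curator-prompts
  namespace: media
data:
  weekly_family_anime: |
    I am zolty, you know my preferences. Download approximately
    {SIZE_TARGET_GB}GB of family-friendly anime. This week's angle:
    {ROTATION}. Prioritize films over series.
  rotations: |
    something new this season
    classic films we're missing
    a complete series to binge
```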

What Actually Gets Requested

After a few weeks of scheduled runs, the library acquired things I wouldn’t have thought to search for manually — complete Hosoda filmography, multiple Satoshi Kon films, the full Ghibli catalog in 4K where available. The quality of Claude’s recommendations is high when the taste profile is specific. Vague prompts produce mediocre results; precise profiles produce good ones.

The one failure mode: Claude occasionally hallucinates titles that don’t exist on TMDB. The script handles this gracefully — if the search returns no results, the title is logged as “not found” and skipped. This happens maybe 5% of the time with older or more obscure recommendations.

Cost

The curator script runs Claude Haiku for the recommendation step — it’s cheap and fast enough for this use case. A typical weekly run with a 50GB target generates 15-20 recommendations, uses about 2000 tokens, and costs under $0.01. Monthly cost for four runs: negligible.
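The arithmetic checks out even with generous assumptions. A back-of-the-envelope sketch, assuming Haiku-class pricing on the order of $0.25 per million input tokens and $1.25 per million output tokens (these rates are an assumption; check current pricing):

```python
# Per-token prices in dollars (assumed Haiku-class rates, not official).
INPUT_PRICE_PER_TOKEN = 0.25 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 1.25 / 1_000_000

def run_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

# ~2000 tokens per weekly run, split roughly evenly; four runs a month.
weekly = run_cost(1000, 1000)
monthly = 4 * weekly
```

Even pricing the whole 2000 tokens at the output rate stays well under a cent per run.
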

The real cost is the storage for the content itself, and that's a deliberate choice: the whole point is to fill the library with things worth watching.