I've been working with Claude Code (the AI coding assistant) for a while now, and honestly? It's brilliant. Pair programming with an AI that can read my codebase, write code, run tests, and deploy changes has completely transformed how I build things.
But there was this recurring frustration.
The problem
Every few sessions, Claude would suggest something that used to work but doesn't anymore. An outdated API, a deprecated pattern, a breaking change in Next.js 16 canary that hadn't made it into its training data. I'd spot it, correct it, and we'd move on. But it kept happening.
The issue isn't that Claude gets things wrong (we all do). The issue is that I was spending time on trial-and-error debugging that could have been avoided if Claude just knew what version of everything I was using and had the latest docs to hand.
Next.js 16 is in canary. Tailwind CSS just shipped v4 with breaking changes. React 19 landed. Supabase SSR auth patterns changed. I'm also building custom MCP servers, so I need the Model Context Protocol docs close by. Every time I start a session, Claude should know all of this automatically.
So I thought: what if Claude could check the docs itself at the start of every session?
What I built
A global documentation loading system that runs automatically whenever I start a new Claude Code session. It's stack-agnostic, which means it works across all my projects: Next.js, React Native, Swift/iOS, Python, and MCP.
Here's what happens behind the scenes (there's a short code sketch of this flow right after the list):
- Session starts (I open Claude Code)
- Stack detection kicks in (reads package.json, Podfile, requirements.txt, etc.)
- Cache check (have I fetched docs for these dependencies recently?)
- Fetch or load (grab fresh docs if needed, or load from cache)
- Optimize and inject (summarize into ~100KB, add to Claude's context)
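In code, that flow is roughly the following. It's a simplified sketch: the helper names (detect_stacks, list_dependencies, is_cache_fresh, read_cache, fetch_and_cache, build_context) are illustrative stand-ins for the real modules described further down, not the exact implementation.

```python
def run_pipeline(project_dir: str) -> str:
    """Build the documentation context injected at session start (sketch only)."""
    stacks = detect_stacks(project_dir)                # stack detection
    sections = []
    for package, version, channel in list_dependencies(project_dir, stacks):
        if is_cache_fresh(package, channel):           # cache check
            docs = read_cache(package)                 # load from cache
        else:
            docs = fetch_and_cache(package, version)   # fetch fresh docs
        if docs:
            sections.append(docs)
    return build_context(sections)                     # summarize into ~100KB
```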
Now, every time I start a session, Claude gets a beautifully formatted document with:
- The exact versions of everything I'm using
- Breaking changes for canary/beta versions (Next.js 16, Tailwind v4, React 19)
- API references for core frameworks
- MCP server documentation (chrome-devtools, supabase, etc.)
- Links to full docs for deeper dives
All of this happens in under a second if it's cached, or about 5-10 seconds on first run.
Why this matters
This isn't just about convenience. It's about removing friction from the development flow.
Before this, I'd start a session and spend the first 10 minutes correcting Claude when it suggested outdated patterns. Now, Claude starts every session with the right context and we get straight to building.
It's also future-proof. When I switch from a Next.js project to a Swift iOS app, the system detects the change and loads the relevant docs automatically. No manual configuration. No copy-pasting documentation. It just works.
And because it caches everything with smart TTL (time-to-live), I'm not hammering documentation sites on every session. Canary versions refresh every 2 days, stable releases every 7 days, LTS every 14 days. The cache lives locally, so it's fast and private.
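For the curious, the freshness check behind that policy is only a few lines. A sketch, with an illustrative cache location and file layout:

```python
import json
from datetime import datetime, timedelta
from pathlib import Path

# TTLs mirror the policy above: canary refreshes often, LTS rarely.
TTL_DAYS = {"canary": 2, "beta": 2, "stable": 7, "lts": 14}
CACHE_DIR = Path.home() / ".claude" / "docs-cache"  # illustrative location

def is_cache_fresh(package: str, channel: str) -> bool:
    """True if the cached docs for this package are still within their TTL."""
    meta_file = CACHE_DIR / f"{package}.meta.json"
    if not meta_file.exists():
        return False
    meta = json.loads(meta_file.read_text())
    fetched_at = datetime.fromisoformat(meta["fetched_at"])
    ttl = timedelta(days=TTL_DAYS.get(channel, 7))
    return datetime.now() - fetched_at < ttl
```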
The technical bits (lightly, promise)
I built this as a SessionStart hook for Claude Code. It's written in Python and lives in my global ~/.claude/hooks/ directory, so it runs for every project automatically.
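The entry point itself is tiny. This is a simplified sketch: I'm assuming the hook input arrives as JSON on stdin with a cwd field, and that whatever the script prints to stdout gets added to the session context. Check the Claude Code hook docs for the exact contract in your version; the file and module names here are illustrative.

```python
#!/usr/bin/env python3
"""~/.claude/hooks/load_stack_docs.py (illustrative name)."""
import json
import sys

from docs_pipeline import run_pipeline  # the pipeline sketched earlier (illustrative module)

def main() -> None:
    # Assumption: hook input is JSON on stdin and includes the project directory.
    hook_input = json.load(sys.stdin)
    project_dir = hook_input.get("cwd", ".")

    # Assumption: for SessionStart hooks, stdout is added to the session's context.
    print(run_pipeline(project_dir))

if __name__ == "__main__":
    main()
```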
The architecture is modular (the detector is sketched in code right after this list):
- Stack Detector (auto-detects project type from files)
- Fetchers (npm, Swift, Python, MCP, each with specialized logic)
- Cache Manager (handles TTL and expiry)
- Context Optimizer (smart summarization to fit within budget)
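The detector is the simplest piece. A trimmed-down sketch: the marker list here is an illustrative subset, and the real version also inspects package.json dependencies and lockfiles.

```python
from pathlib import Path

# Marker files -> stack names (illustrative subset).
STACK_MARKERS = {
    "package.json": "npm",
    "Podfile": "swift",
    "Package.swift": "swift",
    "requirements.txt": "python",
    "pyproject.toml": "python",
    ".mcp.json": "mcp",
}

def detect_stacks(project_dir: str) -> list[str]:
    """Detect which stacks a project uses by looking for well-known files."""
    root = Path(project_dir)
    return sorted({stack for marker, stack in STACK_MARKERS.items()
                   if (root / marker).exists()})
```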
There's a priority system too. High-priority packages (Next.js, React, Django) get full content with breaking changes. Medium-priority packages (Tailwind, Supabase) get condensed summaries. Low-priority utilities just get links.
Everything fits within ~100KB, which is about 5% of Claude's 200K-token context window. That leaves 95% for code and conversation.
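The tiering and the budget check look something like this. The package names and the ~100KB figure match what I described above; the condensation itself is a far cruder stand-in than the real optimizer.

```python
# Priority tiers (illustrative subset of the real lists).
HIGH = {"next", "react", "django"}      # full content + breaking changes
MEDIUM = {"tailwindcss", "supabase"}    # condensed summaries
BUDGET_BYTES = 100 * 1024               # ~100KB total for the injected doc

def summarize(package: str, docs: str, link: str) -> str:
    """Decide how much of a package's docs to keep, by priority tier."""
    if package in HIGH:
        return docs
    if package in MEDIUM:
        return docs[:4000]              # crude stand-in for real summarization
    return f"- {package}: {link}"       # low priority: link only

def build_context(sections: list[str]) -> str:
    """Concatenate sections until the byte budget would be exceeded."""
    out, used = [], 0
    for section in sections:
        if used + len(section) > BUDGET_BYTES:
            break
        out.append(section)
        used += len(section)
    return "\n\n".join(out)
```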
I also built in graceful fallbacks. If a fetch fails (no internet, API down, whatever), it uses the cached version. If there's no cache and the fetch fails, it silently skips that package. The session always starts, even if something goes wrong.
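The fallback logic reads as simply as it sounds. Another sketch, with illustrative helpers (fetch_docs, write_cache, read_cache):

```python
def get_docs(package: str, version: str) -> str | None:
    """Fetch fresh docs; fall back to cache; skip silently as a last resort."""
    try:
        docs = fetch_docs(package, version)   # network fetch (illustrative helper)
        write_cache(package, docs)
        return docs
    except Exception:
        cached = read_cache(package)          # stale docs beat no docs
        return cached                         # None if never cached: package is skipped
```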
What this unlocks
This is the kind of workflow improvement that compounds. Every session starts with Claude knowing exactly what I'm working with. No more outdated suggestions. No more "try this... actually wait, that's deprecated". Just clean, current, context-aware pair programming.
It also means I can confidently use canary and beta versions of frameworks without worrying that Claude will reference stable docs. The system automatically detects version types and highlights breaking changes.
And because it's global and stack-agnostic, it works across my entire workflow. Web, mobile, backend, AI tooling. One system, all stacks.
Real-world example
Here's what Claude sees at the start of a session for my portfolio project (navasmo.co.uk):
# 📚 Stack Documentation Context
**Project**: navasmo.co.uk
**Stacks**: npm, mcp
**Frameworks**: next 16.0.1-canary.2, react 19.0.0, tailwindcss 4.0.0
---
## next 16.0.1-canary.2 (CANARY)
### ⚠️ Next.js 16 Breaking Changes
- **Async Params**: All params and searchParams are now Promises
- **Turbopack**: Default bundler (replaces Webpack)
- **React 19**: Required
[...full Next.js documentation...]
## tailwindcss 4.0.0 (v4)
### 🔴 V4 Breaking Changes
- CSS-first: Use @import "tailwindcss" not @tailwind
- Typography: Customize with @utility prose { ... }
[...full Tailwind documentation...]
## Model Context Protocol (MCP)
### Installed MCP Servers
- **chrome-devtools**: Browser automation
- **supabase**: Database operations
[...full MCP documentation...]
That's what Claude gets automatically at session start. No manual setup. No copy-pasting docs. Just clean, current, version-specific context.
What's next
For now, this does exactly what I need. But I've got a few ideas:
- Manual refresh command (/refresh-docs) to force cache updates
- Cache viewer (/show-cache) to see what's cached and when it expires
- More stacks (Rust, Go, Ruby/Rails)
- Custom doc sources (internal company docs, project-specific wikis)
- Team-wide shared cache (so a whole team benefits from one person's session)
But honestly? The system is perfect as is. It's fast, it's reliable, and it's already transformed how I work with Claude.
Why I'm sharing this
If you're using AI coding assistants and finding yourself constantly correcting outdated suggestions, this approach might help. The idea is simple: give the AI the same context you'd give a human pair programmer.
You wouldn't pair with someone and not tell them what versions you're using or what breaking changes just landed. So why do that with AI?
This system bridges that gap. And because it's built as a Claude Code hook, it runs automatically without me thinking about it. That's the dream: tools that make you faster without adding cognitive load.
If you're curious about the implementation or want to build something similar for your own workflow, the full setup is documented in my global hooks directory. It's modular and extensible, so you can adapt it to your stack or add new fetchers as needed.
For me, this was one of those "scratch your own itch" projects that ended up being way more valuable than I expected. Now every session starts with Claude fully briefed and ready to build. No more outdated docs. No more trial and error. Just smooth, context-aware collaboration.
And that's exactly what good tooling should feel like: invisible until you need it, powerful when you do.