
I’ve been building something with Claude Code for the past several months that I didn’t initially intend to share. It started as a personal productivity system — meeting prep, email triage, document generation. But as the patterns accumulated and other people on my team started using versions of it, I realized the architectural decisions underneath were more interesting than any individual skill.
So I open-sourced the patterns. Stripped out the company-specific details, genericized the examples, and published 158 field-tested techniques organized into five parts: core architecture, specific techniques across 16 categories, a step-by-step guide to building a persistent knowledge base, quick reference cheat sheets, and live production examples of hooks and test suites actually running.
The architecture is the point
The individual tips are useful, but what I actually want people to steal is the shape of the system. Three layers, three repos, clean separation.
The schema layer is CLAUDE.md — it’s the router. Natural language trigger phrases dispatch to the right skill file. “Prep Sarah” loads the meeting prep skill. “Draft a post” loads the voice skill. “Scan email” loads inbox triage. You think in outcomes, not tools.
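Concretely, the router section of CLAUDE.md can be a small table mapping trigger phrases to skill files. A sketch — the file names and layout here are my own illustration, not the repo's actual structure:

```markdown
## Skill routing

When the user says a trigger phrase below, read the matching skill file
before doing anything else.

| Trigger phrase   | Skill file              |
|------------------|-------------------------|
| "Prep <name>"    | skills/meeting-prep.md  |
| "Draft a post"   | skills/voice.md         |
| "Scan email"     | skills/inbox-triage.md  |
| "ICP eval"       | skills/icp-eval.md      |
```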
The skill layer is where the work happens. Each skill is a markdown file with reference data loaded on demand. Progressive disclosure — the 2,000-line persona database only loads when someone says “ICP eval.” This keeps baseline token cost low and puts heavy content behind intent gates.
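A skill file under this pattern might look like the following sketch; the headings, paths, and trigger are illustrative assumptions, not the author's actual files:

```markdown
# Skill: ICP evaluation

## Trigger
User says "ICP eval <account>".

## Reference data (load only when this skill fires)
Read `reference/persona-database.md` (~2,000 lines) now, not at
session start — this is the progressive-disclosure gate.

## Steps
1. Match the account against the persona database.
2. Score fit and summarize the top signals.
3. Write the result back to the account's knowledge-base file.
```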
The data layer is the knowledge base. An Obsidian vault where Claude writes enriched data back after every skill invocation. Contact files get richer after meetings. Account profiles accumulate signals. Observations get captured, reviewed, and graduated into permanent rules. The system compounds.
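To make the write-back concrete, here is a hypothetical contact file that skills might append to after each debrief — every name, date, and detail below is invented for illustration:

```markdown
# Sarah Chen

## Profile
Role: VP Engineering, Acme (LinkedIn, 2024-11)

## Meeting history
- 2025-01-14 debrief: evaluating migration; budget cycle closes in March
- 2024-12-02 debrief: intro call; skeptical of vendor lock-in

## Open threads
- Send the security whitepaper (promised 2025-01-14)
```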
Why hooks matter more than rules
The hardest lesson was that CLAUDE.md rules degrade. After /compact (context compression), Claude loses track of earlier instructions. Rules that were crisp at the beginning of a session become vague suggestions after compaction. Instructions suggest; hooks enforce.
So I moved everything that must never be skipped into hooks — mechanical triggers that fire regardless of context state. A SessionStart hook renders the project dashboard before every session. A PreToolUse hook detects project directory switches and forces bookmarking before allowing the switch. A PostToolUse hook logs every external action to an audit trail.
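In Claude Code, hooks like these are registered in settings rather than in CLAUDE.md. A sketch of what the wiring might look like in `.claude/settings.json` — the script paths are placeholders, and the matchers are assumptions about which tools each guard should watch:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "scripts/render-dashboard.sh" }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "scripts/guard-project-switch.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash|WebFetch",
        "hooks": [
          { "type": "command", "command": "scripts/audit-log.sh" }
        ]
      }
    ]
  }
}
```

A PreToolUse command that exits with a blocking status prevents the tool call from running at all, which is what makes these guardrails mechanical rather than advisory.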
The result is a system where the behavioral guardrails survive compaction, survive context switching, survive the natural entropy of long sessions. Instructions degrade. Hooks are mechanical.
The flywheel effect
What actually makes this thing worth building is the compounding. Every skill that touches external data writes enriched data back to the knowledge base. The next session starts with richer context than the last.
Meeting prep for someone I’ve met three times pulls from contact files enriched by prior debriefs, email threads, CRM data, and LinkedIn profiles. The briefing is far richer than the first prep for the same person. And I didn’t maintain any of it manually — it accumulated as a side effect of doing work.
The same pattern applies to the learning loop. End-of-day observations get captured to daily files. When 30 accumulate, Claude scans them, finds patterns, and proposes graduated rules — rules that get applied to CLAUDE.md, skill files, or the knowledge base permanently. The system learns from my corrections without me building a training pipeline.
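The trigger for the graduation review can be purely mechanical. A hypothetical sketch in shell, assuming observations live as bullet lines in per-day markdown files; the directory layout, bullet format, and threshold of 30 are all assumptions:

```shell
# Count captured observations across daily files and flag when a
# graduation review is due. Entirely illustrative, not the repo's script.
observations_due() {
  threshold=30
  # Each observation is assumed to be a top-level "- " bullet.
  count=$(cat "$1"/*.md 2>/dev/null | grep -c '^- ')
  if [ "$count" -ge "$threshold" ]; then
    echo "review-due:$count"
  else
    echo "pending:$count"
  fi
}
```

A check like this could run from a SessionStart hook, so the review gets proposed the moment the threshold is crossed rather than whenever someone remembers to look.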
How to use this
The repo is designed for two audiences simultaneously. Humans browse it on GitHub, scan categories, read what interests them. AI agents consume it programmatically — each part file is self-contained with enough context to generate implementation plans.
The fastest path: clone the repo, open Claude Code in the directory, and say “Read PART3-BUILD-A-KNOWLEDGE-BASE.md and build me a plan for setting this up with my stack.” Claude reads the patterns, asks what you’re working with, and produces a sequenced implementation plan. Start with the three-layer directory structure and one skill. Add the learning loop. Add hooks. The minimum viable system is steps 1-6. Steps 7-8 make it self-improving.
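One plausible shape for that starting structure, mapped to the three layers and three repos described above — the names are illustrative, not prescribed by the repo:

```
claude-config/        # schema layer: CLAUDE.md router + global rules
skills/               # skill layer: one markdown file per skill
  meeting-prep.md
vault/                # data layer: Obsidian knowledge base
  contacts/
  accounts/
  observations/
```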
Everything in the repo is production-tested. Not aspirational — deployed. The hook scripts are running. The test suites validate every commit. The learning loop has run through its first graduation review. This is what I actually use.
Take the patterns. Build your own. Make it better than mine.