# Documentation Automation
We keep the docs site accurate and up to date with minimal hand-writing by treating code and config as the source of truth and generating or assembling doc content from them. This page explains the types of docs we maintain, how each is updated, and how the pipeline keeps everything in sync without you having to “remember to update the docs.”
## Principle: source of truth lives in code
- API docs — Routes, request/response shapes, and auth live in endpoint modules and validators; we generate `docs/api/*.md` from them (via the generate-api-docs skill).
- Command docs — Slash commands and parameters live in command modules; we generate `docs/commands/*.md` (and an optional catalog) from them.
- Reference docs — Service/repository/validator contracts live in TypeScript; we generate `docs/reference/*.md` from them.
- Release notes — Conventional commits on `main` drive semantic-release; it computes the next semver and updates `docs/releases/CHANGELOG.md`.
- AI Ecosystem reference — Skills, rules, commands, and MCP config live under `.cursor/`; we generate `docs/ai-ecosystem/reference.md` from the filesystem.
The only “manual” part is deciding when to run a generator or which skill to invoke; the content itself comes from the codebase. So we keep docs updated without hand-writing the API, command, reference, release, or AI-reference content.
## Doc types and how they’re updated
| Type | Location | Source of truth | How it’s updated |
|---|---|---|---|
| API | docs/api/ | *.endpoint.ts, validators | Cursor skill generate-api-docs (or edit generated file in a pinch). |
| Commands | docs/commands/ | *.command.ts | Cursor skill generate-command-docs. |
| Reference | docs/reference/ | *.service.ts, *.repository.ts, *.validators.ts | Cursor skill generate-reference-docs. |
| Release notes | docs/releases/ | Conventional commits on main | semantic-release computes version + notes and writes docs/releases/CHANGELOG.md. |
| AI Ecosystem reference | docs/ai-ecosystem/reference.md | .cursor/skills, .cursor/rules, .cursor/commands, .cursor/mcp.json | pnpm docs:ai-ecosystem:generate. |
| Customer / developer | docs/customer/, docs/developer/ | Authored | author-customer-docs and author-developer-docs skills when features or conventions change. |
| Top-level | docs/*.md (ARCHITECTURE, CONTRIBUTING, etc.) | Authored | Edit directly when project structure or process changes. |
## End-to-end doc flow (from code to site)
- Local: You (or the agent) change code, run the right skill so generated docs reflect it, then run `pnpm docs:build` to confirm. No need to hand-write API/command/reference tables.
- CI: Every merge runs `pnpm docs:build`; the main branch stays buildable.
- Deploy: Pushes to `main` that touch `docs/`, `docs-site/`, or `src/**/*.documentation.md` trigger the docs-deploy workflow; Render serves the built site. No separate “publish docs” step.
## Release notes: conventional commits -> semantic-release
We don’t hand-edit a single changelog file. semantic-release computes version + notes from commits:
- Standard flow: merge conventional commits and let the release workflow run `pnpm release`.
- Preview flow: run `pnpm release:dry` to inspect the calculated next version and release notes.
So release notes stay updated without manual changelog edits: commit history is the source, semantic-release does the rest.
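As an illustration, the bump semantic-release derives from a commit range can be sketched like this (a simplified model, not semantic-release's actual code; the commit messages are made-up examples):

```javascript
// Sketch: how conventional commits map to a semver bump.
// semantic-release performs this analysis for real from git history.
function nextReleaseType(commitMessages) {
  let release = null; // null = no release needed
  for (const msg of commitMessages) {
    const header = msg.split("\n")[0];
    if (msg.includes("BREAKING CHANGE") || /^\w+(\(.+\))?!:/.test(header)) {
      return "major"; // a breaking change wins immediately
    }
    if (/^feat(\(.+\))?:/.test(header)) release = "minor";
    else if (/^fix(\(.+\))?:/.test(header) && release === null) release = "patch";
  }
  return release;
}

// A feature plus a fix yields a minor bump.
console.log(nextReleaseType(["fix(api): handle null ids", "feat(commands): add /stats"])); // "minor"
```

`pnpm release:dry` is the way to see the real computation before merging.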
## AI Ecosystem reference: scan and generate
When you add or change a Cursor skill, rule, command, or MCP server, the reference page can get out of date. We don’t hand-edit it:
- Rule: The ai-ecosystem-docs rule says: when you change `.cursor/skills`, `.cursor/rules`, `.cursor/commands`, or `.cursor/mcp.json`, run `pnpm docs:ai-ecosystem:generate` so `reference.md` stays current.
- Script: Reads the filesystem and overwrites `docs/ai-ecosystem/reference.md` with tables of skills, rules, commands, and MCP servers. No manual table editing.
So the AI Ecosystem reference stays updated without writing it by hand—only by running one command after Cursor/MCP changes.
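The generation step is conceptually simple. Here is a sketch of the table-rendering idea (the `renderSkillTable` helper and the data shape are illustrative assumptions, not the real script):

```javascript
// Sketch: turn entries scanned from .cursor/ into a markdown table,
// so the reference page is generated rather than hand-edited.
function renderSkillTable(skills) {
  const rows = skills.map((s) => `| ${s.name} | ${s.description} |`);
  return ["| Skill | Description |", "|---|---|", ...rows].join("\n");
}

const table = renderSkillTable([
  { name: "generate-api-docs", description: "Regenerate docs/api/*.md from endpoints" },
  { name: "generate-command-docs", description: "Regenerate docs/commands/*.md" },
]);
console.log(table);
```

The real script does the same thing for skills, rules, commands, and MCP servers, then overwrites `docs/ai-ecosystem/reference.md` in one pass.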
## What we focus on (and what we don’t)
- Generated: API, commands, reference, semantic-release changelog output, AI Ecosystem reference. These are driven by code and config; we run scripts or skills to refresh them.
- Authored but guided: Customer and developer docs. We use skills (author-customer-docs, author-developer-docs) so that when features or conventions change, the right pages are updated in a consistent way.
- Authored ad hoc: Runbooks, top-level docs (ARCHITECTURE, CONTRIBUTING, AGENTS). We edit these when the project or process changes; there’s no generator, but the docs build and CI still validate that the site builds.
## Pre-commit: update every doc type from the diff
We run a pre-commit doc step so that every commit that touches endpoints, commands, services, or persona-relevant code either includes the corresponding doc updates in the same commit or fails with clear instructions. The AI decides what needs updating based on the staged diff.
What runs: Before lint-staged, the hook runs `scripts/docs/pre-commit-doc-update.mjs`. The mapping from source-file changes to doc types is read from `scripts/docs/documentation-types.rules.json` (the single source of truth). It:
- Reads `git diff --cached --name-only` and infers affected doc types (API, commands, reference, customer, developer, AI Ecosystem) from file patterns.
- Runs the generators we have: when `.cursor/` changed, runs `pnpm docs:ai-ecosystem:generate` and stages `docs/ai-ecosystem/reference.md`.
- For API, commands, reference, customer, and developer docs we don’t have generators—the AI (Cursor) updates those using the skills. So the script checks: for each affected type, are the corresponding doc files already staged? (e.g. you ran the prompt in Cursor and staged `docs/` before committing again.) If yes, the script exits 0 and the commit proceeds. If no, it writes a prompt file (`scripts/docs/.pre-commit-doc-prompt.md`) with the staged file list and instructions (“use generate-api-docs, generate-command-docs, …”), then exits 1 and the commit is blocked.
- Optional: If you set `DOC_UPDATE_CMD` to a command that runs the AI with the prompt file (e.g. a Cursor CLI or wrapper script), the hook will run it, then stage `docs/` and continue. The commit then includes both code and doc updates without a second commit.
So the automation updates every type of Docusaurus doc that’s affected by the diff: the hook drives it, and the AI (via the prompt or `DOC_UPDATE_CMD`) performs the updates. CI only needs to run `pnpm docs:build`; it doesn’t need to regenerate anything.
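A minimal sketch of the hook’s decision logic, assuming a rules shape along the lines of what `documentation-types.rules.json` might contain (the patterns and field names here are illustrative, not the real file):

```javascript
// Hypothetical rules: which source files imply which doc types,
// and which doc paths count as "the docs were updated too".
const rules = [
  { type: "api", source: /\.endpoint\.ts$/, docs: /^docs\/api\// },
  { type: "commands", source: /\.command\.ts$/, docs: /^docs\/commands\// },
  { type: "reference", source: /\.(service|repository|validators)\.ts$/, docs: /^docs\/reference\// },
];

// Given the staged file list (git diff --cached --name-only), return the
// doc types whose sources changed but whose docs are not staged.
function missingDocTypes(stagedFiles) {
  return rules
    .filter((r) => stagedFiles.some((f) => r.source.test(f)))
    .filter((r) => !stagedFiles.some((f) => r.docs.test(f)))
    .map((r) => r.type);
}

// Code staged without its docs -> the hook would block (exit 1).
console.log(missingDocTypes(["src/users/users.endpoint.ts"])); // ["api"]
// Code and docs staged together -> the commit proceeds (exit 0).
console.log(missingDocTypes(["src/users/users.endpoint.ts", "docs/api/users.md"])); // []
```

An empty result maps to exit 0; a non-empty result maps to writing the prompt file and exiting 1.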
## Parallel subagents in automatic mode
Parallel subagents run when your `DOC_UPDATE_CMD` wrapper chooses to run them.
Typical pattern:
- parse prompt/doc-types from `scripts/docs/.pre-commit-doc-prompt.md`
- launch parallel tasks for independent doc types (API, commands, reference, customer/developer)
- wait for all tasks to finish
- return exit 0 so pre-commit stages docs and continues
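The steps above can be sketched as a bounded fan-out (a minimal model; `runBounded` and the stand-in worker are hypothetical names, not the wrapper’s actual code):

```javascript
// Run one async task per doc type with a concurrency cap, the way
// MAX_PARALLEL_DOC_AGENTS bounds concurrent cursor-agent runs.
async function runBounded(items, limit, worker) {
  const results = [];
  let next = 0;
  async function lane() {
    while (next < items.length) {
      const i = next++; // synchronous claim, safe in single-threaded JS
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
  return results;
}

// Stand-in worker; the real wrapper would invoke cursor-agent here.
runBounded(["api", "commands", "reference"], 2, async (t) => `${t}: ok`).then((r) =>
  console.log(r), // ["api: ok", "commands: ok", "reference: ok"]
);
```

Success overall only when every lane’s task succeeds, matching the wrapper’s all-or-nothing exit code.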
Wrapper: `scripts/docs/doc-update-cmd.sh`

Setup: `export DOC_UPDATE_CMD="bash scripts/docs/doc-update-cmd.sh"`
What the wrapper does:
- Reads `scripts/docs/.pre-commit-doc-prompt.md`.
- Extracts affected doc type IDs.
- Builds one focused prompt per doc type.
- Runs `cursor-agent -p` in parallel (bounded by `MAX_PARALLEL_DOC_AGENTS`, default `4`).
- Returns success only when all doc-type runs succeed.
Useful options:
- `MAX_PARALLEL_DOC_AGENTS` - cap concurrent doc runs.
- `DOC_UPDATE_CURSOR_MODEL` - force a model for doc updates.
- `DOC_UPDATE_APPROVE_MCPS=0` - disable `--approve-mcps`.
- `CURSOR_AGENT_BIN` - override the binary path.
Operational notes:
- First-run auth: run `cursor-agent login` once.
- Failure diagnostics: per-doc-type logs are written to `scripts/docs/.doc-update-logs/`.
- Validation still matters: after auto-updates, pre-commit/CI still run quality gates and `pnpm docs:build`.
So: we keep docs updated without writing everything manually by generating from source (API, commands, reference, semantic-release changelog, AI reference) and by using skills and scripts for the rest. The pre-commit step ties it together so every commit that touches code either includes doc updates or blocks until you run the prompt in Cursor. Flowcharts above summarize the flows; for step-by-step runbooks see the main Documentation Guide.