```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#14142a', 'primaryTextColor': '#d0d0e0', 'primaryBorderColor': '#b44dff', 'lineColor': '#b44dff', 'secondaryColor': '#0c0c18', 'tertiaryColor': '#080810', 'edgeLabelBackground': '#080810'}}}%%
flowchart LR
D(Discovery<br/>Literature + Data):::phase1 --> S(Strategy<br/>Identification Design):::phase2
S --> E(Execution<br/>Code + Paper):::phase3
E --> R(Review<br/>Simulated Peer Review):::phase4
R -->|score >= 95| SUB(Submission<br/>Package + Gate):::phase5
classDef phase1 fill:#14142a,stroke:#00f0ff,color:#00f0ff,stroke-width:2px
classDef phase2 fill:#14142a,stroke:#00cc66,color:#00cc66,stroke-width:2px
classDef phase3 fill:#14142a,stroke:#b44dff,color:#b44dff,stroke-width:2px
classDef phase4 fill:#14142a,stroke:#ff2d7b,color:#ff2d7b,stroke-width:2px
classDef phase5 fill:#14142a,stroke:#00f0ff,color:#00f0ff,stroke-width:2px
```
# User Guide

## What You Can Do with Clo-Author

### How It Works
You describe what you want in plain English. Claude handles the rest.
| You Do | Happens Automatically |
|---|---|
| Describe what you want | Claude selects and runs the right skills |
| Approve plans | Orchestrator coordinates agents via dependency graph |
| Review final output | Worker-critic pairs ensure quality |
| Say “commit” when ready | Quality gates enforce score thresholds |
For complex or ambiguous tasks, Claude may ask 3–5 clarifying questions to create a requirements specification before planning. Each requirement is tagged MUST/SHOULD/MAY, and each aspect is marked CLEAR/ASSUMED/BLOCKED.
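For example, a fragment of such a specification might look like this (a hypothetical illustration; the requirement text and bracketed tags are invented, and your project's exact format may differ):

```
## Requirement: Main estimates
- MUST: Report point estimates with standard errors [CLEAR]
- SHOULD: Include an event-study figure [ASSUMED: treatment timing is staggered]
- MAY: Add heterogeneity analysis by region [BLOCKED: regional identifiers not yet available]
```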
### A Typical Workflow
You can enter at any stage. Each phase produces structured output saved to quality_reports/.
### Your First Review
Here is what happens when you run a simulated peer review.
Step 1. You type `/review --peer JHR paper/main.tex`
Step 2. The Editor does a desk review — checks scope, assigns each referee a disposition (Credibility, Policy, Structuralist, etc.) and pet peeves that shape their personality.
Step 3. Two blind referees review independently — a domain-referee (subject expertise) and a methods-referee (empirical methods). Neither sees the other’s report.
Step 4. The editorial decision arrives, classifying every concern as FATAL, ADDRESSABLE, or TASTE:
```
# Editorial Decision --- Journal of Human Resources

**Decision:** Major Revisions

## Referee 1 (domain-referee) --- Disposition: Policy
- ADDRESSABLE: Missing discussion of policy implementation costs (Section 6)
- ADDRESSABLE: Literature review omits Garcia & Santos (2024) on same population
- TASTE: Prefer confidence intervals over p-values

## Referee 2 (methods-referee) --- Disposition: Credibility
- FATAL: Pre-trends test shows significant coefficient at t-2 (Figure 3)
- ADDRESSABLE: Clustering at state level with 12 states --- use wild bootstrap
- ADDRESSABLE: No sensitivity analysis (Rambachan & Roth 2023)

## Action Items
- MUST: Address pre-trends concern (fatal if unresolved)
- MUST: Add wild bootstrap standard errors
- SHOULD: Add sensitivity bounds
- SHOULD: Cite Garcia & Santos (2024)
```

This is what you work with. Each concern has a classification and a specific fix. Use /revise to route each comment to the right agent — new analysis goes to the Coder, clarifications go to the Writer, disagreements get flagged for your review.
For the full peer review architecture (dispositions, pet peeves, journal calibration), see Meet the Agents.
### Your First Draft
Here is what happens when you draft a paper section.
Step 1. You type `/write intro`
Step 2. Claude reads your paper type (reduced-form, structural, descriptive, or theory+empirics), your strategy memo, and any existing sections.
Step 3. The Writer drafts using paragraph-level argument moves — each paragraph has one job:
```
Paragraph 1 [MOTIVATION]: Big question --- why this matters for policy
Paragraph 2 [GAP]: What the literature hasn't answered
Paragraph 3 [THIS PAPER]: "We estimate..." --- result preview with magnitude
Paragraph 4 [IDENTIFICATION]: Source of variation, one-sentence design
Paragraph 5 [PREVIEW]: Key finding with effect size
Paragraph 6 [ROADMAP]: Section-by-section guide
```
Step 4. A cleanup pass strips 24 known AI writing patterns (hedging, filler phrases, over-signposting).
The Writer adapts this structure to your paper type — reduced-form leads with the identification preview and result, structural leads with the model and counterfactual, descriptive leads with the data innovation and key fact.
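The cleanup pass in Step 4 can be pictured as a list of regex substitutions. A minimal sketch, with illustrative stand-in patterns rather than the actual 24 maintained internally:

```python
import re

# Illustrative subset of filler/hedging patterns; the shipped list differs.
CLEANUP_PATTERNS = [
    (r"\bIt is (?:important|worth noting) that\s+", ""),  # filler opener
    (r"\bIn this section, we will\b", "We"),              # over-signposting
    (r"\bvery\s+", ""),                                   # empty intensifier
]

def clean(text: str) -> str:
    """Apply each substitution in order, then collapse doubled spaces."""
    for pattern, replacement in CLEANUP_PATTERNS:
        text = re.sub(pattern, replacement, text)
    return re.sub(r"  +", " ", text)
```

Note this sketch does not re-capitalize sentence starts after a deletion; a real pass would.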
For all available sections and flags, see the Command Reference.
## The Research Pipeline

### Discovery
/discover runs an interactive interview that builds your research specification and produces a decision record documenting alternatives considered. Use /discover --lit for literature synthesis (Librarian + librarian-critic) or /discover --data for data assessment (Explorer + explorer-critic). See the Command Reference for all flags.
### Strategy
/strategize proposes your identification design via the Strategist + strategist-critic pair. It first produces a pre-strategy report (paper type classification, candidate designs), then the full strategy memo with robustness plan and falsification tests. Use /strategize --pap for Pre-Analysis Plans (RCT/experimental only). For how strategy adapts to paper types, see Meet the Agents.
### Execution
/analyze handles end-to-end data analysis: cleaning, main specification, robustness, publication-ready tables and figures. The Coder produces a pre-code report (naming map, script structure, numerical guards) before writing any code. Use /review --code to run a mechanical lint pass before the full code review. /write drafts paper sections using argument moves (see “Your First Draft” above). Supports R, Python, and Julia.
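The idea behind numerical guards, failing fast when a data invariant breaks instead of silently writing bad tables, can be sketched like this (an illustration of the concept, not the Coder's actual output; the sanity bound is invented):

```python
import math

def guard_estimates(estimates: dict[str, float]) -> dict[str, float]:
    """Raise if any reported estimate is NaN, infinite, or implausibly large."""
    for name, value in estimates.items():
        if math.isnan(value) or math.isinf(value):
            raise ValueError(f"{name} is not finite: {value}")
        if abs(value) > 1e6:  # illustrative sanity bound; tune per project
            raise ValueError(f"{name} looks implausible: {value}")
    return estimates
```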
### Review and Revision
/review routes automatically based on file type — paper files get writer-critic + strategist-critic + Verifier; scripts get coder-critic; --peer dispatches the full editorial pipeline (see “Your First Review” above). /revise classifies each referee comment as NEW ANALYSIS, CLARIFICATION, DISAGREE, or MINOR, and routes to the right agent. See Meet the Agents for the peer review architecture and Command Reference for all flags.
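The routing step can be pictured roughly as a lookup from comment class to handling agent (a simplified sketch; the agent names follow the ones above, but the real classifier weighs more context than a single label):

```python
ROUTES = {
    "NEW ANALYSIS": "Coder",    # requires running new code
    "CLARIFICATION": "Writer",  # fix is purely textual
    "DISAGREE": "User",         # flagged for your review
    "MINOR": "Writer",          # small wording or formatting fixes
}

def route(comment_class: str) -> str:
    """Map a classified referee comment to the agent that handles it."""
    return ROUTES[comment_class]
```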
### Presentations
/talk create [format] generates talks derived from your paper in 4 formats: job market (45–60 min), seminar (30–45 min), short (15 min), and lightning (5 min). Supports both Beamer (output to paper/talks/) and Quarto RevealJS (--quarto, output to paper/quarto/). Talk scores are advisory (non-blocking). See the Command Reference for subcommands.
### Submission
/submit --target recommends journals. /submit --package builds an AEA-format replication package. /submit --audit runs the Verifier’s 10-check audit. /submit --final enforces the submission gate (aggregate >= 95, all components >= 80). See the Command Reference for details.
### Utilities
/tools provides commit, compile, validate-bib, lint, journal, deploy, learn, and upgrade subcommands. Use /tools lint for mechanical code linting (R, Python, Julia) — runs prohibited-pattern checks before the full coder-critic review.
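Conceptually, a prohibited-pattern check is a line-by-line scan like the following (the patterns shown are hypothetical examples, not the tool's actual lists):

```python
import re

# Hypothetical prohibited patterns per language; the shipped lists differ.
PROHIBITED = {
    "R": [r"\bsetwd\("],           # hard-coded working directories
    "Python": [r"\bos\.chdir\("],
}

def lint(source: str, language: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern) for each prohibited match."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in PROHIBITED.get(language, []):
            if re.search(pattern, line):
                hits.append((lineno, pattern))
    return hits
```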
## Quality Gates
Every file gets a quality score from 0 to 100, computed as a weighted aggregate across components. The orchestrator manages this automatically.
| Score | Gate | Applies To |
|---|---|---|
| 95+ | Submission | Aggregate + all components >= 80 |
| 90+ | PR | Weighted aggregate (blocking) |
| 80+ | Commit | Weighted aggregate (blocking) |
| < 80 | Blocked | Must fix before committing |
| – | Advisory | Talks (reported, non-blocking) |
These run in the background. You will rarely think about them — they just prevent broken work from being committed.
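As a sketch, the gate logic in the table above amounts to something like this (the component names and weights in the usage example are invented for illustration):

```python
def gate(components: dict[str, float], weights: dict[str, float]) -> str:
    """Classify a weighted aggregate score against the gate thresholds."""
    aggregate = sum(weights[k] * components[k] for k in components) / sum(weights.values())
    if aggregate >= 95 and all(v >= 80 for v in components.values()):
        return "submission"
    if aggregate >= 90:
        return "pr"
    if aggregate >= 80:
        return "commit"
    return "blocked"
```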
See the full scoring formula in the Architecture Reference.
## Troubleshooting

### LaTeX Won't Compile
Symptom: latexmk or xelatex errors, missing packages.
- Check you have XeLaTeX and latexmk installed: `which xelatex latexmk`
- `paper/latexmkrc` configures TEXINPUTS and BIBINPUTS automatically
- Missing package? Install via TeX Live: `tlmgr install [package]`
- On Overleaf: set compiler to XeLaTeX via Menu > Compiler (Overleaf reads `latexmkrc` automatically)
### Hooks Not Firing
Symptom: No file protection or context survival.
- Check hooks are configured in `.claude/settings.json`
- Ensure Python 3 is available: `which python3`
- Check hook file permissions: `ls -la .claude/hooks/`
### Claude Ignores Rules
Symptom: Claude doesn’t follow conventions in `.claude/rules/`.

- Rules use `paths:` frontmatter — check the path matches your files
- Too many rules? Claude follows ~150 instructions reliably. Consolidate.
- Try: “Read `.claude/rules/[rule].md` and follow it for this task”
### Context Lost After Compaction
Symptom: Claude forgets what you were working on.
- Point Claude to the plan: “Read `quality_reports/plans/[latest].md`”
- Check session log: “Read `quality_reports/session_logs/[latest].md`”
- The `post-compact-restore.py` hook should print recovery info automatically
### Quality Score Too Low
Symptom: Score stuck below 80, can’t commit.
- Run `/review --all` to get a detailed issue breakdown
- Fix critical issues first (they cost the most points)
- Check per-component scores — one weak component can drag down the aggregate
## Upgrading
Run /tools upgrade to upgrade your project to the latest clo-author version. It replaces only .claude/ (the infrastructure) and templates/; your paper, scripts, data, bibliography, CLAUDE.md, and all other research content are never modified or deleted.
Manual alternative: Download the latest release, delete your .claude/ directory, and copy the new one in. Your CLAUDE.md stays yours — the infrastructure reads whatever paths you have there.