Command Reference

Every Command, Flag, and Subcommand

Quick Lookup

| Command | Flags / Modes | What It Does |
|---|---|---|
| /new-project [topic] | | Full orchestrated pipeline: idea to paper |
| /discover | --lit, --data, --ideate | Discovery phase: interview, literature, data |
| /strategize | --pap | Identification strategy or pre-analysis plan |
| /analyze [dataset] | --dual [langs] | End-to-end data analysis (single or parallel language) |
| /write [section] | --humanize | Draft paper sections or strip AI patterns |
| /review [file] | --peer [journal], --methods, --proofread, --code, --replicate [lang], --all | Quality reviews (routes by target + flags) |
| /revise [report] | | R&R cycle: classify + route referee comments |
| /talk | create, audit, compile | Beamer presentations |
| /submit | --target, --package, --audit, --final | Submission pipeline |
| /tools | commit, compile, validate-bib, journal, context, deploy, learn | Utilities |

/new-project

/new-project [topic]

Launches the full orchestrated pipeline. Claude conducts a research interview, builds a spec, then runs Discovery → Strategy → Execution → Peer Review → Submission autonomously.

Agents: All (orchestrator-managed)
Output: Complete project scaffold + research spec + domain profile


/discover

/discover                       Interactive research interview (default)
/discover --lit [topic]         Literature review
/discover --data [question]     Data source discovery
/discover --ideate [topic]      Research question generation
| Mode | Agents | Output |
|---|---|---|
| (default) | Direct conversation | Research spec + domain profile |
| --lit | Librarian → librarian-critic | Annotated bibliography + BibTeX + frontier map |
| --data | Explorer → explorer-critic | Ranked data sources with feasibility grades (A/B/C/D) |
| --ideate | Direct generation | 3–5 research questions with strategies |

Saves to: quality_reports/


/strategize

/strategize [question]          Design identification strategy
/strategize --pap [spec]        Draft pre-analysis plan (AEA/OSF/EGAP format)
| Mode | Agents | Output |
|---|---|---|
| (default) | Strategist → strategist-critic | Strategy memo + robustness plan + falsification tests |
| --pap | Strategist (PAP mode) | Pre-analysis plan with outcomes, power, subgroups |

Saves to: quality_reports/strategy_memo_[topic].md


/analyze

/analyze [dataset path or goal]
/analyze --dual r,python        Parallel analysis in two languages
/analyze --dual r,stata         Same, with Stata as second language

End-to-end analysis: data cleaning → main specification → robustness → publication tables and figures. Reads the strategy memo (if it exists) and implements it faithfully.

| Mode | Agents | Output |
|---|---|---|
| (default) | Data-engineer + Coder → coder-critic | Scripts, tables, figures in one language |
| --dual [langs] | Data-engineer + 2 Coders in parallel → 2 coder-critics + comparison | Same outputs in both languages + convergence report |

Languages: R, Stata, Python, Julia
Saves to: scripts/, Tables/, Figures/, Output/cross_language_comparison.md

Cross-language replication: When --dual is used, both implementations run the same specification independently. Results are compared against the tolerances in domain-profile.md, and any divergences are flagged. Inspired by Scott Cunningham’s approach: if two independent implementations agree, it is unlikely that both contain the same bug.
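The convergence check reduces to a relative-tolerance comparison of matched estimates. A minimal sketch (the function name, tolerance value, and result format are illustrative; the actual tolerances live in domain-profile.md):

```python
def compare_estimates(results_a, results_b, rel_tol=1e-6):
    """Flag coefficients where two independent implementations diverge
    beyond a relative tolerance. Tolerance value is illustrative."""
    divergences = {}
    for name, val_a in results_a.items():
        val_b = results_b[name]
        denom = max(abs(val_a), abs(val_b), 1e-12)  # guard against zero
        if abs(val_a - val_b) / denom > rel_tol:
            divergences[name] = (val_a, val_b)
    return divergences

# Matching R and Python estimates -> nothing flagged
print(compare_estimates({"beta": 0.4213}, {"beta": 0.4213}))
```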


/write

/write intro                    Introduction (contribution in first 2 pages)
/write strategy                 Empirical strategy (design-templated)
/write results                  Results (effect sizes in economic terms)
/write conclusion               Conclusion (restate with magnitude)
/write abstract                 Abstract (100-150 words)
/write full                     All sections
/write --humanize [file]        Strip AI writing patterns only
| Mode | Agent | Output |
|---|---|---|
| [section] | Writer | LaTeX section in Paper/sections/ |
| --humanize | Writer (humanizer mode) | Edited file with 24 AI patterns stripped |

Every draft gets an automatic humanizer pass before finalizing.
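A pattern-stripping pass of this kind can be sketched as rule-based rewriting. The patterns below are illustrative examples only, not the actual 24-pattern list the Writer agent uses:

```python
import re

# Hypothetical examples of AI-writing tells; the real list is maintained
# by the humanizer, not shown here.
AI_PATTERNS = [
    (r"\bdelve into\b", "examine"),
    (r"\bIt is worth noting that\s*", ""),
    (r"\ba testament to\b", "evidence of"),
]

def humanize(text):
    """Apply each (pattern, replacement) rule to the draft text."""
    for pattern, replacement in AI_PATTERNS:
        text = re.sub(pattern, replacement, text)
    return text

print(humanize("It is worth noting that we delve into the data."))
```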


/review

/review Paper/main.tex          Comprehensive (auto-detect paper)
/review scripts/R/analysis.R    Code review (auto-detect script)
/review --peer                  Blind peer review (generic)
/review --peer JHR              Blind peer review (calibrated to JHR)
/review --methods [file]        Causal audit only
/review --proofread [file]      Manuscript polish only
/review --code [file]           Code quality only
/review --replicate python      Re-implement in another language, compare
/review --all                   All critics in parallel + weighted score
| Flag | Agents | What It Checks |
|---|---|---|
| (auto .tex) | writer-critic + strategist-critic + Verifier | Writing + identification + compilation |
| (auto .R/.py) | coder-critic | Code quality and reproducibility |
| --peer | domain-referee + methods-referee | Simulated journal review (blind, independent) |
| --peer [journal] | domain-referee + methods-referee | Same, calibrated to that journal’s review culture |
| --methods | strategist-critic | 4-phase causal design review |
| --proofread | writer-critic | Notation, hedging, LaTeX, claims vs evidence |
| --code | coder-critic | Code quality (no strategy comparison) |
| --replicate [lang] | Coder (replication) + coder-critic | Re-implement in target language, compare outputs |
| --all | All critics in parallel | Weighted aggregate score |

Journal calibration: When a journal name is provided with --peer, both referees read .claude/rules/journal-profiles.md and adapt their priorities, bar, and checklist to that journal. 15 journals are pre-populated; unknown journals still get adapted review using the journal name + your domain profile. You can add custom profiles.
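A custom entry in journal-profiles.md might look like the following. This is a hypothetical sketch of the schema; check the 15 existing entries for the actual field names:

```markdown
## Journal of Development Economics (JDE)
- Priorities: external validity, policy relevance, data transparency
- Bar: strong identification; quasi-experimental or field-experiment designs favored
- Checklist: pre-registration noted, attrition reported, cost-effectiveness discussed
```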

Saves to: quality_reports/


/revise

/revise [referee-report-path]

For responding to real R&R referee reports (not the simulated /review --peer). Give it your actual referee report and Claude classifies each comment, then routes it to the right agent:

| Classification | Routed To | Action |
|---|---|---|
| NEW ANALYSIS | Coder agent | Flag for user, create analysis task |
| CLARIFICATION | Writer agent | Draft revised text |
| REWRITE | Writer agent | Structural revision |
| DISAGREE | You (the user) | Draft diplomatic pushback (flagged for your review) |
| MINOR | Writer agent | Fix directly |

Output: Point-by-point response letter + revised sections + tracking document
Saves to: quality_reports/referee_response_[journal]_[date].tex
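The classify-then-route step amounts to a simple dispatch over the categories in the table above. A minimal sketch (the agent names are shorthand; the real classifier is an agent, not a lookup):

```python
# Category -> responsible agent, per the /revise routing table.
ROUTES = {
    "NEW ANALYSIS": "coder",
    "CLARIFICATION": "writer",
    "REWRITE": "writer",
    "DISAGREE": "user",
    "MINOR": "writer",
}

def route_comment(comment, category):
    """Return (agent, task description) for a classified referee comment."""
    agent = ROUTES[category]
    return agent, f"[{category}] {comment}"

print(route_comment("Please add a placebo test.", "NEW ANALYSIS"))
```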


/talk

/talk create job-market          Full talk (45-60 min, 40-50 slides)
/talk create seminar             Standard seminar (30-45 min, 25-35 slides)
/talk create short               Conference session (15 min, 10-15 slides)
/talk create lightning           Elevator pitch (5 min, 3-5 slides)
/talk audit [file]               Visual layout check
/talk compile [file]             XeLaTeX compilation
| Mode | Agents | Output |
|---|---|---|
| create | Storyteller → storyteller-critic | Beamer .tex in Talks/ |
| audit | Visual checks | Layout issues report |
| compile | XeLaTeX | Compiled PDF |

Talk scores are advisory (non-blocking).


/submit

/submit --target                 Journal recommendations
/submit --package                Build AEA replication package
/submit --audit                  Audit replication package (10 checks)
/submit --final [journal]        Final gate: score >= 95, all components >= 80
| Mode | Agents | Output |
|---|---|---|
| --target | Orchestrator | Ranked list of 3 journals with rationale |
| --package | Coder + Verifier | Master script + README + data docs in Replication/ |
| --audit | Verifier (10 checks) | Pass/fail replication audit |
| --final | All + gate enforcement | Cover letter draft + submission checklist (or blocking issues) |
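The --final gate (overall score >= 95, every component >= 80) reduces to a check like the one below. A sketch only: the component names are illustrative, and the overall score here is an unweighted mean, whereas the real gate aggregates critic scores with weights:

```python
def passes_final_gate(component_scores, overall_min=95, component_min=80):
    """component_scores: dict of component name -> score (0-100).
    Returns (passes, list of components blocking submission)."""
    overall = sum(component_scores.values()) / len(component_scores)
    blocking = [k for k, v in component_scores.items() if v < component_min]
    return overall >= overall_min and not blocking, blocking

ok, blocking = passes_final_gate({"writing": 96, "methods": 97, "code": 94})
print(ok, blocking)
```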

/tools

/tools commit [message]          Stage + commit (+ optional PR)
/tools compile [file]            3-pass XeLaTeX + BibTeX
/tools validate-bib              Cross-reference all \cite{} keys
/tools journal                   Research journal timeline
/tools context                   Context status + session health
/tools deploy                    Render guide + push to GitHub Pages
/tools learn                     Extract session discoveries to MEMORY.md

Each subcommand is lightweight — no multi-agent orchestration.
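A \cite{} cross-reference check of the kind /tools validate-bib performs can be sketched as follows. This minimal illustration operates on strings; the actual command reads the project's .tex and .bib files:

```python
import re

def missing_citations(tex_source, bib_source):
    """Return cite keys used in the LaTeX source but absent from the .bib file."""
    cited = set()
    # Handles \cite, \citet, \citep, starred forms, and optional [..] arguments.
    for args in re.findall(r"\\cite[tp]?\*?(?:\[[^\]]*\])*\{([^}]*)\}", tex_source):
        cited.update(key.strip() for key in args.split(","))
    defined = set(re.findall(r"@\w+\{([^,]+),", bib_source))
    return sorted(cited - defined)

print(missing_citations(r"\cite{smith2020, doe2021}",
                        "@article{smith2020,\n  title={X}}"))
# -> ['doe2021']
```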