A zero-token state machine dispatches isolated AI workers through a complete software development lifecycle.
No wasted tokens on routing. First-class human-in-the-loop gates. Open source, self-hosted, Node.js 24+.
Most AI coding tools use an LLM to orchestrate other LLMs — spending tokens to decide what to do next. Red Queen takes the opposite approach: a deterministic state machine commands focused AI workers. Cheaper. Faster. Debuggable. Yours.
Zero-token orchestration
The ~600-line state machine is pure deterministic logic. No LLM calls in the routing layer. You only spend tokens on the actual code work.
Each phase — spec writer, coder, reviewer, tester, comment-handler — runs a purpose-built prompt in isolation. Focused beats kitchen-sink.
Review checkpoints are first-class workflow configuration, not an afterthought. Add them, remove them, reorder them in redqueen.yaml.
Bidirectional sync with Jira and GitHub Issues. Work flows from your tracker through the pipeline and back. Linear on the roadmap.
Failed phases retry up to three times, then escalate to a human gate. No infinite loops. No runaway token bills.
Optional webhook server for instant response, with a polling reconciler that works out of the box. No external queue required.
Skills are markdown prompt templates. Drop SKILL.md into .redqueen/skills/<name>/ and your version wins over the built-in.
Every integration implements the IssueTracker or SourceControl interface. New trackers = new adapter, never a core change.
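To illustrate the adapter idea, here is a minimal sketch in TypeScript. Every name below (the Ticket shape, the method signatures, the in-memory adapter) is an assumption for illustration, not Red Queen's actual IssueTracker API — the real contract lives in the repository.

```typescript
// Hypothetical sketch of an issue-tracker adapter. All names here are
// illustrative; the real IssueTracker interface ships with Red Queen.
interface Ticket {
  id: string;
  title: string;
  labels: string[];
}

interface IssueTracker {
  fetchTickets(label: string): Promise<Ticket[]>;
  setLabel(id: string, label: string): Promise<void>;
  addComment(id: string, body: string): Promise<void>;
}

// A new tracker is just a new adapter: implement the interface and
// the core state machine never has to change.
class InMemoryTracker implements IssueTracker {
  constructor(private tickets: Ticket[] = []) {}

  async fetchTickets(label: string): Promise<Ticket[]> {
    return this.tickets.filter((t) => t.labels.includes(label));
  }

  async setLabel(id: string, label: string): Promise<void> {
    const ticket = this.tickets.find((t) => t.id === id);
    if (ticket && !ticket.labels.includes(label)) ticket.labels.push(label);
  }

  async addComment(_id: string, _body: string): Promise<void> {
    // A real adapter would call the tracker's API here.
  }
}
```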
Background orchestrator keeps tickets moving through automated phases and parks work at human gates for when you're back at the keyboard.
You file a ticket. Red Queen has Claude write a spec for it. You approve. Claude writes the code and opens a PR. Another Claude reviews it. Another tests it. You review the final PR and merge. You can jump in at any gate with feedback, or rip the gates out entirely. A tiny AI dev team — and you're the tech lead.
┌─────────────┐     ┌─────────────┐     ┌───────┐     ┌─────────────┐     ┌─────────┐     ┌──────────────┐
│ spec-writer │ ──▶ │ spec-review │ ──▶ │ coder │ ──▶ │ code-review │ ──▶ │ testing │ ──▶ │ human-review │
└─────────────┘     └──────┬──────┘     └───────┘     └──────┬──────┘     └────┬────┘     └──────┬───────┘
                           │ rework                          │ rework          │ fail            │ approve
                           ▼                                 ▼                 ▼                 ▼
                     spec-feedback                     code-feedback         coder             merged
Phases are dynamic — defined in redqueen.yaml, not baked into the code.
Add a security-review gate. Remove spec-writing if you ship from existing tickets.
The graph is yours to shape.
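As a sketch of what that shaping could look like (the field names below are assumptions for illustration, not the documented schema — see the configuration reference in the README for the real redqueen.yaml format):

```yaml
# Hypothetical redqueen.yaml phase graph; key names are illustrative.
phases:
  - name: spec-writing
    next: spec-review
  - name: spec-review
    gate: human            # first-class human checkpoint
    approve: coding
    rework: spec-feedback
  - name: security-review  # a custom gate, added like any other phase
    gate: human
    approve: testing
```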
From zero to your first AI-driven ticket in about 15 minutes.
1. Verify the Claude CLI: claude --version. Red Queen dispatches Claude Code workers, so the CLI must be logged in.
2. Install: npm install -g redqueen
3. Initialize your repo: redqueen init -y
4. Add your token to .env: GITHUB_PAT=ghp_xxxxxxxxxxxxxxxxxxxx
5. Start the orchestrator: redqueen start
6. Open http://127.0.0.1:4400 — the live dashboard.
7. File a ticket with the rq:phase:spec-writing label. Watch the phases move.
Full docs, adapter guides, and the configuration reference live in the GitHub README.
Red Queen isn't a replacement for the tools below — it's the manager that delegates to them. It dispatches Claude Code today; the adapter pattern means swapping in other coding CLIs is a config change, not a rewrite.
| | Red Queen | Claude Code (solo) | Aider | Cline / Roo | Devin / OpenHands |
|---|---|---|---|---|---|
| Form factor | Background orchestrator | Interactive CLI | Interactive pair | IDE agent | Autonomous agent |
| Orchestration | Deterministic state machine | You are the orchestrator | You are the orchestrator | IDE-driven loop | LLM-driven loop |
| Human gates | First-class, configurable | Ad-hoc | Ad-hoc | Ad-hoc | Minimal |
| Token cost for routing | Zero | N/A (you route) | N/A (you route) | Per decision | Per decision |
| Works while you sleep | Yes | No | No | No | Yes |
| Debuggable flow | Read the state machine | N/A | N/A | Hope the LLM explains | Hope the LLM explains |
The questions you'd ask before adopting.
A deterministic orchestrator for AI coding agents. It's a state machine that dispatches isolated AI workers through a complete software development lifecycle: spec writing, coding, review, testing, and human review. The orchestrator spends zero AI tokens on routing — all decisions are made by pure deterministic logic, not an LLM.
Most AI coding tools use an LLM to orchestrate other LLMs, spending tokens to decide what to do next. Red Queen does the opposite: a deterministic state machine orchestrates isolated, focused AI workers. This is cheaper, faster, and fully debuggable. Human-in-the-loop gates are first-class configuration, not an afterthought.
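To make "deterministic routing" concrete, here is a minimal sketch of the idea (illustrative only, not Red Queen's actual source): the next phase is a pure lookup on the current phase and its outcome, so deciding what happens next costs zero tokens. The edges below mirror the pipeline diagram above.

```typescript
// Zero-token routing as a pure table lookup: no LLM is consulted.
type Outcome = "approve" | "rework" | "fail";

const transitions: Record<string, Partial<Record<Outcome, string>>> = {
  "spec-review":  { approve: "coder", rework: "spec-feedback" },
  "code-review":  { approve: "testing", rework: "code-feedback" },
  "testing":      { approve: "human-review", fail: "coder" },
  "human-review": { approve: "merged" },
};

function nextPhase(phase: string, outcome: Outcome): string {
  const next = transitions[phase]?.[outcome];
  if (!next) throw new Error(`no transition from ${phase} on ${outcome}`);
  return next; // deterministic and debuggable: just read the table
}
```

Debugging a routing decision means reading a table entry, not replaying an LLM conversation.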
Today, yes. Red Queen dispatches Claude Code workers. The adapter pattern makes adding support for other coding CLIs straightforward, and that's on the roadmap.
The orchestration itself is free: zero tokens are spent on routing. You only pay for the actual code work, which your Claude subscription or API key already covers.
Yes. Red Queen runs as a local Node.js 24+ process on your laptop or any server you own. No hosted service is required. Your code is only sent to the AI CLI you configure; Red Queen itself talks to GitHub or Jira over HTTPS.
Yes — project.buildCommand and testCommand are scoped per run, and per-module commands are configurable.
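A sketch of what that could look like, assuming these keys live under a project block in redqueen.yaml (the key names and override shape are assumptions — consult the configuration reference for the real schema):

```yaml
# Illustrative only; see the README for the documented schema.
project:
  buildCommand: npm run build
  testCommand: npm test
```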
Phases retry up to three times and then escalate to a human gate. No infinite loops, no runaway bills.
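The retry policy can be sketched as a bounded loop (illustrative pseudologic, not the shipped implementation; only the three-attempt limit comes from the docs):

```typescript
// Sketch of bounded retries with escalation to a human gate.
const MAX_ATTEMPTS = 3;

type PhaseResult = "ok" | "error";

async function runWithEscalation(
  runPhase: () => Promise<PhaseResult>,
  escalate: () => void,
): Promise<PhaseResult> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    if ((await runPhase()) === "ok") return "ok";
  }
  escalate(); // park the ticket at a human gate: no infinite loops
  return "error";
}
```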
Yes — edit redqueen.yaml. Gates are configuration, not hardcoded.
Yes. Drop a SKILL.md at .redqueen/skills/<name>/ and it wins over the built-in.
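Overriding a skill is just a file drop. For example (the skill name "coder" and the prompt contents are assumed here for illustration; the real built-in skill names are in the repo):

```shell
# Create a project-local override for a skill named "coder" (name assumed).
mkdir -p .redqueen/skills/coder
cat > .redqueen/skills/coder/SKILL.md <<'EOF'
You are the coder phase. Follow our repo conventions: TypeScript strict
mode, conventional commits, and tests for every change.
EOF
```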
See the skills contract in the GitHub repository.
GitHub Issues, Jira, and GitHub (source control) ship today. Linear is on the roadmap.
It's a v0.1 preview. Use it, file bugs, expect rough edges — the API surface is stable enough to build on.
Yes — MIT-licensed, developed in the open at github.com/odyth/red-queen.
Open source. Self-hosted. Deterministic. Star the repo, kick the tires, file bugs.