Wire agents into dependency chains with parallel execution, automatic handoffs, cycle detection, critical path analysis, and auto-retry on failure. Define the graph. chAIrman runs it.
Every pipeline in chAIrman starts with a single parameter: depends_on. When you assign a task, pass an array of agent IDs that must finish first. The task queues automatically with status waiting and launches the instant all dependencies reach done.
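The gating behavior can be sketched in a few lines. This is an illustrative model only — the `assignTask`/`markDone` names and the `Task` shape are assumptions, not chAIrman's actual internals:

```typescript
// Minimal sketch of dependency-gated launching (illustrative; the real
// chAIrman engine's internals may differ).
type Status = "waiting" | "running" | "done";

interface Task {
  agentId: string;
  dependsOn: string[];
  status: Status;
}

// A new task starts immediately only if every dependency is already done.
function assignTask(tasks: Map<string, Task>, agentId: string, dependsOn: string[]): Task {
  const ready = dependsOn.every((id) => tasks.get(id)?.status === "done");
  const task: Task = { agentId, dependsOn, status: ready ? "running" : "waiting" };
  tasks.set(agentId, task);
  return task;
}

// When an agent finishes, promote every waiting task whose deps are now all done.
function markDone(tasks: Map<string, Task>, agentId: string): string[] {
  tasks.get(agentId)!.status = "done";
  const launched: string[] = [];
  for (const t of tasks.values()) {
    if (t.status === "waiting" && t.dependsOn.every((id) => tasks.get(id)?.status === "done")) {
      t.status = "running";
      launched.push(t.agentId);
    }
  }
  return launched;
}
```

Assigning `frontend` with `dependsOn: ["backend"]` queues it as waiting; the moment `backend` reaches done, `frontend` launches.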
There is no pipeline definition file, no YAML config, no DAG builder. You express dependencies at task-assignment time, and the engine does the rest. This means pipelines are dynamic — you can extend them mid-execution by adding new agents that depend on running ones.
Production pipelines need more than execution. They need protection against deadlocks, visibility into bottlenecks, and automatic recovery from failures.
Before any dependency is added, detectCycleForNewDeps checks whether the new edge would create a circular dependency. If Agent A depends on B and B depends on A, the task is rejected immediately. No deadlocks, ever. The check runs in real time at assignment, not as a post-hoc validation.
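Conceptually, adding an edge creates a cycle exactly when the new agent is already reachable from one of its proposed dependencies. A minimal sketch of that reachability check (the `wouldCreateCycle` name and DFS approach are assumptions; detectCycleForNewDeps's actual implementation is not shown here):

```typescript
// Graph maps each agent to the agents it depends on.
type Graph = Map<string, string[]>;

// Adding newAgent -> dep edges creates a cycle iff newAgent is already
// reachable from any proposed dependency via existing depends_on edges.
function wouldCreateCycle(graph: Graph, newAgent: string, newDeps: string[]): boolean {
  const stack = [...newDeps];
  const seen = new Set<string>();
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node === newAgent) return true; // found a path back: rejecting closes a loop
    if (seen.has(node)) continue;
    seen.add(node);
    stack.push(...(graph.get(node) ?? []));
  }
  return false;
}
```

If A already depends on B, a request for B to depend on A is rejected; a fresh agent C depending on A is fine.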
computeCriticalPath calculates the longest chain through your pipeline, identifying which agents are on the critical path and which have slack. The pipeline status includes estimated completion time based on historical task durations. You know exactly where bottlenecks will form before they happen.
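The longest chain is a standard longest-path computation over the DAG, weighted by estimated task durations. A sketch in that style (the `criticalPath` function, `Node` shape, and minute-based durations are illustrative assumptions, not computeCriticalPath itself):

```typescript
// Each node lists its dependencies and an estimated duration (e.g. minutes,
// derived from historical task durations).
interface Node { deps: string[]; duration: number }

function criticalPath(nodes: Map<string, Node>): { path: string[]; total: number } {
  const memo = new Map<string, { path: string[]; total: number }>();
  const longestTo = (id: string): { path: string[]; total: number } => {
    const cached = memo.get(id);
    if (cached) return cached;
    const node = nodes.get(id)!;
    let best = { path: [] as string[], total: 0 };
    for (const dep of node.deps) {
      const candidate = longestTo(dep);
      if (candidate.total > best.total) best = candidate;
    }
    const result = { path: [...best.path, id], total: best.total + node.duration };
    memo.set(id, result);
    return result;
  };
  let overall = { path: [] as string[], total: 0 };
  for (const id of nodes.keys()) {
    const r = longestTo(id);
    if (r.total > overall.total) overall = r;
  }
  return overall;
}
```

Agents not on the returned path have slack: they can run late or slow without delaying the pipeline's finish time.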
When an agent fails, chAIrman automatically replaces it; the replacement inherits the handoff notes and is reassigned the same task. Up to three retries by default (configurable per agent). Eight stderr patterns are auto-detected — auth failure, rate limit, context exceeded, and more — each with specific recovery suggestions. Downstream agents stay queued, unaffected.
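Pattern-based failure triage can be sketched like this. Only three of the eight patterns are shown, and the exact regexes and suggestion strings are assumptions for illustration:

```typescript
// Illustrative stderr classification (three of the eight patterns; the
// actual patterns and recovery suggestions are assumptions).
interface Diagnosis { kind: string; suggestion: string }

const PATTERNS: Array<{ re: RegExp; diagnosis: Diagnosis }> = [
  { re: /authentication|invalid api key|401/i,
    diagnosis: { kind: "auth_failure", suggestion: "Re-authenticate before retrying." } },
  { re: /rate limit|429|too many requests/i,
    diagnosis: { kind: "rate_limit", suggestion: "Back off, then retry later." } },
  { re: /context (length|window) exceeded|maximum context/i,
    diagnosis: { kind: "context_exceeded", suggestion: "Split the task or trim the handoff." } },
];

function classifyStderr(stderr: string): Diagnosis | null {
  for (const { re, diagnosis } of PATTERNS) {
    if (re.test(stderr)) return diagnosis;
  }
  return null; // unknown failure: fall back to a plain retry
}
```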
chAIrman's pipeline engine uses computeLayers to decompose your dependency graph into topological layers. Agents in the same layer have no dependencies on each other and run in parallel. Agents in later layers wait for all their dependencies in earlier layers to complete.
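Layered decomposition is a variant of Kahn's topological sort: repeatedly peel off every agent whose dependencies are already satisfied. A sketch in the style of computeLayers (the implementation below is illustrative, not the engine's actual code):

```typescript
// deps maps each agent to the agents it depends on.
// Returns layers: agents in the same layer can run in parallel.
function computeLayers(deps: Map<string, string[]>): string[][] {
  const remaining = new Map(deps);
  const done = new Set<string>();
  const layers: string[][] = [];
  while (remaining.size > 0) {
    // A layer is every remaining agent whose dependencies are all satisfied.
    const layer = [...remaining.keys()].filter((id) =>
      (remaining.get(id) ?? []).every((d) => done.has(d)),
    );
    if (layer.length === 0) throw new Error("cycle detected"); // unreachable if edges were validated
    for (const id of layer) {
      remaining.delete(id);
      done.add(id);
    }
    layers.push(layer);
  }
  return layers;
}
```

Two independent agents land in layer 0 and run concurrently; anything depending on both lands in layer 1.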
This is not a simple queue. It is a proper graph scheduler that maximizes parallelism while respecting every dependency constraint. The getOptimalConcurrency function calculates how many agents can run simultaneously based on your CPU cores and available memory, so you never overload the system.
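A heuristic of that shape typically takes the minimum of a CPU-derived and a memory-derived cap. The sketch below is an assumption for illustration — getOptimalConcurrency's real formula, and the 1.5 GB per-agent estimate, are not documented here:

```typescript
// Illustrative concurrency heuristic (assumed formula; not chAIrman's actual one).
function optimalConcurrency(cpuCores: number, freeMemGB: number, memPerAgentGB = 1.5): number {
  const byCpu = Math.max(1, cpuCores - 1); // leave a core for the OS and dashboard
  const byMem = Math.max(1, Math.floor(freeMemGB / memPerAgentGB));
  return Math.min(byCpu, byMem); // the tighter resource wins
}
```

On a 10-core laptop with 12 GB free, this yields 8 parallel agents — memory, not CPU, is the binding constraint.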
When an agent in a pipeline completes its task, it produces a structured handoff document — a markdown file and a JSON file containing completed tasks, files changed, blockers encountered, and recommendations for successor agents. This handoff is saved every 60 seconds during execution, so progress is never lost.
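The JSON side of a handoff might look like the following. Field names and values here are illustrative — the exact schema is not specified above:

```json
{
  "agent_id": "backend-api",
  "completed_tasks": ["Implement /users CRUD endpoints"],
  "files_changed": ["src/routes/users.ts", "src/db/schema.sql"],
  "blockers": ["Staging database credentials were unavailable"],
  "recommendations": ["Mock /users in the frontend until staging access is restored"]
}
```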
Downstream agents inherit this context. When a waiting agent's dependencies all reach done, it launches with full knowledge of what happened before it. Agents can also communicate through the message board: include [MSG:agent_id] in output to send a direct message to a specific agent, or [BROADCAST] to notify the entire team.
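Extracting those markers from agent output is a simple line scan. A sketch (the `parseMessages` helper and the assumption that markers start a line are illustrative, not chAIrman's actual parser):

```typescript
// Illustrative parser for [MSG:agent_id] and [BROADCAST] markers,
// assuming one marker at the start of a line.
interface Outgoing { to: string; body: string }

function parseMessages(output: string): Outgoing[] {
  const messages: Outgoing[] = [];
  for (const line of output.split("\n")) {
    const direct = line.match(/^\[MSG:([\w-]+)\]\s*(.*)$/);
    if (direct) messages.push({ to: direct[1], body: direct[2] });
    const broadcast = line.match(/^\[BROADCAST\]\s*(.*)$/);
    if (broadcast) messages.push({ to: "broadcast", body: broadcast[1] });
  }
  return messages;
}
```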
Watch your pipeline execute on a live dashboard. Every agent, every stage, every dollar — visible in real-time over WebSocket.
The dashboard at localhost:3456 shows a kanban board of all agents grouped by status. Click any agent to open a live terminal modal showing real-time activity — tool calls, text output, file writes, and permission requests. Pipeline dependencies are visualized as a graph.
Native macOS notifications fire on task completion, budget warnings (80%+), and pipeline completion. Notifications are rate-limited and batched during bursts so they never flood your screen. You can keep working while the pipeline runs and get pinged when it matters.
getPipelineStatus returns stage counts, active/waiting/done per layer, critical path, blocked agents, and estimated completion time. getPipelineGraph generates the full dependency graph. Both available via MCP tools and REST API. Subscribe to webhook events for external integrations.
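A getPipelineStatus response might be shaped like this. The field names and values are illustrative — the actual response schema is not documented above:

```json
{
  "layers": [
    { "index": 0, "active": 0, "waiting": 0, "done": 2 },
    { "index": 1, "active": 2, "waiting": 0, "done": 0 },
    { "index": 2, "active": 0, "waiting": 1, "done": 0 }
  ],
  "critical_path": ["schema", "api", "tests"],
  "blocked_agents": [],
  "estimated_completion_minutes": 25
}
```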
Parallelism is capped via CHAIRMAN_MAX_AGENTS or .chairmanrc.json. The getOptimalConcurrency function calculates the recommended number based on your CPU cores and available memory. On a modern laptop, 5-8 parallel agents is typical without performance degradation.

If an agent exhausts its retries, it enters an error state. All downstream agents that depend on it remain in waiting status indefinitely. You can then either replace_agent to give it a fresh start with the original handoff notes, or restructure the pipeline by reassigning the downstream tasks with different dependencies. The get_queue tool shows all blocked agents and what they are waiting on.

You can also assign new tasks with depends_on pointing to agents that are already running or already done. If the dependencies are already complete, the new agent starts immediately. If not, it queues and waits. This lets you extend pipelines based on intermediate results.

Join developers who ship faster with chAIrman. From $19.99/mo.