A Claude Code skill that orchestrates multi-step agent workflows using the Agent Teams feature. Define your entire workflow in a YAML file — steps, dependencies, custom agents, conditional branching, retry loops — and let the orchestrator coordinate a team of agents to execute it.
Claude Code shipped task management skills earlier this week, most likely in preparation for the Agent Teams feature that's currently behind the `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` flag.
I worked on creating a skill that could:
- Read in a file that specified an entire workflow (steps, dependencies, agents/prompts, etc.)
- Reliably call the tasks without blowing up the main context window. (This was initially not very reliable, since the main task would inconsistently call the TaskOutput tool, which dumped the entire subagent session JSONL file into context.)
Fortunately, the SendMessage tool seems to have fixed this, and more importantly, supports the YAML configuration that was vibed up.
If you're just starting out with Agent Teams, I'd recommend trying this out as a starting point. The mermaid diagram is mostly there to help you visualize the workflow rather than having to interpret it from the step declarations, but it also helps reinforce the flow when the model interprets the file.
You can even reference these Gist files and ask the model to create a new workflow file for your own use case!
- Claude Code CLI with Agent Teams enabled:
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
- Copy `team-workflow.md` into your project's `.claude/commands/` directory
- Place your workflow YAML files in `.claude/workflows/` (or wherever you want)
your-project/
├── .claude/
│ ├── commands/
│ │ └── team-workflow.md # The skill
│ └── workflows/
│ └── agent-loop.yml # Example: implement + verify loop
/team-workflow <@path/to/workflow.yml> <task description>
The @ file reference expands the YAML inline. Everything after it is the task prompt.
# Run an implement-and-verify loop
/team-workflow @.claude/workflows/agent-loop.yml "Add input validation to the login form"

| Field | Required | Description |
|---|---|---|
| `name` | Yes | Workflow identifier |
| `description` | No | Human-readable description |
| `diagram` | No | Mermaid diagram for visualization (also reinforces flow for the model) |
| `agents` | No | Reusable custom agent templates |
| `inputs` | No | Declared inputs with names, descriptions, defaults |
| `steps` | Yes | The workflow steps to execute |
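To make the required fields concrete, here's a minimal sketch of a valid workflow file (the name, step, and prompt are illustrative, not one of the shipped examples):

```yaml
name: hello-workflow                    # required
description: Smallest useful workflow   # optional
steps:                                  # required - at least one step
  greet:
    agent: general-purpose
    prompt: |
      Briefly restate the task: {{input.task}}
```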
Define reusable agent personas that steps can reference by name. Each agent gets system instructions that are prepended to the step prompt.
agents:
security-expert:
instructions: | # Required — system prompt for the agent
You are a security auditor...
tools: [Read, Grep, Glob] # Optional — tool restrictions
model: sonnet # Optional — haiku, sonnet, or opus

Declare inputs that can be referenced in prompts via {{input.name}}.
inputs:
- name: path
description: Path to the code to analyze
required: true
- name: focus
description: "Analysis focus: security, performance, quality, or auto"
default: auto

Each step defines an agent to run, what it depends on, and what prompt to give it.
steps:
explore:
agent: auto # Agent selection (see below)
model: haiku # Optional model override
prompt: |
Explore the codebase at {{input.path}}...
security_scan:
agent: security-expert # Reference to agents: section
depends_on: [explore] # Blocks until explore completes
condition: "{{input.focus}} contains 'security'" # Conditional execution
prompt: |
Codebase overview:
{{explore.output}} # Interpolated from previous step
...
report:
agent: general-purpose
depends_on: [security_scan, perf_check]
join: any # Runs when ANY dependency completes (vs all)
prompt: |
{{security_scan.output}}
{{perf_check.output}}
...

| Field | Required | Description |
|---|---|---|
| `agent` | Yes | Agent to use (see Agent Resolution below) |
| `prompt` | Yes | The task prompt; supports `{{variable}}` interpolation |
| `depends_on` | No | List of step names that must complete first |
| `condition` | No | Expression that must be true for the step to run |
| `join` | No | `all` (default) or `any` — when to unblock with multiple dependencies |
| `model` | No | Model override: `haiku`, `sonnet`, or `opus` |
| `on_failure` | No | Retry configuration (see Retry Loops below) |
| Agent spec | What happens |
|---|---|
| `Explore`, `Plan`, `Bash`, `general-purpose` | Uses that agent type directly |
| `auto` | Auto-selects based on prompt keywords (explore → Explore, test → Bash, etc.) |
| `agent-name` | Looks up the name in the `agents:` section and prepends its instructions to the prompt |
| Inline `{instructions:, tools:, model:}` | Creates an ephemeral one-off agent |
For one-off specialized agents, define them inline on the step:
steps:
adaptive_analysis:
agent:
name: adaptive-analyzer
instructions: |
You are an adaptive code analyst who determines
the most pressing concerns...
tools: [Read, Grep, Glob]
prompt: |
Analyze the codebase...

| Pattern | Resolves to |
|---|---|
| `{{input.name}}` | Value from workflow inputs or defaults |
| `{{input.task}}` | The task description from the command invocation |
| `{{step_name.output}}` | Output from a completed step |
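All three patterns can appear in a single prompt. A hypothetical step for illustration (assuming `path` is a declared input and `explore` is an earlier step):

```yaml
summarize:
  agent: general-purpose
  depends_on: [explore]
  prompt: |
    Task: {{input.task}}
    Target path: {{input.path}}
    Findings so far:
    {{explore.output}}
```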
Steps can be conditionally skipped:
condition: "{{input.focus}} contains 'security'" # substring match
condition: "{{verify.output}} contains 'VERDICT: PASS'"
condition: "{{step.output}} exists" # step completed
condition: "not {{step.output}} empty" # not emptySkipped steps produce "SKIPPED: condition not met" as their output.
Steps can define on_failure to retry from an earlier step:
verify:
depends_on: [run_tests, code_review]
prompt: |
Output VERDICT: PASS or VERDICT: FAIL
on_failure:
goto: plan # Step to retry from
max_retries: 3 # Max retry cycles

When VERDICT: FAIL is detected, the orchestrator re-runs from `goto` forward, injecting failure context into the prompt.
The orchestrator automatically parallelizes steps using dependency layers:
Layer 0: [explore] → runs first (no dependencies)
Layer 1: [scan, check, review] → run in parallel (all depend on explore)
Layer 2: [report] → runs after Layer 1 completes
Steps in the same layer are spawned as parallel team members.
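For example, a dependency structure like the following (step names taken from the layer listing above; prompts elided) yields exactly those three layers:

```yaml
steps:
  explore:
    agent: Explore
    prompt: ...
  scan:
    agent: general-purpose
    depends_on: [explore]   # Layer 1: runs in parallel with check and review
    prompt: ...
  check:
    agent: general-purpose
    depends_on: [explore]
    prompt: ...
  review:
    agent: general-purpose
    depends_on: [explore]
    prompt: ...
  report:
    agent: general-purpose
    depends_on: [scan, check, review]   # Layer 2: waits for all of Layer 1
    prompt: ...
```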
The orchestrator:
- Parses the YAML into a dependency graph
- Topologically sorts steps into parallel layers
- Creates a team via the Agent Teams API
- Walks layers in order, spawning all agents in a layer simultaneously
- Waits for completion, then interpolates outputs into the next layer's prompts
- Handles retries if a step fails and has `on_failure` configured
- Cleans up the team after all steps complete
`agent-loop.yml` — Retry loop: explores → plans → implements → tests + reviews → verifies (retries on failure) → reports
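If you'd rather write something similar from scratch, here's a rough sketch of the shape such a file might take, using only the fields documented above (the step names, prompts, and verdict check are my assumptions, not the contents of the shipped file):

```yaml
name: agent-loop
steps:
  explore:
    agent: Explore
    prompt: |
      Explore the parts of the codebase relevant to: {{input.task}}
  plan:
    agent: Plan
    depends_on: [explore]
    prompt: |
      Plan the change using this overview:
      {{explore.output}}
  implement:
    agent: general-purpose
    depends_on: [plan]
    prompt: |
      Implement the plan:
      {{plan.output}}
  run_tests:
    agent: Bash
    depends_on: [implement]
    prompt: Run the test suite and report the results.
  code_review:
    agent: general-purpose
    depends_on: [implement]
    prompt: |
      Review the implementation for correctness and style:
      {{implement.output}}
  verify:
    agent: general-purpose
    depends_on: [run_tests, code_review]
    prompt: |
      Test results: {{run_tests.output}}
      Review: {{code_review.output}}
      Output VERDICT: PASS or VERDICT: FAIL
    on_failure:
      goto: plan          # retry from planning with failure context
      max_retries: 3
  report:
    agent: general-purpose
    depends_on: [verify]
    prompt: |
      Summarize what was implemented and verified.
```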