| name | agentic-research |
|---|---|
| description | Use when the user needs to research multiple systems, products, or technologies in parallel and synthesize findings into actionable recommendations. Triggers on competitive analysis, technology comparison, market research, "research X vs Y", architectural research across multiple systems, or any investigation requiring structured multi-source analysis. |
A structured workflow for researching multiple systems in parallel and synthesizing findings into actionable output. Based on the research brief pattern — define objectives, dispatch parallel agents, gather structured reports, synthesize across systems.
- Comparing multiple products, tools, or technologies
- Competitive analysis ("how do others solve this problem?")
- Architecture research before building something new
- Investigating a domain you're unfamiliar with
- Any task where you need to research 3+ systems and draw conclusions
```
Research task received → How many systems?
│
├─ 1 system (deep dive)
│  └─ Single agent, use "Research Techniques" below, skip synthesis
│
├─ 2 systems (comparison)
│  └─ Write brief → 2 parallel agents → comparative synthesis
│
├─ 3+ systems (survey)
│  └─ Write brief → N parallel agents → full synthesis with feature matrix
│
└─ Unknown / exploratory ("how do others solve X?")
   └─ WebSearch to identify systems first → then treat as 3+ system survey
```
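The routing above can be sketched as a small dispatcher. This is a minimal sketch; the function name and return shapes are illustrative, not part of the workflow's tooling:

```python
def plan_research(system_count, systems_known=True):
    """Map a research request onto one of the workflow branches above."""
    if not systems_known:
        # Exploratory: identify candidate systems first, then re-plan as a survey.
        return {"mode": "survey", "pre_step": "web_search_for_systems"}
    if system_count == 1:
        return {"mode": "deep_dive", "agents": 1, "synthesis": False}
    if system_count == 2:
        return {"mode": "comparison", "agents": 2, "synthesis": "comparative"}
    return {"mode": "survey", "agents": system_count, "synthesis": "feature_matrix"}
```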
```
Per-system research → What sources are available?
│
├─ Has public API docs / OpenAPI spec?
│  └─ Start here — API shapes reveal the data model directly
│     WebFetch the docs URL, look for entity names and relationships
│
├─ Has help center / knowledge base?
│  └─ Second priority — "How to set up X" articles reveal workflows
│     WebFetch help center pages, follow links 2-3 levels deep
│
├─ Has open-source code / SDK?
│  └─ Check GitHub — types, schemas, and entity definitions are gold
│     Use Bash: gh repo view, gh api to explore
│
├─ Marketing site only?
│  └─ WebSearch for reviews (G2, Capterra), blog posts, comparison articles
│     These often contain more detail than the marketing site itself
│
└─ Very little public info?
   └─ Mark confidence 🔴, state what's unknown, spend less time here
```
An honest "unknown" is more useful than a plausible guess
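The source-priority branching above reduces to ordering whatever a system actually exposes. A minimal sketch (the source labels are illustrative):

```python
# Priority order from the decision tree: API docs first, marketing-derived sources last.
SOURCE_PRIORITY = ["api_docs", "help_center", "open_source", "reviews", "marketing_site"]

def order_sources(available):
    """Return the research plan for one system: available sources in priority
    order, plus a flag to mark the whole report 🔴 when nothing usable exists."""
    plan = [source for source in SOURCE_PRIORITY if source in available]
    low_confidence = not plan
    return plan, low_confidence
```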
Before touching any system, write a brief that defines:
One paragraph: what are we trying to learn, and why?
```
## Objective
Research and document how [domain] systems handle [specific problem].
The goal is to understand [what patterns exist / how others solve this]
to inform [our own design / a decision / a recommendation].
```

3-9 specific questions that drive the research. These are NOT "tell me about X" — they're decision-forcing questions:
```
### Key Design Questions
1. **[Specific design decision]**: How do different systems handle [X]?
   Is the approach [A-centric or B-centric]?
2. **[Trade-off]**: When [scenario], do systems choose [approach A] or
   [approach B]? What are the consequences?
3. **[Gap detection]**: What capabilities do users need that current
   systems don't provide?
```

Bad questions: "What is X?" / "Tell me about Y" (too vague, no decision to inform)
Good questions: "Is the data model person-centric or credential-centric?" (forces comparison)
List all systems with URLs and categories:
```
### Systems to Research

| System | URL | Category |
| --- | --- | --- |
| System A | https://... | Category 1 |
| System B | https://... | Category 1 |
| System C | https://... | Category 2 (adjacent) |
```

Include adjacent/analogous systems from related domains — they often have the best ideas.
Ordered by priority — what information matters most:
```
### What to Look For (priority order)
1. **Data model** — Core entities, relationships, cardinality
2. **Terminology** — What words does each system use for the same concepts?
3. **Workflows** — Key user journeys, step by step
4. **Integration model** — APIs, webhooks, native integrations
5. **Strengths and limitations** — What works, what doesn't, what's missing
```

Define the structure every report should follow:
```
### Per-System Report Structure
1. System Overview (what, who, positioning)
2. Glossary (system term → generic concept → description)
3. Data Model (entity-relationship diagram + written description)
4. Key Workflows (as diagrams)
5. Strengths & Limitations
6. Sources (all URLs consulted)
```

Require confidence annotations on every section:
- 🟢 High — API docs, schema, or primary source available
- 🟡 Medium — Inferred from help docs, UI screenshots, or demos
- 🔴 Low — Guessed from marketing copy or reviews
"Data model: Not publicly documented. Inferred from UI screenshots and help articles." is more useful than a confident-sounding fabrication.
Each system can be researched independently and in parallel. There are no dependencies between systems.
For each system in the research brief:
Launch a sub-agent with:
- The system's entry URL
- The research brief's "What to Look For" section
- The per-system report template
- Instructions to use web search, web fetch, and any available tools
- The confidence marking requirements
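Because the per-system reports have no dependencies on each other, dispatch can be genuinely parallel. A minimal Python sketch, where `research_system` is a hypothetical stand-in for launching one real sub-agent:

```python
from concurrent.futures import ThreadPoolExecutor

def research_system(system):
    """Stand-in for one sub-agent run. A real implementation would drive
    WebSearch/WebFetch against system["url"] and return a report that
    follows the per-system template, with confidence markers."""
    return {"system": system["name"], "report": f"findings for {system['name']}"}

def dispatch_all(systems):
    """Launch one agent per system at once; collect all reports."""
    with ThreadPoolExecutor(max_workers=len(systems)) as pool:
        return list(pool.map(research_system, systems))
```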
Each agent should use these tools in priority order:
| Priority | Source Type | Tool | What to Extract |
|---|---|---|---|
| 1 | API docs / OpenAPI specs | WebFetch on docs URL | Entity names, field types, relationships, endpoint shapes |
| 2 | Help center / knowledge base | WebFetch on help articles, follow links 2-3 deep | Workflows ("How to set up X"), terminology, form fields |
| 3 | Open source code / SDKs | Bash with `gh api` / `gh repo view` | Type definitions, schema files, entity structures |
| 4 | Third-party analysis | WebSearch for "[system] review", "[system] vs [competitor]" | Features, limitations, user pain points |
| 5 | Product pages | WebFetch on main site | Positioning, pricing, feature lists |
| 6 | Demo videos | WebSearch for "[system] demo" on YouTube | UI structure, workflows invisible in docs |
| 7 | Job postings | WebSearch for "[system] engineer" | Tech stack, internal terminology |
Concrete patterns:
```
# Find API docs
WebSearch: "[system name] API documentation"
WebSearch: "[system name] OpenAPI swagger"

# Deep-read help center
WebFetch: https://docs.example.com/getting-started
# Then follow links found in the content

# Check GitHub for schemas/types
WebSearch: "[system name] github"
# If repo found:
gh api repos/org/repo/contents/src/types --jq '.[].name'

# Find reviews with specific details
WebSearch: "[system name] review G2 Capterra 2025"
WebSearch: "[system name] vs [competitor] comparison"
```

Each agent produces a standalone report following the template. Reports should be:
- Self-contained — Readable without context from other reports
- Evidence-linked — Every claim links to a source URL
- Confidence-marked — Every section has a 🟢🟡🔴 marker
- Honest about gaps — "Unknown" > plausible guess
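These report requirements are mechanically checkable. A minimal sketch, assuming a hypothetical report format where each `## ` section should carry a confidence marker and cite at least one URL:

```python
CONFIDENCE_MARKS = ("🟢", "🟡", "🔴")

def report_gaps(report_text):
    """Flag sections that violate the reporting rules: no confidence
    marker, or no source URL anywhere in the section."""
    gaps = []
    for section in report_text.split("## ")[1:]:
        title = section.splitlines()[0]
        if not any(mark in section for mark in CONFIDENCE_MARKS):
            gaps.append(f"{title}: missing confidence marker")
        if "http" not in section:
            gaps.append(f"{title}: no source URL cited")
    return gaps
```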
After all individual reports are complete, synthesize across systems.
For each Key Design Question from the brief:
```
## Question 1: [The question]

| System | Approach | Details | Confidence |
| --- | --- | --- | --- |
| System A | Approach X | [specifics] | 🟢 |
| System B | Approach Y | [specifics] | 🟡 |
| System C | Approach X (variant) | [specifics] | 🟡 |

**Pattern**: Most systems use Approach X because [reason].
System B's Approach Y is interesting because [reason] but has the downside of [limitation].
**Recommendation**: [Specific recommendation with reasoning]
```

Look across all reports for:
- Convergence — Where do most systems agree? This is likely the right approach.
- Divergence — Where do systems disagree? This is where interesting design decisions live.
- Gaps — What do users need that no system provides well? This is opportunity.
- Terminology patterns — What words does the industry use? Adopt the dominant vocabulary unless there's a good reason not to.
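Convergence vs. divergence can be tallied directly from the per-question comparison tables. A minimal sketch (input shape is illustrative: one `{system: approach}` mapping per key design question):

```python
from collections import Counter

def classify_question(approaches):
    """Label one key design question: convergence when a clear majority of
    systems share an approach, divergence otherwise."""
    counts = Counter(approaches.values())
    dominant, votes = counts.most_common(1)[0]
    if votes > len(approaches) / 2:
        return {"pattern": "convergence", "dominant": dominant}
    return {"pattern": "divergence", "split": dict(counts)}
```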
```
| Capability | System A | System B | System C |
| --- | --- | --- | --- |
| Feature 1 | ✅ Full support | ⚠️ Partial | ❌ Missing |
| Feature 2 | ✅ | ✅ | ✅ |
| Feature 3 | ❌ | ✅ Best in class | ⚠️ |
```

Research is only valuable if it produces decisions. The final output should include:
- Top 3-5 findings that should influence design decisions
- Feature matrix comparison table
- Recommended approach for each Key Design Question, with citations
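Assembling the feature matrix from per-system findings is mechanical once reports use a shared vocabulary. A minimal sketch with made-up support levels:

```python
SUPPORT = {"full": "✅", "partial": "⚠️", "missing": "❌"}

def feature_matrix(findings):
    """Render a markdown feature matrix from {system: {feature: level}}.
    Features absent from a system's findings are treated as missing."""
    systems = sorted(findings)
    features = sorted({f for levels in findings.values() for f in levels})
    rows = ["| Capability | " + " | ".join(systems) + " |",
            "| --- |" + " --- |" * len(systems)]
    for feature in features:
        cells = [SUPPORT[findings[s].get(feature, "missing")] for s in systems]
        rows.append(f"| {feature} | " + " | ".join(cells) + " |")
    return "\n".join(rows)
```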
For each key decision:
```
### Recommendation: [Decision]
**Adopt**: [Specific approach], as used by [System A] and [System C]
**Why**: [Reasoning based on evidence from research]
**Avoid**: [Alternative approach] because [evidence-based reasoning]
**Open question**: [What we still don't know and how to find out]
```

When research informs a system you're building, extend findings into an entity-relationship sketch:
```
## Proposed Data Model (informed by research)
Based on [System A]'s approach to [concept] and [System C]'s handling of [concept]:
[Mermaid erDiagram or written description]

Key decisions:
- [Entity X] is the central entity because [evidence from research]
- [Relationship Y] uses [pattern] based on [System B]'s approach
```

```
## Feature Spec: [Feature Name]

### Background
[Summary of research findings relevant to this feature]

### Requirements (informed by research)
- [Requirement derived from competitive analysis]
- [Requirement addressing gap identified in research]

### Design
[Design decisions justified by research findings]
```

For validation before committing to full implementation:

```
## Prototype Scope
Based on research, validate these hypotheses with a minimal prototype:
1. [Hypothesis derived from research] → Build: [minimal UI/API to test]
2. [Hypothesis] → Build: [minimal test]
```

A completed research project must have:
- Research brief with clear objectives and key design questions
- Individual reports for every system (no gaps, no skipped systems)
- Confidence markers on every section of every report
- Cross-system comparison answering every key design question
- Feature matrix with all systems and capabilities
- Specific, actionable recommendations (not just "it depends")
- Honest handling of unknowns ("we couldn't determine X" > guessing)
- All sources cited with URLs
| Anti-Pattern | Why It's Bad | Do Instead |
|---|---|---|
| Starting research without a brief | You'll waste time on irrelevant details | Write the brief first, even if brief |
| Researching systems sequentially | 5x slower than parallel | Dispatch all agents at once |
| Over-researching one system | Depth without breadth | Equal effort per system, synthesis is where value lives |
| Fabricating when info is unavailable | Undermines trust in all findings | Mark confidence 🔴, state what's unknown, move on |
| "It depends" conclusions | Not actionable | Recommend a specific approach with reasoning |
| Skipping adjacent domains | Misses the best ideas | Always include 2-3 analogous systems from related industries |