
@eonist
Created August 21, 2025 15:12
mcp_info_tool.md

Short answer

  • There isn’t a standardized “usage_best_practice” command in the MCP spec that tells an LLM how to use an entire MCP server as a whole. The protocol defines primitives (tools, resources, prompts, progress, cancellation, logging, security), but no reserved command dedicated to server-wide best-practice guidance.[1][2][3]
  • Some community guidance recommends providing an explicit “info” or similar tool that returns version, health, configuration, and usage guidance for a whole MCP server, but this is a convention, not a spec-mandated standard.[4]
  • Enterprise and vendor guides discuss broader MCP “best practices” (architecture, security, latency, semantic layers), but these are deployment patterns, not a protocol-level “usage_best_practice” command.[5][6]

What the MCP spec covers (and doesn’t)

  • The current MCP specification (e.g., 2025-06-18) defines capabilities like configuration, progress tracking, cancellation, error reporting, logging, and a strong section on security/user consent, but it does not define a dedicated, standardized tool name or command for “usage best practices” at server scope.[2]
  • Official docs/intros frame MCP as a protocol for exposing tools/resources/prompts to models, without prescribing a global “how to use this server” command pattern.[7][3][1]

Common practice you can adopt

  • Community best-practices suggest adding an “info” tool/subcommand that returns: server version, dependency status checks, and configuration issues—often used to give operators and LLMs a single place to learn how to interact with the server. This is recommended guidance rather than a standard, but it’s widely useful for whole-server usage help.[4]
  • Broader operational “best practices” exist (e.g., centralizing MCP server architecture, entity-scoped data guardrails, latency focus, semantic layer exposure), but these pertain to design/deployment and not to a specific MCP command name in the protocol.[6][5]

Practical recommendation

  • If whole-server usage guidance is needed, implement an explicit tool (e.g., “server.info”, “help”, or “usage”) that returns:
    • What the server does overall and intended usage patterns.
    • Tool catalog summary with when-to-use guidance.
    • Version/build info and environment checks.
    • Auth/permission model explanation and safety constraints.
    • Links or inline docs for more details.
  • Document this tool in the server’s tool descriptions so clients/agents can reliably discover it. This aligns with community guidance on “info” while staying within the spec.[2][4]
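As a concrete sketch of this recommendation, the snippet below assembles such a payload as one JSON string using only the standard library. It is SDK-agnostic and illustrative: the field names follow the convention above, not any MCP spec requirement, and the values are placeholders to adapt per server.

```python
import json
from datetime import date

def build_server_info() -> str:
    """Assemble a whole-server usage payload as a single JSON string.

    Field names follow the "info tool" convention discussed above;
    none of them are mandated by the MCP spec, so adapt freely.
    """
    info = {
        "name": "Example MCP Server",              # human-friendly name
        "version": "0.1.0",                        # semver or Git SHA
        "last_updated": date.today().isoformat(),  # ISO 8601
        "summary": "What the server does overall and intended usage patterns.",
        "tools": [
            {"name": "example.search", "use_when": "Discover items by metadata"},
        ],
        "safety": {"user_confirmation": [], "rate_limits": "100 req/min"},
        "docs": "See the server README for full details.",
    }
    return json.dumps(info, indent=2)

# Whichever MCP framework you use, register this string as the result of
# a tool named e.g. "server.info" so clients can reliably discover it.
payload = json.loads(build_server_info())
assert payload["version"] == "0.1.0"
```

Returning a string (rather than framework-specific structured output) keeps the pattern portable across clients that only render plain tool results.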



eonist commented Aug 21, 2025

Goal

Provide a single, discoverable tool response that explains how to use an MCP server as a whole (scope, patterns, safety, and constraints), since the MCP spec does not define a standard “usage_best_practice” command and this pattern is a convention rather than a protocol requirement.[1][2][3][4]

Structure for the response text

Use a concise, fixed schema so agents and humans can both parse it. A practical pattern is a top-level summary plus machine-friendly sections. Keep it under ~800–1,000 words to avoid truncation in clients.

  1. Title and version
  • name: Human-friendly server name.
  • version: Semantic version or Git SHA.
  • last_updated: ISO 8601.
  2. Purpose and scope
  • What the server does overall.
  • Primary scenarios and constraints (e.g., read-only vs mutating).
  • Data domains covered.
  3. Quick-start usage patterns
  • 3–5 bullets with “When to use X tool” guidance.
  • Show minimal request/response shapes or example invocations for common flows.
  • Call out sequencing if workflows depend on a particular order.
  4. Capabilities map (catalog summary)
  • Tools: list name → 1–2 lines “Use when …; returns …; important args …; caveats …”
  • Resources: identifiers, read/write policy, typical size/format.
  • Prompts: names, intended use, guardrails.
  5. Safety, auth, and consent
  • What requires user confirmation; what is auto-approved.
  • Data boundaries (tenancy, PII handling) and redaction/masking behavior.
  • Rate limits and quotas relevant to safety.
  6. Error handling and recovery
  • Common error codes/messages and what to do next.
  • Idempotency notes and retry guidance (including backoff hints if applicable).
  • Partial results and progress semantics.
  7. Performance guidance
  • Typical latency ranges per tool and size limits (e.g., max items, max bytes).
  • Batch vs single-call tradeoffs; pagination instructions.
  8. Configuration and environment
  • Required environment variables or upstream dependencies.
  • Feature flags that change behavior.
  • Version compatibility notes if clients need certain spec features.
  9. Examples
  • 2–3 end-to-end mini flows showing recommended sequences across tools.
  • Include expected outputs at a high level, not full payload dumps.
  10. Changelog highlights
  • Breaking changes and deprecations users should know about.
  11. Contact and support
  • Where to report issues or find fuller docs.

This mirrors common “info/help” tool conventions seen in community best-practice writeups while staying within MCP’s defined primitives rather than adding a new protocol verb.[2][4][1]

Example skeleton (JSON-text payload)

Return structured text (e.g., JSON) so agents can extract fields programmatically, with a human-readable summary inside. Example:

```json
{
  "name": "DataOps MCP Server",
  "version": "1.4.2",
  "last_updated": "2025-08-20",
  "summary": "This server helps discover, validate, and export analytics datasets across the org.",
  "scope": {
    "domains": ["catalog", "validation", "export"],
    "mutations": false,
    "tenancy": "per-project isolation; no cross-tenant reads"
  },
  "quick_start": [
    "Use tools.catalog.search to find datasets by tag or owner; paginate if count>100.",
    "Use tools.validation.run for schema and freshness checks; prefer batch mode for >10 datasets.",
    "Use tools.export.create to request signed download URLs; exports expire in 24h."
  ],
  "capabilities": {
    "tools": [
      {
        "name": "catalog.search",
        "use_when": "Need to discover datasets by metadata",
        "returns": "List<DatasetSummary>",
        "key_args": ["query", "limit", "cursor"],
        "caveats": "Results capped at 1,000; use cursor for pagination"
      },
      {
        "name": "validation.run",
        "use_when": "Need to validate schema and SLAs",
        "returns": "ValidationJob with progress updates",
        "key_args": ["dataset_ids", "ruleset"],
        "caveats": "Batch up to 50 datasets per job"
      },
      {
        "name": "export.create",
        "use_when": "Need temporary download links",
        "returns": "ExportJob + signed URLs on completion",
        "key_args": ["dataset_id", "format"],
        "caveats": "Links expire after 24h; max 5GB per request"
      }
    ],
    "resources": [
      { "uri": "res://catalog/{id}", "access": "read", "format": "application/json" }
    ],
    "prompts": [
      { "name": "generate_quality_report", "use_when": "Summarize validation results for stakeholders" }
    ]
  },
  "safety": {
    "user_confirmation": ["export.create"],
    "data_boundaries": "No PII; masked fields are labeled",
    "rate_limits": "100 req/min per API key"
  },
  "errors": [
    { "code": "RATE_LIMIT", "action": "Retry with exponential backoff (200–1,600ms)" },
    { "code": "NOT_FOUND", "action": "Re-run catalog.search; verify dataset_id" }
  ],
  "performance": {
    "latency": {
      "catalog.search": "100–300ms typical",
      "validation.run": "async; progress events every 2–5s"
    },
    "limits": { "max_items": 1000, "max_batch": 50 }
  },
  "config": {
    "env": ["MCP_API_KEY", "MCP_PROJECT_ID"],
    "feature_flags": ["use_async_validation"]
  },
  "examples": [
    {
      "goal": "Find and validate a dataset",
      "steps": [
        "tools.catalog.search { query:'owner:analytics tag:core' }",
        "tools.validation.run { dataset_ids:[...] }",
        "wait for progress:complete"
      ]
    }
  ],
  "changelog": [
    "1.4.0: export.create now requires user confirmation"
  ],
  "support": {
    "issues": "internal ticketing",
    "owner": "Data Platform Team"
  }
}
```
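On the agent side, a payload shaped like the example above can be consumed programmatically. A minimal stdlib-only sketch (the payload below reuses a subset of the example's fields, which are this document's convention rather than a spec requirement):

```python
import json

# Parse a subset of the example "server.info" payload.
payload = json.loads("""
{
  "name": "DataOps MCP Server",
  "version": "1.4.2",
  "capabilities": {
    "tools": [
      {"name": "catalog.search", "use_when": "Need to discover datasets by metadata"},
      {"name": "export.create", "use_when": "Need temporary download links"}
    ]
  },
  "safety": {"user_confirmation": ["export.create"]}
}
""")

# Routing table: tool name -> when-to-use hint, usable for tool selection.
routing = {t["name"]: t["use_when"] for t in payload["capabilities"]["tools"]}

# Tools that require explicit user approval before invocation.
needs_consent = set(payload["safety"]["user_confirmation"])

assert "export.create" in needs_consent
assert routing["catalog.search"].startswith("Need to discover")
```

Because the payload is plain JSON text, the same response also degrades gracefully for clients that simply display the string verbatim.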

Delivery options

  • Tool response format: Return the JSON above as a single string result from a “server.info” or “help” tool so clients can show it verbatim or parse sections.
  • Plain text fallback: If JSON is overkill, mirror the same sections as headings with bullet points.
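For the plain-text fallback, the same sections can be rendered as headings with bullets. A sketch (field names assumed from the JSON example above):

```python
def render_plain_text(info: dict) -> str:
    """Mirror the JSON payload's sections as plain-text headings with bullets."""
    lines = [f"{info['name']} v{info['version']}"]
    lines += ["", "Summary", f"  - {info['summary']}"]
    if info.get("quick_start"):
        lines += ["", "Quick start"]
        lines += [f"  - {step}" for step in info["quick_start"]]
    return "\n".join(lines)

text = render_plain_text({
    "name": "DataOps MCP Server",
    "version": "1.4.2",
    "summary": "Discover, validate, and export analytics datasets.",
    "quick_start": ["Use catalog.search to find datasets by tag or owner."],
})
assert text.splitlines()[0] == "DataOps MCP Server v1.4.2"
```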

This approach keeps the guidance discoverable and consistent, leveraging the MCP tool/resource/prompt model while providing whole-server usage patterns per community convention rather than a spec-level command.[4][1][2]

Footnotes

  1. https://modelcontextprotocol.io/specification/2025-06-18

  2. https://docs.anthropic.com/en/docs/mcp

  3. https://modelcontextprotocol.io

  4. https://steipete.me/posts/2025/mcp-best-practices


eonist commented Aug 21, 2025

Short answer

  • There is no single, formal, prescriptive “workflow UX” guideline in the MCP spec itself; instead, the specification provides principles and implementation guidance (consent, privacy, tool safety, sampling controls, progress, cancellation, logging) that shape how workflows and interactions should be designed.[1]
  • The official docs outline client concepts and a user interaction model (e.g., elicitation and how clients present requests and obtain approval), which function as de facto guidelines for interaction design, but they stop short of prescribing fixed workflows.[2][1]
  • Additional community and vendor materials offer practical best practices (tool budgeting, prompts-as-macros, packaging and testing servers) that inform MCP-based workflow design in real products.[3]

What the MCP spec provides that affects workflow/UX design

  • Security and Trust & Safety principles: explicit user consent, clear UI for reviewing/authorizing activities, protecting data access, and treating tool annotations as untrusted unless from trusted servers.[1]
  • Tool safety and invocation flow: hosts must obtain explicit user consent before invoking tools; users should understand each tool before authorizing its use.[1]
  • LLM sampling controls: users explicitly approve sampling, control whether sampling happens, the exact prompt sent, and what results the server can see; the protocol limits server visibility into prompts, shaping UI flows for approvals.[1]
  • Operational affordances for UX: configuration, progress tracking, cancellation, error reporting, and logging are first-class in the spec, guiding how to implement robust, user-friendly interactions and recoverable workflows.[1]
  • Elicitation and roots: the spec defines patterns like elicitation (server requests for additional info) and roots (scoping filesystem/URI access), which directly inform stepwise, transparent user interactions and permissions scoping in clients.[1]

Official docs on client-side interaction patterns

  • Client Concepts describe a “User Interaction Model,” including how clients present requests, collect consent, and keep interactions clear and contextual, providing a framework for designing MCP UX even though it’s not a rigid workflow prescription.[2]

Practitioner guidance and conventions

  • Tool budget and abstraction: avoid exposing one tool per API endpoint; group capabilities into fewer, higher-level tools and use MCP prompts like macros to chain multi-step calls behind a single intent, reducing cognitive load and improving UX.[3]
  • Packaging and testing guidance: containerizing MCP servers, security attestations, and submission practices translate into more reliable deployed workflows and predictable user experiences.[3]
  • Ecosystem tutorials and overviews explain typical capability patterns (resources, prompts, tools; client/server transports), which help structure interactions but are not normative standards for UX.[4][5][6]

Practical design recommendations (aligned with spec and ecosystem)

  • Consent-first flows: present clear summaries of what a tool will do, required data, and side effects; require explicit approval before execution, and show scoped roots/permissions to reinforce user control.[2][1]
  • Progressive disclosure: use elicitation for incremental information gathering instead of front-loading complex forms; pair with progress tracking and cancellation to keep users in control during longer operations.[2][1]
  • Minimize surface area: design a compact toolset aligned to key user intents; use prompts to orchestrate multi-step operations internally rather than exposing many fine-grained tools.[3]
  • Transparent sampling controls: when the server requests LLM sampling, surface the exact prompt, allow edits, and clearly indicate what the server can see and store, matching the protocol’s sampling control principles.[1]
  • Error and recovery: implement structured error reporting, retries where safe, and user-facing logs to explain failures and next steps; keep cancellation always available during long-running tasks.[1]
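The consent-first recommendation above can be sketched as a client-side invocation gate. This is an illustrative pattern, not an MCP API: `confirm` stands in for whatever approval UI the host provides, and `needs_consent` would come from tool annotations or server documentation.

```python
def invoke_with_consent(tool_name, args, execute, confirm, needs_consent):
    """Gate tool execution behind explicit user approval when required."""
    if tool_name in needs_consent:
        summary = f"{tool_name} will run with args {args}"  # shown to the user
        if not confirm(summary):  # user declined: never execute the tool
            return {"status": "cancelled", "tool": tool_name}
    return {"status": "ok", "result": execute(tool_name, args)}

result = invoke_with_consent(
    "export.create",
    {"dataset_id": "d1"},
    execute=lambda name, a: f"ran {name}",
    confirm=lambda s: True,  # simulated approval dialog
    needs_consent={"export.create"},
)
assert result == {"status": "ok", "result": "ran export.create"}
```

Keeping the gate in the client (rather than trusting the server to self-police) matches the spec's principle that hosts obtain consent before tool invocation.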

In sum, MCP does not impose a fixed “workflow guideline,” but the specification’s consent, safety, and control requirements, plus client interaction concepts, form a clear framework to design respectful, transparent MCP-based user interactions; community best practices further suggest consolidating tools and using prompts to streamline multi-step workflows.[2][3][1]

