Some notes on AI Agent Rule / Instruction / Context files / etc.
- https://llmstxt.org/
  - The `/llms.txt` file - A proposal to standardise on using an `/llms.txt` file to provide information to help LLMs use a website at inference time.
- https://llmstxt.org/#proposal
  - Proposal

    We propose adding a `/llms.txt` markdown file to websites to provide LLM-friendly content. This file offers brief background information, guidance, and links to detailed markdown files. `llms.txt` markdown is human and LLM readable, but is also in a precise format allowing fixed processing methods (i.e. classical programming techniques such as parsers and regex).

    We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with `.md` appended. (URLs without file names should append `index.html.md` instead.)
  - This proposal does not include any particular recommendation for how to process the `llms.txt` file, since it will depend on the application. For example, the FastHTML project opted to automatically expand the `llms.txt` into two markdown files with the contents of the linked URLs, using an XML-based structure suitable for use in LLMs such as Claude. The two files are: `llms-ctx.txt`, which does not include the optional URLs, and `llms-ctx-full.txt`, which does include them. They are created using the `llms_txt2ctx` command line application, and the FastHTML documentation includes information for users about how to use them.
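Per the proposal, an `/llms.txt` file is a small markdown document with a fixed skeleton: one H1 with the site or project name, an optional blockquote summary, free-form details, then H2-delimited link-list sections (a section named `Optional` marks links that can be skipped when a shorter context is wanted). A sketch with invented content:

```markdown
# Example Project

> One-paragraph summary that helps an LLM decide whether the
> linked files are worth fetching.

Any extra background goes here as plain markdown.

## Docs

- [Quick start](https://example.com/quickstart.md): Install and first run
- [API reference](https://example.com/api.md): Endpoint details

## Optional

- [Changelog](https://example.com/changelog.md)
```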
- https://llmstxt.org/#format
  - Format
- https://llmstxt.org/#existing-standards
  - Existing Standards
- https://github.com/AnswerDotAI/llms-txt
  - The `/llms.txt` file, helping language models use your website
- https://llmstxt.site/
  - llms.txt directory
  - A list of all `llms.txt` file locations across the web, with stats. The list is derived from the llmstxt.org standard.
- https://github.com/krish-adi/llmstxt-site
  - llmstxt-site
  - This is a centralized directory of all `/llms.txt` files available online. The `/llms.txt` file is a proposed standard for websites to provide concise and structured information to help large language models (LLMs) efficiently use website content during inference time. Contributions are the backbone of this repository's success. Let's work together to build a comprehensive resource for `/llms.txt` files and advance the adoption of this standard for LLM-friendly content!
- https://directory.llmstxt.cloud/
  - `/llms.txt` directory
  - A curated directory of products and companies leading the adoption of the llms.txt standard.
- See Also (?):
- https://www.anthropic.com/engineering/claude-code-best-practices
  - Claude Code: Best practices for agentic coding
  - Claude Code is a command line tool for agentic coding. This post covers tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.
- https://www.anthropic.com/engineering/claude-code-best-practices#1-customize-your-setup
  - Customize your setup
  - Claude Code is an agentic coding assistant that automatically pulls context into prompts. This context gathering consumes time and tokens, but you can optimize it through environment tuning.
  - Create `CLAUDE.md` files

    `CLAUDE.md` is a special file that Claude automatically pulls into context when starting a conversation. This makes it an ideal place for documenting:
    - Common bash commands
    - Core files and utility functions
    - Code style guidelines
    - Testing instructions
    - Repository etiquette (e.g., branch naming, merge vs. rebase, etc.)
    - Developer environment setup (e.g., pyenv use, which compilers work)
    - Any unexpected behaviors or warnings particular to the project
    - Other information you want Claude to remember

    There's no required format for `CLAUDE.md` files. We recommend keeping them concise and human-readable.
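A minimal `CLAUDE.md` covering a few of the suggested categories might look like this (the commands, style rules, and workflow notes are invented for illustration):

```markdown
# Bash commands
- npm run build: Build the project
- npm run typecheck: Run the typechecker

# Code style
- Use ES modules (import/export) syntax, not CommonJS (require)
- Destructure imports when possible

# Workflow
- Typecheck after finishing a series of code changes
- Prefer running single tests, not the whole suite, for speed
```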
  - You can place `CLAUDE.md` files in several locations:
    - The root of your repo, or wherever you run `claude` from (the most common usage). Name it `CLAUDE.md` and check it into git so that you can share it across sessions and with your team (recommended), or name it `CLAUDE.local.md` and `.gitignore` it
    - Any parent of the directory where you run `claude`. This is most useful for monorepos, where you might run `claude` from `root/foo`, and have `CLAUDE.md` files in both `root/CLAUDE.md` and `root/foo/CLAUDE.md`. Both of these will be pulled into context automatically
    - Any child of the directory where you run `claude`. This is the inverse of the above, and in this case, Claude will pull in `CLAUDE.md` files on demand when you work with files in child directories
    - Your home folder (`~/.claude/CLAUDE.md`), which applies it to all your claude sessions

    When you run the `/init` command, Claude will automatically generate a `CLAUDE.md` for you.
  - Tune your `CLAUDE.md` files

    Your `CLAUDE.md` files become part of Claude's prompts, so they should be refined like any frequently used prompt. A common mistake is adding extensive content without iterating on its effectiveness. Take time to experiment and determine what produces the best instruction following from the model.

    You can add content to your `CLAUDE.md` manually or press the `#` key to give Claude an instruction that it will automatically incorporate into the relevant `CLAUDE.md`. Many engineers use `#` frequently to document commands, files, and style guidelines while coding, then include `CLAUDE.md` changes in commits so team members benefit as well.

    At Anthropic, we occasionally run `CLAUDE.md` files through the prompt improver and often tune instructions (e.g. adding emphasis with "IMPORTANT" or "YOU MUST") to improve adherence.
  - Curate Claude's list of allowed tools
  - There are four ways to manage allowed tools:
    - ..snip..
    - Manually edit your `.claude/settings.json` or `~/.claude.json` (we recommend checking the former into source control to share with your team).
    - ..snip..
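A shared `.claude/settings.json` can pre-approve tools via a `permissions` block; a sketch (the specific rules below are invented examples; check the Claude Code settings docs for the exact schema):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Edit"
    ],
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```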
- https://www.anthropic.com/engineering/claude-code-best-practices#2-give-claude-more-tools
  - Give Claude more tools
  - Use Claude with MCP

    Claude Code functions as both an MCP server and client. As a client, it can connect to any number of MCP servers to access their tools in three ways:
    - **In project config** (available when running Claude Code in that directory)
    - **In global config** (available in all projects)
    - In a checked-in `.mcp.json` file (available to anyone working in your codebase). For example, you can add Puppeteer and Sentry servers to your `.mcp.json`, so that every engineer working on your repo can use these out of the box.
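For the Puppeteer example, a checked-in `.mcp.json` would look roughly like this (the `mcpServers` map of `command`/`args` entries is the common MCP config shape; verify the package name against the server's own README):

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```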
  - Use custom slash commands

    For repeated workflows - debugging loops, log analysis, etc. - store prompt templates in Markdown files within the `.claude/commands` folder. These become available through the slash commands menu when you type `/`. You can check these commands into git to make them available for the rest of your team. Custom slash commands can include the special keyword `$ARGUMENTS` to pass parameters from command invocation.
  - Putting the above content into `.claude/commands/fix-github-issue.md` makes it available as the `/project:fix-github-issue` command in Claude Code. You could then for example use `/project:fix-github-issue 1234` to have Claude fix issue #1234. Similarly, you can add your own personal commands to the `~/.claude/commands` folder for commands you want available in all of your sessions.
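The "above content" refers to an example prompt in Anthropic's post that isn't reproduced in these notes; a command file in the same spirit, saved as `.claude/commands/fix-github-issue.md`, might look like this (steps invented for illustration):

```markdown
Please analyze and fix the GitHub issue: $ARGUMENTS.

1. Use `gh issue view` to get the issue details
2. Search the codebase for the relevant files
3. Implement the fix and add a regression test
4. Commit with a message that references the issue number
```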
- See Also (?):
- https://modelcontextprotocol.io/quickstart/user
  - For Claude Desktop Users
  - Get started using pre-built servers in Claude for Desktop.
- https://modelcontextprotocol.io/quickstart/user#2-add-the-filesystem-mcp-server
  - This will create a configuration file at:
    - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
    - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
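The quickstart then has you add the filesystem server to that file, approximately like this (the allowed directory path is a placeholder you replace with your own):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop"
      ]
    }
  }
}
```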
- https://hub.continue.dev/
- https://docs.continue.dev/
  - Continue enables developers to create, share, and use custom AI code assistants with our open-source VS Code and JetBrains extensions and hub of models, rules, prompts, docs, and other building blocks
- https://docs.continue.dev/reference
  - `config.yaml` Reference
  - Continue hub assistants are defined using the `config.yaml` specification. Assistants can be loaded from the Hub or locally:
    - Continue Hub - YAML is stored on the hub and automatically synced to the extension
    - Locally
      - in your global `.continue` folder (`~/.continue` on Mac, `%USERPROFILE%\.continue`) within `.continue/assistants`. The name of the file will be used as the display name of the assistant, e.g. `My Assistant.yaml`
      - in your workspace in a `/.continue/assistants` folder, with the same naming convention
  - Config YAML replaces `config.json`, which is deprecated. View the Migration Guide.
  - An assistant is made up of:
    1. Top level properties, which specify the `name`, `version`, and config.yaml `schema` for the assistant
    2. Block lists, which are composable arrays of coding assistant building blocks available to the assistant, such as models, docs, and context providers.
  - A block is a single standalone building block of a coding assistant, e.g., one model or one documentation source. In config.yaml syntax, a block consists of the same top-level properties as assistants (`name`, `version`, and `schema`), but only has ONE item under whichever block type it is. Examples of blocks and assistants can be found on the Continue hub.
  - Assistants can either explicitly define blocks - see Properties below - or import and configure existing hub blocks.
- https://docs.continue.dev/reference#local-blocks
  - Local Blocks

    It is also possible to define blocks locally in a `.continue` folder. This folder can be located at either the root of your workspace (these will automatically be applied to all assistants when you are in that workspace) or in your home directory at `~/.continue` (these will automatically be applied globally).

    Place your YAML files in the following folders:
    - Assistants:
      - `.continue/assistants` - for assistants
    - Blocks:
      - `.continue/rules` - for rules
      - `.continue/models` - for models
      - `.continue/prompts` - for prompts
      - `.continue/context` - for context providers
      - `.continue/docs` - for docs
      - `.continue/data` - for data
      - `.continue/mcpServers` - for MCP Servers

    You can find many examples of each of these block types on the Continue Explore Page
- https://docs.continue.dev/reference#complete-yaml-config-example
  - Complete YAML Config Example
  - Putting it all together, here's a complete example of a `config.yaml` configuration file
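The full example lives in the linked reference; an abbreviated sketch of the shape (the model entry, rule text, and secret reference are illustrative, not authoritative):

```yaml
name: My Assistant
version: 0.0.1
schema: v1

models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles:
      - chat
      - edit

rules:
  - Prefer TypeScript over JavaScript for new files

context:
  - provider: file
  - provider: diff
```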
- https://docs.continue.dev/blocks/models
  - Model Blocks

    These blocks form the foundation of the entire assistant experience, offering different specialized capabilities:
    - Chat: Power conversational interactions about code and provide detailed guidance
    - Edit: Handle complex code transformations and refactoring tasks
    - Apply: Execute targeted code modifications with high accuracy
    - Autocomplete: Provide real-time suggestions as developers type
    - Embedding: Transform code into vector representations for semantic search
    - Reranker: Improve search relevance by ordering results based on semantic meaning
- https://docs.continue.dev/blocks/context-providers
  - Context Blocks
  - These blocks determine what internal information your AI assistant can access
- https://docs.continue.dev/blocks/rules
  - Rules Blocks

    Think of these as the guardrails for your AI coding assistants:
    - Enforce company-specific coding standards and security practices
    - Implement quality checks that match your engineering culture
    - Create paved paths for developers to follow organizational best practices
- https://docs.continue.dev/customize/deep-dives/rules#continuerules
  - `.continuerules`
  - You can create project-specific rules by adding a `.continuerules` file to the root of your project. This file is raw text and its full contents will be used as rules.
- https://docs.continue.dev/blocks/prompts
  - Prompt Blocks

    These are the specialized instructions that shape how models respond:
    - Define interaction patterns for specific tasks or frameworks
    - Encode domain expertise for particular technologies
    - Ensure consistent guidance aligned with organizational practices
    - Can be shared and reused across multiple assistants
    - Act as automated code reviewers that ensure consistency across teams
- https://docs.continue.dev/customize/deep-dives/prompts#local-prompt-files
  - Local `.prompt` files
  - In addition to Prompt blocks on the Hub, you can also define prompts in local `.prompt` files, located in the `.continue/prompts` folder at the top level of your workspace. This is useful for quick iteration on prompts to test them out before pushing up to the Hub.
  - Below is a quick example of setting up a prompt file:
    1. Create a folder called `.continue/prompts` at the top level of your workspace
    2. Add a file called `test.prompt` to this folder.
    3. Write the following contents to `test.prompt` and save.
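The contents of `test.prompt` aren't captured in these notes; a plausible minimal file, based on my understanding of the format (an optional YAML header separated from the body by `---`), would be something like:

```
name: test-prompt
description: A quick test prompt
---
Please review my current code changes and point out any obvious bugs.
```

Treat the header field names as an assumption and check them against Continue's prompt-file docs.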
- https://docs.continue.dev/customize/deep-dives/prompts#format
  - Format
  - The format is inspired by Humanloop's `.prompt` file, with additional templating to reference files, URLs, and context providers.
- https://docs.continue.dev/blocks/mcp
  - MCP Blocks

    Model Context Protocol servers provide specialized functionality:
    - Enable integration with external tools and systems
    - Create extensible interfaces for custom capabilities
    - Support more complex interactions with your development environment
    - Allow partners to contribute specialized functionality
    - Database Connectors: Understand schema and data models during development
- https://docs.cursor.com/context/rules
  - Rules
  - Control how the Agent model behaves with reusable, scoped instructions.

    Rules allow you to provide system-level guidance to the Agent and Cmd-K AI. Think of them as a persistent way to encode context, preferences, or workflows for your projects or for yourself.
  - We support three types of rules:
    - Project Rules: Stored in `.cursor/rules`, version-controlled and scoped to your codebase.
    - User Rules: Global to your Cursor environment. Defined in settings and always applied.
    - `.cursorrules` (Legacy): Still supported, but deprecated. Use Project Rules instead.
- https://docs.cursor.com/context/rules#project-rules
  - Project rules
  - Project rules live in `.cursor/rules`. Each rule is stored as a file and version-controlled. They can be scoped using path patterns, invoked manually, or included based on relevance.

    Use project rules to:
    - Encode domain-specific knowledge about your codebase
    - Automate project-specific workflows or templates
    - Standardize style or architecture decisions
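Cursor stores each project rule as an `.mdc` file: metadata (description, glob scoping, whether it always applies), then the rule text. A sketch; the field names reflect Cursor's docs as best I recall, and the globs and guidance are invented:

```
---
description: Conventions for React components
globs: src/components/**/*.tsx
alwaysApply: false
---

- Use function components with typed props
- Co-locate each component's test file next to it
```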
- https://docs.cursor.com/context/rules#cursorrules-legacy
  - `.cursorrules` (Legacy)
  - The `.cursorrules` file in the root of your project is still supported, but will be deprecated. We recommend migrating to the Project Rules format for more control, flexibility, and visibility.
- https://docs.cursor.com/context/ignore-files
  - Ignore Files
  - Control which files Cursor's AI features and indexing can access using `.cursorignore` and `.cursorindexingignore`
  - Cursor reads and indexes your project's codebase to power its features. You can control which directories and files Cursor can access by adding a `.cursorignore` file to your root directory.
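`.cursorignore` uses `.gitignore`-style patterns; for example (entries invented):

```
# Don't index build output or secrets
dist/
*.env
config/secrets/
```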
- https://docs.cursor.com/context/model-context-protocol
  - Model Context Protocol - Connect external tools and data sources to Cursor using the Model Context Protocol (MCP) plugin system
- https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers
  - The MCP configuration file uses a JSON format
- https://docs.cursor.com/context/model-context-protocol#configuration-locations
  - Configuration Locations

    You can place this configuration in two locations, depending on your use case:
    - Project Configuration: For tools specific to a project, create a `.cursor/mcp.json` file in your project directory. This allows you to define MCP servers that are only available within that specific project.
    - Global Configuration: For tools that you want to use across all projects, create a `~/.cursor/mcp.json` file in your home directory. This makes MCP servers available in all your Cursor workspaces.
- https://github.com/PatrickJS/awesome-cursorrules
  - Awesome CursorRules
  - A curated list of awesome `.cursorrules` files for enhancing your Cursor AI experience.
- https://humanloop.com/
  - Your AI product needs evals - The LLM evals platform for enterprises. Humanloop gives you the tools that top teams use to ship and scale AI with confidence.
- https://humanloop.com/docs/reference/prompt-file-format
  - Prompt file format
  - Our file format for serializing Prompts to store alongside your source code.
  - Our `.prompt` file format is a serialized representation of a Prompt, designed to be human-readable and suitable for checking into your version control systems alongside your code. This allows technical teams to maintain the source of truth for their prompts within their existing version control workflow.
- https://humanloop.com/docs/reference/prompt-file-format#format
  - Format
  - The format is heavily inspired by MDX, with model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates.
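Putting those two pieces together, a Humanloop `.prompt` file should look roughly like this; the exact header fields and tag names are assumptions to be checked against the linked reference, and the sketch only illustrates the YAML-header-plus-chat-template split:

```
---
model: gpt-4o
temperature: 0.7
---
<system>
You are a helpful assistant.
</system>
```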
- See Also (?):
- https://github.com/openai/codex
  - Lightweight coding agent that runs in your terminal
- https://github.com/openai/codex#memory--project-docs
  - Memory & Project Docs

    Codex merges Markdown instructions in this order:
    1. `~/.codex/instructions.md` - personal global guidance
    2. `codex.md` at repo root - shared project notes
    3. `codex.md` in cwd - sub-package specifics
- https://github.com/openai/codex#recipes
  - Recipes
  - Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the prompting guide for more tips and usage patterns.
- https://github.com/openai/codex/tree/main/codex-cli/examples
  - Quick start examples
  - This directory bundles some self-contained examples using the Codex CLI.
  - If you want to get started using the Codex CLI directly, skip this and refer to the prompting guide.
- https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md
  - Prompting guide
- https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md#custom-instructions
  - Custom instructions

    Codex supports two types of Markdown-based instruction files that influence model behavior and prompting:
    - `~/.codex/instructions.md` - Global, user-level custom guidance injected into every session. You should keep this relatively short and concise. These instructions are applied to all Codex runs across all projects and are great for personal defaults, shell setup tips, safety constraints, or preferred tools.
      - Example: "Before executing shell commands, create and activate a `.codex-venv` Python environment." or "Avoid running pytest until you've completed all your changes."
    - `CODEX.md` - Project-specific instructions loaded from the current directory or Git root. Use this for repo-specific context, file structure, command policies, or project conventions. These are automatically detected unless `--no-project-doc` or `CODEX_DISABLE_PROJECT_DOC=1` is set.
      - Example: "All React components live in `src/components/`".
- See Also (?):
- https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot
  - Adding repository custom instructions for GitHub Copilot
  - Create a file in a repository that automatically adds information to questions you ask Copilot Chat.
- https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot#creating-a-repository-custom-instructions-file
  - Creating a repository custom instructions file
  - In the root of your repository, create a file named `.github/copilot-instructions.md`.
    - Create the `.github` directory if it does not already exist.
    - Add natural language instructions to the file, in Markdown format.
    - Whitespace between instructions is ignored, so the instructions can be written as a single paragraph, each on a new line, or separated by blank lines for legibility.

    To see your instructions in action, go to https://github.com/copilot, attach the repository containing the instructions file, and start a conversation.
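A `.github/copilot-instructions.md` is just short natural-language statements, e.g. (statements modeled on the kind of examples GitHub's docs give; the specifics are invented for this sketch):

```markdown
We use Bazel for building our Java projects, not Maven, so suggest Bazel commands.
We always write JavaScript with double quotes and tabs for indentation.
Our team uses Jira, not GitHub Issues, for tracking items of work.
```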
- https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot#writing-effective-repository-custom-instructions
  - Writing effective repository custom instructions
  - The instructions you add to the `.github/copilot-instructions.md` file should be short, self-contained statements that add context or relevant information to supplement users' chat questions.
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp
  - Extending Copilot Chat with the Model Context Protocol (MCP)
  - Learn how to use the Model Context Protocol (MCP) to extend Copilot Chat.
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp#configuring-mcp-servers-in-visual-studio-code
  - To configure MCP servers in Visual Studio Code, you need to set up a configuration script that specifies the details of the MCP servers you want to use. You can configure MCP servers for either:
    - A specific repository. This will share MCP servers with anyone who opens the project in Visual Studio Code. To do this, create a `.vscode/mcp.json` file in the root of your repository.
    - Your personal instance of Visual Studio Code. You will be the only person who has access to configured MCP servers. To do this, add the configuration to your `settings.json` file in Visual Studio Code.
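For the repository case, `.vscode/mcp.json` holds the server definitions. As best I recall, VS Code's file uses a top-level `servers` map (not `mcpServers` as in some other tools); a sketch with a placeholder server name and package:

```json
{
  "servers": {
    "my-mcp-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    }
  }
}
```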
- https://plugins.jetbrains.com/plugin/17718-github-copilot/versions/stable/722432
  - GitHub Copilot 1.5.42-241
  - Added: Custom instructions for generating Chat and Git commit messages. Specify these in the `.github/copilot-instructions.md` or `.github/git-commit-instructions.md` files.
- microsoft/copilot-intellij-feedback#38 (comment)
  - Support for repository custom instructions
  - You can create `.github/copilot-instructions.md` for custom instructions for inline chat and panel chat. Additionally, you can create custom instructions for Git commit message generation in: `.github/git-commit-instructions.md`
  - This is available in the latest release, 1.5.41.
- https://copilot-instructions.md/
  - Adding custom instructions for GitHub Copilot
- https://copilot-instructions.md/prompts.html
  - Godlike Prompts
- https://prompts.chat/
  - prompts.chat - World's First & Most Famous Prompts Directory
- https://prompts.chat/vibe/
  - awesome vibe coding prompts to help you build simple apps
- https://github.com/f/awesome-chatgpt-prompts
  - This repo includes ChatGPT prompt curation to use ChatGPT and other LLM tools better.
- TODO: Find and add other examples (eg. aider (`.aider.conf.yml`), llm, JetBrains AI tools (eg. Junie), etc?)
  - This (private) ChatGPT convo gave some other suggestions that I need to look into deeper still: https://chatgpt.com/c/680b3bdc-80e8-8008-b05e-86d3e0b627a6
    - `CLAUDE.md`: Used by Claude (Anthropic) as a signal to scan the repo and use this file for context. It's suggested in their documentation and blog posts.
    - `.aider.conf.json` (or `aider.conf.json`): Used by Aider, a GPT-based coding assistant. Can include config such as files to include/exclude, model settings, etc.
    - `.aider.chat.md`: Aider can also use this (or similarly named `.aider.md`) to persist chat history or provide persistent context for the assistant. While not always required, it's sometimes created in the repo as a place to put system instructions or notes for context between runs.
    - `.prompt.md`, `PROMPT.md`, or `INSTRUCTIONS.md`: Some AI agents or prompts (especially for open-source wrappers around GPT like smol-ai, Continue, or custom LangChain agents) look for files like these in root for either default instructions or human-readable context.
    - `.continue/context.json`: Used by Continue (an open-source AI code agent IDE extension) to provide user preferences or context inclusion rules.
    - `prompt.config.json` / `agent.config.json`: Custom LLM wrappers, especially those built with LangChain, Autogen, or AgentScript, sometimes define .config or .prompt.* files in root for behavior tuning.
    - `.smol-dev.yaml`: The smol-ai developer tools may use YAML-based configs for defining how the assistant should scaffold or interact with the repo.
    - There's a growing informal convention for files that help tune LLM behavior:
      - `AI.md` or `AI_INSTRUCTIONS.md`: General-purpose file to guide any AI tooling in a repo
      - `CONTRIBUTING.md`: While not AI-specific, many LLMs (like Copilot or Claude) are trained to respect these as guidance for changes
      - `README.ai.md`: Separate AI-focused readme, e.g. summarizing intent, goals, style guides, etc.
      - `STYLEGUIDE.md`: Useful for AI tooling that supports code style customization or alignment