MCP Resources and Prompts: Complete Guide
Tools get all the attention, but Resources and Prompts are what make MCP servers truly powerful. Learn how to give AI assistants rich context and reusable instructions.
The Three MCP Primitives
MCP servers expose three types of capabilities:
- Tools — Functions the AI can call (most common)
- Resources — Data the AI can read for context
- Prompts — Reusable instruction templates
Most tutorials focus on tools, but resources and prompts unlock different use cases. Let's explore when and how to use each.
Resources: Giving AI Context
Resources are read-only data that AI assistants can access. Think of them as files the AI can read to understand your project, configuration, or current state.
When to Use Resources
- Project documentation or README files
- Configuration files the AI should reference
- Database schemas or API specs
- Current system state or logs
- Any read-only data that provides context
Resource Example: Project Context
Here's an MCP server that exposes project files as resources:
from fastmcp import FastMCP
import os

mcp = FastMCP("Project Context")

@mcp.resource("project://readme")
def get_readme() -> str:
    """The project README file"""
    with open("README.md", "r") as f:
        return f.read()

@mcp.resource("project://config")
def get_config() -> str:
    """Current project configuration"""
    with open("config.yaml", "r") as f:
        return f.read()

@mcp.resource("project://schema")
def get_schema() -> str:
    """Database schema definition"""
    with open("schema.sql", "r") as f:
        return f.read()

if __name__ == "__main__":
    mcp.run()

Dynamic Resources with Templates
Resources can be dynamic using URI templates:
@mcp.resource("file://{path}")
def read_file(path: str) -> str:
    """Read any file from the project"""
    # Security: only allow paths inside the project directory.
    # Appending os.sep prevents a sibling like /project-evil from
    # passing a bare prefix check against /project.
    safe_path = os.path.abspath(path)
    if not safe_path.startswith(os.getcwd() + os.sep):
        raise ValueError("Path outside project directory")
    with open(safe_path, "r") as f:
        return f.read()

@mcp.resource("logs://{date}")
def get_logs(date: str) -> str:
    """Get logs for a specific date (YYYY-MM-DD)"""
    log_file = f"logs/{date}.log"
    if os.path.exists(log_file):
        with open(log_file, "r") as f:
            return f.read()
    return f"No logs found for {date}"

Resources vs Tools
Use Resources when: You want the AI to read data for context (project docs, schemas, configs). Resources are passive — they provide information.
Use Tools when: You want the AI to take actions or fetch dynamic data with parameters. Tools are active — they do things.
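To make the distinction concrete, here is a minimal sketch of the two shapes side by side. The decorators are shown in comments, and the file name and function names are illustrative:

```python
import pathlib

LOG = pathlib.Path("app.log")  # illustrative log file

# Passive: a resource simply returns data for context.
# In a real server: @mcp.resource("logs://app")
def app_log() -> str:
    return LOG.read_text() if LOG.exists() else ""

# Active: a tool takes parameters and does work on demand.
# In a real server: @mcp.tool()
def search_log(term: str) -> str:
    matches = [line for line in app_log().splitlines() if term in line]
    return "\n".join(matches) or f"No lines matching {term!r}"
```

The resource has no parameters and just hands over state; the tool accepts input and computes a result.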
Prompts: Reusable Instructions
Prompts are pre-written instruction templates that users can invoke. They're useful for complex workflows that need consistent structure.
When to Use Prompts
- Complex multi-step workflows
- Code review or analysis templates
- Report generation with a specific format
- Debugging or troubleshooting guides
- Any repeated task that benefits from structure
Prompt Example: Code Review
from fastmcp import FastMCP

mcp = FastMCP("Code Review")

@mcp.prompt()
def code_review(code: str, language: str = "python") -> str:
    """Structured code review template"""
    return f"""Please review this {language} code:

```{language}
{code}
```

Analyze for:

1. **Bugs & Logic Errors** - Any obvious bugs or edge cases?
2. **Security Issues** - Input validation, injection risks, secrets?
3. **Performance** - Inefficient operations, unnecessary loops?
4. **Readability** - Clear naming, good structure, comments?
5. **Best Practices** - Following {language} conventions?

Provide specific suggestions with code examples."""

@mcp.prompt()
def debug_error(error_message: str, stack_trace: str = "") -> str:
    """Debug an error with structured analysis"""
    return f"""Help me debug this error:

**Error Message:**
{error_message}

**Stack Trace:**
{stack_trace if stack_trace else "Not provided"}

Please:
1. Explain what this error means
2. Identify the likely cause
3. Suggest fixes with code examples
4. Recommend how to prevent this in the future"""

Prompts with Dynamic Context
Prompts can combine with resources for powerful workflows:
@mcp.prompt()
def analyze_project() -> list:
    """Analyze the entire project structure"""
    # Return multiple messages for complex prompts
    return [
        {
            "role": "user",
            "content": """Analyze this project and provide:

1. **Architecture Overview** - How is it structured?
2. **Key Components** - What are the main modules?
3. **Dependencies** - What external libraries are used?
4. **Suggested Improvements** - What could be better?

Use the project resources (readme, config, schema) for context.""",
        }
    ]

@mcp.prompt()
def generate_tests(file_path: str) -> str:
    """Generate unit tests for a file"""
    return f"""Generate comprehensive unit tests for the file at {file_path}.

Requirements:
- Use pytest as the testing framework
- Cover happy path and edge cases
- Mock external dependencies
- Include docstrings explaining each test
- Aim for 80%+ coverage

Read the file using resources first, then generate tests."""

Combining All Three Primitives
The real power comes from combining tools, resources, and prompts in one server:
from fastmcp import FastMCP
import subprocess

mcp = FastMCP("Full Stack Dev Assistant")

# RESOURCES - Project context
@mcp.resource("project://readme")
def get_readme() -> str:
    """Project documentation"""
    with open("README.md", "r") as f:
        return f.read()

@mcp.resource("project://package")
def get_package() -> str:
    """Package dependencies"""
    with open("package.json", "r") as f:
        return f.read()

# TOOLS - Actions the AI can take
@mcp.tool()
def run_tests() -> str:
    """Run the test suite"""
    result = subprocess.run(
        ["npm", "test"],
        capture_output=True,
        text=True
    )
    return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"

@mcp.tool()
def lint_code() -> str:
    """Run the linter"""
    result = subprocess.run(
        ["npm", "run", "lint"],
        capture_output=True,
        text=True
    )
    return result.stdout or result.stderr

@mcp.tool()
def create_file(path: str, content: str) -> str:
    """Create or update a file"""
    with open(path, "w") as f:
        f.write(content)
    return f"Created {path}"

# PROMPTS - Structured workflows
@mcp.prompt()
def full_review() -> str:
    """Complete project review workflow"""
    return """Perform a full project review:

1. Read the README and package.json resources
2. Run the test suite and linter
3. Analyze results and provide:
   - Current project status
   - Test coverage assessment
   - Code quality issues
   - Prioritized recommendations

Be specific and actionable."""

@mcp.prompt()
def add_feature(feature_name: str, description: str) -> str:
    """Guided feature implementation"""
    return f"""Help me implement: {feature_name}

Description: {description}

Steps:
1. Review project structure (use resources)
2. Propose file changes needed
3. Write the implementation
4. Add tests for the feature
5. Update documentation

Let's go step by step."""

if __name__ == "__main__":
    mcp.run()

Using in Claude Desktop
Once your MCP server is configured in Claude Desktop, you can:
Access Resources
Resources appear in the attachment menu. Click the + button and select from available resources to add them to your conversation context.
Use Prompts
Prompts appear in the slash command menu. Type / to see available prompts, then select one to insert the template into your conversation.
Call Tools
Tools are called automatically when relevant. Ask Claude to "run the tests" or "lint the code" and it will use the appropriate tools.
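For reference, servers are registered in Claude Desktop's `claude_desktop_config.json`. A minimal entry might look like the following (the server name and file path are placeholders):

```json
{
  "mcpServers": {
    "dev-assistant": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```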
Best Practices
1. Resources Should Be Read-Only
Resources provide context, not actions. If you need to modify data, use a tool instead. This keeps the mental model clear.
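One way to keep that split visible in code is to pair a read-only resource with a mutating tool. A sketch, with the decorators in comments and the file and function names illustrative:

```python
import json
import pathlib

SETTINGS = pathlib.Path("settings.json")  # illustrative config file

# Read path: exposed as a resource, e.g. @mcp.resource("config://settings")
def get_settings() -> str:
    """Read-only view of the settings file."""
    return SETTINGS.read_text() if SETTINGS.exists() else "{}"

# Write path: exposed as a tool, e.g. @mcp.tool(), because it mutates state
def update_setting(key: str, value: str) -> str:
    data = json.loads(get_settings())
    data[key] = value
    SETTINGS.write_text(json.dumps(data, indent=2))
    return f"Set {key} = {value}"
```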
2. Prompts Should Be Self-Contained
A good prompt includes all necessary instructions. Reference resources and tools by name so the AI knows what's available.
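For example, a prompt template can spell out the exact resource URIs and tool names it expects, so the AI needs no extra explanation. The `release_check` name and workflow below are illustrative; the URIs and tool names are the ones defined in the server above:

```python
# Illustrative prompt; in a real server it would be registered with @mcp.prompt()
def release_check(version: str) -> str:
    """Self-contained prompt that names the resources and tools it relies on."""
    return f"""Prepare release {version}:

1. Read the project://readme and project://package resources for context
2. Call the run_tests tool and confirm the suite passes
3. Call the lint_code tool and fix any warnings
4. Summarize whether {version} is ready to ship"""
```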
3. Validate Resource Paths
If using dynamic resources with templates, always validate paths to prevent directory traversal or access to sensitive files.
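One robust way to do this is to resolve the requested path and require it to stay under the project root. A sketch using the standard library's `pathlib` (the `safe_read` name is illustrative):

```python
import pathlib

PROJECT_ROOT = pathlib.Path.cwd()

def safe_read(path: str) -> str:
    """Read a file only if it resolves to somewhere inside the project root."""
    target = (PROJECT_ROOT / path).resolve()
    # relative_to() raises ValueError for anything outside the root,
    # catching ".." traversal and lookalike siblings such as /project-evil
    target.relative_to(PROJECT_ROOT.resolve())
    return target.read_text()
```

Because `resolve()` follows symlinks before the check, a symlink pointing outside the project is also rejected.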
4. Document Everything
Use clear docstrings for all resources, tools, and prompts. The AI uses these descriptions to understand when and how to use each capability.
Summary
- Tools: Actions the AI can perform (run commands, modify data, call APIs)
- Resources: Read-only context (project files, configs, schemas)
- Prompts: Reusable instruction templates (workflows, reviews, analysis)
Use all three together for powerful, context-aware AI assistants that understand your project and can help with complex workflows.