```json
{
  "mcpServers": {
    "fs-mcp-server": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```

No install config available. Check the server's README for setup instructions.
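If you run the server directly from a clone (see the setup steps below), a filled-in entry might look like the following sketch; the command and path are illustrative assumptions, not a published install config:

```json
{
  "mcpServers": {
    "fs-mcp-server": {
      "command": "python",
      "args": ["/absolute/path/to/fs-mcp/main.py"]
    }
  }
}
```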
An MCP service for finding files and documents.
Is it safe?
- No package registry to scan.
- No known vulnerabilities.
- No authentication: any process on your machine can connect.
- License: MIT.

Is it maintained?
- Last commit 308 days ago. 6 stars.

Will it work with my client?
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
- No automated test is available for this server; check the GitHub README for setup instructions.
A powerful MCP (Model Context Protocol) server that provides intelligent file reading and semantic search capabilities.
```bash
git clone https://github.com/yourusername/fs-mcp.git
cd fs-mcp
```

Using uv (Recommended):

```bash
uv sync
```

Using pip:

```bash
pip install -r requirements.txt  # if you have a requirements.txt
# OR install the dependencies directly; quote the specifiers so the
# shell doesn't interpret >= as a redirect
pip install "fastmcp>=2.0.0" "langchain>=0.3.0" "python-dotenv>=1.1.0"
```
Create a .env file in the project root:

```env
# Security Settings
SAFE_DIRECTORY=.          # Directory restriction (required)
MAX_FILE_SIZE_MB=100      # File size limit in MB

# Encoding Settings
DEFAULT_ENCODING=utf-8

# AI Embeddings Configuration (for vector search)
OPENAI_EMBEDDINGS_API_KEY=your-api-key
OPENAI_EMBEDDINGS_BASE_URL=http://your-embedding-service/v1
EMBEDDING_MODEL_NAME=BAAI/bge-m3  # Or your preferred model
EMBEDDING_CHUNK_SIZE=1000
```
```bash
python main.py
```

The server will start on http://localhost:3002 and automatically build the vector index.
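Once it's running, you can sanity-check the server with a FastMCP client. This is a minimal sketch assuming the server exposes FastMCP's default streamable-HTTP endpoint at /mcp; the exact path and the tools returned depend on how this server configures its transport:

```python
import asyncio

from fastmcp import Client


async def main() -> None:
    # The /mcp path is an assumption based on FastMCP's default HTTP
    # endpoint; adjust the URL if the server mounts its transport elsewhere.
    async with Client("http://localhost:3002/mcp") as client:
        # List the tools the server exposes and print their names.
        tools = await client.list_tools()
        for tool in tools:
            print(f"{tool.name}: {tool.description}")


if __name__ == "__main__":
    asyncio.run(main())
```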
Core dependencies are managed in pyproject.toml:
- fastmcp>=2.0.0 - MCP server framework
- langchain>=0.3.0 - AI and vector search
- python-dotenv>=1.1.0 - Environment management

| Variable | Default | Description |
|----------|---------|-------------|
| SAFE_DIRECTORY | . | Root directory for file access |
| MAX_FILE_SIZE_MB | 100 | Maximum file size in MB |
| DEFAULT_ENCODING | utf-8 | Default file encoding |
| OPENAI_EMBEDDINGS_API_KEY | - | API key for the embedding service |
| OPENAI_EMBEDDINGS_BASE_URL | - | Embedding service URL |
| EMBEDDING_MODEL_NAME | BAAI/bge-m3 | AI model for embeddings |
| EMBEDDING_CHUNK_SIZE | 1000 | Text chunk size for processing |
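As an illustration of how these variables fit together, here is a minimal sketch of loading them with python-dotenv and enforcing the SAFE_DIRECTORY and MAX_FILE_SIZE_MB restrictions; the helper function is hypothetical, not the server's actual code:

```python
import os
from pathlib import Path

from dotenv import load_dotenv

# Read variables from the .env file in the project root into os.environ.
load_dotenv()

# Resolve the sandbox root once; all file access should stay inside it.
SAFE_DIRECTORY = Path(os.getenv("SAFE_DIRECTORY", ".")).resolve()
MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_SIZE_MB", "100"))
DEFAULT_ENCODING = os.getenv("DEFAULT_ENCODING", "utf-8")


def is_allowed(path: str) -> bool:
    """Hypothetical check: is the path inside SAFE_DIRECTORY and small enough?"""
    resolved = Path(path).resolve()
    if not resolved.is_relative_to(SAFE_DIRECTORY) or not resolved.is_file():
        return False
    return resolved.stat().st_size <= MAX_FILE_SIZE_MB * 1024 * 1024
```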
For production deployments, consider: