Ultra-fast semantic tool filtering for MCP (Model Context Protocol) servers using embedding similarity. Reduce your tool context from 1000+ tools down to the most relevant 10-20 tools in under 10ms.
**Is it safe?**

- No package registry to scan.
- No authentication: any process on your machine can connect.
- No known vulnerabilities.
- License: MIT.

**Is it maintained?**

- Last commit 151 days ago. 40 stars.

**Will it work with my client?**

- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
- No automated test is available for this server; check the GitHub README for setup instructions.
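Since this server publishes no install config, a stdio entry in a client config file typically looks like the following sketch; the `command` and `args` values here are placeholders, not this server's real launch command (check its README for that):

```json
{
  "mcpServers": {
    "mcp-tool-filter": {
      "command": "node",
      "args": ["path/to/server.js"]
    }
  }
}
```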
```bash
npm install @portkey-ai/mcp-tool-filter
```
```typescript
import { MCPToolFilter } from '@portkey-ai/mcp-tool-filter';

// 1. Initialize the filter (choose ONE embedding provider)

// Option A: Local embeddings (RECOMMENDED for low latency, < 5ms)
const filter = new MCPToolFilter({
  embedding: {
    provider: 'local',
  },
});

// Option B: API embeddings (for highest accuracy)
// const filter = new MCPToolFilter({
//   embedding: {
//     provider: 'openai',
//     apiKey: process.env.OPENAI_API_KEY,
//   },
// });

// 2. Load your MCP servers (one-time setup)
await filter.initialize(mcpServers);

// 3. Filter tools based on context
const result = await filter.filter(
  "Search my emails for the Q4 budget discussion"
);

// 4. Use the filtered tools in your LLM request
console.log(result.tools);              // Top 20 most relevant tools
console.log(result.metrics.totalTime);  // e.g., "2ms" for local, "500ms" for API
```
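Conceptually, semantic filtering of this kind amounts to ranking every tool by cosine similarity between the query embedding and the tool-description embedding, then keeping the top k. The following is an illustrative, self-contained sketch of that idea (not the library's actual code; the `Tool` shape, `cosine`, and `topK` helpers are my own):

```typescript
type Tool = { name: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank all tools against the query embedding and keep the k best matches.
function topK(queryEmbedding: number[], tools: Tool[], k: number): Tool[] {
  return tools
    .map(tool => ({ tool, score: cosine(queryEmbedding, tool.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(entry => entry.tool);
}

// Toy 2-D example: email-related tools point the same way as an email query.
const tools: Tool[] = [
  { name: 'search_email', embedding: [1, 0] },
  { name: 'create_calendar_event', embedding: [0, 1] },
  { name: 'read_inbox', embedding: [0.9, 0.1] },
];
console.log(topK([1, 0.05], tools, 2).map(t => t.name)); // ['search_email', 'read_inbox']
```

Real embeddings have hundreds of dimensions (e.g., 384 for the default local model), but the ranking logic is the same.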
Pros:

- Very fast (1-5ms per query), free, and fully private (nothing leaves your machine)
- Works offline with zero configuration

Cons:

- Slightly lower accuracy than API embeddings (roughly 85-90%)
```typescript
const filter = new MCPToolFilter({
  embedding: {
    provider: 'local',
    model: 'Xenova/all-MiniLM-L6-v2', // Optional: default model
    quantized: true,                  // Optional: use quantized model for speed (default: true)
  },
});
```
Available Models:

- `Xenova/all-MiniLM-L6-v2` (default) - 384 dimensions, very fast
- `Xenova/all-MiniLM-L12-v2` - 384 dimensions, more accurate
- `Xenova/bge-small-en-v1.5` - 384 dimensions, good balance
- `Xenova/bge-base-en-v1.5` - 768 dimensions, higher quality

Performance:
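One reason local filtering stays fast at the scales this project targets is that the embedding index is tiny. As a back-of-the-envelope check (assuming embeddings are stored as 4-byte float32 values, which is my assumption, not a documented storage format):

```typescript
// Approximate in-memory index size: tools × dimensions × bytes per value.
const numTools = 1000;   // the "1000+ tools" scale from the description
const dims = 384;        // output dimension of the default all-MiniLM-L6-v2 model
const bytesPerFloat = 4; // float32 (assumption)

const bytes = numTools * dims * bytesPerFloat;
console.log((bytes / (1024 * 1024)).toFixed(2) + ' MB'); // ~1.46 MB
```

At roughly 1.5 MB for 1000 tools, the whole index fits comfortably in memory, so each query is a single pass of similarity computations.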
For highest accuracy, use OpenAI or other API providers:
```typescript
const filter = new MCPToolFilter({
  embedding: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'text-embedding-3-small', // Optional
    dimensions: 384,                 // Optional: match local model for fair comparison
  },
});
```
Pros:

- Highest accuracy

Cons:

- Higher latency (400-800ms per query) and per-token cost (~$0.02/1M tokens)
- Requires an API key and internet access; tool descriptions are sent to the API
Performance:
| Aspect   | Local            | API               | Winner                 |
|----------|------------------|-------------------|------------------------|
| Speed    | 1-5ms            | 400-800ms         | 🏆 Local (200x faster) |
| Accuracy | Good (85-90%)    | Best (100%)       | 🏆 API                 |
| Cost     | Free             | ~$0.02/1M tokens  | 🏆 Local               |
| Privacy  | Fully local      | Data sent to API  | 🏆 Local               |
| Offline  | ✅ Works offline | ❌ Needs internet | 🏆 Local               |
| Setup    | Zero config      | Needs API key     | 🏆 Local               |
**📊 See TRADEOFFS.md for detailed analysis.**