AI-powered evaluation of local model suitability for agents.
**Is it safe?**
- No package registry to scan.
- No authentication; any process on your machine can connect.
- License not specified.

**Is it maintained?**
- Commit history unknown.

**Will it work with my client?**
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
AI-powered evaluation of whether your local model is actually good enough for the task at hand.
When you have both a local model (Ollama, LM Studio, etc.) and cloud APIs available, agents face a decision they cannot make intelligently on their own:

*Should I run this locally or send it to the cloud?*

Getting this wrong in either direction is expensive: run the task locally on a model that cannot handle it and quality suffers; send it to the cloud when local would have been good enough and you pay in tokens, latency, and unnecessary data exposure.
`evaluate_local_model_suitability` is a single AI-powered tool that reasons across four dimensions at once (cost, privacy, latency, and quality) and returns a clear verdict your agent can act on.
Verdict: LOCAL | CLOUD | EITHER | NEITHER
This is not a benchmark lookup. Claude reasons about your specific task, your specific model, and your specific constraints.
Run with npx:

```shell
npx local-model-suitability-mcp
```

Or install globally:

```shell
npm install -g local-model-suitability-mcp
```
Add to your MCP client config:

```json
{
  "mcpServers": {
    "local-model-suitability": {
      "command": "npx",
      "args": ["-y", "local-model-suitability-mcp"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here"
      }
    }
  }
}
```
Or, with a Pro key (`LMS_API_KEY`):

```json
{
  "mcpServers": {
    "local-model-suitability": {
      "command": "npx",
      "args": ["-y", "local-model-suitability-mcp"],
      "env": {
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "LMS_API_KEY": "your-pro-key-from-kordagencies"
      }
    }
  }
}
```
`evaluate_local_model_suitability`

| Parameter | Type | Required | Description |
|---|---|---|---|
| task_description | string | ✅ | Describe the task specifically. Include output format, accuracy requirements, and stakes. |
| local_model | string | ✅ | Model name in Ollama format: llama3.1:8b, mistral:7b, qwen2.5:14b, etc. |
| quality_threshold | enum | ✅ | draft / production / critical |
| use_case_type | enum | ✅ | classification / summarisation / code_generation / reasoning / data_extraction / creative_writing / question_answering / translation / sentiment_analysis / other |
| data_sensitivity | enum | ✅ | public / internal / confidential |
| latency_requirement | enum | ✅ | flexible / moderate / realtime |
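The enum constraints above can be checked client-side before the tool call. A minimal TypeScript sketch, assuming nothing beyond the parameter table (the `buildArgs` helper and its error messages are hypothetical, not part of this server):

```typescript
// Allowed values, copied from the parameter table above.
const QUALITY = ["draft", "production", "critical"] as const;
const SENSITIVITY = ["public", "internal", "confidential"] as const;
const LATENCY = ["flexible", "moderate", "realtime"] as const;

interface SuitabilityArgs {
  task_description: string;
  local_model: string; // Ollama format, e.g. "llama3.1:8b"
  quality_threshold: (typeof QUALITY)[number];
  use_case_type: string; // one of the ten documented use_case_type values
  data_sensitivity: (typeof SENSITIVITY)[number];
  latency_requirement: (typeof LATENCY)[number];
}

// Hypothetical helper: rejects obviously malformed arguments before the MCP call.
function buildArgs(a: SuitabilityArgs): SuitabilityArgs {
  if (!a.task_description.trim()) throw new Error("task_description is required");
  if (!/^[\w.-]+:[\w.-]+$/.test(a.local_model))
    throw new Error("local_model should use Ollama name:tag format");
  if (!QUALITY.includes(a.quality_threshold)) throw new Error("bad quality_threshold");
  return a;
}
```

This only guards the request shape; the quality judgment itself still happens server-side.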
Example request:

```json
{
  "task_description": "Classify customer support emails into 5 categories: billing, technical, returns, complaints, general. Must be accurate enough for production routing — wrong classification means wrong team gets the ticket.",
  "local_model": "llama3.1:8b",
  "quality_threshold": "production",
  "use_case_type": "classification",
  "data_sensitivity": "internal",
  "latency_requirement": "moderate"
}
```
Example response (truncated):

```json
{
  "verdict": "EITHER",
  "confidence": "HIGH",
  "summary": "Llama 3.1 8B can handle 5-category email classification at production quality if emails are clear — use local to protect customer data and save cost, with cloud fallback for ambiguous cases.",
  "model_evaluated": "llama3.1:8b",
  "model_profile": {
    "parameter_count": "8B",
    "tier": "small",
    "known_strengths": ["simple Q&A", "basic summarisation", "short classification", "data extraction"],
    "known_weaknesses": ["complex multi-step reasoning", "long-context coherence", "nuanced instruction following"]
  },
  "task_complexity": "SIMPLE",
  "reasoning": {
    "quality_
```

[View full README on GitHub](https://github.com/OjasKord/local-model-suitability-mcp#readme)
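An agent can branch directly on the `verdict` field. A minimal sketch in TypeScript (the `routeTask` helper and its `preferLocal` flag are hypothetical, not part of this server):

```typescript
// Verdict values documented by the tool: LOCAL | CLOUD | EITHER | NEITHER.
type Verdict = "LOCAL" | "CLOUD" | "EITHER" | "NEITHER";

// Hypothetical routing helper: maps the verdict to an execution target.
// preferLocal decides how EITHER is resolved (e.g. to save cost or keep data local).
function routeTask(verdict: Verdict, preferLocal = true): "local" | "cloud" | "abstain" {
  switch (verdict) {
    case "LOCAL":
      return "local";
    case "CLOUD":
      return "cloud";
    case "EITHER":
      return preferLocal ? "local" : "cloud";
    case "NEITHER":
      return "abstain"; // neither option meets the constraints; rethink the task
  }
}
```

A `NEITHER` branch is worth handling explicitly: it signals the task should be reshaped rather than forced onto either backend.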
No known vulnerabilities.