Real-time LLM/VLM benchmarks, pricing, and recommendations. 336+ models, 5 sources.
{
  "mcpServers": {
    "io-github-daichi-kudo-llm-advisor": {
      "command": "<see-readme>",
      "args": []
    }
  }
}

No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 42 days ago.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
Context cost
4 tools. ~700 tokens (0.4% of 200K).
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
- `get_model_info`: Detailed specs for a specific model: pricing, benchmarks, percentile ranks, capabilities, and a ready-to-use API code example.
- `list_top_models`: Top-ranked models for a category. Includes release dates for freshness awareness.
- `compare_models`: Side-by-side comparison for 2-5 models. Best values are bolded automatically. Includes a Released row so you can spot outdated models at a glance.
- `recommend_model`: Personalized top-3 recommendations. Scores combine weighted benchmarks, pricing, capability bonuses, and a freshness bonus.
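A scoring scheme like the one `recommend_model` describes can be sketched roughly as follows. The weights, field names, caps, and decay window here are illustrative assumptions for the sketch, not the server's actual implementation:

```python
from datetime import date

# Hypothetical weights; the server's real weighting is not documented here.
WEIGHTS = {"swe_bench": 0.4, "arena_elo": 0.3, "price": 0.2, "freshness": 0.1}

def score_model(m: dict, today: date = date(2025, 9, 1)) -> float:
    """Combine benchmarks, price, capability bonuses, and freshness into one score."""
    bench = WEIGHTS["swe_bench"] * m["swe_bench"] / 100               # 0-1 scale
    elo = WEIGHTS["arena_elo"] * min(m["arena_elo"] / 1500, 1.0)
    # Cheaper input price scores higher (capped at $20 per 1M tokens).
    price = WEIGHTS["price"] * (1 - min(m["input_price"], 20) / 20)
    # Newer releases get a bonus that decays linearly over about a year.
    age_days = (today - m["released"]).days
    fresh = WEIGHTS["freshness"] * max(0.0, 1 - age_days / 365)
    # Flat capability bonus per supported feature.
    bonus = 0.05 * sum(m["capabilities"].get(c, False) for c in ("tools", "vision"))
    return bench + elo + price + fresh + bonus

model = {
    "swe_bench": 76.8, "arena_elo": 1467, "input_price": 3.0,
    "released": date(2025, 6, 25),
    "capabilities": {"tools": True, "vision": True},
}
print(round(score_model(model), 3))  # prints 0.952
```

The point of the shape is that each signal is normalized to a 0-1 range before weighting, so no single benchmark dominates the ranking.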
Give your AI assistant real-time LLM/VLM knowledge. Pricing, benchmarks, and recommendations — updated every hour, not every training cycle.
LLMs have knowledge cutoffs. Ask Claude "what's the best coding model right now?" and it cannot answer with current data. This MCP server fixes that by feeding live model intelligence directly into your AI assistant's context window.
- `list_top_models` with category coding
- `compare_models` with side-by-side table
- `recommend_model` with budget constraints
- `get_model_info` with percentile ranks

macOS/Linux: `claude mcp add llm-advisor -- npx -y llm-advisor-mcp`

Windows: `claude mcp add llm-advisor -- cmd /c npx -y llm-advisor-mcp`
Add to your MCP configuration file:
{
  "mcpServers": {
    "llm-advisor": {
      "command": "npx",
      "args": ["-y", "llm-advisor-mcp"]
    }
  }
}
That's it. No API keys, no .env files.
| Client | Supported | Install Method |
|--------|-----------|----------------|
| Claude Code | Yes | claude mcp add |
| Claude Desktop | Yes | JSON config |
| Cursor | Yes | JSON config |
| Windsurf | Yes | JSON config |
| Any MCP client | Yes | stdio transport |
`get_model_info`: Detailed specs for a specific model: pricing, benchmarks, percentile ranks, capabilities, and a ready-to-use API code example.
Parameters
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| model | string | Yes | — | Model ID or partial name (e.g. "claude-sonnet-4", "gpt-5") |
| include_api_example | boolean | No | true | Include a ready-to-use code snippet |
| api_format | enum | No | openai_sdk | openai_sdk, curl, or python_requests |
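Over the stdio transport, an MCP client invokes this tool with a standard JSON-RPC `tools/call` request; the envelope below follows the MCP specification, and the argument values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_model_info",
    "arguments": {
      "model": "claude-sonnet-4",
      "api_format": "openai_sdk"
    }
  }
}
```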
Example output
## anthropic/claude-sonnet-4
**Provider**: anthropic | **Modality**: text+image→text | **Released**: 2025-06-25
### Pricing
| Metric | Value |
|--------|-------|
| Input | $3.00 /1M tok |
| Output | $15.00 /1M tok |
| Cache Read | $0.30 /1M tok |
| Context | 200K |
| Max Output | 64K |
### Benchmarks
| Benchmark | Score |
|-----------|-------|
| SWE-bench Verified | 76.8% |
| Aider Polyglot | 72.1% |
| Arena Elo | 1467 |
| MMMU | 76.0% |
### Percentile Ranks
| Category | Percentile |
|----------|------------|
| Coding | P96 |
| General | P95 |
| Vision | P90 |
**Capabilities**: Tools, Reasoning, Vision
### API Example (openai_sdk)
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
[View full README on GitHub](https://github.com/Daichi-Kudo/llm-advisor-mcp#readme)