Every MCPpedia Score is computed from real, verifiable data. No manual overrides. Full transparency.
Security carries the heaviest weight because it is what developers worry about most.
| Signal | Source | Impact |
|---|---|---|
| Known CVEs | OSV.dev API (Google's open vulnerability database) | -10 per critical/high |
| Medium severity CVEs | OSV.dev API | -5 each |
| No authentication | Server metadata | -4 |
| No license | GitHub API | -3 |
| Archived repo | GitHub API | -8 |
| MCPpedia verified | Manual review | +5 |
CVE data is refreshed daily. We query every npm and PyPI package against OSV.dev.
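The daily CVE check can be sketched as a query against OSV.dev's public `POST /v1/query` endpoint, with penalties taken from the table above. The request shape matches the OSV API; the helper names and the exact severity strings handled are assumptions, not MCPpedia's actual code.

```typescript
type Ecosystem = "npm" | "PyPI";

// Build the request body for OSV.dev's POST /v1/query endpoint.
function buildOsvQuery(name: string, ecosystem: Ecosystem) {
  return { package: { name, ecosystem } };
}

// Map CVE severities to a score penalty, per the table above:
// -10 per critical/high, -5 per medium.
function cvePenalty(severities: string[]): number {
  let penalty = 0;
  for (const s of severities) {
    if (s === "CRITICAL" || s === "HIGH") penalty -= 10;
    else if (s === "MODERATE" || s === "MEDIUM") penalty -= 5;
  }
  return penalty;
}

// Fetch a package's known vulnerabilities from OSV.dev.
async function knownVulns(name: string, ecosystem: Ecosystem) {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOsvQuery(name, ecosystem)),
  });
  const data = await res.json();
  return data.vulns ?? [];
}
```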
Is this server actively developed? Will bugs get fixed?
| Signal | Source | Points |
|---|---|---|
| Commit in last 7 days | GitHub API (pushed_at) | +12 |
| Commit in last 30 days | GitHub API | +10 |
| Commit in last 90 days | GitHub API | +7 |
| 5,000+ GitHub stars | GitHub API | +5 |
| 10,000+ weekly npm downloads | npm registry API | +5 |
GitHub and npm data is refreshed daily.
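The recency tiers above can be sketched as a pure function over the repo's GitHub `pushed_at` timestamp. This assumes only the highest matching tier counts (a repo pushed yesterday earns +12, not +12 +10 +7); the popularity signals (+5 for stars, +5 for downloads) would stack on top.

```typescript
// Days elapsed since the repo's GitHub `pushed_at` timestamp.
function daysSincePush(pushedAt: string, now: Date = new Date()): number {
  return (now.getTime() - new Date(pushedAt).getTime()) / 86_400_000;
}

// Award points for the most recent commit tier, per the table above.
function recencyPoints(days: number): number {
  if (days <= 7) return 12;
  if (days <= 30) return 10;
  if (days <= 90) return 7;
  return 0;
}
```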
How much of your AI's context window does this server consume?
| Total Tool Token Cost | Grade | Points |
|---|---|---|
| ≤ 500 tokens | A | 20 |
| ≤ 1,500 tokens | B | 16 |
| ≤ 4,000 tokens | C | 12 |
| ≤ 8,000 tokens | D | 6 |
| > 8,000 tokens | F | 2 |
Token cost is measured by serializing each tool's name, description, and input schema to JSON and dividing the character count by ~3.5 (a rough average of characters per token). This approximates the actual context cost a client pays when it loads the server's tool definitions.
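The measurement can be sketched directly from that description: serialize, estimate tokens at ~3.5 characters each, then map the total to the grade tiers in the table. The `Tool` shape and `Math.ceil` rounding are assumptions.

```typescript
interface Tool {
  name: string;
  description: string;
  inputSchema: object;
}

// Estimate the context-window cost of a server's tool definitions.
function estimateTokens(tools: Tool[]): number {
  const chars = tools.reduce((n, t) => n + JSON.stringify(t).length, 0);
  return Math.ceil(chars / 3.5);
}

// Map total token cost to the grade/points tiers in the table above.
function tokenGrade(tokens: number): { grade: string; points: number } {
  if (tokens <= 500) return { grade: "A", points: 20 };
  if (tokens <= 1500) return { grade: "B", points: 16 };
  if (tokens <= 4000) return { grade: "C", points: 12 };
  if (tokens <= 8000) return { grade: "D", points: 6 };
  return { grade: "F", points: 2 };
}
```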
Can a developer actually set this up without guessing?
| Signal | How We Check | Points |
|---|---|---|
| All tools have descriptions | Check `description.length > 10` | +5 |
| Tools have input schemas | Check schema has `properties` | +3 |
| Install configs provided | Check `install_configs` is non-empty | +3 |
| README has setup instructions | Scan for "install", "setup", "getting started" | +2 |
| README has code examples | Scan for code blocks and "example" | +2 |
README content is fetched directly from GitHub and analyzed for structure.
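The checks in the table can be sketched as simple heuristics over the tool definitions and the fetched README string. The keyword lists come from the table; treating the per-tool checks as all-or-nothing (`every`) is an assumption.

```typescript
// README has setup instructions: scan for "install", "setup", "getting started".
function hasSetupInstructions(readme: string): boolean {
  const text = readme.toLowerCase();
  return ["install", "setup", "getting started"].some((k) => text.includes(k));
}

// README has code examples: scan for fenced code blocks and "example".
function hasCodeExamples(readme: string): boolean {
  return readme.includes("```") && readme.toLowerCase().includes("example");
}

// Per-tool checks: descriptions longer than 10 chars (+5) and
// input schemas with at least one property (+3).
function toolDocPoints(
  tools: { description?: string; inputSchema?: { properties?: object } }[]
): number {
  let points = 0;
  if (tools.every((t) => (t.description ?? "").length > 10)) points += 5;
  if (
    tools.every((t) => {
      const props = t.inputSchema?.properties;
      return props !== undefined && Object.keys(props).length > 0;
    })
  )
    points += 3;
  return points;
}
```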
Which clients and transports does it support?
| Signal | Points |
|---|---|
| Supports stdio transport | +4 |
| Supports HTTP/SSE transport | +4 |
| Multiple transports | +2 |
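A minimal sketch of the compatibility points, assuming HTTP and SSE share one +4 signal and that "multiple transports" means more than one distinct transport declared; the `Transport` union is an assumption.

```typescript
type Transport = "stdio" | "http" | "sse";

// Score transport support per the table above.
function compatibilityPoints(transports: Transport[]): number {
  let points = 0;
  if (transports.includes("stdio")) points += 4;
  if (transports.includes("http") || transports.includes("sse")) points += 4;
  if (new Set(transports).size > 1) points += 2; // multiple transports bonus
  return points;
}
```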
Scores are computed entirely by algorithm. No server author, sponsor, or MCPpedia team member can manually change a score. The only way to improve a score is to improve the server: fix CVEs, add documentation, maintain the code, and support more transports.
The scoring algorithm itself is open source. You can audit it at `lib/scoring.ts`.
MCPpedia scores are generated automatically from publicly available data and may not reflect the full quality, security posture, or suitability of any server for your use case. Scores are provided for informational purposes only and should not be the sole basis for security or purchasing decisions.
MCPpedia is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic, the Model Context Protocol project, or any server listed on this site. "MCP" and "Model Context Protocol" are trademarks of their respective owners.
Server metadata is sourced from the official MCP Registry, GitHub, npm, PyPI, and OSV.dev. If you believe any information is inaccurate, please open an issue.