Deterministic CI scanner and surface-risk scoring for MCP (Model Context Protocol) servers.
{
  "mcpServers": {
    "mcp-scorecard": {
      "command": "<see-readme>",
      "args": []
    }
  }
}

No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication: any process on your machine can connect.
License: Apache-2.0.
Is it maintained?
Last commit 0 days ago. 33 stars.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
Deterministic, CI-first quality scorecard for MCP servers.
MCP Scorecard is an open-source infrastructure tool for reviewing MCP servers before they enter
real workflows. It launches a server locally over stdio, discovers its tools, applies a
deterministic ruleset, and produces reviewable scores and findings across:
conformance, security, ergonomics, and metadata.
The output is built for CI: stable terminal summaries, a machine-readable JSON scorecard report, and SARIF for code-scanning systems.
This project is intentionally not an AI wrapper. It does not depend on LLM scoring, hidden judgment, or hosted analysis. The goal is a repeatable, auditable baseline that engineering teams can gate on in pull requests and release pipelines.
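The discovery step described above (launch the server over stdio, then enumerate its tools) runs over newline-delimited JSON-RPC 2.0, per the MCP stdio transport. A minimal sketch of building those messages, assuming the standard framing (the helper name and the protocol revision string are illustrative, not this project's actual internals):

```python
import json

def jsonrpc_request(request_id, method, params=None):
    # Build one newline-delimited JSON-RPC 2.0 message, as the MCP
    # stdio transport frames them. Helper name is hypothetical.
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Discovery phase: initialize the session, then ask for the tool list.
initialize = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",  # assumed protocol revision
    "capabilities": {},
    "clientInfo": {"name": "mcp-scorecard", "version": "0.0.0"},
})
list_tools = jsonrpc_request(2, "tools/list")
```

A scanner writes these lines to the server's stdin and reads responses from its stdout; the `tools/list` response is what a deterministic ruleset then inspects.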
Quick value:
MCP Scorecard is a deterministic quality scorecard for MCP servers, designed for teams that need repeatable, explainable answers before an MCP server enters real workflows.
Today the tool focuses on local stdio MCP servers and a deterministic scoring model that is easy
to explain, test, and version.
MCP servers are infrastructure. They define callable tool surfaces that agents, runtimes, and automation can invoke. That means they should be reviewed with the same seriousness as other integration boundaries.
In practice, teams often evaluate MCP servers ad hoc. MCP Scorecard turns that first-line review into a deterministic contract.
The current score model uses four explicit buckets: conformance, security, ergonomics, and metadata.
Conformance here means deterministic interface-level conformance and schema reviewability checks, not full protocol certification.
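A bucketed, deterministic model of this shape can be sketched as follows. The penalty weights and function name below are illustrative only, not the project's real ruleset; the point is that the same findings always yield the same scores, with no model call or hidden judgment involved:

```python
# Hypothetical severity penalties -- the real ruleset's weights are
# not shown in this README, so these numbers are illustrative only.
PENALTY = {"low": 5, "medium": 15, "high": 40}

BUCKETS = ("conformance", "security", "ergonomics", "metadata")

def score_buckets(findings):
    """findings: list of (bucket, severity) pairs.
    Returns a 0-100 score per bucket; identical input always yields
    identical output, which is what 'deterministic' buys in CI."""
    scores = {b: 100 for b in BUCKETS}
    for bucket, severity in findings:
        scores[bucket] = max(0, scores[bucket] - PENALTY[severity])
    return scores

scores = score_buckets([("security", "high"), ("metadata", "low")])
# security drops to 60, metadata to 95; untouched buckets stay at 100
```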
Conformance: checks whether the server surface is structurally well-formed and reviewable as an MCP interface.
Examples:
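One interface-level rule of this kind might look like the sketch below (function name and exact messages are hypothetical): a tool entry should carry a name, a description, and an object-typed JSON Schema for its input.

```python
def check_tool_conformance(tool):
    # Deterministic, interface-level checks on one tool entry.
    findings = []
    if not tool.get("name"):
        findings.append("tool is missing a name")
    if not tool.get("description"):
        findings.append("tool is missing a description")
    schema = tool.get("inputSchema")
    if not isinstance(schema, dict) or schema.get("type") != "object":
        findings.append("inputSchema is missing or not an object schema")
    return findings

# A well-formed tool produces no findings; a bare one produces three.
ok = check_tool_conformance({
    "name": "search",
    "description": "Search indexed documents.",
    "inputSchema": {"type": "object",
                    "properties": {"q": {"type": "string"}}},
})
```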
Security: checks for exposed capabilities that materially increase blast radius or deserve explicit review.
Examples:
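A simple surface-risk rule in this bucket could flag tool names that suggest high-blast-radius capabilities, such as command execution or destructive writes. The pattern list here is illustrative, not the project's actual rule:

```python
# Illustrative high-blast-radius markers only.
RISKY_PATTERNS = ("exec", "shell", "eval", "delete", "write_file")

def flag_risky_tools(tools):
    # Return the names of tools whose names match a risky pattern,
    # so a reviewer is forced to look at them explicitly.
    flagged = []
    for tool in tools:
        name = tool.get("name", "").lower()
        if any(p in name for p in RISKY_PATTERNS):
            flagged.append(tool["name"])
    return flagged
```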
Ergonomics: checks whether the server surface is understandable enough for humans and automation to review without guessing.
Examples:
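An ergonomics-style rule might require every declared input parameter to carry its own description, so reviewers do not have to guess what an argument means. A sketch (hypothetical helper, assuming the MCP `inputSchema`/`properties` shape):

```python
def check_param_docs(tool):
    # Return the names of input properties that lack a description.
    props = tool.get("inputSchema", {}).get("properties", {})
    return [name for name, spec in props.items()
            if not spec.get("description")]

missing = check_param_docs({
    "name": "search",
    "inputSchema": {"properties": {"q": {"type": "string"}}},
})  # the "q" parameter has no description, so it is reported
```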
Metadata: checks whether basic descriptive metadata is present and whether destructive behavior is made easy to spot.
Examples:
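"Destructive behavior made easy to spot" maps naturally onto MCP tool annotations, which include a `destructiveHint` flag. A rule of this kind might report tools whose names suggest destruction but that carry no explicit hint (the keyword list and function name are illustrative):

```python
def destructive_unlabeled(tool):
    # True when a tool looks destructive by name but does not declare
    # the MCP `destructiveHint` annotation either way.
    destructive_words = ("delete", "remove", "drop", "overwrite")
    name = tool.get("name", "").lower()
    looks_destructive = any(w in name for w in destructive_words)
    annotations = tool.get("annotations", {})
    return looks_destructive and "destructiveHint" not in annotations
```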