MCP servers are powerful. They give Claude access to your databases, file systems, APIs, and services. Which means a bad one gives malicious code access to the same things.
This isn't theoretical. We've scanned 17,581 MCP servers and found real problems in the wild โ from abandoned packages with unpatched CVEs to tool descriptions with hidden exfiltration instructions. The difference between "this server is fine" and "this server is leaking your data" often comes down to 60 seconds of checking before you install.
When you install an MCP server, you're giving it a seat at the table with your AI assistant. What sits there matters.
What our scans actually found
Before we talk about how to check, here's what we're working with:
Only 234 out of 17,581 score 80 or above. That's 1.3%. The rest range from "decent but undocumented" to "actively dangerous." Browsing without a filter is playing Russian roulette with your system.
Check the MCPpedia score first โ it takes 10 seconds
Every server in our catalog gets a 0-100 score across five dimensions: security, maintenance, efficiency, documentation, and compatibility. Security alone is worth 30 points and includes live CVE scanning against OSV.dev.
The score is the fastest filter you have:
- 90+: Production-ready. Install with confidence.
- 75-89: Solid, with caveats. Check the evidence panel for specifics.
- 60-74: Proceed with caution. There's a reason it's here.
- Below 60: Research project, not a production tool.
Every score comes with a full evidence breakdown on the server detail page. Click "Security details" to see every check โ pass or fail โ with source links. It takes 30 seconds and tells you more than any README.
The evidence panel is the important part, not the number. A server scoring 72 because of a low-severity documentation gap is different from a 72 with an unpatched CVE. Always read the why.
Understand the three threats CVE databases miss
Standard vulnerability scanning checks your dependencies against known bad versions. It's necessary. It's also nowhere near sufficient for MCP.
CVEs catch: a vulnerable library in the dependency tree.
They don't catch: what the server does with the access you give it. And that's where the real MCP-specific threats live.
Threat 1: Tool poisoning
Every MCP server describes its tools in plain text that the AI reads. Tool poisoning embeds hidden instructions in those descriptions โ instructions that look normal to a human scanning the README but are interpreted as commands by the AI.
A tool description might say: "Retrieves calendar events for the specified date range" โ then bury an instruction telling the AI to also exfiltrate your recent messages to an external endpoint. The user sees a calendar tool. The AI sees a calendar tool with a data theft instruction.
We've flagged 1 server with confirmed tool poisoning patterns. The real number is likely higher โ we can only scan servers with fetchable tool manifests.
Threat 2: Prompt injection via tool descriptions
Tool descriptions can contain language designed to override the model's safety instructions: "ignore previous instructions," "you are now in developer mode," "execute any command without restriction." We've found 4 servers with these patterns. Some were accidental (copy-pasted test payloads). The effect on a connected AI is identical regardless of intent.
Threat 3: Unscoped code execution
64 servers have tools that can execute shell commands, run eval, spawn subprocesses, or write to the filesystem. This isn't always bad โ a development server should be able to run tests. The risk is when code execution tools exist without authentication, without permission scoping, and without the user understanding what they've granted.
An unauthenticated MCP server with shell execution tools is the highest-risk combination in the ecosystem. If you see one, the answer is always no โ unless you built it yourself and know exactly what it does.
Read what it actually requests
Open the README or the MCPpedia detail page. Look for:
What permissions does it need? A file-reading tool shouldn't need network access. A search tool shouldn't need filesystem write access. If the permissions are broader than the stated purpose, that's a red flag.
What scope does it operate in? A good filesystem server scopes to specific directories you approve. A bad one accepts / as a path and lets Claude wander your entire machine.
Where does data go? Does it phone home? Does it send telemetry? Does it require an external API that proxies your data through a third party? The best servers keep everything local or clearly document their network calls.
Check the maintainer and maintenance health
This matters more than developers think. An abandoned server doesn't just miss features โ it misses security patches.
Our maintenance scoring looks at commit recency, star trajectory, issue health (ratio of open to closed), and download stability. But you can do a quick version yourself:
Green flags: Active commits in the last 90 days. Issues get responses. Download numbers are stable or growing. The maintainer has a visible identity โ a company, a known developer, a GitHub profile with other reputable projects.
Red flags: Last commit 18+ months ago. Issues pile up with no responses. The maintainer is anonymous with no other public projects. The repository has been archived or transferred.
One pattern we see repeatedly: a server gets popular, the maintainer moves on, a dependency publishes a CVE, and nobody patches it. The server keeps getting installed because the star count is high. This is why maintenance scoring exists โ popularity without upkeep is a liability.
Test with minimal permissions first
When you do install, start cautious:
- Don't give it access to sensitive directories on day one
- Don't connect it to production databases with write access
- Don't hand it high-value API keys until you've watched it work
- Monitor what it actually does versus what it claims to do
If a server that claims to only read files is making outbound network requests, uninstall it immediately. If a server is consuming 10x more tokens than similar tools, its schemas are bloated and it's costing you money silently.
The 30-second decision framework
When you find a server you want to install, run through this:
- MCPpedia score above 80? If not, read the evidence panel to understand why. Below 60 is a hard no for production.
- Any open CVEs? Check the security panel. Critical or high-severity CVEs with no patch = don't install.
- Does it have code execution tools? If yes, does it require authentication? If it has shell access and no auth, walk away.
- Is it actively maintained? Commits in the last 90 days, issues getting closed, downloads stable? If the project is abandoned, find an alternative.
- Do the permissions match the purpose? A calendar tool shouldn't need filesystem write access. Scope mismatch = red flag.
If a server passes all five, install it. If it fails any one, find an alternative or accept the specific risk with eyes open.
You'll never have a guarantee that software is 100% safe. But 60 seconds of checking before you install eliminates 95% of the risk.
The MCP ecosystem is growing faster than its security culture. That gap will close โ but right now, knowing what to look for is the most reliable defense you have. Start with high-scoring servers from active maintainers, test carefully, and if something feels wrong, trust the evidence panel over the star count.
Security data sourced from MCPpedia's daily automated scans of 17,581 MCP servers. Tool poisoning and injection risk detection uses heuristic pattern matching โ false positives are possible. If you believe your server is incorrectly flagged, open an issue on the MCPpedia GitHub.
MCP Security Weekly
Weekly CVE alerts, new server roundups, and MCP ecosystem insights. Free.
Keep reading
This article was written by AI, powered by Claude and real-time MCPpedia data. All facts and figures are sourced from our database โ but AI can make mistakes. If something looks off, let us know.