Scan deps for CVEs via MCP. Auto-detects stack, queries OSV.dev. Zero config, privacy-first.
Config is the same across clients — only the file and path differ.
```json
{
  "mcpServers": {
    "io-github-runyourempire-4da-mcp-server": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```
No automated test available for this server. Check the GitHub README for setup instructions.
Five weighted categories, each backed by underlying evidence.
No known CVEs.
No package registry to scan.
This server is missing a description. Tools and install config are also missing.
Similar servers
Persistent memory using a knowledge graph
Privacy-first. MCP is the protocol for tool access. We're the virtualization layer for context.
Make HTTP requests and fetch web content
Read, write, and manage files on the local filesystem
4DA reads the internet for developers — privately, locally — and gets sharper every day.
It scans your codebase — Cargo.toml, package.json, go.mod, Git history — and scores every article, advisory, and release from 20+ sources against what you actually build. An item needs 2+ independent signals to survive. Everything else is rejected.
Typical rejection rate: 99%+. What's left is yours.
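The scan starts with stack auto-detection. A minimal sketch of that step, assuming it probes the project root for the manifest files named above (the `detectStacks` function and its file-to-stack mapping are illustrative, not 4DA's actual code):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Manifest files the docs say 4DA looks for; the stack labels are assumptions.
const MANIFESTS: Record<string, string> = {
  "Cargo.toml": "rust",
  "package.json": "node",
  "go.mod": "go",
};

// Return the stacks whose manifest exists in the project root.
function detectStacks(projectRoot: string): string[] {
  return Object.entries(MANIFESTS)
    .filter(([file]) => fs.existsSync(path.join(projectRoot, file)))
    .map(([, stack]) => stack);
}
```

Everything downstream — dependency matching, relevance scoring — keys off the stacks detected here.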
Privacy-first. Local-first. BYOK. Your indexed content stays on your machine — there is no 4DA-operated server for it to go to. It learns from how you engage with what it shows you — yesterday's noise becomes tomorrow's signal. Crash reporting is opt-in, off by default. For the full list of outbound connections with source-code references, see NETWORK.md.
Already using Claude Code, Cursor, or Windsurf? One command:
```shell
npx @4da/mcp-server --setup
```
This auto-detects your editor, scans your project, and gives your AI assistant context-aware developer intelligence. No desktop app required — works standalone. Learn more about the MCP server.
5 independent signal axes. An item must pass 2 or more to surface. Single-axis matches are hard-capped at 28% — no matter how strong one signal is, it cannot pass alone.
| Axis | What it measures |
|---|---|
| Context | Semantic similarity to your active codebase |
| Interest | Alignment with your declared and learned topics |
| ACE | Real-time signals from your Git commits and file edits |
| Dependency | Direct matches against your installed packages |
| Learned | Save/dismiss feedback boosts or suppresses future scores |
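The 2-of-5 gate above can be sketched as follows. This is a hedged illustration: the 28% cap comes from the docs, but the per-axis threshold and the combined-score formula are assumptions.

```typescript
// One score per axis from the table above, each in 0..1.
type AxisScores = {
  context: number;
  interest: number;
  ace: number;
  dependency: number;
  learned: number;
};

const AXIS_THRESHOLD = 0.3;   // assumption: cutoff for "this axis fired"
const SINGLE_AXIS_CAP = 0.28; // from the docs: single-axis hard cap

function gate(scores: AxisScores): { passes: boolean; score: number } {
  const values = Object.values(scores);
  const firing = values.filter((v) => v >= AXIS_THRESHOLD).length;
  // Illustrative combined score: mean of all five axes.
  const combined = values.reduce((a, b) => a + b, 0) / values.length;
  if (firing >= 2) return { passes: true, score: combined };
  // Fewer than two independent signals: hard-capped, cannot surface.
  return { passes: false, score: Math.min(combined, SINGLE_AXIS_CAP) };
}
```

The design point is that the cap is enforced structurally: even a perfect single-axis match stays below the surfacing threshold.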
What passes the gate goes through 12 quality multipliers: content depth, novelty detection, competing tech penalties, title-body coherence, and intent scoring from recent work. Every constant is calibrated across 9 simulated developer personas with 215 labeled test items.
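A sketch of how chained multipliers of this kind typically compose, assuming they multiply into the gated score; the factor names mirror the docs but the formula and clamping are illustrative assumptions, not 4DA's calibrated constants.

```typescript
// A subset of the quality multipliers named in the docs, each near 1.0.
interface Multipliers {
  contentDepth: number;         // >1 rewards long-form analysis
  novelty: number;              // <1 if similar items were already shown
  competingTechPenalty: number; // <1 for declared anti-technologies
  titleBodyCoherence: number;   // <1 for clickbait title/body mismatch
  intent: number;               // alignment with recent work
}

function applyMultipliers(baseScore: number, m: Multipliers): number {
  const factor = Object.values(m).reduce((acc, x) => acc * x, 1);
  // Clamp so multipliers refine the score but never push it past 1.
  return Math.min(1, Math.max(0, baseScore * factor));
}
```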
After keyword scoring, an optional LLM layer verifies the top items against your full developer context — stack, dependencies, recent commits, anti-technologies, and engagement history. Strict 1-5 rubric:
This is where gold nuggets surface — articles the keyword pipeline misses because there's no keyword overlap, but the LLM understands the conceptual relevance to your specific project.
You control the compute. Use Ollama for free local inference (fully private), or bring your own Anthropic/OpenAI key. 4DA never pays for your compute, never stores your keys remotely, never makes API calls you didn't configure. No LLM = keyword pipeline only (85-90% accurate).
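The BYOK selection above suggests a simple preference order: local Ollama first, then a user-supplied cloud key, else keyword-only. A minimal sketch under that assumption; the environment-variable names are illustrative, not 4DA's documented configuration.

```typescript
// The possible compute backends described in the docs.
type Provider =
  | { kind: "ollama"; baseUrl: string }
  | { kind: "anthropic" | "openai"; apiKey: string }
  | { kind: "keyword-only" };

// Pick the first available provider; fall back to the keyword pipeline.
function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.OLLAMA_HOST) return { kind: "ollama", baseUrl: env.OLLAMA_HOST };
  if (env.ANTHROPIC_API_KEY) return { kind: "anthropic", apiKey: env.ANTHROPIC_API_KEY };
  if (env.OPENAI_API_KEY) return { kind: "openai", apiKey: env.OPENAI_API_KEY };
  return { kind: "keyword-only" }; // 85-90% accurate, per the docs
}
```

Keys are read from the local environment only, which matches the claim that they are never stored remotely.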
Every interaction is a training signal: