OSS libs in your stack, actually used: source, tests, callers. C#, Java, TS, Python, Rust, PHP+.
Real callers. Real source. Real hierarchy. In 3 tool calls.
→ Landing page · → Benchmark whitepaper · → Connect in 30 seconds
Your AI coding agent is burning tokens grepping OSS libraries it will barely use. xmp4 is a hosted MCP server that pre-indexes 856 popular open-source libraries with SCIP — the semantic code format Sourcegraph uses — and serves them through 17 tools. No clone. No grep. No false positives.
ASK: "Who calls Flask.wsgi_app in the flask repo and what does it do?"
with grep + local clone:

- `git clone flask/flask` (~40 MB, ~2 min)
- `grep -rn "wsgi_app" .` (200+ matches, mostly noise)
- `cat src/flask/app.py | sed ...` (read 1,000+ lines to find the body)
- filter false positives (model spends tokens deciding what's real)

total: ~15,000 tokens + disk + wall time

with xmp4:

- `xmp4_info(symbol_name="Flask", file_path="src/flask/app.py")` → signature, 20 tok
- `xmp4_source(symbol_name="wsgi_app", file_path="src/flask/app.py")` → body, 180 tok
- `xmp4_callers(symbol_name="wsgi_app", file_path="src/flask/app.py")` → 1 caller, 50 tok

total: ~250 tokens
xmp4 is 60× cheaper here — and every result is SCIP-resolved, not text-matched.
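Since xmp4 is an ordinary MCP server over HTTP, the three lookups above are plain JSON-RPC `tools/call` requests. A minimal sketch of building them (tool names and the endpoint URL come from this README; the exact input schema is an assumption — check `tools/list` on the live server):

```python
import json

MCP_URL = "https://mcp.example4.ai/mcp"  # endpoint from the install config below


def tool_call(req_id: int, tool: str, symbol_name: str, file_path: str) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request for one xmp4 lookup.

    Argument names mirror the calls shown above; the real schema may
    differ, so treat this as an illustration of the wire format only.
    """
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {
            "name": tool,
            "arguments": {"symbol_name": symbol_name, "file_path": file_path},
        },
    }


# The three lookups from the example above, as wire-format requests.
requests = [
    tool_call(1, "xmp4_info", "Flask", "src/flask/app.py"),
    tool_call(2, "xmp4_source", "wsgi_app", "src/flask/app.py"),
    tool_call(3, "xmp4_callers", "wsgi_app", "src/flask/app.py"),
]

for r in requests:
    print(json.dumps(r))
```

In practice your MCP client (Claude Code, Cursor, etc.) issues these requests for you; the sketch only shows what travels over the wire.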
Same realistic question on spring-boot · tokio · django · efcore: "give me the signature, body, and real callers of X."
| | xmp4 | grep + clone | GitMCP | Context7 |
|---|---|---|---|---|
| Total tokens (same question) | 1,558 | 2,978 | 65,629 | — |
| vs xmp4 | 1× | 1.9× more | 42× more | can't answer |
| Returns real source body? | ✅ | ✅ noisy | ✗ file paths only | ✗ curated docs only |
| Semantic callers? | ✅ | ✗ | ✗ | ✗ |
| Type hierarchy? | ✅ | ✗ | ✗ | ✗ |
| Setup cost | 0 | GBs of clone | 0 | 0 |
GitMCP and Context7 look cheaper per call because they return less. To reach the same answer, GitMCP balloons to 42× more tokens — and still can't produce the semantic caller list. Context7 can't at any cost. Full whitepaper with Python harness →
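The "vs xmp4" row follows directly from the token totals in the table; a quick arithmetic check (all numbers taken from the table above):

```python
# Total tokens to answer the same question, per the benchmark table.
xmp4, grep_clone, gitmcp = 1558, 2978, 65629

# Each competitor's total divided by xmp4's gives the "vs xmp4" multiplier.
print(f"grep + clone: {grep_clone / xmp4:.1f}x")  # ~1.9x more
print(f"GitMCP:       {gitmcp / xmp4:.1f}x")      # ~42x more
```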
// Claude Code / Cursor / Claude Desktop — project `.mcp.json` or client config
{
  "mcpServers": {
    "xmp4": {
      "type": "http",
      "url": "https://mcp.example4.ai/mcp"
    }
  }
}
(The ready-to-paste config also lives at .mcp.json in this repo.)
Install the xmp4 skill once per version — Claude will pick the cheapest tool path automatically (tests_for + view over grep):
# Claude Code
mkdir -p ~/.claude/skills/xmp4 && \
curl -sfL https://example4.ai/xmp4-skill.md -o ~/.claude/skills/xmp4/SKILL.md
... [View full README on GitHub](https://github.com/0ics-srls/lsai-xmp4.public#readme)