Config is the same across clients — only the file and path differ.
```json
{
  "mcpServers": {
    "codemesh": {
      "command": "npx",
      "args": ["-y", "@pyalwin/codemesh"],
      "env": {
        "CODEMESH_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
```
Run this in your terminal to verify the server starts:

```shell
npx -y '@pyalwin/codemesh' 2>&1 | head -1 && echo "✓ Server started successfully"
```
No known CVEs (checked @pyalwin/codemesh against OSV.dev).
Intelligent code knowledge graph for AI coding agents
71% cheaper, 72% faster, 82% fewer tool calls vs baseline Grep+Read
on 6 real-world repos (Sonnet 4.6) — from a single codemesh index.
Benchmarks · Quick Start · Integrations · Write-Back · How It Works · API Reference · Full Results
AI coding agents waste 40-80% of their tokens on discovery — grepping through files, reading irrelevant code, and rebuilding context they've already seen in previous sessions.
On a 600-file codebase, a typical exploration task involves 10+ file reads before the agent even knows what's relevant.
```
Before: Agent → Grep → 50 matches → Read 10 files → Understand → Work
After:  Agent → codemesh_explore → 3 relevant files → codemesh_trace → full path → Work
```
Codemesh is an MCP server that gives agents a persistent, queryable knowledge graph. The graph gets smarter over time: agents write back what they learn, so the next session starts informed.
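The write-back loop can be pictured with a minimal sketch. This is an illustration of the idea only: the class and method names (`CodeGraph`, `explore`, `record_insight`) are invented for this sketch and are not Codemesh's actual API.

```python
from collections import defaultdict

# Minimal sketch of a persistent, queryable code graph with write-back.
# All names here are illustrative, not Codemesh's real interface.
class CodeGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # symbol -> related symbols/files
        self.notes = defaultdict(list)  # symbol -> insights written back by agents

    def explore(self, symbol):
        """Return known neighbors plus insights recorded in earlier sessions."""
        return {"related": sorted(self.edges[symbol]),
                "notes": list(self.notes[symbol])}

    def record_insight(self, symbol, note, related=()):
        """Write back what an agent learned, so the next session starts informed."""
        self.notes[symbol].append(note)
        self.edges[symbol].update(related)

graph = CodeGraph()
graph.record_insight("Session.request",
                     "retry logic lives in the request interceptor",
                     related=["RequestInterceptor.swift"])
print(graph.explore("Session.request"))
```

The key property is that `explore` in session N returns whatever `record_insight` stored in sessions 1..N−1, which is what lets later sessions skip rediscovery.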
Benchmarked on 6 real-world codebases (Alamofire, Excalidraw, VS Code, Swift Compiler, pydantic-validators, pydantic-basemodel) with Claude Sonnet 4.6, against a plain Grep+Read baseline and a graph-based approach (Codegraph) for context.
Full methodology, per-repo breakdowns, and pairwise comparisons: docs/benchmark-results.md | Early pydantic evals
**Cost per task**

| Mode | Alamofire | Excalidraw | VS Code | Swift Compiler[^swift] | pydantic-validators | pydantic-basemodel | Avg |
|---|---|---|---|---|---|---|---|
| Baseline | $0.54 | $0.89 | $0.21 | $0.83 | $1.32 | $0.78 | $0.76 |
| Codemesh MCP | $0.25 | $0.21 | $0.16 | $0.23 | $0.33 | $0.13 | $0.22 |
| Codemesh CLI | $0.67 | $0.51 | $0.16 | $0.83 | $1.00 | $0.18 | $0.56 |
| Codegraph | $0.37 | $0.56 | $0.57 | $0.74 | $0.29 | $0.19 | $0.45 |
**Wall-clock time**

| Mode | Alamofire | Excalidraw | VS Code | Swift[^swift] | pydantic-v | pydantic-b | Avg |
|---|---|---|---|---|---|---|---|
| Baseline | 180s | 191s | 87s | 199s | 352s | 232s | 207s |
| Codemesh MCP | 78s | 45s | 35s | 87s | 72s | 32s | 58s |
| Codemesh CLI | 226s | 177s | 62s | 227s | 235s | 51s | 163s |
| Codegraph | 134s | 180s | 192s | 199s | 75s | 60s | 140s |
**Tool calls**

| Mode | Alamofire | Excalidraw | VS Code | Swift[^swift] | pydantic-v | pydantic-b | Avg |
|---|---|---|---|---|---|---|---|
| Baseline | 31 | 48 | 12 | 29 | 84 | 65 | 45 |
| Codemesh MCP | 9 | 5 | 3 | 14 | 14 | 3 | 8 |
| Codemesh CLI | 30 | 32 | 12 | 56 | 64 | 9 | 34 |
| Codegraph | 31 | 35 | 44 | 44 | 20 | 12 | 31 |
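The headline claim (71% cheaper, 72% faster, 82% fewer tool calls) follows directly from the Avg columns above; a quick sanity check:

```python
# Averages taken from the Avg columns of the cost, time, and tool-call tables.
baseline     = {"cost": 0.76, "time": 207, "tool_calls": 45}
codemesh_mcp = {"cost": 0.22, "time": 58,  "tool_calls": 8}

for metric in baseline:
    saved = 1 - codemesh_mcp[metric] / baseline[metric]
    print(f"{metric}: {saved:.0%} lower with Codemesh MCP")
# cost: 71% lower with Codemesh MCP
# time: 72% lower with Codemesh MCP
# tool_calls: 82% lower with Codemesh MCP
```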
**Quality score**

| Mode | Alamofire[^alamo] | Excalidraw | VS Code | Swift Compiler | pydantic-validators | pydantic-basemodel | Avg |
|---|---|---|---|---|---|---|---|
| Baseline | n/a | 9 | 8 | 7 | 2 | 9 | 7.0 |
| Codemesh MCP | 9 | 9 | 7 | 8 | 7 | 7.8 | 7.9 |
| Codemesh CLI | 9 | 7 | 7 | 9 | 1 | 8.4 | 6.9 |
| Codegraph | 8 | 9 | 8.7 | 8 | 8 | 9 | 8.4 |
**Per-repo savings: Codemesh MCP vs baseline**

| Repo | Baseline | Codemesh MCP | Cost saved | Time saved |
|---|---|---|---|---|
| Alamofire | $0.54 | $0.25 | −54% | −57% (180s → 78s) |
| Excalidraw | $0.89 | $0.21 | −76% | −76% (191s → 45s) |