Offline-first AI memory for coding — mine codebases into a local LanceDB vector store with 27 MCP tools, a temporal knowledge graph, and export/import. No API keys, no cloud, no server.
One command indexes your codebase. Your AI remembers everything — architecture decisions, debugging sessions, API patterns — across sessions and projects. Forever.
No cloud. No API keys. No subscription. Nothing leaves your machine.
[![][version-shield]][release-link] [![][python-shield]][python-link] [![][license-shield]][license-link]
Get Started in 30 seconds · How It Works · All Features · Benchmarks
| | | |
|---|---|---|
| **Tree-sitter AST Parsing**<br>Chunks at function boundaries, not arbitrary line counts | **27 MCP Tools**<br>Native Claude Code integration: search, store, traverse | **Temporal Knowledge Graph**<br>Facts that change over time, with validity windows |
| **595x Token Savings**<br>Measured peak · median 80x, scales with project size | **Cross-Project Tunnels**<br>Search auth in one project, find it everywhere | **1008 Tests · $0 Cost**<br>Every feature acceptance-gated, fully offline after install |
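To illustrate what "chunks at function boundaries" means, here is a minimal sketch. mempalace itself uses Tree-sitter grammars across languages; this example substitutes Python's stdlib `ast` module to show the same idea — one chunk per top-level definition instead of fixed line windows. The function name is hypothetical, not part of mempalace's API.

```python
import ast

def chunk_at_function_boundaries(source: str) -> list[str]:
    """Split source into one chunk per top-level function or class.

    Illustrative only: mempalace uses Tree-sitter for this across
    languages; stdlib `ast` demonstrates the same boundary-based
    chunking for Python source.
    """
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-based and inclusive (Python 3.8+)
            chunks.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    return chunks

sample = "def a():\n    return 1\n\ndef b():\n    return 2\n"
for chunk in chunk_at_function_boundaries(sample):
    print("---")
    print(chunk)
```

Each chunk is a complete, syntactically meaningful unit, so embeddings computed over it stay coherent — the benefit a fixed-size line window can't guarantee.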
```bash
uv tool install mempalace-code               # recommended (fast, Rust-based)
# or
pipx install mempalace-code                  # alternative
# or
pip install mempalace-code                   # into current environment
# or
uvx --from mempalace-code mempalace --help   # try without installing
```
Then ask your AI to read docs/AGENT_INSTALL.md — it will handle setup, MCP wiring, prompt injection, and verification automatically.
```bash
mempalace init ~/projects/myapp   # detect rooms, download embedding model (~80 MB)
mempalace mine ~/projects/myapp   # index your codebase
claude mcp add mempalace -- python -m mempalace.mcp_server   # connect to Claude Code
```
Optional: auto-sync on commit (requires [watch] extra — see Auto-Watch):
```bash
mempalace watch ~/projects/   # re-mines on every commit, zero noise
```
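If you would rather not install the `[watch]` extra, a plain git hook can approximate the same behavior. This is a sketch, not part of mempalace; the hook path and invocation are assumptions based on the `mempalace mine` command above.

```shell
#!/bin/sh
# Hypothetical .git/hooks/post-commit (make it executable with chmod +x).
# Re-mines the repository after every commit, mirroring `mempalace watch`.
mempalace mine "$(git rev-parse --show-toplevel)"
```

Unlike `mempalace watch`, a per-repo hook must be installed in each repository you want indexed.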
This makes the 27 tools available to your AI. For proactive search and storage (without you asking), you'll also need to add usage rules to your CLAUDE.md — see docs/AGENT_INSTALL.md Section 7.
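As a rough illustration of what such usage rules might look like, here is a hypothetical CLAUDE.md excerpt — the actual recommended wording lives in docs/AGENT_INSTALL.md Section 7.

```markdown
## Memory rules (hypothetical example)
- Before answering architecture questions, search mempalace for prior decisions.
- After fixing a bug, store the root cause and the fix in mempalace.
- When switching projects, run a mempalace search before asking me for context.
```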
```
Read https://github.com/rergards/mempalace-code/blob/main/docs/AGENT_INSTALL.md
1. How much context do I re-explain to you each session? Estimate tokens wasted.
2. Would auto-indexing this codebase as it grows save tokens over time? By how much?
3. Is my language/stack supported? If not, what's missing?
4. Decision: install now, wait for [feature], or skip — with reasoning.
5. If waiting: ask me if I want to draft a feature request for GitHub.
```
mempalace works with any MCP-compatible client:
- **Claude Code**: `claude mcp add mempalace -- python -m mempalace.mcp_server`
- **Claude Desktop**: add an entry to `claude_desktop_config.json`
- **Any other MCP client**: run `python -m mempalace.mcp_server` (stdio transport)

For local models without MCP support (Llama, Mistral, etc.), use `mempalace wake-up` to pipe context into the system prompt — see Memory Layers.
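For Claude Desktop, the `claude_desktop_config.json` entry would look roughly like this — a sketch assuming the standard MCP stdio config shape; the `"mempalace"` key name is arbitrary:

```json
{
  "mcpServers": {
    "mempalace": {
      "command": "python",
      "args": ["-m", "mempalace.mcp_server"]
    }
  }
}
```

The `command`/`args` pair simply reproduces the same `python -m mempalace.mcp_server` invocation used for the other clients.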
You write code. You make decisions. You debug things. Between sessions, all that context vanishes.
mempalace-code indexes