Local-first memory for coding agents; stores decisions, bugs, and context across sessions.
{
  "mcpServers": {
    "io-github-go-ports-echovault": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 2 days ago. 4 stars.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
Local memory for coding agents. Your agent remembers decisions, bugs, and context across sessions — no cloud, no API keys, no cost.
Install · Features · How it works · Commands
This is the Go port of EchoVault. It is a single static binary with no Python runtime dependency. The vault format and MCP interface are fully compatible with the Python version — you can switch between them without losing any memories.
Works with 4 agents — Claude Code, Cursor, Codex, OpenCode. One command sets up MCP config for your agent.
MCP native — Runs as an MCP server exposing memory_save, memory_search, and memory_context as tools. Agents call them directly — no shell hooks needed.
Local-first — Everything stays on your machine. Memories are stored as Markdown in ~/.memory/vault/, readable in Obsidian or any editor.
Zero idle cost — No background processes, no daemon, no RAM overhead. The MCP server only runs when the agent starts it.
Hybrid search — FTS5 keyword search works out of the box. Add Ollama or OpenAI for semantic vector search.
Secret redaction — 3-layer redaction strips API keys, passwords, and credentials before anything hits disk. Supports explicit <redacted> tags, pattern detection, and custom .memoryignore rules.
Cross-agent — Memories saved by Claude Code are searchable in Cursor, Codex, and OpenCode. One vault, many agents.
Obsidian-compatible — Session files are valid Markdown with YAML frontmatter. Point Obsidian at ~/.memory/vault/ and browse your agent's memory visually.
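Since session files are plain Markdown with YAML frontmatter, a vault entry under ~/.memory/vault/ might look roughly like the sketch below. The frontmatter field names and the file layout are illustrative assumptions, not the tool's actual schema; the `<redacted>` marker mirrors the secret-redaction feature described above.

```markdown
---
# Hypothetical frontmatter — field names are illustrative, not the real schema
date: 2025-06-12
agent: claude-code
tags: [decision, auth]
---

## Decision: use JWT for session auth

Chose JWT over server-side sessions because the API is stateless.
The signing key is read from the environment: SIGNING_KEY=<redacted>
```

Because this is ordinary Markdown, pointing Obsidian (or grep) at the vault directory works with no extra tooling.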
Download the latest release for your platform from the releases page and place the binary somewhere on your $PATH.
git clone https://github.com/go-ports/echovault.git
cd echovault
make build # produces ./bin/memory
sudo cp bin/memory /usr/local/bin/
CGO required. The binary links against go-sqlite3 and sqlite-vec, so a C compiler (gcc/clang) must be present. On macOS: xcode-select --install. On Debian/Ubuntu: apt install build-essential.
memory init
memory setup claude-code # or: cursor, codex, opencode
That's it. memory setup installs the MCP server config automatically.
By default the config is installed globally. To install for a specific project:
cd ~/my-project
memory setup claude-code --project # writes .mcp.json in project root
memory setup opencode --project # writes opencode.json in project root
memory setup codex --project # writes .codex/config.toml + AGENTS.md
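For a sense of what gets written, a project-level .mcp.json produced by memory setup claude-code --project would plausibly look like the following. This is a sketch based on the standard MCP client config format, not copied from the tool's actual output — the server key and the args are assumptions.

```json
{
  "mcpServers": {
    "echovault": {
      "command": "memory",
      "args": ["mcp"]
    }
  }
}
```

Since the transport is stdio, the client simply spawns the memory binary on demand, which is also why there is no idle cost.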
Embeddings enable semantic search. Without them, you still get fast keyword search via FTS5.
Generate a starter config:
memory config init
This creates ~/.memory/config.yaml with sensible defaults:
embedding:
  provider: ollama        # ollama | openai | openrouter
  model: nomic-embed-text
context:
  semantic: auto          # auto | always | never
  topup_recent: true
What each section does:
embedding — How memories get turned into vectors for semantic search. ollama runs locally; openai and openrouter call cloud APIs. nomic-embed-text is a good local model for Ollama.
context — Controls how memories are retrieved at session start. auto uses vector search when embeddings are available and falls back to keywords otherwise. topup_recent also includes recent memories so the agent has fresh context.
For cloud providers, add api_key under the provider section. API ke
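Putting that together, a cloud-provider variant of ~/.memory/config.yaml might look like this sketch. The api_key placement follows the text above, but the model name and the env-var interpolation syntax are illustrative assumptions — check the project's docs for the supported form.

```yaml
embedding:
  provider: openai
  model: text-embedding-3-small   # illustrative model name, not necessarily the default
  api_key: ${OPENAI_API_KEY}      # assumption: a literal key may be required if interpolation is unsupported
context:
  semantic: auto
  topup_recent: true
```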