Self-hosted MCP memory server — 35 tools for persistent context across AI coding sessions. Runs on Docker with PostgreSQL + pgvector + Ollama.
Config is the same across clients — only the file and path differ.
{
  "mcpServers": {
    "engram-go": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
Every time you close your AI coding session, it forgets everything. The JWT library you chose. The expiry bug you spent an afternoon on. The pattern you explicitly rejected. Gone. Next session, the agent starts from zero and you start explaining.
# Session start — before touching any code
memory_recall("session handoff recent decisions", project="myapp")

# After settling on a technical choice
memory_store(
    "Chose RS256 over HS256: the API gateway needs to verify tokens without
     holding the signing secret. HS256 would require distributing the key to
     every service. Do not change this without updating the gateway config.",
    memory_type="decision",
    project="myapp"
)
Your memories never leave your machine.
Most memory tools send your code context, architectural notes, and decision logs to a third-party API. Engram doesn't. Your PostgreSQL stores every memory. Your Ollama instance runs every embedding. Nothing leaves your infrastructure unless you push it yourself.
make init && make up && make setup
# Done. Memory server at localhost:8788.
Finds what you mean, not just what you typed. BM25 keyword search and 768-dimensional semantic vectors run simultaneously. Searching "database lock timeout" finds your note about "WAL mode contention under load" — no shared words, close meaning. When Ollama is unavailable, search degrades gracefully to BM25+recency. Your results never disappear because an external service went down.
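One common way to merge a keyword ranking with a vector ranking is reciprocal-rank fusion. Engram's actual scoring formula isn't documented on this page, so the sketch below is illustrative only; the function name and the k constant are assumptions. Note how an empty vector list (Ollama unavailable) reduces the fusion to plain BM25 order, matching the graceful-degradation behavior described above.

```python
def rrf_merge(bm25_ranked, vector_ranked, k=60):
    """Merge two ranked lists of memory IDs with reciprocal-rank fusion."""
    scores = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, mem_id in enumerate(ranked):
            # Each list contributes 1/(k + rank); items high in either list win.
            scores[mem_id] = scores.get(mem_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# With no vector results, the merged order is just the BM25 order.
print(rrf_merge(["a", "b", "c"], []))  # ['a', 'b', 'c']
```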
Weights by recency automatically. Exponential decay at 1% per hour. Yesterday's decision outranks one from six months ago. Nothing is deleted; old memories step back. Six-month-old memories are still there if nothing more recent matches.
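The 1%-per-hour figure can be modeled as exponential decay. Whether Engram applies a continuous exp(-0.01t) or a discrete 0.99^t multiplier isn't stated here, so this sketch assumes the continuous form:

```python
import math

def recency_weight(age_hours, decay_per_hour=0.01):
    """Exponential recency decay: weight falls roughly 1% per hour."""
    return math.exp(-decay_per_hour * age_hours)

# A day-old memory keeps ~79% of its weight.
print(round(recency_weight(24), 2))  # 0.79
# A six-month-old memory is near zero, but never deleted:
# it still surfaces if nothing more recent matches.
print(recency_weight(24 * 180) < 0.001)  # True
```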
Surfaces connected memories without being asked. A knowledge graph links decisions to the bugs they caused and the patterns they require. Recall one; get its neighborhood. Store a bug report, store the architectural pattern that caused it, connect them with a causes edge. Now any query about the pattern automatically surfaces the bug — you don't have to remember to ask for both.
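The neighborhood expansion described above can be sketched with a hypothetical in-memory edge list. Engram stores its graph in PostgreSQL; the memory IDs and relation names here are invented for illustration:

```python
# Hypothetical edges: (source, relation, target)
edges = [
    ("pattern:singleton-db-conn", "causes", "bug:wal-lock-timeout"),
    ("bug:wal-lock-timeout", "requires", "decision:use-wal-busy-timeout"),
]

def neighborhood(mem_id, edges):
    """Return memories directly linked to mem_id, in either direction."""
    linked = set()
    for src, _rel, dst in edges:
        if src == mem_id:
            linked.add(dst)
        if dst == mem_id:
            linked.add(src)
    return linked

# Recalling the bug also surfaces the pattern that caused it
# and the decision it required.
print(neighborhood("bug:wal-lock-timeout", edges))
```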
Stores documents, not just notes. memory_store_document handles up to 500,000 characters. Engram chunks at sentence boundaries and embeds each chunk independently. A 20,000-word architecture document is searchable at the paragraph level — a query about authentication surfaces the auth section, not the whole document.
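Sentence-boundary chunking can be sketched as greedy packing of whole sentences up to a size limit. The character cap and the splitting regex below are assumptions, not Engram's actual parameters:

```python
import re

def chunk_by_sentence(text, max_chars=1000):
    """Greedily pack whole sentences into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        # Start a new chunk rather than split a sentence mid-way.
        if current and len(current) + 1 + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

doc = "Auth uses RS256. The gateway verifies tokens. Storage is Postgres."
print(chunk_by_sentence(doc, max_chars=40))
```

Each chunk would then be embedded independently, which is what makes a long document searchable at the paragraph level.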
git clone https://github.com/petersimmons1972/engram-go.git && cd engram-go
make init # generates POSTGRES_PASSWORD and ENGRAM_API_KEY in .env
make up # starts postgres, ollama, and engram-go
make setup # writes bearer token to ~/.claude/mcp_servers.json
The server starts on port 8788. If you prefer to author .env by hand rather than using make init, .env.example at the repo root documents every available variable with its default and purpose. Cold start: under 200ms. Memory at idle: 18 MB.
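If you do author .env by hand, a minimal file needs only the two secrets that make init would otherwise generate. This is a sketch; consult .env.example for the full list, defaults, and any variables not named on this page:

```shell
# Minimal hand-authored .env sketch (values are placeholders)
POSTGRES_PASSWORD=change-me
ENGRAM_API_KEY=change-me
```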
Docker users:
docker-compose.yml now sets ENGRAM_SETUP_TOKEN_ALLOW_RFC1918=1 automatically. If you run engram outside Docker and need /setup-token accessible from RFC1918 addresses (e.g. a LAN host), add this variable to your environment. Without it, /setup-token only accepts loopback (127.0.0.1 / ::1).
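For a non-Docker deployment that needs LAN access to /setup-token, the toggle can be set in the server's environment before launch, for example:

```shell
# Allow /setup-token requests from RFC1918 (private-LAN) addresses.
# Without this, the endpoint accepts only loopback (127.0.0.1 / ::1).
export ENGRAM_SETUP_TOKEN_ALLOW_RFC1918=1
```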
Run /mcp in Cla