Cached context spine for AI coding agents. Any MCP server as a 10-line plugin. 89.1% saved.
Install Page · Live Demo · Scene Table · rendered with Hyperframes
Install · Quickstart · Dashboard · Benchmark · IDE Integrations · HTTP API · ECP Spec · Contributing
EngramX v3.0 "Spine" shipped 2026-04-24, the biggest release since v1.0. The spine is now extensible: any MCP server becomes an EngramX provider via a 10-line plugin file. The pre-mortem mistake-guard warns before you repeat a bug. Bi-temporal mistake memory means refactored-away mistakes stop firing. The Anthropic Auto-Memory bridge reads Claude Code's own consolidated memory. SSE-streaming packets render progressively.
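The "10-line plugin file" mentioned above is not spelled out on this page, so the following is only a hypothetical sketch of what such a provider plugin might look like. Every name in it (ProviderPlugin, matches, fetch, the gitlab URI scheme) is an assumption, not EngramX's actual API.

```typescript
// Hypothetical shape of an EngramX provider plugin (assumed, not documented here).
interface ProviderPlugin {
  name: string;                              // provider id used as the cache namespace
  matches: (uri: string) => boolean;         // which lookups this plugin handles
  fetch: (uri: string) => Promise<string>;   // delegate to the wrapped MCP server
}

const gitlabProvider: ProviderPlugin = {
  name: "gitlab",
  matches: (uri) => uri.startsWith("gitlab://"),
  fetch: async (uri) => `stub result for ${uri}`, // a real plugin would call the MCP server
};
// (a real plugin file would export this object)
```

The appeal of a shape like this is that the spine owns caching and packet assembly, while the plugin only declares which URIs it handles and how to fetch a miss.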
engram gen dual-emits AGENTS.md + CLAUDE.md by default. 89.1% measured real-world token savings on 87 source files, reproducible in one command. 878 tests, CI green on Ubuntu + Windows × Node 20 + 22. Zero cloud, zero telemetry. See CHANGELOG.md for the full diff.
Your AI coding agent keeps re-reading the same files. Every Read, every Edit, every cat re-pays for context you've already paid for.
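The redundant-read problem above has the classic read-through cache shape. Here is a minimal sketch of intercepting reads at a tool boundary; this is not EngramX's implementation, and all names in it are invented for illustration.

```typescript
// Read-through cache sketch (invented names; EngramX's real spine layers a
// knowledge graph and a per-provider SQLite cache on top of this idea).
type Reader = (path: string) => string;

function makeInterceptedRead(realRead: Reader) {
  const packetCache = new Map<string, string>();
  let misses = 0;

  const read = (path: string): string => {
    const hit = packetCache.get(path);
    if (hit !== undefined) return hit;        // cache hit: no re-read, no re-paid tokens
    misses++;
    const contents = realRead(path);          // pay for the read exactly once
    packetCache.set(path, contents);
    return contents;
  };

  return { read, missCount: () => misses };
}
```

With this shape, N repeated Read/Edit/cat calls on the same file cost one real read instead of N.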
EngramX is the spine. It intercepts every file read at the tool boundary, answers from a pre-assembled context packet held in three layers of cache — a knowledge graph the agent has already "paid" to build, a per-provider SQLite cache of external lookup