Persistent knowledge memory for AI agents. Hybrid search, code graph, pgvector.
Persistent knowledge memory and code intelligence for AI agents. Rust core, Postgres + pgvector, MCP protocol.
The problem: AI coding agents are stateless. Every session starts from zero - no memory of past decisions, no understanding of how the codebase fits together, no way to know what breaks when you change something.
The solution: RemembrallMCP gives agents two things most memory tools don't:
1. Persistent Memory - Decisions, patterns, and organizational knowledge that survive between sessions. Hybrid semantic + full-text search finds relevant context instantly.
2. Code Dependency Graph - A live map of your codebase built with tree-sitter. Functions, classes, imports, and call relationships across 8 languages. Ask "what breaks if I change this?" and get an answer in milliseconds - before the agent touches anything.
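The indexer itself is Rust + tree-sitter, but the shape of what it extracts — symbols plus caller/callee edges — can be sketched in a few lines of Python with the standard `ast` module. This is purely illustrative; `extract_graph` and the toy source are made up for this example, not part of RemembrallMCP's API:

```python
import ast

SOURCE = """
def hash_password(pw):
    return pw[::-1]

def authenticate(user, pw):
    return user.pw_hash == hash_password(pw)
"""

def extract_graph(source: str):
    """Return (symbols, call_edges) for one module -- a toy
    stand-in for what a tree-sitter-based indexer produces."""
    tree = ast.parse(source)
    symbols, edges = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            symbols.append(node.name)
            # Record caller -> callee edges for direct calls.
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    edges.append((node.name, sub.func.id))
    return symbols, edges

symbols, edges = extract_graph(SOURCE)
print(symbols)  # ['hash_password', 'authenticate']
print(edges)    # [('authenticate', 'hash_password')]
```

Once those edges are stored in Postgres, "what breaks if I change this?" becomes a graph traversal rather than a source-code scan.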
```
remembrall_recall("authentication middleware patterns")
-> 3 relevant memories from past sessions

remembrall_index("/path/to/project", "myapp")
-> Builds dependency graph: 847 symbols, 1,203 relationships

remembrall_impact("AuthMiddleware", direction="upstream")
-> 12 files depend on AuthMiddleware (with confidence scores)

remembrall_store("Switched from JWT to session tokens because...")
-> Decision stored for future sessions
```
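The hybrid ranking behind `remembrall_recall` (semantic + full-text, per the feature list above) can be approximated as a weighted sum of a vector-similarity score and a keyword-overlap score. A minimal Python sketch with toy 3-d embeddings — the real system runs this in Postgres with pgvector, and the weight `alpha` and helper names here are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def keyword_score(query: str, text: str) -> float:
    """Fraction of query words appearing in the memory text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, memories, alpha=0.6):
    """Score = alpha * semantic similarity + (1 - alpha) * keyword overlap."""
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in memories
    ]
    return [text for _, text in sorted(scored, reverse=True)]

memories = [
    ("auth middleware uses session tokens", [0.9, 0.1, 0.0]),
    ("CI pipeline caches cargo builds",     [0.1, 0.8, 0.2]),
]
ranked = hybrid_rank("authentication middleware patterns", [0.85, 0.2, 0.05], memories)
print(ranked[0])  # -> auth middleware uses session tokens
```

Combining the two signals is what lets a recall query match both paraphrases (via embeddings) and exact identifiers (via full-text terms).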
Without RemembrallMCP, agents explore your codebase from scratch every session. Claude Code spawns Explore agents, Codex reads dozens of files, Cursor greps through directories - all burning tokens and time just to understand what calls what. A single "find all callers of this function" task can cost thousands of tokens across multiple tool calls.
With RemembrallMCP, that same query is a single remembrall_impact call that returns in <1ms with zero exploration tokens. The dependency graph is already built and waiting.
| Task | Without RemembrallMCP | With RemembrallMCP |
|---|---|---|
| "What calls UserService?" | Agent greps, reads 8-15 files, spawns sub-agents | remembrall_impact - 1 call, <1ms |
| "Where is auth middleware defined?" | Agent globs, reads matches, filters | remembrall_lookup_symbol - 1 call, <1ms |
| "What did we decide about caching?" | Agent has no context, asks you | remembrall_recall - 1 call, ~25ms |
| Typical exploration cost | 5,000-20,000 tokens per question | ~200 tokens (tool call + response) |
The savings scale with codebase size. On a small project, an agent can grep and read its way through. On a 500-file monorepo, that exploration becomes the bottleneck - agents hit context limits, spawn multiple sub-agents, or miss cross-module dependencies entirely. RemembrallMCP's graph queries stay under 10ms regardless of project size because the structure is pre-indexed in Postgres, not discovered at runtime.
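The speed claim follows from the data structure: an impact query is a breadth-first walk over pre-stored edges, not a re-read of the source. A sketch of the upstream-impact lookup over an in-memory adjacency map — the real graph lives in Postgres, so this only illustrates the shape of the query; the edge data and function name are hypothetical:

```python
from collections import deque

# Pre-indexed call/import edges: caller -> callee (toy data).
EDGES = [
    ("LoginRoute", "AuthMiddleware"),
    ("AdminRoute", "AuthMiddleware"),
    ("AuthMiddleware", "SessionStore"),
    ("Dashboard", "LoginRoute"),
]

def impact(symbol: str, direction: str = "upstream") -> set[str]:
    """Everything reachable from `symbol` along dependency edges.
    upstream = who depends on it; downstream = what it depends on."""
    adj: dict[str, list[str]] = {}
    for caller, callee in EDGES:
        src, dst = (callee, caller) if direction == "upstream" else (caller, callee)
        adj.setdefault(src, []).append(dst)
    seen, queue = set(), deque([symbol])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impact("AuthMiddleware")))
# -> ['AdminRoute', 'Dashboard', 'LoginRoute']
```

Note that the traversal finds `Dashboard` as a transitive dependent even though it never references `AuthMiddleware` directly — exactly the cross-module relationship a grep-based exploration tends to miss.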
This is the difference between an agent that explores your codebase every time and one that already understands it.
RemembrallMCP is currently benchmarked on two surfaces:
| Metric | Without RemembrallMCP | With RemembrallMCP | Delta |
|---|---|---|---|
| Total tool calls (5 tasks) | 112 | 5 | -95.5% |
| Estimated tokens | ~56,000 | ~1,000 | -98.2% |
| Avg tool calls per question | 22.4 | 1.0 | -95.5% |
The savings compound on larger codebases. Click is ~90 files - on a 500+ file monorepo, agents without RemembrallMCP need proportionally more exploration calls, while graph queries stay under 10ms regardless of size.
| Memory Recall Metric | Result |
|---|---|
| Queries passed | **3 |