Local-first agentic memory for MCP agents — 25 tools, hybrid search, GDPR, no cloud.
Is it safe?
- No package registry to scan.
- No authentication — any process on your machine can connect.
- License not specified.
- No known vulnerabilities.

Is it maintained?
- Commit history unknown.

Will it work with my client?
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
- No automated test available for this server. Check the GitHub README for setup instructions.
Persistent, local memory for MCP agents.
Your agent forgets everything between sessions. M3 Memory fixes that. Install it, add one line to your MCP config, and your agent remembers across sessions, detects contradictions, and keeps its own knowledge current — all on your hardware, fully offline.
Works with Claude Code, Gemini CLI, Aider, and any MCP-compatible agent.
```shell
pip install m3-memory
```
Add to your MCP config:
```json
{
  "mcpServers": {
    "memory": { "command": "mcp-memory" }
  }
}
```
Requires a local embedding model. Ollama is the easiest:
```shell
ollama pull nomic-embed-text && ollama serve
```
Prefer a GUI? LM Studio works too — load any embedding model (e.g. nomic-embed-text-v1.5) and start its server (defaults to port 1234).
Restart your agent. Done.
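Under the hood, every MCP-compatible agent talks to the server the same way: JSON-RPC 2.0 messages over the stdio transport. For the curious, this is roughly the `initialize` request a client sends on startup (message shape follows the MCP specification; the `clientInfo` values here are placeholders, not anything M3 Memory requires):

```python
import json

def initialize_request(client_name: str, client_version: str) -> str:
    """Build the JSON-RPC initialize message an MCP client sends first."""
    msg = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }
    # The stdio transport frames each message as a single line of JSON.
    return json.dumps(msg)

line = initialize_request("my-agent", "0.1.0")
print(line)
```

You never write this by hand; Claude Code, Gemini CLI, and the rest do the handshake for you once the config entry above is in place.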
You're at a coffee shop on your MacBook, asking Claude to debug a deployment issue. It remembers the architecture decisions you made last week, the server configs you stored yesterday, and the troubleshooting steps that worked last time — all from local SQLite, no internet required.
Later, you're at your Windows desktop at home with Gemini CLI, and it picks up exactly where you left off. Same memories, same context, same knowledge graph. You didn't copy files, didn't export anything, didn't push to someone else's cloud. Your PostgreSQL sync handled everything in the background the moment your laptop hit the local network.
Most AI agents don't persist state between sessions. You re-paste context, re-explain architecture, re-correct mistakes. When facts change, the agent has no mechanism to update what it "knows."
M3 Memory gives agents a structured, persistent memory layer that handles this.
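One common way such a layer can supersede stale facts is bitemporal versioning: keep every version of a fact and mark the old one as closed instead of deleting it. A minimal SQLite sketch of the idea (the schema and column names are illustrative, not M3 Memory's actual tables):

```python
import sqlite3

# Illustrative schema only -- not M3 Memory's real storage layout.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE memories (
        id         INTEGER PRIMARY KEY,
        subject    TEXT NOT NULL,
        fact       TEXT NOT NULL,
        valid_from TEXT NOT NULL DEFAULT (datetime('now')),
        valid_to   TEXT              -- NULL = still current
    )
""")

def remember(subject: str, fact: str) -> None:
    """Store a fact, superseding any current fact about the same subject."""
    con.execute(
        "UPDATE memories SET valid_to = datetime('now') "
        "WHERE subject = ? AND valid_to IS NULL",
        (subject,),
    )
    con.execute("INSERT INTO memories (subject, fact) VALUES (?, ?)",
                (subject, fact))

remember("db_host", "db runs on 10.0.0.5")
remember("db_host", "db moved to 10.0.0.9")   # contradicts the first fact

current = con.execute(
    "SELECT fact FROM memories WHERE subject = 'db_host' AND valid_to IS NULL"
).fetchall()
history = con.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(current)   # only the newest fact is current
print(history)   # but both versions remain queryable
```

The payoff is that "what did the agent believe last Tuesday?" stays answerable even after the fact changes.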
- **Persistent memory** — facts, decisions, preferences survive across sessions. Stored in local SQLite.
- **Hybrid retrieval** — FTS5 keyword matching + semantic vector similarity + MMR diversity re-ranking. Scored and explainable.
- **Contradiction handling** — conflicting facts are automatically superseded. Bitemporal versioning preserves the full history.
- **Knowledge graph** — related memories linked automatically on write. Eight relationship types, 3-hop traversal.
- **Local and private** — embeddings generated locally. No cloud calls. No API costs. Works offline.
- **Cross-device sync** — optional bi-directional delta sync across SQLite, PostgreSQL, and ChromaDB. Same memory on every machine.
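To make the MMR part of hybrid retrieval concrete, here is a generic sketch of maximal marginal relevance re-ranking over already-scored candidates. This is the textbook greedy algorithm, not M3 Memory's implementation; the toy data is made up:

```python
def mmr(candidates, query_sim, pair_sim, k=3, lam=0.7):
    """Greedy maximal marginal relevance: trade relevance off against diversity.

    candidates: list of doc ids
    query_sim:  dict doc -> similarity to the query (higher = more relevant)
    pair_sim:   dict frozenset({a, b}) -> similarity between two docs
    lam:        1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(d):
            # Penalize a candidate by its similarity to the closest doc
            # already selected, so near-duplicates are pushed down.
            redundancy = max(
                (pair_sim[frozenset((d, s))] for s in selected), default=0.0
            )
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: A and B are near-duplicates; C is less relevant but different.
qs = {"A": 0.9, "B": 0.88, "C": 0.6}
ps = {frozenset(("A", "B")): 0.95,
      frozenset(("A", "C")): 0.1,
      frozenset(("B", "C")): 0.1}
print(mmr(["A", "B", "C"], qs, ps, k=2))  # → ['A', 'C']
```

Pure relevance ranking would return A then B; MMR skips the near-duplicate B and surfaces C instead, which is why diversity re-ranking matters for memory recall.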
| Good fit | Not the right tool |
|---|---|
| You use Claude Code, Gemini CLI, Aider, or any MCP agent | You need LangChain/CrewAI pipeline memory — see Mem0 |
| You're coordinating multiple agents on a shared |