Persistent memory for AI agents — local JSON, zero config, no server required.
AI agents forget everything between sessions. rememb gives them persistent memory — local, portable, and works with any agent.

Every dev using AI professionally hits this wall:
Session 1: "We're using PostgreSQL, auth at src/auth/, prefer async patterns."
Session 2: Agent starts from zero. You explain everything again.
Session 3: Same thing.
Existing solutions (Mem0, Zep, Letta) require servers, API keys, and cloud accounts.
You just want the agent to remember your project.
pip install rememb[mcp] # Recommended — includes MCP server
pip install rememb # CLI only
pip install rememb[mcp,semantic,pdf] # All features
Zero friction. No CLI commands. Native IDE integration.
1. Add to your IDE's MCP config:
{
  "mcpServers": {
    "rememb": {
      "command": "rememb",
      "args": ["mcp"]
    }
  }
}
2. Restart your IDE.
The agent now automatically reads memory at session start, writes when learning something new, and searches when needed.
rememb rules # Print generic rules for AI agents
Copy the output to your editor's rules file (.windsurfrules, .cursorrules, CLAUDE.md, etc.)
.rememb/
entries.json ← structured memory (project, actions, systems, user, context)
meta.json ← project metadata
A JSON file in your project. Your agent reads it at the start of every session.
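As an illustration only (the field names here are hypothetical — check the generated files for the real schema), entries.json might hold tagged entries grouped by section:

```json
{
  "entries": [
    {
      "id": "a1b2",
      "section": "project",
      "tags": ["stack"],
      "content": "Using PostgreSQL, auth at src/auth/, prefer async patterns"
    }
  ]
}
```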
User: "We're using PostgreSQL, auth at src/auth/, async patterns"
Agent: [rememb_write] → Saved
[New session]
Agent: [rememb_read] → Context loaded
Agent: "I see you're using PostgreSQL with auth at src/auth/..."
Search uses local semantic embeddings (no API, no cloud). Falls back to keyword search if embeddings aren't available.
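The fallback behavior can be pictured with a small sketch. This is not rememb's actual code — the function names and the `sentence_transformers` dependency are assumptions for illustration — but it shows the pattern: try a local embedding model, and degrade to keyword overlap when the optional dependency is missing.

```python
def keyword_search(entries, query, top=3):
    """Fallback: rank entries by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(e.lower().split())), e) for e in entries]
    scored = [(score, e) for score, e in scored if score > 0]
    scored.sort(key=lambda pair: -pair[0])  # highest overlap first
    return [e for _, e in scored[:top]]

def search(entries, query, top=3):
    """Try a local embedding model first; degrade to keywords if it's missing."""
    try:
        # hypothetical optional dependency, e.g. installed via rememb[semantic]
        from sentence_transformers import SentenceTransformer  # noqa: F401
    except ImportError:
        return keyword_search(entries, query, top)
    ...  # embedding-based cosine ranking would go here
    return keyword_search(entries, query, top)  # placeholder in this sketch
```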
| Section | What to store |
|---|---|
| project | Tech stack, architecture, goals |
| actions | What was done, decisions made |
| systems | Services, modules, integrations |
| requests | User preferences, recurring asks |
| user | Name, style, expertise, preferences |
| context | Anything else relevant |
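Under the hood, routing by section boils down to simple filtering. A minimal self-contained sketch — the `Entry` structure is hypothetical, not rememb's internals — mirroring what `rememb read --section project` would do:

```python
from dataclasses import dataclass, field

SECTIONS = {"project", "actions", "systems", "requests", "user", "context"}

@dataclass
class Entry:
    content: str
    section: str = "context"
    tags: list = field(default_factory=list)

def read(entries, section=None):
    """Return entries, optionally filtered to one section."""
    if section is None:
        return entries
    if section not in SECTIONS:
        raise ValueError(f"unknown section: {section}")
    return [e for e in entries if e.section == section]

store = [
    Entry("PostgreSQL + FastAPI", section="project", tags=["stack"]),
    Entry("Prefers async patterns", section="user"),
]
```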
rememb init # Initialize memory store
rememb write "text" # Add entry (--section, --tags)
rememb read # List all entries (--section, --agent)
rememb search "query" # Semantic/keyword search (--top)
rememb edit <id> # Update entry (--content, --section, --tags)
rememb delete <id> # Remove entry
rememb clear --yes # Delete all entries
rememb import <folder> # Import .md/.txt/.pdf files
rememb rules # Show generic rules for AI agents
The `.rememb/` folder is portable: copy it anywhere and it works.

git clone https://github.com/LuizEduPP/Rememb
cd rememb
pip install -e ".[dev]"
PRs welcome. Issues welcome. Stars welcome. 🌟
MIT