Persistent cross-project memory for Cursor and Claude Code using local semantic search.
{
  "mcpServers": {
    "io-github-marerem-longmem": {
      "command": "<see-readme>",
      "args": []
    }
  }
}

No install config available. Check the server's README for setup instructions.
Cross-project memory for AI coding assistants. Stop solving the same problems twice.
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit today. 1 star.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
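Since the server uses the stdio transport, a client simply spawns the server process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. A minimal sketch of that handshake, using an inline echo script as a stand-in for the real server (the `FAKE_SERVER` script and `call` helper are illustrative, not part of longmem or the MCP SDK):

```python
# Sketch of stdio-transport message exchange: spawn a process, write a
# JSON-RPC request to its stdin, read the response from its stdout.
import json
import subprocess
import sys

# Stand-in server: echoes back the method name of each request.
FAKE_SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"],
            "result": {"echoed": req["method"]}}
    sys.stdout.write(json.dumps(resp) + "\n")
    sys.stdout.flush()
"""

def call(proc, method, req_id=1):
    """Send one newline-delimited JSON-RPC request and read one response."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": {}}
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen([sys.executable, "-c", FAKE_SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
resp = call(proc, "initialize")
proc.stdin.close()
proc.wait()
```

In practice an MCP client library handles this framing for you; the point is that no open port or daemon is involved, which is why stdio servers work out of the box with Claude Desktop, Cursor, and Claude Code.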
Your AI solves the same bug in a different project six months later. Writes the same boilerplate. Explains the same pattern. You already knew the answer.
longmem gives your AI a persistent memory that works across every project and every session. Before reasoning from scratch, it searches what you've already solved. After something works, it saves it. The longer you use it, the less you repeat yourself.
You describe a problem
│
▼
search_similar()
┌─────────────────────────────────────────────────────┐
│ 1. pre-filter by category (ci_cd / auth / db / …) │
│ 2. semantic search (Ollama or OpenAI embeddings) │
│ 3. keyword search (SQLite FTS5 exact match) │
│ 4. merge + rank results │
└─────────────────────────────────────────────────────┘
│ │
score ≥ 85% score < 85%
│ │
▼ ▼
cached solution AI reasons from scratch
+ edge cases │
+ team knowledge "it works"
(any project) │
▼
confirm_solution()
saved once — surfaces
from every future project
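The four-stage pipeline above can be sketched in plain Python. This is a hypothetical illustration, not longmem's actual implementation: the `Memory` class, the keyword scorer (a stand-in for SQLite FTS5), and the 0.7/0.3 blend weights are all assumptions made for the example.

```python
# Illustrative sketch of search_similar(): category pre-filter,
# semantic + keyword scoring, then merge and rank.
from dataclasses import dataclass

@dataclass
class Memory:
    id: int
    category: str
    text: str

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def search_similar(query_vec, query_terms, memories, embeddings,
                   category, threshold=0.85):
    # 1. pre-filter by category (ci_cd / auth / db / ...)
    candidates = [m for m in memories if m.category == category]
    # 2. semantic score from embedding similarity (Ollama/OpenAI in longmem)
    sem = {m.id: cosine(query_vec, embeddings[m.id]) for m in candidates}
    # 3. keyword score: fraction of query terms found (FTS5 stand-in)
    kw = {m.id: sum(t in m.text.lower() for t in query_terms)
                / max(len(query_terms), 1)
          for m in candidates}
    # 4. merge + rank with an assumed weighted blend of both signals
    score = lambda m: 0.7 * sem[m.id] + 0.3 * kw[m.id]
    ranked = sorted(candidates, key=score, reverse=True)
    best = ranked[0] if ranked else None
    if best and score(best) >= threshold:
        return best      # score >= 85%: return the cached solution
    return None          # below threshold: the AI reasons from scratch
```

The threshold gate mirrors the diagram: only a sufficiently confident match short-circuits fresh reasoning, so a weak hit never silently replaces thinking.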
| | longmem | others |
|---|---|---|
| Cost | Free — local Ollama embeddings | Requires API calls per session |
| Privacy | Nothing leaves your machine | Sends observations to external APIs |
| Process | Starts on demand, no daemon | Background worker + open port required |
| IDE support | Cursor + Claude Code | Primarily one IDE |
| Search | Hybrid: semantic + keyword (FTS5) | Vector-only or keyword-only |
| Teams | Export / import / shared DB path / S3 | Single-user |
| License | MIT | AGPL / proprietary |
1. Install
pipx install longmem
2. Setup — checks Ollama, pulls the embedding model, writes your IDE config
longmem init
3. Activate in each project — copies the rules file that tells the AI how to use memory
cd your-project
longmem install
4. Restart your IDE. Memory tools are now active on every chat.
Need Ollama? Install from ollama.com, then
ollama pull nomic-embed-text. Or use OpenAI — see Configuration.
longmem is an MCP server. Your IDE starts it on demand. Two rules drive the workflow:
Rule 1 — search first. Before the AI reasons about any bug or q