Persistent memory for AI coding agents. #1 on LongMemEval. Local-first.
{
  "mcpServers": {
    "io-github-omega-memory-core": {
      "command": "uvx",
      "args": ["omega-memory"]
    }
  }
}
Cross-model memory for AI agents. Local-first. Works with Claude, GPT, Gemini, Cursor, Claude Code, and any MCP client. Your agent's brain shouldn't live on someone else's server, or be locked to one provider.
Is it safe?
No known CVEs for omega-memory.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit: today. 85 stars.
Will it work with my client?
Transport: stdio, sse, http. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
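With the stdio transport, the client launches the server process and exchanges JSON-RPC 2.0 messages over stdin/stdout. A minimal sketch of the `initialize` request any MCP client sends first (field values are illustrative, not specific to omega-memory):

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 initialize message an MCP client sends first."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # illustrative protocol revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(msg)

print(make_initialize_request())
```

Any client that can spawn a subprocess and speak this handshake over stdio can use the server, which is why the stdio transport covers Claude Desktop, Cursor, and most other MCP clients.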
Run this in your terminal to verify the server starts:
uvx 'omega-memory' 2>&1 | head -1 && echo "✓ Server started successfully"
Persistent memory using a knowledge graph
AI coding agents are stateless. Every new session starts from zero. The "solutions" either lock you into one model provider or send your codebase context to their cloud.
OMEGA solves this. Memory, coordination, and learning that runs entirely on your machine. Works with every major LLM and coding agent. No cloud. No API keys. No vendor lock-in.
pip install omega-memory[server] # Full install (memory + MCP server)
omega setup # Downloads model, registers MCP, installs hooks
omega doctor # Verify everything works
pip install omega-memory[server]
omega setup --client claude-desktop
This registers OMEGA as an MCP server in Claude Desktop's config. Restart Claude Desktop to activate.
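For reference, the registered entry in Claude Desktop's `claude_desktop_config.json` typically mirrors the uvx configuration shown at the top of this page (the server key shown here is an assumption; `omega setup` writes the actual values):

```json
{
  "mcpServers": {
    "omega-memory": {
      "command": "uvx",
      "args": ["omega-memory"]
    }
  }
}
```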
pip install omega-memory[server]
omega setup --client cursor # or: claude-code, windsurf, cline, codex
If you only need OMEGA as a Python library for scripts, CI/CD, or automation:
pip install omega-memory # Core only, no MCP server process
from omega import store, query, remember
store("Always use TypeScript strict mode", "user_preference")
results = query("TypeScript preferences")
This gives you the full storage and retrieval API without running an MCP server (~50 MB lighter, no background process). Hooks still work:
omega setup --hooks-only # Auto-capture + memory surfacing, no MCP server (~600MB RAM saved)
git clone https://github.com/omega-memory/omega.git
cd omega
pip install -e ".[server,dev]"
omega setup
omega setup will:
- Create the ~/.omega/ directory
- Download the model to ~/.cache/omega/models/
- Register omega-memory as an MCP server (Claude Code auto-detected, or specify --client)
- Install hooks in ~/.claude/settings.json
- Add memory instructions to ~/.claude/CLAUDE.md

OMEGA works through natural language — no API calls, no configuration. Just talk to Claude.
1. Tell Claude to remember something:
"Remember that the auth system uses JWT tokens, not session cookies"
Claude stores this as a permanent memory with semantic embeddings.
2. Close the session. Open a new one.
3. Ask about it:
"What did I decide about authentication?"
OMEGA surfaces the stored memory, and Claude answers from it without being re-told.
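The retrieval step above works by meaning, not exact keywords. A minimal sketch of the idea using bag-of-words vectors and cosine similarity, a toy stand-in for the dense semantic embeddings OMEGA actually uses:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': bag-of-words counts. Real systems use dense
    neural embeddings, but the retrieval logic is the same shape."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "the auth system uses JWT tokens, not session cookies",
    "always use TypeScript strict mode",
]
query = "which token scheme does the auth system use"

# Surface the stored memory whose vector lies closest to the query vector.
best = max(memories, key=lambda m: cosine(embed(query), embed(m)))
print(best)  # the JWT memory ranks first
```

Because ranking is done over vector similarity, a question phrased differently from the original note can still pull back the right memory, which is what lets a fresh session recover last week's decision.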