Compact, efficient, and extensible long-term memory for LLM agents.
{
  "mcpServers": {
    "lycheemem": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication: any process on your machine can connect.
No known vulnerabilities.
Licensed under Apache-2.0.
Is it maintained?
Last commit today. 225 stars.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
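For a quick compatibility check, here is a minimal sketch that connects to a stdio MCP server using the official MCP Python SDK (listed under related projects below). The lycheemem-cli command and the empty argument list are assumptions based on the install notes further down this page, not confirmed server flags; check the README for the real invocation.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumption: lycheemem-cli (from pip install lycheemem) serves MCP over stdio.
# Verify the actual command and arguments against the server's README.
server = StdioServerParameters(command="lycheemem-cli", args=[])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # enumerate the server's tools
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())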
Related servers and projects:
- Dynamic problem-solving through sequential thought chains
- A Model Context Protocol server for searching and analyzing arXiv papers
- An open-source AI agent that brings the power of Gemini directly into your terminal
- The official Python SDK for Model Context Protocol servers and clients
LycheeMem is a compact memory framework for LLM agents. It starts from efficient conversational memory—through structured organization, lightweight consolidation, and adaptive retrieval—and gradually extends toward action-aware, usage-aware memory for more capable agentic systems.
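To make that pipeline concrete, here is a toy sketch of the organize-consolidate-retrieve loop. Every name and the scoring rule below are illustrative assumptions, not LycheeMem's actual API: consolidation folds near-duplicate entries together, and retrieval blends lexical overlap with recency and past usage (the usage-aware signal).

import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    created: float = field(default_factory=time.time)
    uses: int = 0  # usage-aware signal, bumped on every retrieval hit

class ToyMemory:
    """Illustrative sketch only; not LycheeMem's real interface."""

    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def add(self, text: str) -> None:
        # Structured organization, reduced here to a flat list of items.
        self.items.append(MemoryItem(text))

    def consolidate(self, min_overlap: float = 0.8) -> None:
        # Lightweight consolidation: drop an item whose tokens are mostly
        # contained in an item we already kept, so the store stays compact.
        kept: list[MemoryItem] = []
        for item in self.items:
            toks = set(item.text.lower().split())
            if not any(
                len(toks & set(k.text.lower().split())) >= min_overlap * len(toks)
                for k in kept
            ):
                kept.append(item)
        self.items = kept

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Adaptive retrieval: lexical overlap boosted by recency and past usage.
        q = set(query.lower().split())
        now = time.time()

        def score(item: MemoryItem) -> float:
            overlap = len(q & set(item.text.lower().split()))
            recency = 1.0 / (1.0 + (now - item.created) / 3600.0)
            return overlap + 0.5 * recency + 0.1 * item.uses

        hits = sorted(self.items, key=score, reverse=True)[:k]
        for hit in hits:
            hit.uses += 1
        return [hit.text for hit in hits]

mem = ToyMemory()
mem.add("User prefers concise answers")
mem.add("User prefers concise answers")  # near-duplicate, consolidated away
mem.add("Project deadline is Friday")
mem.consolidate()
print(mem.retrieve("user prefers concise answers", k=1))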
pip install lycheemem. You can easily start the service from anywhere using lycheemem-cli!
LycheeMem is part of the 3rd-generation Lychee (立知) large model series, which focuses on memory intelligence, continual learning, and long-context reasoning.
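Given that package, the placeholder install block at the top of this page would plausibly be filled in as below. This is a guess that assumes lycheemem-cli serves MCP over stdio with no extra arguments; JSON carries no comments, so treat the whole snippet as unverified and confirm the command against the README.

{
  "mcpServers": {
    "lycheemem": {
      "command": "lycheemem-cli",
      "args": []
    }
  }
}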
We welcome you to explore our related works: