ContextCore: An MCP server for Claude (or any AI tool) that enables massive token saving through hybrid search (BM25 + Embeddings)
Is it safe?
- No package registry to scan.
- No authentication: any process on your machine can connect.
- License: AGPL-3.0.
- No known vulnerabilities.

Is it maintained?
- Last commit 0 days ago. 14 stars.

Will it work with my client?
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
- No automated test is available for this server; check the GitHub README for setup instructions.
Search all your local data — notes, code, recordings, images — and send only what matters to AI.
Reduce LLM context tokens by ~57% on SciFact benchmark settings.
| Benchmark Setup | Baseline Context | ContextCore Context | Reduction |
|---|---|---|---|
| SciFact (top-5 retrieved docs vs chunked context) | 1,723.5 tokens/query | 733.4 tokens/query | 57.45% |
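The token savings above come from hybrid retrieval: fetching only the top-k relevant chunks instead of shipping whole documents. The fusion idea can be sketched as a weighted combination of normalized lexical (BM25) and semantic (embedding) scores. This is a minimal illustration, not ContextCore's actual implementation; the min-max normalization and the `alpha` weight are assumptions.

```python
from math import sqrt

def normalize(scores):
    """Min-max normalize a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(bm25_scores, query_vec, doc_vecs, alpha=0.5):
    """Fuse normalized BM25 and embedding scores; return doc indices, best first."""
    lexical = normalize(bm25_scores)
    semantic = normalize([cosine(query_vec, d) for d in doc_vecs])
    fused = [alpha * l + (1 - alpha) * s for l, s in zip(lexical, semantic)]
    return sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)
```

Only the top-ranked chunks are then sent to the model, which is where the per-query token reduction in the table comes from.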

Install from PyPI:

```shell
python -m pip install contextcore==1.0.0
```

Optional source install (for contributors):

```shell
git clone https://github.com/lucifer-ux/SearchEmbedSDK.git
cd SearchEmbedSDK
python -m pip install -e .
```

Then run the setup wizard:

```shell
contextcore init
```

The GIF is sped up to skip the installation steps. This is not supermemory; it is supercharged memory for all file formats, shared across all your tools.
That's it. ContextCore indexes your files, registers with your AI tools, and runs in the background. No config files to edit.
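Conceptually, the indexing step is a recursive walk that tokenizes indexable files into a lookup structure. A minimal sketch of the idea, assuming a plain inverted index and a small extension whitelist (ContextCore's real indexer, modality list, and storage format are not shown here):

```python
import os
import re
from collections import defaultdict

# Assumption: the real modality list is broader (audio, video, images, ...)
INDEXABLE = {".md", ".py", ".txt"}

def build_index(root):
    """Walk root and map each lowercase token to the set of files containing it."""
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] not in INDEXABLE:
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for token in re.findall(r"[a-z0-9_]+", f.read().lower()):
                    index[token].add(path)
    return index
```

An index like this answers "which files mention X" without re-reading the tree, which is what lets a background service respond to retrieval queries quickly.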
Optional but important:
- `ffmpeg` for video indexing

ContextCore gives you a local `contextcore` service at http://127.0.0.1:8000. ContextCore can expose your codebase context directly to MCP tools (for example, Claude Desktop and OpenCode) so the model can reason over your project without you pasting the entire directory into chat.

Use the code modality during setup (`contextcore init`) and ContextCore will provide indexed codebase context through MCP tools such as:

- `get_codebase_context`
- `get_codebase_index`
- `get_module_detail`
- `get_file_content`

For real usage, the most reliable setup is:

1. Run `contextcore init`.
2. Run `contextcore serve`.
3. Register `mcp_server.py` in your Claude config.

Do not test the backend in one venv and point Claude at a different venv. That is one of the most common causes of "it works in the terminal but not in Claude".
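A Claude Desktop entry for `mcp_server.py` might look like the sketch below. The paths are placeholders you must replace; the key point is that `command` should be the interpreter from the same venv where you installed and tested ContextCore.

```json
{
  "mcpServers": {
    "contextcore": {
      "command": "/path/to/your/venv/bin/python",
      "args": ["/path/to/SearchEmbedSDK/mcp_server.py"]
    }
  }
}
```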
Run:

```shell
contextcore --help
```

If that fails, the package is not installed in the Python environment your shell is using.
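To diagnose an environment mismatch, you can ask Python directly which interpreter and which `contextcore` binary your shell would use (stdlib only, no ContextCore APIs assumed):

```python
import shutil
import sys

def env_report():
    """Report which interpreter and which contextcore CLI this environment resolves to."""
    return {
        "python": sys.executable,                    # interpreter currently running
        "contextcore": shutil.which("contextcore"),  # None means not on this PATH
        "prefix": sys.prefix,                        # venv root (or system prefix)
    }

if __name__ == "__main__":
    for key, value in env_report().items():
        print(f"{key}: {value}")
```

If `python` and `contextcore` resolve to different venvs, that mismatch is the "works in the terminal but not in Claude" failure mode described above.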
You can benchmark the current text retrieval stack on a BEIR dataset (starting with SciFact) without touching your existing index data.
Install the optional benchmark dependency:

```shell
python -m pip install beir
```
Run the benchmark:

```shell
contextcore benchmark --dataset scifact --top-k 10
```
Optional fast iteration with fewer queries:

```shell
contextcore benchmark --dataset scifact --top-k 10 --max-queries 50
```
Optional JSON output:

```shell
contextcore benchmark --dataset scifact --output-json .\benchmarks\scifact_run.json
```
Token reduction benchmark (requires tiktoken):

```shell
python -m pip install tiktoken
contextcore benchmark --dataset scifact --top-k 10 --measure-tokens --context-top-k 5
```
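The headline reduction figure is simple arithmetic over the per-query token averages, whichever tokenizer produced them:

```python
def token_reduction(baseline_tokens, reduced_tokens):
    """Percent reduction in context tokens relative to the baseline."""
    return 100.0 * (baseline_tokens - reduced_tokens) / baseline_tokens

# SciFact per-query averages from the table above
print(round(token_reduction(1723.5, 733.4), 2))  # → 57.45
```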
Compare retrieval systems (ContextCore vs BM25) and export publish-ready tables:

```shell
contextcore benchmark --dataset scifact --top-k 10 --measure-tokens --context-top-k 5 --systems contextcore_hybrid,bm25_only,trigram_only --report-csv .\benchmarks\scifact_compare.csv --report-md .\benchmarks\scifact_compare.md --output-json .\benchmarks\scifact_compare.json
```