Productivity-boosting RAG engine for codebases with multi-provider AI support and semantic search.
```json
{
  "mcpServers": {
    "io-github-abhishek2432001-deeprepo": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```

No install config available. Check the server's README for setup instructions.
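The directory lists no ready-made install config, but the repository ships a FastMCP stdio server under `src/deeprepo/mcp/`. A hypothetical client entry, assuming the server can be launched as a Python module after a local install (the module path here is a guess; confirm the exact command against the repository's README):

```json
{
  "mcpServers": {
    "deeprepo": {
      "command": "python",
      "args": ["-m", "deeprepo.mcp.server"]
    }
  }
}
```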
Is it safe?
- No package registry to scan.
- No authentication — any process on your machine can connect.
- License not specified.
- No known vulnerabilities.

Is it maintained?
- Last commit 99 days ago.

Will it work with my client?
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
- No automated test available for this server. Check the GitHub README for setup instructions.
A production-grade Python library for performing RAG (Retrieval Augmented Generation) on local codebases with multiple AI provider support.
```shell
cd deeprepo_core
pip install -e .
```
See INSTALLATION.md for detailed setup instructions for each provider.
```python
from deeprepo import DeepRepoClient

# Initialize with Ollama (FREE, local) - same provider for both embeddings and LLM
client = DeepRepoClient(provider_name="ollama")

# Or use different providers for embeddings and LLM
# Example: OpenAI for embeddings, Anthropic for LLM
client = DeepRepoClient(
    embedding_provider_name="openai",
    llm_provider_name="anthropic",
)

# Ingest documents
result = client.ingest("/path/to/your/code")
print(f"Ingested {result['chunks_processed']} chunks")

# Query with RAG
response = client.query("How does authentication work?")
print(response['answer'])
print(f"Sources: {response['sources']}")
```
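The `chunks_processed` count comes from the ingestion step, which scans files and splits them into chunks before embedding (see `ingestion.py`). As a self-contained illustration of how fixed-size chunking with overlap produces that count — the `size` and `overlap` values below are illustrative, not deeprepo's actual defaults:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks; neighboring chunks share `overlap` characters."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    # Start a new chunk every `step` characters so context isn't cut at chunk borders
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 1000)
print(len(chunks))  # → 6
```

Overlap matters for retrieval quality: a definition split exactly at a chunk boundary would otherwise never appear whole in any single chunk.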
| Provider | Cost | Speed | Best For |
|----------|------|-------|----------|
| Ollama | FREE | Fast | Local development, privacy, offline work |
| HuggingFace | FREE* | Medium | Cloud-based, no local setup |
| OpenAI | Paid | Very Fast | Production, best quality |
| Anthropic | Paid | Very Fast | Production, excellent reasoning |
| Gemini | FREE* | Medium | Testing, Google ecosystem |

*Free tier with rate limits
```python
# Same provider for both embeddings and LLM

# Ollama (Recommended - FREE and unlimited)
client = DeepRepoClient(provider_name="ollama")

# HuggingFace (FREE tier)
client = DeepRepoClient(provider_name="huggingface")

# OpenAI (Paid, best quality)
client = DeepRepoClient(provider_name="openai")

# Anthropic (Paid, excellent reasoning)
# Note: Anthropic doesn't have an embeddings API, so pair it with another provider
client = DeepRepoClient(
    embedding_provider_name="openai",   # Use OpenAI for embeddings
    llm_provider_name="anthropic",      # Use Anthropic for LLM
)

# Gemini (Free tier, limited)
client = DeepRepoClient(provider_name="gemini")

# Mix and match providers
# Example: Use free HuggingFace for embeddings, paid OpenAI for LLM
client = DeepRepoClient(
    embedding_provider_name="huggingface",
    llm_provider_name="openai",
)
```
```
deeprepo_core/
├── src/deeprepo/
│   ├── client.py        # Main facade
│   ├── storage.py       # Vector store (JSON + NumPy)
│   ├── ingestion.py     # File scanning & chunking
│   ├── interfaces.py    # Abstract base classes
│   ├── registry.py      # Decorator-based registry
│   ├── mcp/             # MCP server for AI assistants
│   │   ├── server.py    # FastMCP server
│   │   └── README.md    # MCP documentation
│   └── providers/
│       ├── ollama_v.py       # Ollama (local, FREE)
│       ├── huggingface_v.py  # HuggingFace (cloud, FREE)
│       ├── openai_v.py       # OpenAI (paid)
│       ├── anthropic_v.py    # Anthropic (paid)
│       └── gemini_v.py       # Gemini (free tier)
```
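`storage.py` keeps embeddings in JSON + NumPy, and the core retrieval idea behind such a store is brute-force cosine similarity over the saved vectors. A pure-Python sketch of that idea (class and method names here are hypothetical, not deeprepo's API; the real store uses NumPy for the math):

```python
import json
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 if either is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class TinyVectorStore:
    """Minimal stand-in for a JSON-backed vector store."""

    def __init__(self):
        self.records = []  # each record: {"text": ..., "vector": [...]}

    def add(self, text: str, vector: list[float]) -> None:
        self.records.append({"text": text, "vector": vector})

    def search(self, query_vector: list[float], top_k: int = 3) -> list[str]:
        # Rank every stored chunk by similarity to the query embedding
        ranked = sorted(self.records,
                        key=lambda r: cosine(r["vector"], query_vector),
                        reverse=True)
        return [r["text"] for r in ranked[:top_k]]

    def dump(self) -> str:
        # JSON persistence, mirroring the "JSON + NumPy" storage approach
        return json.dumps(self.records)

store = TinyVectorStore()
store.add("auth middleware", [1.0, 0.0])
store.add("db pooling", [0.0, 1.0])
print(store.search([0.9, 0.1], top_k=1))  # → ['auth middleware']
```

Brute-force search like this is linear in the number of chunks, which is perfectly adequate for single-codebase indexes and avoids the operational weight of a dedicated vector database.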
- `VectorStore` decouples storage from application logic
- `LLMProvider` and `EmbeddingProvider` abstract interfaces
- `@register_llm` decorator for dynamic provider discovery
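The decorator-based registry pattern named above can be sketched in a few lines. This is an illustration of how a `@register_llm`-style decorator enables provider lookup by name, not deeprepo's actual implementation (the `complete` method and `EchoProvider` are invented for the example):

```python
from abc import ABC, abstractmethod

LLM_REGISTRY: dict[str, type] = {}

def register_llm(name: str):
    """Class decorator: record the provider class under `name` for later lookup."""
    def wrap(cls):
        LLM_REGISTRY[name] = cls
        return cls
    return wrap

class LLMProvider(ABC):
    """Abstract interface every LLM provider must satisfy."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

@register_llm("echo")
class EchoProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

# The client can now resolve providers by string name, with no hard-coded imports
provider = LLM_REGISTRY["echo"]()
print(provider.complete("hi"))  # → echo: hi
```

Registering at class-definition time is what makes `provider_name="ollama"` style lookups possible: importing a provider module is enough to make it discoverable.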