MCP-compatible LLM gateway that proxies completion requests.
{
  "mcpServers": {
    "io-github-daedalus-mcp-llm-gateway": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 7 days ago.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
MCP-compatible LLM gateway that proxies completion requests to downstream OpenAI-compatible providers.
mcp-name: io.github.daedalus/mcp-llm-gateway
pip install mcp-llm-gateway
Set the following environment variables:
DOWNSTREAM_URL: Base URL for the OpenAI-compatible downstream API (required)
DEFAULT_MODEL: Default model to use for completions (required)
MODEL_LIST_URL: URL to fetch available models from (optional, defaults to models.dev)
API_KEY: Optional API key for downstream (passthrough)
TIMEOUT: Request timeout in seconds (optional, default: 60)

Run the MCP server with stdio transport:
mcp-llm-gateway
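For clients that launch the server themselves, the environment variables above can be supplied in the client configuration rather than the shell. A minimal sketch of a Claude Desktop-style entry (the URL, model name, and key are placeholder values, not defaults from this project):

```json
{
  "mcpServers": {
    "mcp-llm-gateway": {
      "command": "mcp-llm-gateway",
      "args": [],
      "env": {
        "DOWNSTREAM_URL": "https://api.example.com/v1",
        "DEFAULT_MODEL": "example-model",
        "API_KEY": "sk-example"
      }
    }
  }
}
```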
The server exposes the following tools:

list_models(): List all available models from the remote endpoint
complete(prompt, model, max_tokens, temperature): Send a completion request to the downstream LLM provider

It also exposes the following resources:

models://list: Returns the list of available models
config://info: Returns current gateway configuration

To set up a development environment:

git clone https://github.com/daedalus/mcp-llm-gateway.git
cd mcp-llm-gateway
pip install -e ".[test]"
# run tests
pytest
# format
ruff format src/ tests/
# lint
ruff check src/ tests/
# type check
mypy src/
Model: Dataclass representing an available LLM model
CompletionRequest: Dataclass for completion request payloads
GatewayConfig: Dataclass for gateway configuration
HTTPAdapter: HTTP client for downstream API communication
ModelListAdapter: Adapter for fetching model list from remote endpoints
ModelService: Service for managing model discovery and caching
CompletionService: Service for handling completion requests
ConfigService: Service for managing gateway configuration
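As a rough illustration of how the data layer above might fit together (field names and defaults here are assumptions for the sketch, not taken from the project source):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GatewayConfig:
    """Mirrors the environment variables from the configuration section."""
    downstream_url: str
    default_model: str
    model_list_url: Optional[str] = None
    api_key: Optional[str] = None
    timeout: float = 60.0  # matches the documented TIMEOUT default


@dataclass
class CompletionRequest:
    """A completion request as accepted by the complete() tool."""
    prompt: str
    model: str
    max_tokens: int = 256      # assumed default
    temperature: float = 0.7   # assumed default

    def to_payload(self) -> dict:
        # Shape of an OpenAI-compatible chat completion body (illustrative)
        return {
            "model": self.model,
            "messages": [{"role": "user", "content": self.prompt}],
            "max_tokens": self.max_tokens,
            "temperature": self.temperature,
        }


req = CompletionRequest(prompt="Hello", model="example-model")
payload = req.to_payload()
```

In this sketch, HTTPAdapter would post the dict returned by to_payload() to DOWNSTREAM_URL, with GatewayConfig supplying the URL, key, and timeout.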