Multi-model AI orchestration MCP server with code review, compare, and debate tools.
{
  "mcpServers": {
    "multi": {
      "type": "stdio",
      "command": "/path/to/multi_mcp/.venv/bin/python",
      "args": [
        "-m",
        "multi_mcp.server"
      ]
    }
  }
}
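With the stdio config above, the client spawns the configured command and exchanges newline-delimited JSON-RPC messages with it. As a rough sketch of what that handshake looks like on the wire (the protocol version string and client name here are illustrative assumptions, not values from this listing):

```python
import json

def make_request(req_id, method, params):
    # Standard JSON-RPC 2.0 request envelope used by MCP.
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# First message a client sends after spawning the server process.
initialize = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",   # assumed revision; match your client
    "capabilities": {},
    "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
})

# The stdio transport frames each message as a single line of JSON.
wire = json.dumps(initialize) + "\n"
```

The server's reply (also one JSON line on stdout) advertises its capabilities, after which the client can list and call the tools below.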
Is it safe?
No known CVEs for @anthropic-ai/gemini-cli.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 2 days ago. 21 stars.
Will it work with my client?
Transport: stdio, sse. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
Context cost
5 tools. ~200 tokens (0.1% of 200K).
Run this in your terminal to verify the server starts. Then let us know if it worked — your result helps other developers.
npx -y @anthropic-ai/gemini-cli 2>&1 | head -1 && echo "✓ Server started successfully"
chat: Interactive development assistance with repository context awareness
codereview: Systematic code review workflow with OWASP Top 10 security checks and performance analysis
compare: Parallel multi-model analysis for architectural decisions
debate: Multi-agent consensus workflow with independent answers and critique
models: List all available models and their aliases
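Each of these is invoked through the standard MCP `tools/call` method. A minimal sketch of such a request for the `compare` tool; the argument name (`prompt`) is a hypothetical placeholder, not this server's documented schema:

```python
import json

# Hypothetical tools/call request for the "compare" tool listed above.
# The "arguments" payload is illustrative; consult the server's actual
# tool schema (returned by tools/list) for the real parameter names.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "compare",
        "arguments": {"prompt": "Pick a database for this workload"},
    },
}

# One JSON message per line over the stdio transport.
line = json.dumps(call) + "\n"
```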
Last scanned 4h ago.
No known vulnerabilities.