This application provides a chat interface for interacting with multiple Large Language Models (LLMs). It supports models from OpenAI, Anthropic, Google (Gemini), xAI (Grok), and custom models via OpenRouter. Key features include text and image input processing, file text extraction, and integration with MCP servers for tool usage.
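For reference, most MCP clients describe servers with the standard mcpServers JSON shape shown below. The server name, command, and arguments here are placeholders (the filesystem server is just a common stdio example), and the exact configuration format this app expects may differ:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}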
Note: This project was built to learn LLM APIs and web app integration, so it focuses on a functional implementation rather than production-level architecture. It is intended solely for local, single-user use and has no multi-user support.
Docker Compose is the recommended way to deploy and run this application.
Clone the repository:
git clone https://github.com/yourusername/multi-llm-chat-app-vue-python.git
cd multi-llm-chat-app-vue-python
Create Environment Variables:
Create a .env file in the root directory of the project with the following content. Replace the placeholder values with your actual credentials and settings.
# Optional: required only for web search, browsing, and certain model-specific features
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL_NAME=gemini-2.0-flash # A Gemini model that supports grounding with Google Search
# Required for encryption of API keys and MCP configurations
# Generate a secure key using: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
ENCRYPTION_KEY=your_secure_fernet_encryption_key
# Required: PostgreSQL database connection details (used by Docker Compose)
POSTGRES_DB=appdb
POSTGRES_USER=user
POSTGRES_PASSWORD=password
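With the .env file created, the stack can typically be built and started with Docker Compose (this assumes a docker-compose.yml at the repository root, as the deployment instructions imply):

docker compose up -d --build   # build images and start the app and database in the background
docker compose logs -f         # follow logs to confirm a clean startup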
Ensure that ENCRYPTION_KEY is kept secret and secure. The POSTGRES_* variables are used by Docker Compose to initialize the PostgreSQL container.
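For context, a Fernet key is 32 random bytes encoded as url-safe base64, which is why the generation one-liner above is needed. Below is a minimal sketch of the encrypt/decrypt round trip the backend presumably performs with ENCRYPTION_KEY; it is illustrative only, and the project's actual code may differ:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # 32 random bytes, url-safe base64-encoded
f = Fernet(key)                    # raises ValueError if the key is malformed

token = f.encrypt(b"sk-your-api-key")          # encrypt an API key before persisting it
assert f.decrypt(token) == b"sk-your-api-key"  # decrypt when the key is needed again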