Simple MCP Client CLI Implementation Using LangChain ReAct Agent / Python
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License: MIT.
Is it maintained?
Last commit 54 days ago. 12 stars.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
Quickly test and explore MCP servers from the command line!
A simple, text-based CLI client for Model Context Protocol (MCP) servers built with LangChain and Python.
Suitable for testing MCP servers, exploring their capabilities, and prototyping integrations.
Internally it uses a LangChain ReAct agent and
the utility function convert_mcp_to_langchain_tools() from langchain_mcp_tools.
A TypeScript equivalent of this utility is also available.
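The pieces above fit together roughly as follows. This is an illustrative sketch, not mcp-chat's actual source: it assumes langchain, langgraph, and langchain-mcp-tools are installed and an OPENAI_API_KEY is set, and the server/model choices are just examples.

```python
import asyncio

async def run_query(query: str) -> None:
    # Deferred imports: these require the langchain, langgraph, and
    # langchain-mcp-tools packages to be installed.
    from langchain.chat_models import init_chat_model
    from langchain_mcp_tools import convert_mcp_to_langchain_tools
    from langgraph.prebuilt import create_react_agent

    mcp_servers = {
        "us-weather": {
            "command": "npx",
            "args": ["-y", "@h1deya/mcp-server-weather"],
        },
    }
    # Spawn the MCP servers and wrap their tools as LangChain tools.
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
    try:
        llm = init_chat_model(model="gpt-5-mini", model_provider="openai")
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke({"messages": [("user", query)]})
        print(result["messages"][-1].content)
    finally:
        await cleanup()  # shut down the MCP server subprocesses

# asyncio.run(run_query("Are there any weather alerts in California?"))
```

The cleanup callback returned by convert_mcp_to_langchain_tools() terminates the spawned MCP server subprocesses, so it should always run, even on errors.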
Prerequisites:
- uv (uvx) installed, to run Python package-based MCP servers
- Node.js (npx) installed, to run Node.js package-based MCP servers

Install the mcp-chat tool.
This can take up to a few minutes to complete:
pip install mcp-chat
Configure the LLM and MCP server settings via the configuration file, llm_mcp_config.json5:
code llm_mcp_config.json5
The following is a simple configuration for quick testing:
{
  "llm": {
    "provider": "openai", "model": "gpt-5-mini"
    // "provider": "anthropic", "model": "claude-haiku-4-5"
    // "provider": "google_genai", "model": "gemini-2.5-flash"
    // "provider": "xai", "model": "grok-4-1-fast-non-reasoning"
    // "provider": "cerebras", "model": "gpt-oss-120b"
    // "provider": "groq", "model": "openai/gpt-oss-20b"
  },
  "mcp_servers": {
    "us-weather": {  // US weather only
      "command": "npx",
      "args": ["-y", "@h1deya/mcp-server-weather"]
    },
  },
  "example_queries": [
    "Tell me how LLMs work in a few sentences",
    "Are there any weather alerts in California?",
  ],
}
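Note that the file is JSON5, not strict JSON: it allows // line comments and trailing commas. mcp-chat presumably loads it with a real JSON5 parser; the stdlib-only sketch below handles just the subset used in this config (the regexes are naive and would break on "//" inside string values, so treat it as an approximation):

```python
import json
import re

def load_json5_ish(text: str) -> dict:
    """Parse a JSON5-flavored config using only the stdlib (sketch)."""
    # Strip // line comments (naive: assumes no "//" inside strings).
    no_comments = re.sub(r"//[^\n]*", "", text)
    # Remove trailing commas before } or ], which strict JSON rejects.
    no_trailing = re.sub(r",\s*([}\]])", r"\1", no_comments)
    return json.loads(no_trailing)

config = load_json5_ish('''
{
  "llm": {
    "provider": "openai", "model": "gpt-5-mini"
    // "provider": "anthropic", "model": "claude-haiku-4-5"
  },
  "mcp_servers": {
    "us-weather": {  // US weather only
      "command": "npx",
      "args": ["-y", "@h1deya/mcp-server-weather"]
    },
  },
}
''')
```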
Set up API keys
echo "ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-proj-...
GOOGLE_API_KEY=AI...
XAI_API_KEY=xai-...
CEREBRAS_API_KEY=csk-...
GROQ_API_KEY=gsk_..." > .env
code .env
Run the tool
mcp-chat
By default, it reads the configuration file, llm_mcp_config.json5, from the current directory.
It then applies the environment variables specified in the .env file,
as well as those already defined in the environment.
Internally it uses response_format: 'content' (the default), which only supports text strings.
While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
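The effect of that filtering can be sketched as below. The result data is hypothetical and the function is an illustration of the described behavior, not the library's actual code:

```python
def extract_text_content(content_items: list[dict]) -> str:
    """Keep only 'text' items from an MCP tool result's content list,
    dropping images and other content types (sketch of the described
    filtering, not the library's implementation)."""
    texts = [item["text"] for item in content_items if item.get("type") == "text"]
    return "\n".join(texts)

# Hypothetical MCP tool result with mixed content types:
result = [
    {"type": "text", "text": "Alert: high winds in Sacramento"},
    {"type": "image", "data": "...base64...", "mimeType": "image/png"},
    {"type": "text", "text": "No other active alerts"},
]
```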