mcp-name: io.github.ankitpal181/toon-parse-mcp
A specialized Model Context Protocol (MCP) server that optimizes token usage by converting data to TOON (Token-Oriented Object Notation) and stripping non-essential context from code files.
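To see why TOON saves tokens, compare a uniform JSON array with its tabular TOON equivalent. This is an illustrative sketch of the format; the exact output depends on the underlying toon-parse library:

```json
{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}
```

becomes roughly:

```
users[2]{id,name}:
  1,Alice
  2,Bob
```

The field names are declared once in the header instead of being repeated for every element, which is where most of the token savings come from on repetitive, array-heavy data.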
The toon-parse-mcp MCP server helps AI agents (like Cursor, Claude Desktop, etc.) operate more efficiently by exposing the following tools and resources:

- `optimize_input_context(raw_input: str)`: Processes raw text data (JSON/XML/CSV/YAML) and returns optimized TOON format.
- `read_and_optimize_file(file_path: str)`: Reads a local code file and returns a token-optimized version (no inline comments, minimized whitespace).
- `protocol://mandatory-efficiency`: Provides a strict system instruction prompt for LLMs to ensure they use the optimization tools correctly.

Install from PyPI:

```
pip install toon-parse-mcp
```
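The comment-stripping half of what `read_and_optimize_file` is described to do (drop inline comments, minimize whitespace) can be approximated for Python sources with the standard `tokenize` module. This is a hedged sketch of the idea, not the server's actual implementation:

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Drop comment tokens and blank lines from Python source,
    approximating the described behavior of read_and_optimize_file."""
    # Tokenize, then filter out COMMENT tokens entirely.
    tokens = [
        tok
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    # untokenize preserves layout; trim the whitespace left behind
    # where comments used to be, and drop now-empty lines.
    cleaned = tokenize.untokenize(tokens)
    lines = (line.rstrip() for line in cleaned.splitlines())
    return "\n".join(line for line in lines if line)
```

For example, `strip_comments("x = 1  # set x\n")` returns source with the trailing comment removed while keeping the code intact. Tokenizing (rather than regex matching on `#`) avoids mangling `#` characters inside string literals.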
Windsurf: edit `~/.codeium/windsurf/mcp_config.json` directly and add the server to the `mcpServers` object (ensure your environment is active, or use an absolute path to `python3`):

```json
{
  "mcpServers": {
    "toon-parse-mcp": {
      "command": "python3",
      "args": ["-m", "toon_parse_mcp.server"]
    }
  }
}
```
Antigravity: edit `~/.gemini/antigravity/mcp_config.json` directly and add the server to the `mcpServers` object:

```json
{
  "mcpServers": {
    "toon-parse-mcp": {
      "command": "python3",
      "args": ["-m", "toon_parse_mcp.server"]
    }
  }
}
```
Claude Desktop: add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "toon-parse-mcp": {
      "command": "python3",
      "args": ["-m", "toon_parse_mcp.server"]
    }
  }
}
```
When the server is active, the AI will have access to the optimize_input_context and read_and_optimize_file tools. You can also refer to the efficiency protocol by asking the AI to "check the mandatory efficiency protocol".
To run the test suite:

```
pip install -e ".[test]"
pytest tests/
```
Dependencies:

- `mcp >= 1.25.0`
- `toon-parse >= 2.4.3`

MIT License - see LICENSE for details.