{
  "mcpServers": {
    "io-getunleash-unleash-mcp": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
No install config available. Check the server's README for setup instructions.
MCP server for managing Unleash feature flags
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 11 days ago. 10 stars.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
A purpose-driven Model Context Protocol (MCP) server for managing Unleash feature flags. This server enables LLM-powered coding assistants to create and manage feature flags following Unleash best practices.
Experimental feature
The Unleash MCP server is an experimental feature. Functionality may change, and we do not yet recommend using it in production environments.
To share feedback, join our community Slack, open an issue on GitHub, or email us at beta@getunleash.io.
This MCP server provides tools that integrate with the Unleash Admin API, allowing AI coding assistants to create, evaluate, and manage feature flags as part of their workflow.
The MCP server exposes the following tools:
- create_flag: Creates a feature flag in Unleash.
- evaluate_change: Scores risk and recommends feature flag usage.
- detect_flag: Discovers existing feature flags to avoid duplicates.
- wrap_change: Provides guidance on how to wrap a change in a feature flag.
- set_flag_rollout: Configures rollout strategies for a feature flag (does not enable the flag).
- get_flag_state: Surfaces a feature flag's metadata and its activation strategies.
- toggle_flag_environment: Enables or disables a feature flag in an environment.
- remove_flag_strategy: Deletes a feature flag's strategy from an environment.
- cleanup_flag: Generates instructions for safely removing flagged code paths.

The core workflow for an AI assistant is designed to be:
1. evaluate_change: First, assess a code change to see if a flag is needed.
2. detect_flag: This is often called automatically by evaluate_change to prevent creating duplicate flags.
3. create_flag: If a new flag is required, this tool creates it in Unleash.
4. wrap_change: Finally, this tool provides the language-specific code to implement the new flag.

See more information on the core workflow tools in the Tool reference section.
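Since the server speaks standard MCP over stdio, each step in the workflow above is an ordinary MCP tools/call request. As a sketch, the first step might look like the following JSON-RPC message; the argument name and its value are illustrative assumptions, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate_change",
    "arguments": {
      "description": "Switch checkout to the new payment provider"
    }
  }
}
```

Most MCP clients construct these requests for you; the example only shows the protocol-level shape of a tool invocation.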
Before you can run the server, you need the following:

- A running Unleash instance and its base URL.
- A personal access token (PAT) with access to the Unleash Admin API.
- Node.js with npx available, to run the server package.
This section covers the different ways to install and run the Unleash MCP server. You can either follow a setup for agents (such as Claude Code and Codex), run the MCP as a standalone process using npx, or use a local development setup.
You can add the MCP server directly to Claude Code or Codex. Agent configurations are path-specific. You must run the following command from the root directory of the project where you want to use the MCP.
For Claude Code:
claude mcp add unleash \
--env UNLEASH_BASE_URL={{your-instance-url}} \
--env UNLEASH_PAT={{your-personal-access-token}} \
-- npx -y @unleash/mcp@latest --log-level error
For Codex:
codex mcp add unleash \
--env UNLEASH_BASE_URL={{your-instance-url}} \
--env UNLEASH_PAT={{your-personal-access-token}} \
-- npx -y @unleash/mcp@latest --log-level error
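For other MCP clients that read a JSON configuration file (for example, Claude Desktop), an equivalent standalone npx setup might look like the sketch below. The server name, base URL, and token values are placeholders you must replace with your own:

```json
{
  "mcpServers": {
    "unleash": {
      "command": "npx",
      "args": ["-y", "@unleash/mcp@latest", "--log-level", "error"],
      "env": {
        "UNLEASH_BASE_URL": "https://your-instance.example.com",
        "UNLEASH_PAT": "your-personal-access-token"
      }
    }
  }
}
```

This mirrors the Claude Code and Codex commands above: the same package is launched via npx, with the instance URL and token supplied as environment variables.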
Instead of running the