Give your AI agents the ability to listen
Microphone capture and speech-to-text tools for MCP-compatible agents.
| Tool | Description |
|---|---|
| `list_audio_devices` | List available microphone input devices |
| `capture_audio` | Record audio from the microphone and save as WAV |
| `voice_query` | Capture, transcribe (whisper.cpp), and query a local LLM (Ollama) |
```bash
claude mcp add mcp-listen npx mcp-listen
```
Add to your MCP configuration:
```json
{
  "mcpServers": {
    "mcp-listen": {
      "command": "npx",
      "args": ["-y", "mcp-listen"]
    }
  }
}
```
Compatible with Claude Desktop, ChatGPT Desktop, Cursor, GitHub Copilot, Windsurf, VS Code, Gemini, Zed, and any MCP-compatible client.
```bash
npm install -g mcp-listen
```
For `list_audio_devices` and `capture_audio`: no additional dependencies are required.
For `voice_query` (optional): a local whisper.cpp GGML model and a running Ollama instance.
Returns a JSON array of available audio input devices.
Parameters: None
Example response:
```json
[
  { "index": 3, "name": "Microphone (Creative Live! Cam)", "isDefault": true, "maxInputChannels": 2, "defaultSampleRate": 48000 },
  { "index": 4, "name": "Microphone Array (Intel)", "isDefault": false, "maxInputChannels": 2, "defaultSampleRate": 48000 }
]
```
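An agent or wrapper script can consume this response to choose a capture device before calling `capture_audio`. A minimal sketch, assuming the response shape shown above (the `pick_device` helper and the sample data are illustrative, not part of the server):

```python
import json

# Sample output in the shape documented above for list_audio_devices.
devices_json = """[
  {"index": 3, "name": "Microphone (Creative Live! Cam)", "isDefault": true,
   "maxInputChannels": 2, "defaultSampleRate": 48000},
  {"index": 4, "name": "Microphone Array (Intel)", "isDefault": false,
   "maxInputChannels": 2, "defaultSampleRate": 48000}
]"""

def pick_device(devices: list[dict]) -> int:
    """Return the index of the default input device, or the first one listed."""
    for d in devices:
        if d.get("isDefault"):
            return d["index"]
    return devices[0]["index"]

devices = json.loads(devices_json)
print(pick_device(devices))  # -> 3
```

The returned index can then be passed as the `device` parameter to `capture_audio` or `voice_query`.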
Records audio from the microphone and saves it as a WAV file.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `duration_ms` | number | `5000` | Recording duration in milliseconds (100-30000) |
| `device` | number | system default | Device index from `list_audio_devices` |
Example response:
```json
{
  "path": "/tmp/mcp-listen-1712345678901.wav",
  "duration_ms": 5000,
  "sample_rate": 16000,
  "channels": 1,
  "size_bytes": 160044
}
```
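The `size_bytes` figure is consistent with 16-bit mono PCM: 5 s of audio at 16000 Hz is 80000 frames, or 160000 bytes at 2 bytes per sample, plus the standard 44-byte WAV header. A quick sanity check (assuming 16-bit samples, which the example implies):

```python
def expected_wav_size(duration_ms: int, sample_rate: int, channels: int,
                      bytes_per_sample: int = 2, header_bytes: int = 44) -> int:
    """Size of a PCM WAV file: raw audio bytes plus the standard 44-byte header."""
    frames = sample_rate * duration_ms // 1000
    return frames * channels * bytes_per_sample + header_bytes

print(expected_wav_size(5000, 16000, 1))  # -> 160044, matching the example above
```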
Full voice pipeline: capture audio, transcribe with whisper.cpp, send to Ollama, return the response. Entirely offline.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `duration_ms` | number | `5000` | Recording duration in milliseconds (100-30000) |
| `device` | number | system default | Device index from `list_audio_devices` |
| `whisper_model` | string | `ggml-base.en.bin` | Path or filename of the Whisper GGML model |
| `language` | string | `en` | Language code for transcription |
| `model` | string | `llama3.2` | Ollama model name |
| `prompt` | string | `You are a helpful assistant.` | System prompt for the LLM |
Example response:
```json
{
  "transcription": "What is the default port for PostgreSQL?",
  "response": "PostgreSQL runs on port 5432 by default.",
  "model": "llama3.2"
}
```
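Because every parameter has a documented default, a caller only needs to pass the values it wants to override. A small sketch of merging caller arguments with the defaults from the table above (the helper is illustrative; the server applies its own defaults internally):

```python
# Defaults as documented in the voice_query parameter table.
VOICE_QUERY_DEFAULTS = {
    "duration_ms": 5000,
    "whisper_model": "ggml-base.en.bin",
    "language": "en",
    "model": "llama3.2",
    "prompt": "You are a helpful assistant.",
}

def voice_query_args(**overrides) -> dict:
    """Fill in the documented voice_query defaults for any omitted parameter."""
    return {**VOICE_QUERY_DEFAULTS, **overrides}

args = voice_query_args(model="mistral", duration_ms=8000)
print(args["model"], args["duration_ms"], args["language"])  # -> mistral 8000 en
```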
mcp-listen uses decibri for cross-platform microphone capture. No ffmpeg is required.