Persistent graph-backed conversational memory for AI agents.
Config is the same across clients; only the file name and path differ.

```json
{
  "mcpServers": {
    "io-github-abhigyan-shekhar-waggle-mcp": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```
Your AI forgets everything between sessions. Waggle gives it a graph-backed brain.
Persistent, structured memory for AI agents: typically fewer tokens than chunk-based retrieval, often 2-4× fewer on factual lookups.
Waggle is not a code indexer. It's a conversational memory engine — it remembers what you decided, why, and what changed, across every session.
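Waggle's internals are not shown on this page, but the core idea of a graph-backed conversational memory (remembering what was decided, why, and what changed) can be sketched in plain Python. All class and method names below are invented for illustration and are not waggle-mcp's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: MemoryGraph, remember, link, and recall
# are hypothetical names, NOT waggle-mcp's real interface.

@dataclass
class MemoryGraph:
    nodes: dict = field(default_factory=dict)   # id -> {"text", "session"}
    edges: list = field(default_factory=list)   # (src, relation, dst)

    def remember(self, node_id, text, session):
        """Store a decision (what) tagged with its session (when)."""
        self.nodes[node_id] = {"text": text, "session": session}

    def link(self, src, relation, dst):
        """Record why/what changed, e.g. ('d2', 'supersedes', 'd1')."""
        self.edges.append((src, relation, dst))

    def recall(self, keyword):
        """Return the compact subgraph touching a keyword, not the full log."""
        hits = {i for i, n in self.nodes.items() if keyword in n["text"]}
        related = [(s, r, d) for s, r, d in self.edges if s in hits or d in hits]
        return {i: self.nodes[i] for i in hits}, related

g = MemoryGraph()
g.remember("d1", "use REST for the public API", session=1)
g.remember("d2", "switch the public API to gRPC", session=7)
g.link("d2", "supersedes", "d1")
nodes, edges = g.recall("gRPC")   # the later decision plus its "why" edge
```

The point of the sketch is the retrieval shape: a query returns a small subgraph (one node and one edge here) instead of replaying both sessions' transcripts.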
- `waggle-mcp ingest-transcript-handoff` ingests full, ordered transcripts, deduplicates them with `message_identity`, and exports a session-scoped handoff bundle for the next window or IDE. A user block is held until the matching assistant message arrives later; messages that cannot be exported are tracked as `export_skipped`.
- `WaggleAdapter` connects the graph engine to ConvoMem / MemBench runners with automated exact-match scoring and latency logging (97.4% R@5 / 88.2% Exact@5 in graph_raw; 96.4% / 85.6% in graph_hybrid).
- Observability assets live in `deploy/observability/` and operational runbooks in `docs/runbooks/`.

→ Individual developer extending Claude, Codex, Cursor, or Antigravity with persistent memory:
Use Python 3.11+ and install via pipx (no venv activation needed): `brew install pipx && pipx ensurepath && pipx install waggle-mcp && waggle-mcp init`. SQLite + local embeddings, zero infra.
→ Team running a shared memory service: Waggle ships with a Docker image, Kubernetes manifests, Prometheus metrics, and multi-tenant auth. See `deploy/kubernetes/` and `docs/runbooks/`.
Both paths share the same MCP tool surface — the difference is only the backend and transport.
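As a hedged illustration of that split, a client config might hold two entries with the same tool surface: a local entry that launches the server as a subprocess, and a team entry that points at a shared endpoint. The entry names and the URL below are invented for illustration, and the actual launch command is in the README:

```json
{
  "mcpServers": {
    "waggle-local": {
      "command": "<see-readme>",
      "args": []
    },
    "waggle-team": {
      "url": "https://waggle.internal.example.com/mcp"
    }
  }
}
```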
waggle-mcp is a local-first memory layer for MCP-compatible AI clients, built on a persistent knowledge graph.
| Stuffed context | Structured retrieval |
|---|---|
| Huge prompts every session | Compact subgraph retrieved at query time |
| Session-local memory | Persistent multi-session memory |
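A toy calculation grounds the first row of the table. All token counts below are invented assumptions, chosen only to be consistent with the 2-4× figure quoted earlier on this page:

```python
# Hypothetical token counts, for illustration only (not measured).
transcript_tokens = 5_000      # entire history stuffed into every prompt
subgraph_tokens = 1_500        # compact subgraph returned at query time
overhead_tokens = 300          # tool-call and formatting overhead

stuffed = transcript_tokens
structured = subgraph_tokens + overhead_tokens
print(f"{stuffed / structured:.1f}x fewer tokens per factual lookup")
# → 2.8x fewer tokens per factual lookup
```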