Bounded working memory for coding agents — typed nodes, current-vs-stale state, MCP-mountable.
Config is the same across clients — only the file and path differ.

```json
{
  "mcpServers": {
    "io-github-razroo-state-trace": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```
Graph-native working memory for coding agents: typed memories, causal retrieval, bounded capacity, and compact briefs for small models.
state-trace is a bounded working-memory layer for coding and debugging agents that need the right file, failure, and next action under tight token budgets. It is not a replacement for a general-purpose temporal knowledge graph like Graphiti — see ARCHITECTURE.md for the honest comparison.
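state-trace's actual engine API is not shown in this excerpt. As a minimal illustrative sketch only (the class and field names here are hypothetical, not the library's), a bounded store with typed nodes and current-vs-stale tracking might look like:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Memory:
    kind: str      # e.g. "file", "failure", "next_action"
    content: str
    stale: bool = False

class BoundedWorkingMemory:
    """Toy sketch: typed nodes, current-vs-stale state, bounded capacity."""

    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.nodes: deque[Memory] = deque()

    def record(self, kind: str, content: str) -> None:
        # A newer observation of the same kind marks older ones stale.
        for node in self.nodes:
            if node.kind == kind:
                node.stale = True
        self.nodes.append(Memory(kind, content))
        while len(self.nodes) > self.capacity:
            self.nodes.popleft()  # evict oldest when over budget

    def current_state(self) -> list[Memory]:
        return [n for n in self.nodes if not n.stale]

mem = BoundedWorkingMemory(capacity=4)
mem.record("file", "astropy/modeling/separable.py")
mem.record("failure", "test_separable fails on matrix shape")
mem.record("file", "astropy/modeling/core.py")  # marks the first file stale
current = mem.current_state()  # only the failure and the newest file
```

The point of the bound is that the brief handed to a small model stays a fixed size regardless of trajectory length.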
What it is optimized for:

- typed memories with bounded capacity, so the agent keeps the right file, failure, and next action under tight token budgets
- current-vs-stale state queries (`engine.current_state()`, `engine.failed_hypotheses()`)
- compact briefs for small models

The credibility benchmark: cold-start artifact localization on the full SWE-bench-Verified test split. Given only the GitHub issue text and hints (no trajectory), score whether the correct patch file is ranked first (Artifact@1) and within the top 5 (Artifact@5).
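Artifact@k as described here reduces to a top-k hit check over each backend's ranked file candidates. A minimal scorer (function names are mine, not the eval script's):

```python
def artifact_at_k(ranked_files: list[str], gold_file: str, k: int) -> bool:
    """True if the gold patch file appears in the top-k ranked candidates."""
    return gold_file in ranked_files[:k]

def score(runs: list[tuple[list[str], str]], k: int) -> float:
    """Mean Artifact@k over (ranking, gold_file) pairs."""
    hits = sum(artifact_at_k(ranked, gold, k) for ranked, gold in runs)
    return hits / len(runs)

runs = [
    (["a.py", "b.py", "c.py"], "a.py"),  # hit at rank 1
    (["x.py", "gold.py"], "gold.py"),    # hit at rank 2: counts for k=5, not k=1
]
a1 = score(runs, k=1)  # 0.5
a5 = score(runs, k=5)  # 1.0
```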
```shell
pip install -e ".[bench]"
python3 examples/swebench_verified_eval.py --limit 500 --backends no_memory bm25 state_trace graphiti
```
| backend | n | Artifact@1 | Artifact@1 CI | Artifact@5 | Artifact@5 CI | AvgLatencyMs |
|---|---|---|---|---|---|---|
| no_memory | 500 | 0.000 | [0.000, 0.000] | 0.000 | [0.000, 0.000] | 0.00 |
| bm25 | 500 | 0.176 | [0.144, 0.208] | 0.300 | [0.262, 0.338] | 0.10 |
| state_trace | 500 | 0.254 | [0.218, 0.290] | 0.376 | [0.336, 0.414] | 15.04 |
| graphiti | 500 | 0.098 | [0.072, 0.126] | 0.216 | [0.182, 0.254] | 4851.46 |
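This excerpt does not say how the confidence intervals are computed. A common choice for binary hit metrics is a percentile bootstrap, sketched here purely for illustration (not necessarily the eval script's method):

```python
import random

def bootstrap_ci(hits: list[int], n_resamples: int = 2000,
                 alpha: float = 0.05, seed: int = 0) -> tuple[float, float]:
    """Percentile-bootstrap CI for the mean of 0/1 hit indicators."""
    rng = random.Random(seed)
    n = len(hits)
    means = sorted(
        sum(rng.choices(hits, k=n)) / n for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# 127 hits out of 500 gives a point estimate of 0.254,
# matching the state_trace Artifact@1 row above.
hits = [1] * 127 + [0] * 373
lo, hi = bootstrap_ci(hits)
```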
What this says, plainly:

v0.3.0 landed a module-to-path translator in `retrieve_brief`'s lexical fallback: dotted Python module references in issue text (`astropy.modeling.separable_matrix`) now resolve to file-path candidates (`astropy/modeling/separable.py`), which pushed Artifact@1 from 0.216 to 0.254 on n=500.
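The translator's exact logic is not shown here. A candidate-generation sketch, under the assumption that trailing segments of a dotted reference may be attributes rather than modules (the candidates would then be matched against the actual repo tree, which is how `separable_matrix` can land on `separable.py`):

```python
def module_to_path_candidates(dotted: str) -> list[str]:
    """Turn a dotted Python reference into plausible repo file paths.

    Illustrative sketch of the v0.3.0 idea, not the actual implementation.
    Trailing segments may be functions or classes, so we also try dropping
    them and treating shorter prefixes as modules or packages.
    """
    parts = dotted.split(".")
    candidates = []
    for end in range(len(parts), 0, -1):
        prefix = "/".join(parts[:end])
        candidates.append(prefix + ".py")           # module file
        candidates.append(prefix + "/__init__.py")  # package
    return candidates

cands = module_to_path_candidates("astropy.modeling.separable_matrix")
```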
graphiti_head_to_head_eval.py runs without API keys for reproducibility. A full Graphiti pipeline with GPT-4-class extraction might close some of the gap, at materially higher cost per ingest.

Localization leads only matter if they convert into downstream solve wins. Running the actual SWE-bench test suite on patches Codex CLI produces with vs. without a state-trace brief:

| arm | resolved | unresolved | errored | solve-rate |
| --- | --- | --- | --- | --- |