SWI-Prolog as a logic calculator for LLMs
SWI-Prolog as a "logic calculator" for LLMs — available as an MCP server and a Python library. Eliminate the black box from LLM logical reasoning.
LLMs excel at natural language but struggle with formal logic. Prolog excels at logical reasoning but can't process natural language. prolog-reasoner bridges this gap by exposing SWI-Prolog execution to LLMs.
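As a toy illustration of that bridge (not part of prolog-reasoner itself), here is a classic syllogism written as Prolog clauses, with the derivation mimicked by a minimal forward-chaining loop in pure Python — in practice prolog-reasoner hands the clauses to real SWI-Prolog instead:

```python
# Prolog equivalent:
#   human(socrates).
#   mortal(X) :- human(X).
#   ?- mortal(socrates).   % true

facts = {("human", "socrates")}          # ground facts: (predicate, argument)
rules = [("mortal", "human")]            # mortal(X) :- human(X).

def forward_chain(facts, rules):
    """Derive new facts until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for pred, arg in list(facts):
                if pred == body and (head, arg) not in facts:
                    facts.add((head, arg))
                    changed = True
    return facts

derived = forward_chain(facts, rules)
print(("mortal", "socrates") in derived)  # True
```

The point is only that the inference step is mechanical once the problem is stated as clauses — exactly the part an LLM tends to get wrong when it does it "in its head".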
On the built-in 30-problem logic benchmark:
| Pipeline | Accuracy |
|---|---|
| LLM-only (claude-sonnet-4-6) | 22/30 (73.3%) |
| LLM + prolog-reasoner | 27/30 (90.0%) |
The gap concentrates in constraint satisfaction and multi-step reasoning — the combinatorial territory LLMs are weak on and Prolog is strong on. Full breakdown below.
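To make "actually searches" concrete, here is a mini constraint puzzle of the same flavor (illustrative only — not taken from the benchmark), solved by exhaustive enumeration in Python as a rough stand-in for Prolog's backtracking:

```python
from itertools import permutations

# Puzzle: alice, bob, and carol finished a race 1st-3rd;
# alice beat bob, and carol was not last.
# Prolog would state these as constraints and backtrack;
# here we enumerate every ordering and keep the consistent ones.
people = ["alice", "bob", "carol"]
solutions = [
    order for order in permutations(people)
    if order.index("alice") < order.index("bob")  # alice beat bob
    and order[-1] != "carol"                      # carol was not last
]
print(solutions)
# [('alice', 'carol', 'bob'), ('carol', 'alice', 'bob')]
```

A pattern-matching model has to guess which orderings are consistent; a solver checks all of them, which is why the accuracy gap shows up precisely on this class of problem.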
LLMs pattern-match; Prolog actually searches and solves. When the LLM writes its problem down as Prolog, two things happen at once: the problem statement becomes explicit and checkable, and the search for a solution is delegated to an engine that actually performs it.
- execute_prolog for arbitrary SWI-Prolog execution, plus list_rule_bases / get_rule_base / save_rule_base / delete_rule_base for reusable named rule bases (v14)
- Saved rule bases can be combined with execute_prolog so the LLM only writes the situation-specific facts per call

# MCP server only (no LLM dependencies)
pip install prolog-reasoner
# Library with OpenAI
pip install "prolog-reasoner[openai]"
# Library with Anthropic
pip install "prolog-reasoner[anthropic]"
# Both providers
pip install "prolog-reasoner[all]"
The MCP server exposes five tools — execute_prolog runs Prolog code written by the connected LLM, and four rule-base tools manage named, reusable Prolog modules. It does not call any external LLM API, so no API key is required.
{
  "mcpServers": {
    "prolog-reasoner": {
      "command": "uvx",
      "args": ["prolog-reasoner"]
    }
  }
}
Or, if prolog-reasoner is installed directly:
{
  "mcpServers": {
    "prolog-reasoner": {
      "command": "prolog-reasoner"
    }
  }
}
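Once the server is registered, tool invocations travel as MCP JSON-RPC tools/call requests. A call to execute_prolog would look roughly like the fragment below; note that the argument name code is an assumption for illustration — check the README for the actual tool schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "execute_prolog",
    "arguments": {
      "code": "member(X, [1,2,3])."
    }
  }
}
```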
Use Docker if you don't want to install SWI-Prolog locally:
docker build -
... [View full README on GitHub](https://github.com/rikarazome/prolog-reasoner#readme)