Agent reasoning, memory, and token-optimized context for AI applications.
```json
{
  "mcpServers": {
    "ai-nocturnus-logic-server": {
      "command": "<see-readme>",
      "args": []
    }
  }
}
```

No install config available. Check the server's README for setup instructions.
**Is it safe?**
- No package registry to scan.
- No authentication: any process on your machine can connect.
- License not specified.

**Is it maintained?**
- Commit history unknown.

**Will it work with my client?**
- Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
The context engineering engine for AI agents: send only what changed.

```python
# ❌ Without NocturnusAI — replay everything, every turn
messages = system_prompt + full_history + tool_outputs  # ~1,259 tokens/turn
response = llm(messages)                                # $13,600/mo at scale

# ✅ With NocturnusAI — send only what changed
ctx = nocturnus.process_turns(raw_turns)                # extract → infer → delta
messages = system_prompt + ctx.briefing_delta           # ~221 tokens/turn
response = llm(messages)                                # $2,400/mo. Same accuracy.
```

Measured on live APIs: a 15-turn product support conversation, using real `usage.input_tokens` counts. Run it yourself.
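The internals of `process_turns` aren't shown here, but the extract → infer → delta idea can be illustrated with a toy sketch (a hypothetical helper, not the NocturnusAI API):

```python
# Toy illustration of "send only what changed" — NOT the NocturnusAI
# implementation. Keep a running briefing dict; each turn, forward only
# the keys whose values are new or changed since the previous turn.
def briefing_delta(prev: dict, curr: dict) -> dict:
    """Return only the facts that are new or changed this turn."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

state: dict = {}
for turn in [
    {"user_goal": "refund order #123", "order_status": "shipped"},
    {"user_goal": "refund order #123", "order_status": "return started"},
]:
    delta = briefing_delta(state, turn)
    state.update(turn)
    # Only `delta` is appended to the prompt, never the full history.
    print(delta)
# Turn 1 sends both facts; turn 2 sends only {'order_status': 'return started'}.
```

The per-turn payload shrinks to whatever actually changed, which is why later turns in a long conversation cost roughly the same as early ones.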
| | Naive replay | RAG-optimized | NocturnusAI |
|---|---|---|---|
| Tokens per turn | ~1,259 | ~800 | ~221 |
| Cost per month (1K req/hr, Opus 4, $15/1M) | $13,600 | $12,000 | $2,400 |
| Latency | high | medium | low |
| Truth-preserving | no | no | yes |
Claude Opus 4: 5.7× reduction. Gemini 2.0 Flash: 10.0×. Full calculations.
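The headline numbers in the table reduce to simple arithmetic (assuming a 30-day month; the request rate and price come from the table's cost row):

```python
# Reproduce the cost column: 1,000 requests/hour, 30-day month,
# $15 per 1M input tokens (Opus 4 pricing from the table above).
REQS_PER_MONTH = 1_000 * 24 * 30      # 720,000 requests
PRICE_PER_TOKEN = 15 / 1_000_000      # dollars per input token

def monthly_cost(tokens_per_turn: int) -> float:
    return REQS_PER_MONTH * tokens_per_turn * PRICE_PER_TOKEN

print(round(monthly_cost(1259)))   # 13597 → the table's ~$13,600 (naive replay)
print(round(monthly_cost(221)))    # 2387  → the table's ~$2,400 (NocturnusAI)
print(round(1259 / 221, 1))        # 5.7   → the quoted 5.7× Opus 4 reduction
```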
```shell
pip install nocturnusai                                       # Python
npm install nocturnusai-sdk                                   # TypeScript
docker run -p 9300:9300 ghcr.io/auctalis/nocturnusai:latest   # Docker
```

Or use the setup wizard:

```shell
curl -fsSL https://raw.githubusercontent.com/Auctalis/nocturnusai/main/install.sh | bash
```
| Framework | Integration | Link |
|---|---|---|
| LangChain / LangGraph | Drop-in NocturnusContextProvider, LangSmith trace pass-through | Docs |
| CrewAI | Task-scoped context per agent role | Docs |
| AutoGen | Context server callable by any agent | Docs |
| MCP | Spec-compliant server for Claude Desktop, Cursor, Continue | Config |
| OpenAI Agents SDK | Context middleware, no tool modifications | Docs |
| Vercel AI SDK | Edg… | |
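All of these adapters share one shape: intercept the message list just before the model call and substitute the compacted context. A framework-agnostic sketch of that middleware pattern (hypothetical names, not any framework's real API):

```python
# Sketch of the context-middleware pattern the integrations above share
# (illustrative only — consult each framework's docs for the real adapter).
# Wrap any LLM-calling function so history is compacted before the call.
from typing import Callable, List

def with_compact_context(
    llm_call: Callable[[List], str],
    compact: Callable[[List], List],
) -> Callable[[List], str]:
    """Return an llm_call that sees only the compacted message list."""
    def wrapped(messages: List) -> str:
        return llm_call(compact(messages))
    return wrapped

# Usage: `compact` stands in for something like ctx.briefing_delta —
# here a trivial "keep the last two messages" placeholder.
compact = lambda msgs: msgs[-2:]
```

Because the wrapper only touches the message list, tools and prompts need no changes, which is the property the OpenAI Agents SDK row advertises.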
No known vulnerabilities.