Scores your prompts against your real codebase — context-aware prompt intelligence
{
  "mcpServers": {
    "io-github-samouh-waleed-prompyai": {
      "command": "<see-readme>",
      "args": []
    }
  }
}

No install config available. Check the server's README for setup instructions.
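Once the launch command is found in the README, a filled-in entry follows the standard `mcpServers` shape. A sketch with hypothetical placeholder values (the `npx` command and `prompyai-mcp` package name below are assumptions for illustration, not the server's actual invocation):

```json
{
  "mcpServers": {
    "io-github-samouh-waleed-prompyai": {
      "command": "npx",
      "args": ["-y", "prompyai-mcp"]
    }
  }
}
```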
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit 23 days ago. 1 star.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
Context cost
3 tools. ~300 tokens (0.2% of 200K).
No automated test available for this server. Check the GitHub README for setup instructions.
evaluate_prompt: Automatically called on every user message. Scores your prompt against your project across 4 dimensions (specificity, context completeness, task clarity, file & folder anchoring) and returns an AI-enhanced rewrite.
get_context: Returns your project summary: tech stack, recent files, key folders, and AI instruction files.
prompyai_toggle: Turns auto-evaluation on or off.
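Like any stdio MCP server, these tools are invoked with JSON-RPC 2.0 `tools/call` requests written to the server's stdin. A minimal sketch of building such a request; the `"prompt"` argument name is an assumption, so check the server's README or its `tools/list` response for the real input schema:

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses over stdio."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Hypothetical call to the evaluate_prompt tool; the argument key
# "prompt" is assumed, not taken from the server's documented schema.
call = jsonrpc_request(
    2,
    "tools/call",
    {"name": "evaluate_prompt", "arguments": {"prompt": "Fix the login bug"}},
)

# One request per line on the server's stdin.
wire = json.dumps(call)
```

In practice an MCP client sends `initialize` and `tools/list` first, then issues `tools/call` messages like the one above.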
No known vulnerabilities.