Runtime quality validation for AI agent outputs. Detect hallucinations, enforce scope compliance, and score output quality — all via MCP.
npx qc-validator-mcp
{
  "mcpServers": {
    "qc-validator": {
      "command": "npx",
      "args": ["qc-validator-mcp"]
    }
  }
}
Score agent output against configurable criteria: length limits, required keywords, forbidden patterns, and factual claim density.
Params: output, task_description, criteria { max_length, required_keywords[], forbidden_patterns[], factual_claims_count }
Returns: { pass, score, issues[], recommendation }
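As a rough illustration of how such criteria-based scoring could work, here is a minimal sketch. The function name, the 25-point-per-issue weighting, and the issue messages are assumptions for illustration, not the package's actual implementation.

```typescript
interface Criteria {
  max_length?: number;
  required_keywords?: string[];
  forbidden_patterns?: string[];
}

interface Verdict {
  pass: boolean;
  score: number;
  issues: string[];
  recommendation: string;
}

// Hypothetical sketch: score output against the documented criteria fields.
function validateOutput(output: string, criteria: Criteria): Verdict {
  const issues: string[] = [];
  if (criteria.max_length && output.length > criteria.max_length) {
    issues.push(`output exceeds ${criteria.max_length} characters`);
  }
  for (const kw of criteria.required_keywords ?? []) {
    if (!output.toLowerCase().includes(kw.toLowerCase())) {
      issues.push(`missing required keyword: ${kw}`);
    }
  }
  for (const pattern of criteria.forbidden_patterns ?? []) {
    if (new RegExp(pattern, "i").test(output)) {
      issues.push(`matched forbidden pattern: ${pattern}`);
    }
  }
  // Assumed linear penalty: each issue costs 25 points.
  const score = Math.max(0, 100 - issues.length * 25);
  return {
    pass: issues.length === 0,
    score,
    issues,
    recommendation: issues.length === 0 ? "accept" : "revise",
  };
}
```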
Estimate hallucination likelihood. With source text, checks sentence-level grounding. Without source, flags outputs dense with specific numbers, dates, and URLs.
Params: output, source_text (optional), claim_count (default 5)
Returns: { risk_level, unsupported_claims[], confidence, suggestion }
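The source-free path can be pictured as a density heuristic over specific claims. This sketch, including its regex and thresholds, is an assumption about the approach, not the server's real logic.

```typescript
// Hypothetical heuristic: outputs dense with numbers, dates, and URLs
// (relative to sentence count) are flagged as higher hallucination risk.
function hallucinationRisk(output: string): "low" | "medium" | "high" {
  const sentences = output.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  const specificPattern = /\b\d{4}-\d{2}-\d{2}\b|https?:\/\/\S+|\b\d+(\.\d+)?%?/g;
  const specifics = (output.match(specificPattern) ?? []).length;
  const density = specifics / Math.max(1, sentences.length);
  if (density > 1.5) return "high";
  if (density > 0.5) return "medium";
  return "low";
}
```

With a source text, the documented behavior is sentence-level grounding instead, which a density check like this cannot capture.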
Validate output against a scope contract — allowed/forbidden topics, word limits, required sections.
Params: output, scope { allowed_topics[], forbidden_topics[], max_words, required_sections[] }
Returns: { compliant, violations[], scope_utilization_percent }
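A scope contract check of this shape could look like the following sketch; the substring-based topic matching and the utilization formula are illustrative assumptions.

```typescript
interface Scope {
  forbidden_topics?: string[];
  max_words?: number;
}

// Hypothetical sketch: check word limits and forbidden topics, and report
// how much of the word budget the output used.
function checkScope(output: string, scope: Scope) {
  const violations: string[] = [];
  const words = output.trim().split(/\s+/).length;
  if (scope.max_words && words > scope.max_words) {
    violations.push(`word count ${words} exceeds limit ${scope.max_words}`);
  }
  for (const topic of scope.forbidden_topics ?? []) {
    if (output.toLowerCase().includes(topic.toLowerCase())) {
      violations.push(`touches forbidden topic: ${topic}`);
    }
  }
  const utilization = scope.max_words
    ? Math.round((words / scope.max_words) * 100)
    : 100;
  return {
    compliant: violations.length === 0,
    violations,
    scope_utilization_percent: utilization,
  };
}
```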
Store validation results for per-agent trending.
Params: agent_id, output_hash, score, pass, issues_count
Returns: { logged, agent_id, total_validations }
Analyze common failure modes for a specific agent.
Params: agent_id
Returns: { total_validations, pass_rate, avg_score, most_common_issues[], trend }
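Logging and per-agent stats fit together as a small record store. This in-memory sketch only illustrates the documented fields; the real server presumably persists results, and anything beyond those fields is an assumption.

```typescript
interface ValidationRecord {
  score: number;
  pass: boolean;
  issues_count: number;
}

// Hypothetical in-memory store keyed by agent_id.
const history = new Map<string, ValidationRecord[]>();

function logValidation(agentId: string, record: ValidationRecord) {
  const records = history.get(agentId) ?? [];
  records.push(record);
  history.set(agentId, records);
  return { logged: true, agent_id: agentId, total_validations: records.length };
}

function agentStats(agentId: string) {
  const records = history.get(agentId) ?? [];
  const passes = records.filter((r) => r.pass).length;
  const avgScore =
    records.reduce((sum, r) => sum + r.score, 0) / Math.max(1, records.length);
  return {
    total_validations: records.length,
    pass_rate: records.length ? passes / records.length : 0,
    avg_score: avgScore,
  };
}
```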
Quality dashboard across all validated agents — no parameters required.
Returns: { total_agents, overall_pass_rate, agents[], worst_performers[], best_performers[], recommendations[] }
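The dashboard aggregation can be sketched as a fold over per-agent pass rates. The ranking-by-pass-rate logic here is an assumption about how best and worst performers are chosen.

```typescript
// Hypothetical aggregation: compute an overall pass rate and rank agents.
function buildDashboard(passRates: Record<string, number>) {
  const agents = Object.keys(passRates);
  const overall =
    agents.reduce((sum, a) => sum + passRates[a], 0) / Math.max(1, agents.length);
  const ranked = [...agents].sort((a, b) => passRates[a] - passRates[b]);
  return {
    total_agents: agents.length,
    overall_pass_rate: overall,
    worst_performers: ranked.slice(0, 3),
    best_performers: ranked.slice(-3).reverse(),
  };
}
```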
qc://dashboard — Quality metrics for all validated agents

License: MIT