EU AI Act compliance audit trails and evidence generation for AI agent systems — risk classification
EU AI Act compliance audit trails and evidence generation for AI agent systems.
High-risk AI obligations under the EU AI Act are enforced from August 2, 2026. Fines can reach EUR 35 million or 7% of global annual turnover, whichever is higher.
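As a rough illustration of the top penalty tier's "whichever is higher" rule, the applicable ceiling can be sketched as follows (the figures come from the Act; the function name and shape are this sketch's own):

```typescript
// Illustrative sketch of the EU AI Act's top penalty tier:
// EUR 35M or 7% of worldwide annual turnover, whichever is higher.
const FIXED_CEILING_EUR = 35_000_000;

function maxFineEur(annualTurnoverEur: number): number {
  // Integer arithmetic (turnover * 7 / 100) avoids float drift
  // for round turnover figures.
  return Math.max(FIXED_CEILING_EUR, (annualTurnoverEur * 7) / 100);
}

// EUR 1B turnover: the 7% share (70M) exceeds the fixed 35M ceiling.
console.log(maxFineEur(1_000_000_000)); // 70000000
// EUR 100M turnover: 7% is only 7M, so the 35M figure applies.
console.log(maxFineEur(100_000_000)); // 35000000
```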
Compliance Shield is a runtime compliance layer that plugs into any MCP-compatible AI system. Run it directly with npx:
```shell
npx compliance-shield-mcp
```

Or register it in your MCP client configuration:

```json
{
  "mcpServers": {
    "compliance-shield": {
      "command": "npx",
      "args": ["compliance-shield-mcp"]
    }
  }
}
```
| Tool | Description |
|---|---|
| `assess_risk_level` | Classify an AI system under the EU AI Act risk framework |
| `create_audit_trail` | Start a compliance audit trail for an AI system |
| `log_decision` | Log an AI decision with full traceability metadata |
| `check_compliance_gaps` | Identify missing compliance requirements |
| `generate_evidence_package` | Generate auditor-ready evidence documentation |
| `get_enforcement_timeline` | Show upcoming enforcement deadlines and penalties |
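Over the wire, each of these tools is invoked through MCP's standard `tools/call` request. A hypothetical invocation of `assess_risk_level` might look like the following; the JSON-RPC envelope is standard MCP, but the argument names (`systemDescription`, `domain`) are illustrative assumptions, not the server's published schema:

```typescript
// Hypothetical tools/call request for assess_risk_level.
// The jsonrpc/method/params envelope is standard MCP; the argument
// names ("systemDescription", "domain") are illustrative guesses.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "assess_risk_level",
    arguments: {
      systemDescription: "LLM agent that screens job applications",
      domain: "employment",
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```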
| URI | Description |
|---|---|
| `compliance://timeline` | EU AI Act enforcement timeline |
| `compliance://trails` | List all active audit trails |
1. `assess_risk_level` → Know your risk classification
2. `create_audit_trail` → Start logging
3. `log_decision` (repeatedly) → Record every AI decision
4. `check_compliance_gaps` → Find what's missing
5. `generate_evidence_package` → Hand to your auditor
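The five steps above can be sketched end to end. The mock below is an in-memory stand-in, not the server's actual implementation: the tool names match the table, but every argument and return shape is an assumption of this sketch.

```typescript
// In-memory mock of the five-step flow. Tool names come from the
// README table; all data shapes are illustrative assumptions only.
type RiskLevel = "prohibited" | "high" | "limited" | "minimal";

interface Decision {
  timestamp: string;
  input: string;
  output: string;
}

interface AuditTrail {
  systemId: string;
  risk: RiskLevel;
  decisions: Decision[];
}

const trails = new Map<string, AuditTrail>();

// 1. assess_risk_level: Annex III covers areas such as employment,
// credit scoring and education (keyword match is a toy heuristic).
function assessRiskLevel(useCase: string): RiskLevel {
  const highRiskAreas = ["hiring", "employment", "credit", "education"];
  return highRiskAreas.some((k) => useCase.includes(k)) ? "high" : "limited";
}

// 2. create_audit_trail
function createAuditTrail(systemId: string, risk: RiskLevel): void {
  trails.set(systemId, { systemId, risk, decisions: [] });
}

// 3. log_decision
function logDecision(systemId: string, decision: Decision): void {
  trails.get(systemId)!.decisions.push(decision);
}

// 4. check_compliance_gaps
function checkComplianceGaps(systemId: string): string[] {
  const trail = trails.get(systemId)!;
  const gaps: string[] = [];
  if (trail.risk === "high" && trail.decisions.length === 0) {
    gaps.push("high-risk system has no logged decisions");
  }
  return gaps;
}

// 5. generate_evidence_package
function generateEvidencePackage(systemId: string): string {
  return JSON.stringify(trails.get(systemId), null, 2);
}

const risk = assessRiskLevel("automated hiring screener"); // "high"
createAuditTrail("screener-v1", risk);
logDecision("screener-v1", {
  timestamp: new Date().toISOString(),
  input: "candidate CV",
  output: "advance to interview",
});
console.log(checkComplianceGaps("screener-v1")); // []
console.log(generateEvidencePackage("screener-v1").includes("screener-v1"));
```

The real server exposes these steps as MCP tools, so an agent would issue `tools/call` requests rather than direct function calls; the control flow, though, is the same.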
| Date | Milestone |
|---|---|
| Feb 2, 2025 | Prohibited AI practices banned |
| Aug 2, 2025 | Governance bodies operational |
| Aug 2, 2026 | High-risk AI obligations enforced |
| Aug 2, 2027 | Full enforcement for all AI systems |
Dependencies: `@modelcontextprotocol/sdk` + `zod` only. License: MIT.