Score AI initiatives (Accelerate/Fix/Stop), model EUR value, validate portfolios. AI BVF v1.0.
The scoring tool your Claude agent calls before it recommends an AI deployment. Four pillars, published benchmarks, and a deterministic Accelerate/Fix/Stop classification, with modelled EUR value, decision confidence, and a specific list of what to do next.
Six tools on stdio, each callable from any MCP-compatible agent.
| Tool | Purpose |
|---|---|
| `score_initiative` | Scores the four pillars and returns Accelerate, Fix, or Stop with EUR value range, decision confidence, applied modules, and reasoning. |
| `recommend_improvements` | For a Stop or Fix result, returns the specific pillar raises that would flip the call toward Accelerate. |
| `calculate_pace_layer_drag` | Annual Organisational Drag Cost in EUR from AI-tier vs operating-model misalignment. |
| `validate_portfolio` | Validates a portfolio JSON document against the BVF v1.0 schema. |
| `get_benchmark` | Looks up published benchmark rates for a business function and industry. |
| `list_taxonomy` | Returns valid values for industries, functions, AI tiers, and readiness levels. |
```shell
npm install -g aibvf-mcp
```
Register with Claude Desktop, Claude Code, or any MCP client:
```json
{
  "mcpServers": {
    "aibvf": { "command": "aibvf-mcp" }
  }
}
```
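With Claude Code, the same registration can usually be done from the terminal. This sketch assumes your CLI version supports the `claude mcp add` subcommand and that `aibvf-mcp` is already on your PATH from the global install:

```shell
# Register the globally installed aibvf-mcp binary as a stdio MCP server named "aibvf"
claude mcp add aibvf -- aibvf-mcp
```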
Ask your agent: "score a gen2 CX AI initiative for a 400M EUR retailer, traditional readiness, SA 70, FR 50, CE 55, GR 45," and the agent will call score_initiative, return a Fix classification with a concrete gap list, and offer to call recommend_improvements next.
Agents confidently recommend AI projects with no reference to the business case, operating-model readiness, or governance exposure. The scoring belongs in the agent's pre-flight check, not in a slide deck written after the decision.
The protocol is open. The benchmarks cite McKinsey, Gartner, BCG, Deloitte, Forrester, Accenture, and ServiceNow; readiness capture rates come from EY/Oxford and Prosci change-success research.
Every initiative is scored on four pillars, each 0 to 100, via honest self-assessment.
The rules are deterministic, with no network calls and no dependencies: GR >= 70 or FR <= 20 returns Stop; all four pillars at or above 60 with GR <= 40 returns Accelerate; anything else returns Fix with a specific gap list.
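As a reading check on those rules, here is a minimal TypeScript sketch of the classification step. It is not the library source: the pillar field names mirror the `@aibvf/core` example below, and it assumes "all four pillars at or above 60" refers to the three value pillars being strong while governance exposure stays low (the GR <= 40 condition), since a high GR reading is what triggers Stop.

```typescript
type Classification = 'Accelerate' | 'Fix' | 'Stop';

interface Pillars {
  strategic_alignment: number; // SA, 0-100
  financial_return: number;    // FR, 0-100
  change_enablement: number;   // CE, 0-100
  governance_risk: number;     // GR, 0-100; higher means more exposure
}

// Hypothetical re-implementation of the published rules, for illustration only.
function classify(p: Pillars): Classification {
  // Hard stops: unacceptable governance exposure or negligible financial return.
  if (p.governance_risk >= 70 || p.financial_return <= 20) return 'Stop';

  // Accelerate: strong value pillars with low governance exposure
  // (assumed reading of "all four pillars at or above 60 with GR <= 40").
  const strongValue =
    p.strategic_alignment >= 60 &&
    p.financial_return >= 60 &&
    p.change_enablement >= 60;
  if (strongValue && p.governance_risk <= 40) return 'Accelerate';

  // Everything else needs specific gaps closed before a go decision.
  return 'Fix';
}
```

Under these rules the README's worked healthcare example (SA 75, FR 55, CE 40, GR 55) comes out as Fix, consistent with the library output shown later.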
See docs/scoring-formulas.md for every formula and docs/worked-example.md for a full run on a healthcare portfolio.
```typescript
import { score, recommendImprovements, calculatePaceLayerDrag } from '@aibvf/core';

const r = score({
  industry: 'healthcare',
  revenue_eur: 800_000_000,
  function: 'cx',
  ai_tier: 'gen3',
  readiness: 'traditional',
  scores: {
    strategic_alignment: 75,
    financial_return: 55,
    change_enablement: 40,
    governance_risk: 55,
  },
});
// { classification: 'Fix', net_low_eur: 23_760_000, net_high_eur: 83_160_000,
//   confidence: 54, applied_modules: ['four_pillar_base',
//   'readiness_capture_traditional', 'healthcare_clinical_validation',
//   'healthcare_regulatory_overhead'], ... }
```
The same inputs through `recommendImprovements` return three pillar raises, each with a named action, and project a new decision confidence of 68 with target classification Accelerate. `calculatePaceLayerDrag({ revenue_eur: 800_000_000, ai_tier: 'gen3' })` returns the annual Organisational Drag Cost in EUR.