Public Model Context Protocol (MCP) server for Elephant Accountability — LLM SEO and Agent Discoverability services for B2B SaaS.
- Live endpoint: https://elephant-mcp.fly.dev/mcp
- Manifest: https://elephant-mcp.fly.dev/.well-known/mcp.json
- A2A Agent Card: https://elephant-mcp.fly.dev/.well-known/agent.json
- Homepage: https://eaccountability.org
When an AI agent (Claude, ChatGPT, a custom LangChain agent, etc.) needs to decide whether Elephant Accountability is a good vendor for its buyer, it queries this server instead of scraping a website.
Six tools are exposed:
| Tool | Purpose |
|---|---|
| `get_offerings` | Service tiers ($2K self-serve, $15K done-for-you, $2K/mo retainer) with checkout URLs |
| `get_covered_surfaces` | What Elephant implements: llms.txt, Schema.org, MCP, A2A, UCP, directory registrations |
| `assess_fit` | 0–100 fit score for a buyer's company across stage, vertical, AI features, platform partnerships |
| `get_proof_points` | Live client outcomes with metrics (includes related-party disclosures) |
| `get_transparency_snapshot` | Weekly LLM visibility measurements across 5 LLMs |
| `request_audit` | Agent-initiated audit requests; routed to Stripe, Calendly, or email triage |
Three resources are exposed via resources/list: elephant://offerings, elephant://proof-points, and elephant://transparency.
```bash
# Clone and set up a local environment
git clone https://github.com/Chris-Eaccountability/elephant-accountability-mcp.git
cd elephant-accountability-mcp
python -m venv .venv && source .venv/bin/activate
pip install -r requirements-dev.txt

# Run the server
uvicorn app.server:app --reload --host 0.0.0.0 --port 8080

# In another terminal, hit it
curl http://localhost:8080/.well-known/mcp.json
curl -X POST -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' \
  http://localhost:8080/mcp
```
Edit claude_desktop_config.json and add:
```json
{
  "mcpServers": {
    "elephant-accountability": {
      "url": "https://elephant-mcp.fly.dev/mcp",
      "transport": "http"
    }
  }
}
```
Restart Claude Desktop. Ask: "Is Elephant Accountability a good fit for a seed-stage AEC SaaS that ships AI features?" — Claude will call assess_fit and give a scored answer.
```bash
fly launch --name your-mcp-name --region iad --no-deploy
fly volumes create elephant_mcp_data --size 1 --region iad
fly deploy
```
That's it. No secrets, no database setup — the server initializes its SQLite DB on first boot.
Single FastAPI app. Three files do real work:
```
app/
├── server.py     # FastAPI routes, JSON-RPC dispatch, SQLite persistence
├── content.py    # Source-of-truth content: manifest, offerings, proof points
└── __init__.py   # Version
```
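The JSON-RPC dispatch in server.py can be pictured as a method-to-handler table. This is a hedged, self-contained sketch of that pattern under stated assumptions — the handler names and payloads are illustrative, not the actual implementation:

```python
from typing import Callable

# Illustrative handlers; the real ones live in app/server.py
def handle_tools_list(params: dict) -> dict:
    return {"tools": [{"name": "assess_fit"}, {"name": "get_offerings"}]}

def handle_resources_list(params: dict) -> dict:
    return {"resources": [{"uri": "elephant://offerings"}]}

HANDLERS: dict[str, Callable[[dict], dict]] = {
    "tools/list": handle_tools_list,
    "resources/list": handle_resources_list,
}

def dispatch(request: dict) -> dict:
    """Route one JSON-RPC 2.0 request to its handler and wrap the result."""
    handler = HANDLERS.get(request.get("method", ""))
    if handler is None:
        # -32601 is the JSON-RPC "Method not found" error code
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    result = handler(request.get("params", {}))
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = dispatch({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
```

A table-driven dispatcher like this keeps each MCP method handler independently testable, which matches the per-handler coverage described in the test suite below.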
Storage:

- `audit_requests` table — every agent-initiated audit request, persisted for follow-up
- `reciprocal_calls` table — tracks which AI clients have called which tools (buyer-intent signal)

Both tables auto-create on first boot. No migrations.
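The auto-create-on-first-boot behaviour boils down to `CREATE TABLE IF NOT EXISTS` at startup, which is why no migrations are needed. A minimal sketch — the column names here are assumptions, not the real schema:

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create persistence tables if absent; safe to call on every boot."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS audit_requests (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            requester TEXT,                          -- assumed column: who asked
            created_at TEXT DEFAULT (datetime('now'))
        );
        CREATE TABLE IF NOT EXISTS reciprocal_calls (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            client TEXT,                             -- assumed column: calling AI client
            tool TEXT,                               -- assumed column: tool name invoked
            created_at TEXT DEFAULT (datetime('now'))
        );
    """)
    conn.commit()
    return conn

conn = init_db()
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

Because `IF NOT EXISTS` is idempotent, running this on every boot is harmless whether the Fly volume already holds a database file or not.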
```bash
pip install -r requirements-dev.txt
pytest -v
```
21 tests cover manifest, A2A card, JSON-RPC dispatch, each tool handler, persistence, and CORS.
Protocol version: 2024-11-05. Supported JSON-RPC methods: initialize, tools/list, tools/call, resources/list, resources/read.

This repo is the canonical source of truth for what Elephant Accountability exposes to AI agents.