Local OCR & image analysis via Apple Vision — no cloud, no API keys, ~97% fewer tokens on PDFs.
{
  "mcpServers": {
    "io-github-woladi-macos-vision-mcp": {
      "command": "<see-readme>",
      "args": []
    }
  }
}

No install config available. Check the server's README for setup instructions.
Is it safe?
No package registry to scan.
No authentication — any process on your machine can connect.
License not specified.
Is it maintained?
Last commit: today.
Will it work with my client?
Transport: stdio. Works with Claude Desktop, Cursor, Claude Code, and most MCP clients.
No automated test available for this server. Check the GitHub README for setup instructions.
No known vulnerabilities.
Local OCR & image analysis for any MCP client — private, offline, no API keys.
Pre-extracts text and image data locally before your AI ever sees it — cutting token usage by ~97% on real documents. Files never leave your Mac: no cloud API, no API keys, no network requests.
Installed via npm and powered by the Apple Vision framework, the same engine as Live Text in Photos.app.
macos-vision-mcp acts as a local pre-processing layer between your documents and the cloud. Instead of sending the raw document to your AI, you extract the text and structure locally first; the model then works only with the extracted text, never the original file.
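That flow can be sketched in a few lines. Everything below is illustrative: `localOcr` is a hypothetical stand-in for the server's Apple Vision OCR step, not its real API.

```typescript
// Stand-in for the local OCR step; the real server uses Apple Vision.
function localOcr(path: string): string {
  const fakeDocuments: Record<string, string> = {
    "invoice.pdf": "Invoice #1234\nTotal due: $99.00",
  };
  return fakeDocuments[path] ?? "";
}

// Only the extracted text is sent onward; the raw file never leaves the machine.
function buildPrompt(path: string, question: string): string {
  const text = localOcr(path);
  return `Document text:\n${text}\n\nQuestion: ${question}`;
}

console.log(buildPrompt("invoice.pdf", "What is the total due?"));
```

The model answers from the extracted text alone, which is where the token savings on large PDFs come from.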
Step 1 — Install the package:
npm install -g macos-vision-mcp
Step 2 — Add to your MCP client (example for Claude Code):
claude mcp add macos-vision-mcp -- macos-vision-mcp
Restart your client. The tools appear automatically.
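For clients configured through a JSON file (Claude Desktop, for example), an equivalent entry might look like the sketch below. The command name assumes the global npm install above put `macos-vision-mcp` on your PATH; check the README if your client resolves commands differently.

```json
{
  "mcpServers": {
    "macos-vision-mcp": {
      "command": "macos-vision-mcp",
      "args": []
    }
  }
}
```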
Note: The native module `macos-vision` compiles against your local Node.js at install time. If you switch Node versions, run `npm rebuild` inside the package directory.
| Tool | What it does | Example prompt |
|---|---|---|
| `ocr_image` | Extract text from an image or PDF (JPG, PNG, HEIC, TIFF, PDF). Returns plain text or structured blocks with bounding boxes. | "Read the text from ~/Desktop/screenshot.png" |
| `detect_faces` | Detect human faces and return their count and positions. | "How many people are in this photo?" |
| `detect_barcodes` | Read QR codes, EAN, UPC, Code128, PDF417, Aztec, and other 1D/2D codes. | "What does the QR code in /tmp/qr.jpg say?" |
| `classify_image` | Classify image content into 1000+ categories wi