Stop re-explaining yourself to AI.
CacheTank is your AI memory layer. Save your identity, projects, decisions, and knowledge once — every AI tool gets it automatically.
Every time you open ChatGPT, Claude, Cursor, Copilot, or Gemini, you start from zero. You re-explain who you are, what you are working on, and what you have already decided. CacheTank fixes this: save context once, and every AI-powered tool that supports MCP loads it before your first message.
CacheTank solves the number one frustration of working with AI: repeating yourself. Whether you use one AI tool or ten, CacheTank gives each one your full context without you typing a word.
No copy-pasting. No system prompts. No re-explaining. Your context follows you everywhere.
Add to your Claude Desktop config (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "cachetank": {
      "command": "npx",
      "args": ["-y", "cachetank-mcp"],
      "env": {
        "CACHETANK_READ_TOKEN": "your-read-token",
        "CACHETANK_WRITE_TOKEN": "your-write-token"
      }
    }
  }
}
```
For Claude Code, register the server from the command line:

```shell
claude mcp add cachetank -- npx -y cachetank-mcp
```
Then set your tokens:

```shell
export CACHETANK_READ_TOKEN=your-read-token
export CACHETANK_WRITE_TOKEN=your-write-token
```
Add to Cursor settings (Settings > MCP Servers):
```json
{
  "cachetank": {
    "command": "npx",
    "args": ["-y", "cachetank-mcp"],
    "env": {
      "CACHETANK_READ_TOKEN": "your-read-token",
      "CACHETANK_WRITE_TOKEN": "your-write-token"
    }
  }
}
```
Other MCP clients can launch the server with the same npx command:

```shell
npx -y cachetank-mcp
```
Required environment variables:

- `CACHETANK_READ_TOKEN` — Your CacheTank read token (get it from the extension or cachetank.com)
- `CACHETANK_WRITE_TOKEN` — Optional. Enables saving back to your tank.

CacheTank also runs as a remote MCP server for browser-based AI tools:

```
https://cachetank-mcp-77926794635.us-central1.run.app/mcp
```
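For clients that support remote MCP servers, configuration is typically a URL rather than a launch command. A sketch of such an entry, assuming your client accepts a `url` field for HTTP-based servers (the exact key varies by client, so check its docs):

```json
{
  "mcpServers": {
    "cachetank": {
      "url": "https://cachetank-mcp-77926794635.us-central1.run.app/mcp"
    }
  }
}
```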
`fill_tank`: Fetch your personal context for a specific project. Returns your identity, project knowledge, and recent outputs, formatted as markdown.

```javascript
fill_tank({ project: "My Startup" })
```
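Under MCP, a tool invocation like the one above travels as a JSON-RPC 2.0 `tools/call` request over stdio. A minimal sketch of the envelope a client would send (the request `id` and argument values here are illustrative):

```typescript
// Build the JSON-RPC 2.0 envelope an MCP client sends for a tool call.
// Method name and params shape follow the MCP spec; values are illustrative.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "fill_tank", { project: "My Startup" });
console.log(JSON.stringify(req));
```

The client never constructs this by hand in practice; the MCP SDK in your client does it for you, which is why the README shows only the tool-call shorthand.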
`cache_it`: Save a piece of knowledge, a decision, or an output to your tank. Saved items become part of your context automatically in future conversations.

```javascript
cache_it({
  title: "Q1 pricing decision",
  markdown: "Decided on $29/mo for pro tier based on competitor analysis...",
  project: "My Startup",
  layer: "PROJECTS"
})
```
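A caller can type-check a `cache_it` payload before it ever reaches the server. A sketch of one way to do that, assuming `title`, `markdown`, `project`, and `layer` are the accepted fields and that only `title` and `markdown` are required (the README shows only the `PROJECTS` layer; any other layer names would be assumptions):

```typescript
// Hypothetical client-side validation for a cache_it payload.
// Field names mirror the README example; required vs. optional is an assumption.
interface CacheItArgs {
  title: string;
  markdown: string;
  project?: string;
  layer?: string;
}

function validateCacheIt(args: CacheItArgs): string[] {
  const errors: string[] = [];
  if (!args.title.trim()) errors.push("title must be non-empty");
  if (!args.markdown.trim()) errors.push("markdown must be non-empty");
  return errors;
}

const errs = validateCacheIt({
  title: "Q1 pricing decision",
  markdown: "Decided on $29/mo for pro tier...",
  project: "My Startup",
  layer: "PROJECTS",
});
console.log(errs.length === 0 ? "ok" : errs.join("; "));
```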
| Resource URI | Description |
|---|---|
| `cachetank://context` | Your full personal context. Auto-loaded at conversation start. |
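Unlike tools, resources are fetched with a `resources/read` JSON-RPC request addressed by URI. A sketch of that envelope for the context resource (the request `id` is illustrative; clients that auto-load the resource send this for you):

```typescript
// Build the JSON-RPC 2.0 envelope for reading an MCP resource by URI.
// Method name and params shape follow the MCP spec.
function buildResourceRead(id: number, uri: string) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "resources/read" as const,
    params: { uri },
  };
}

console.log(JSON.stringify(buildResourceRead(2, "cachetank://context")));
```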
Every knowledge worker using AI tools faces the same problem: context loss. You explain your role, your project, your constraints, your preferences — and then the conversation ends. Next conversation, you start over.
This is not just annoying. It is expensive: research on task switching suggests knowledge workers need roughly 23 minutes to refocus after an interruption, and every new conversation or tool switch is an interruption.
CacheTank is the fix.