A locally-hosted AI research platform for Physics/STEM labs — MCP tool servers for spectroscopy, XRD, SEM, literature search, and OriginLab automation, built on Open WebUI with AMD ROCm.
# Project FORTHought

A locally-hosted AI research configuration for Physics and STEM laboratories.
This repository documents my working configuration for a locally-hosted AI research platform, built around Open WebUI, that I use daily in my Physics lab. It includes the custom tools, functions, optimizations, and MCP servers I've developed or assembled to support scientific workflows — literature review, spectroscopy, electron microscopy, X-ray diffraction, and data analysis.
This is not a product. It is a personal research setup that I maintain and iterate on. I upload my findings, configurations, and custom code here for documentation, reproducibility, and to share with my research group. The tools I build sit on top of existing open-source projects; my contribution is in how they are assembled, configured, and extended for real lab use.
## Hardware
| Server | CPU | GPU | Role |
|---|---|---|---|
| Compute Server (new) | AMD Ryzen 9 9950X | 2× AMD Radeon AI Pro R9700 (32 GB VRAM each) | OriginLab, Lemonade reranker, embeddings, VLM, LM Studio, Docling |
| Docker Server (existing) | Intel Xeon E5-2680 v4 | AMD Radeon RX 7900 XT (20 GB VRAM) | Open WebUI, MCP servers, Jupyter, Qdrant, MetaMCP, all Docker services |
The two machines communicate over Tailscale. The Docker server runs the full containerized stack, while the compute server handles GPU-intensive inference tasks (local LLMs, reranking, embeddings, document parsing).
**Configuration note:** All service URLs default to `localhost`. To run your own instance, copy `config/.env.example` to `.env` and fill in your values. If you use Cloudflare Tunnels or a reverse proxy, update the relevant environment variables.
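As a sketch, the `.env` might look like the fragment below. The variable names here are illustrative placeholders, not necessarily the ones in `config/.env.example`:

```env
# Copied from config/.env.example — all hosts default to localhost.
# Variable names below are examples only; check the real .env.example.
OPEN_WEBUI_URL=http://localhost:3000
QDRANT_URL=http://localhost:6333
LM_STUDIO_URL=http://localhost:1234
```

With Cloudflare Tunnels or a reverse proxy, these URLs would point at the tunnel or proxy hostname instead of `localhost`.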
All services run on-premises. No data leaves the local network unless I explicitly use a cloud LLM.
I tuned Open WebUI's RAG pipeline for scientific documents. Out-of-the-box defaults struggle with multi-column papers, equations, and dense tables. My current configuration:
| Stage | Configuration | Notes |
|---|---|---|
| Parsing | Docling (ROCm GPU) | Native PyTorch ROCm with Granite-Docling instances; VLM image description via Qwen3-VL |
| Embeddings | Qwen 0.6B embed via LM Studio | Served through a custom parallel proxy that splits OWUI's single-batch requests into concurrent sub-batches |
| Vector store | Qdrant, hybrid search enabled | 800-token chunks, 100 overlap |
| Reranking | BGE reranker via Lemonade | Runs on the compute server (see Hardware table) |
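The parallel embedding proxy in the table above splits Open WebUI's single-batch embedding requests into concurrent sub-batches. A minimal sketch of that idea, assuming an async `embed_fn` that wraps the LM Studio `/v1/embeddings` endpoint (the sub-batch size and function names are illustrative, not the actual implementation):

```python
import asyncio

SUB_BATCH = 8  # hypothetical sub-batch size; the real proxy may differ


def split_batch(texts, size=SUB_BATCH):
    """Split one large OWUI batch into fixed-size sub-batches."""
    return [texts[i:i + size] for i in range(0, len(texts), size)]


async def embed_parallel(embed_fn, texts, size=SUB_BATCH):
    """Run embed_fn concurrently over sub-batches, then re-join in order.

    embed_fn is an async callable taking a list of strings and returning
    a list of vectors (e.g. a thin wrapper around LM Studio's
    /v1/embeddings endpoint).
    """
    chunks = split_batch(texts, size)
    # asyncio.gather preserves the order of its awaitables,
    # so the flattened result matches the original text order.
    results = await asyncio.gather(*(embed_fn(c) for c in chunks))
    return [vec for sub in results for vec in sub]
```

The key property is that `asyncio.gather` returns results in submission order, so the concatenated vectors line up one-to-one with the input texts even though the sub-batches complete out of order.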
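The 800-token / 100-overlap chunking in the vector-store row is a standard sliding window. A minimal sketch over a pre-tokenized document (this is the generic technique, not Open WebUI's internal code):

```python
def chunk_tokens(tokens, size=800, overlap=100):
    """Sliding-window chunking: fixed-size windows that overlap by
    `overlap` tokens, mirroring the 800/100 Qdrant configuration."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already reached the end of the document
    return chunks
```

The overlap means the last `overlap` tokens of each chunk are repeated at the start of the next, so a sentence that straddles a chunk boundary still appears whole in at least one chunk.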