MCP server for web scraping — static HTML (Colly) and JS-rendered pages (chromedp)
A lightweight MCP server for web scraping — handles both static HTML and JavaScript-rendered pages through a single, consistent interface.
Most AI assistants can browse the web, but they often struggle with pages that require JavaScript to render content. When I tried to fetch data from sites like AWS blogs or dashboards, I kept getting back empty results — because the actual content only appears after JavaScript runs.
I already had Python scripts using requests + BeautifulSoup for static pages, but JS-rendered pages meant spinning up Playwright separately, writing boilerplate every time, and context-switching between tools.
So I built this MCP server to handle it all in one place. You tell it what URL and CSS selector you want — it figures out whether to use a lightweight HTTP fetch or headless Chrome, and gives you back the data.
Four tools, each for a different use case:
| Tool | Engine | When to use |
|---|---|---|
| scrape_static | Colly (HTTP) | Fast static HTML pages |
| scrape_js | chromedp (headless Chrome) | JS-rendered SPAs, dashboards |
| scrape_multiple | Colly (parallel) | Same selector across many URLs |
| scrape_crawl | Colly (recursive) | Follow links to a given depth |
All tools use the same interface: give a URL and a CSS selector, get back an array of matched values.
Install (headless Chrome is required for scrape_js):

go install github.com/niceysam/scraper-mcp-server@latest
Or download a pre-built binary from Releases.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
"mcpServers": {
"scraper": {
"command": "/Users/YOU/go/bin/scraper-mcp-server"
}
}
}
For other MCP clients, the config shape is the same; point command at the installed binary path:
{
"mcpServers": {
"scraper": {
"command": "/path/to/scraper-mcp-server"
}
}
}
scrape_static

Fetches raw HTML via HTTP and extracts values using a CSS selector. No JavaScript execution — fast and lightweight.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| url | string | required | Target URL |
| selector | string | required | CSS selector |
| attribute | string | "text" | "text" for inner text, or any attribute name ("href", "src", ...) |
Example
{
"url": "https://news.ycombinator.com",
"selector": ".titleline > a",
"attribute": "text"
}
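To collect the link targets instead of the headline text, the same call would pass an attribute name — a hypothetical variant of the example above:

```json
{
  "url": "https://news.ycombinator.com",
  "selector": ".titleline > a",
  "attribute": "href"
}
```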
scrape_js

Launches headless Chrome, waits for JavaScript to execute, then extracts values. Use this for any page that loads content dynamically.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| url | string | required | Target URL |
| selector | string | required | CSS selector |
| attribute | string | "text" | "text" or any attribute name |
| wait_for | string | — | CSS selector to wait for before extracting |
| timeout_seconds | number | 30 | Total timeout |
Example
{
"url": "https://aws.amazon.com/ko/blogs/tech/",
"selector": "article",
"timeout_seconds": 25
}
scrape_multiple

Scrapes multiple URLs concurrently (5 parallel workers) with the same selector. Returns a map of URL → matched values.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| urls | string[] | required | List of URLs |
| selector | string | required | CSS selector |
| attribute | string | "text" | "text" or any attribute name |
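The 5-worker fan-out can be sketched with goroutines and a channel. This is an assumption about the structure, not the server's code — fetch stands in for the real Colly request plus selector extraction:

```go
package main

import (
	"fmt"
	"sync"
)

// fetch stands in for the real HTTP request + selector extraction.
func fetch(url string) []string {
	return []string{"value from " + url}
}

// scrapeMultiple runs at most `workers` fetches concurrently and
// returns a map of URL -> matched values, like the tool's response.
func scrapeMultiple(urls []string, workers int) map[string][]string {
	jobs := make(chan string)
	var mu sync.Mutex
	results := make(map[string][]string, len(urls))

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for u := range jobs {
				vals := fetch(u)
				mu.Lock()
				results[u] = vals
				mu.Unlock()
			}
		}()
	}
	for _, u := range urls {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	out := scrapeMultiple([]string{"https://a.example", "https://b.example"}, 5)
	fmt.Println(len(out)) // 2
}
```

Capping the worker count keeps the server from hammering a host with unbounded concurrent requests while still overlapping the network waits.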
scrape_crawl

Starts at a URL and recursively follows links to a specified depth, collecting matched values from every page visited.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| url | string | required | Starting URL |