Server data from the Official MCP Registry
Enable your AI agents to scrape and parse web content dynamically, including geo-restricted sites
Valid MCP server (9 strong, 1 medium validity signals). 3 known CVEs in dependencies (0 critical, 3 high severity). Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
12 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Set these up before or after installing:
Environment variable: SCRAPER_API_USERNAME
Environment variable: SCRAPER_API_PASSWORD
Add this to your MCP configuration file:
{
"mcpServers": {
"mcp-server": {
"args": [
"-y",
"@decodo/mcp-server"
],
"command": "npx"
}
}
}
From the project's GitHub README.
This repository provides a Model Context Protocol (MCP) server that connects LLMs and applications to Decodo's platform. The server facilitates integration between MCP-compatible clients and Decodo's services, streamlining access to our tools and capabilities.
Go to decodo.com and start a Web Scraping API plan (free plan available).
Once your plan has started, obtain a Web Scraping API basic authentication token from the dashboard.
{
"Decodo": {
"url": "https://mcp.decodo.com/mcp",
"headers": {
"Authorization": "Basic <basic_auth_token>"
}
}
}
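For reference, HTTP Basic authentication tokens are conventionally the base64 encoding of `username:password` (RFC 7617). The Decodo dashboard supplies the token directly, but assuming it follows the standard convention, it could be derived from placeholder credentials like this:

```javascript
// Illustrative sketch: derive an HTTP Basic auth token from credentials.
// Basic auth tokens are base64("username:password") per RFC 7617.
function basicAuthToken(username, password) {
  return Buffer.from(`${username}:${password}`).toString("base64");
}

// Placeholder credentials, not real Decodo values:
console.log(basicAuthToken("user", "pass")); // dXNlcjpwYXNz
```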
{
"mcpServers": {
"Decodo MCP Server": {
"command": "npx",
"args": ["@decodo/mcp"],
"env": {
"SCRAPER_API_TOKEN": "<web_scraping_api_base64_token>",
"TOOLSETS": "web,ai"
}
}
}
}
git clone https://github.com/Decodo/decodo-mcp-server
cd decodo-mcp-server
npm install
npm run build
cd build/
pwd
Append index.js to that directory path; the full location of your build file should look something like this:
/Users/your.user/projects/decodo-mcp/build/index.js
{
"mcpServers": {
"decodo-mcp": {
"command": "node",
"args": ["/Users/your.user/projects/decodo-mcp/build/index.js"],
"env": {
"SCRAPER_API_TOKEN": "<web_scraping_api_base64_token>"
}
}
}
}
Tools are organized into toolsets. You can selectively enable specific toolsets by passing a
comma-separated list via the toolsets query parameter:
"Decodo MCP Server": {
"url": "https://mcp.decodo.com/mcp?toolsets=web,ai",
"headers": {
"Authorization": "Basic <your_auth_token>"
}
}
When no toolsets are specified, all tools are registered.
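As a quick sketch, the hosted endpoint URL with a restricted toolset is just the base URL plus a comma-separated `toolsets` query parameter:

```javascript
// Illustrative: build the hosted MCP endpoint URL with selected toolsets.
const base = "https://mcp.decodo.com/mcp";
const toolsets = ["web", "ai"]; // any subset of the documented toolsets
const endpoint = `${base}?toolsets=${toolsets.join(",")}`;

console.log(endpoint); // https://mcp.decodo.com/mcp?toolsets=web,ai
```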
| Toolset | Tools |
|---|---|
| web | scrape_as_markdown, screenshot |
| search | google_search, google_ads, google_lens, google_travel_hotels, bing_search |
| ecommerce | amazon_search, amazon_product, amazon_pricing, amazon_sellers, amazon_bestsellers, walmart_search, walmart_product, target_search, target_product, tiktok_shop_search, tiktok_shop_product, tiktok_shop_url |
| social_media | reddit_post, reddit_subreddit, reddit_user, tiktok_post, youtube_metadata, youtube_channel, youtube_subtitles, youtube_search |
| ai | chatgpt, perplexity, google_ai_mode |
The server exposes the following tools:
| Tool | Description | Example prompt |
|---|---|---|
| scrape_as_markdown | Scrapes any target URL (given via the prompt) and returns the results as Markdown. | Scrape peacock.com from a US IP address and tell me the pricing. |
| screenshot | Captures a screenshot of any webpage and returns it as a PNG image. | Take a screenshot of github.com from a US IP address. |
| google_search | Scrapes Google Search for a given query and returns parsed results. | Scrape Google Search for shoes and tell me the top position. |
| google_ads | Scrapes Google Ads search results. | Scrape Google Ads for laptop and show me the top ads. |
| google_lens | Scrapes Google Lens image search results. | Search Google Lens for this image: https://example.com/image.jpg |
| google_ai_mode | Scrapes Google AI Mode (Search with AI) results. | Ask Google AI Mode: What are the top three dog breeds? |
| google_travel_hotels | Scrapes Google Travel Hotels search results. | Search Google Travel Hotels for hotels in Paris. |
| amazon_search | Scrapes Amazon Search for a given query and returns parsed results. | Scrape Amazon Search for wireless keyboard. |
| amazon_product | Scrapes an Amazon product page. | Scrape Amazon product B09H74FXNW and show me the details. |
| amazon_pricing | Scrapes Amazon product pricing information. | Get pricing for Amazon product B09H74FXNW. |
| amazon_sellers | Scrapes Amazon seller information. | Get information about Amazon seller A1R0Z7FJGTKESH. |
| amazon_bestsellers | Scrapes the Amazon Bestsellers list. | Show me Amazon bestsellers in electronics. |
| walmart_search | Scrapes Walmart Search for a given query and returns parsed results. | Scrape Walmart Search for camping tent. |
| walmart_product | Scrapes a Walmart product page. | Scrape Walmart product 15296401808. |
| target_search | Scrapes Target Search for a given query and returns parsed results. | Scrape Target Search for kitchen appliances. |
| target_product | Scrapes a Target product page. | Scrape Target product 92186007. |
| tiktok_post | Scrapes a TikTok post URL for structured data (e.g. engagement, caption, hashtags). | Scrape this TikTok post: https://www.tiktok.com/@nba/video/7393013274725403950 |
| tiktok_shop_search | Scrapes TikTok Shop Search for a given query and returns parsed results. | Scrape TikTok Shop Search for phone cases. |
| tiktok_shop_product | Scrapes a TikTok Shop product page. | Scrape TikTok Shop product 1731541214379741272. |
| tiktok_shop_url | Scrapes a TikTok Shop page by URL. | Scrape this TikTok Shop URL: https://www.tiktok.com/shop/s?q=HEADPHONES |
| youtube_metadata | Scrapes YouTube video metadata. | Get metadata for YouTube video dFu9aKJoqGg. |
| youtube_channel | Scrapes YouTube channel videos. | Scrape YouTube channel @decodo_official. |
| youtube_subtitles | Scrapes YouTube video subtitles. | Get subtitles for YouTube video L8zSWbQN-v8. |
| youtube_search | Searches YouTube videos. | Search YouTube for "How to care for chinchillas". |
| reddit_post | Scrapes a specific Reddit post. | Scrape the following Reddit post: https://www.reddit.com/r/horseracing/comments/1nsrn3/ |
| reddit_subreddit | Scrapes Reddit subreddit results. | Scrape the top 5 posts on r/Python this week. |
| reddit_user | Scrapes a Reddit user profile and their posts/comments. | Scrape this Reddit user: https://www.reddit.com/user/IWasRightOnce/ |
| bing_search | Scrapes Bing Search results. | Search Bing for laptop reviews. |
| chatgpt | Queries ChatGPT for AI-powered responses and conversations. | Ask ChatGPT to explain quantum computing in simple terms. |
| perplexity | Queries Perplexity for AI-powered responses and conversations. | Ask Perplexity what the latest trends in web development are. |
The following parameters are inferred from user prompts:
| Parameter | Description |
|---|---|
| jsRender | Renders the target URL in a headless browser. |
| geo | Sets the country from which the request will originate. |
| locale | Sets the locale of the request. |
| tokenLimit | Truncates the response content to this token limit. Useful if the context window is small. |
| prompt | Prompt to send to AI tools (chatgpt, perplexity). |
| search | Activates ChatGPT's web search functionality (chatgpt only). |
| xhr | When true, includes XHR or fetch responses in the scrape result where supported (e.g. tiktok_post). |
| deviceType | Device type to emulate for the request (desktop, mobile, tablet). |
| domain | Domain to use for the request (e.g., amazon.com, amazon.co.uk, bing.com). |
| pageFrom | Starting page number for pagination. |
| deliveryZip | ZIP code for delivery location (Target, Walmart). |
| storeId | Store ID for local inventory (Target, Walmart). |
| country | Country for TikTok Shop requests. |
| limit | Maximum number of results to return (e.g., YouTube channel videos). |
| language_code | Language code for subtitles (e.g., en, es). |
Query your AI agent with the following prompt:
Scrape peacock.com from a German IP address and tell me the pricing.
The agent will report that peacock.com is geo-restricted. To bypass the geo-restriction, request a supported location instead:
Scrape peacock.com from a US IP address and tell me the pricing.
If your agent has a small context window, the content returned from scraping is automatically truncated to avoid context overflow. You can increase the number of tokens returned within your prompt:
Scrape hacker news, return 50k tokens.
If your agent has a large context window, tell it to return the full content:
Scrape hacker news, return full content.
All code is released under the MIT License.