Hosted Crawleo MCP (remote streamable HTTP endpoint).
Remote endpoints: streamable-http: https://api.crawleo.dev/mcp
Valid MCP server (2 strong, 4 medium validity signals). No known CVEs in dependencies. Imported from the Official MCP Registry.
3 files analyzed · No issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Available as Local & Remote
This plugin can run on your machine or connect to a hosted endpoint; you choose which during install.
From the project's GitHub README.
Real-time web search and crawling capabilities for AI assistants through Model Context Protocol (MCP).
Crawleo MCP enables AI assistants to access live web data through two powerful tools:
✅ Real-time web search from any country/language
✅ Multiple output formats - Enhanced HTML, Raw HTML, Markdown, Plain Text
✅ Device-specific results - Desktop, mobile, or tablet view
✅ Deep content extraction with JavaScript rendering
✅ Zero data retention - Complete privacy
✅ Auto-crawling option for search results
Install globally via npm:
npm install -g crawleo-mcp
Or use npx without installing:
npx crawleo-mcp
git clone https://github.com/Crawleo/Crawleo-MCP.git
cd Crawleo-MCP
npm install
npm run build
Build and run using Docker:
# Build the image
docker build -t crawleo-mcp .
# Run with your API key
docker run -e CRAWLEO_API_KEY=your_api_key crawleo-mcp
Docker configuration for MCP clients:
{
"mcpServers": {
"crawleo": {
"command": "docker",
"args": ["run", "-i", "--rm", "-e", "CRAWLEO_API_KEY=YOUR_API_KEY_HERE", "crawleo-mcp"]
}
}
}
Use the hosted version at https://api.crawleo.dev/mcp - see configuration examples below.
After installing via npm, configure your MCP client to use the local server:
Claude Desktop / Cursor / Windsurf (Local):
{
"mcpServers": {
"crawleo": {
"command": "npx",
"args": ["crawleo-mcp"],
"env": {
"CRAWLEO_API_KEY": "YOUR_API_KEY_HERE"
}
}
}
}
Or if installed globally:
{
"mcpServers": {
"crawleo": {
"command": "crawleo-mcp",
"env": {
"CRAWLEO_API_KEY": "YOUR_API_KEY_HERE"
}
}
}
}
From cloned repository:
{
"mcpServers": {
"crawleo": {
"command": "node",
"args": ["/path/to/Crawleo-MCP/dist/index.js"],
"env": {
"CRAWLEO_API_KEY": "YOUR_API_KEY_HERE"
}
}
}
}
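Whichever variant you choose, the entry can be merged into an existing config file without clobbering other configured servers. A minimal sketch (the helper name is ours, not part of the package):

```python
import json
from pathlib import Path

def add_crawleo_server(config_path, api_key):
    """Merge a crawleo entry into an MCP config file, preserving
    any other servers already configured (illustrative helper)."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["crawleo"] = {
        "command": "npx",
        "args": ["crawleo-mcp"],
        "env": {"CRAWLEO_API_KEY": api_key},
    }
    path.write_text(json.dumps(config, indent=2))
```

Using `setdefault` keeps any existing `mcpServers` entries intact, so repeated runs simply overwrite the `crawleo` entry.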
Location of config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
Configuration:
{
"mcpServers": {
"crawleo": {
"url": "https://api.crawleo.dev/mcp",
"transport": "http",
"headers": {
"Authorization": "Bearer YOUR_API_KEY_HERE"
}
}
}
}
Replace YOUR_API_KEY_HERE with your actual API key from crawleo.dev.
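Since the config file lives in a different place on each OS, a small platform check saves hunting for it. A sketch for the Claude Desktop locations listed above (function name is ours; Cursor and Windsurf follow the same pattern with their own directories):

```python
import os
import sys
from pathlib import Path

def claude_config_path() -> Path:
    """Return the expected claude_desktop_config.json location
    for the current platform (paths from the list above)."""
    home = Path.home()
    if sys.platform == "darwin":  # macOS
        return home / "Library/Application Support/Claude/claude_desktop_config.json"
    if sys.platform.startswith("win"):  # Windows
        return Path(os.environ["APPDATA"]) / "Claude/claude_desktop_config.json"
    # Linux and other Unix-likes
    return home / ".config/Claude/claude_desktop_config.json"
```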
Example usage:
"Search for the latest AI news and summarize the top 5 articles"
"Find Python web scraping tutorials and extract code examples"
Location of config file:
macOS: ~/.cursor/config.json or ~/Library/Application Support/Cursor/config.json
Windows: %APPDATA%\Cursor\config.json
Linux: ~/.config/Cursor/config.json
Configuration:
{
"mcpServers": {
"crawleo": {
"url": "https://api.crawleo.dev/mcp",
"transport": "http",
"headers": {
"Authorization": "Bearer YOUR_API_KEY_HERE"
}
}
}
}
Example usage in Cursor:
"Search for React best practices and add them to my code comments"
"Find the latest documentation for this API endpoint"
Location of config file:
macOS: ~/Library/Application Support/Windsurf/config.json
Windows: %APPDATA%\Windsurf\config.json
Linux: ~/.config/Windsurf/config.json
Configuration:
{
"mcpServers": {
"crawleo": {
"url": "https://api.crawleo.dev/mcp",
"transport": "http",
"headers": {
"Authorization": "Bearer YOUR_API_KEY_HERE"
}
}
}
}
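With any of the remote configurations above in place, you can smoke-test connectivity to the hosted endpoint by hand. The sketch below only builds the headers and an MCP `initialize` request body without sending anything; the protocol version string is an assumption, so check it against the MCP revision you target:

```python
import json

def initialize_request(api_key: str):
    """Build headers and a JSON-RPC 'initialize' body for the hosted
    endpoint (illustrative; protocolVersion is an assumed value)."""
    headers = {
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {api_key}",
    }
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
        },
    }
    return headers, json.dumps(body)

# To actually send it (requires network and a valid key), POST the body
# with these headers to https://api.crawleo.dev/mcp.
```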
For GitHub Copilot in VS Code or compatible editors, you need to configure MCP servers.
Configuration:
Create or edit your MCP config file and add:
{
"servers": {
"Crawleo": {
"url": "https://api.crawleo.dev/mcp",
"transport": "http",
"headers": {
"Authorization": "Bearer YOUR_API_KEY_HERE"
}
}
}
}
Example usage:
Ask Copilot: "Search for the latest Python best practices"
Ask Copilot: "Find documentation for this library"
OpenAI now supports MCP servers directly. Here's how to use Crawleo with OpenAI's API:
Python Example:
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4",
input=[
{
"role": "user",
"content": [
{
"type": "input_text",
"text": "search for latest news about openai models"
}
]
}
],
text={
"format": {
"type": "text"
},
"verbosity": "medium"
},
reasoning={
"effort": "medium"
},
tools=[
{
"type": "mcp",
"server_label": "Crawleo",
"server_url": "https://api.crawleo.dev/mcp",
"server_description": "Crawleo MCP Server - Real-Time Web Knowledge for AI",
"authorization": "YOUR_API_KEY_HERE",
"allowed_tools": [
"web.search",
"web.crawl"
],
"require_approval": "always"
}
],
store=True,
include=[
"reasoning.encrypted_content",
"web_search_call.action.sources"
]
)
print(response)
Key Parameters:
server_url - Crawleo MCP endpoint
authorization - Your Crawleo API key
allowed_tools - Enable web.search and/or web.crawl
require_approval - Set to "always", "never", or "conditional"
Node.js Example:
import OpenAI from 'openai';
const client = new OpenAI();
const response = await client.responses.create({
model: 'gpt-4',
input: [
{
role: 'user',
content: [
{
type: 'input_text',
text: 'search for latest AI developments'
}
]
}
],
tools: [
{
type: 'mcp',
server_label: 'Crawleo',
server_url: 'https://api.crawleo.dev/mcp',
server_description: 'Crawleo MCP Server - Real-Time Web Knowledge for AI',
authorization: 'YOUR_API_KEY_HERE',
allowed_tools: ['web.search', 'web.crawl'],
require_approval: 'always'
}
]
});
console.log(response);
Search the web in real-time with customizable parameters.
Parameters:
query (required) - Search term
max_pages - Number of result pages (default: 1)
setLang - Language code (e.g., "en", "ar")
cc - Country code (e.g., "US", "EG")
device - Device type: "desktop", "mobile", "tablet" (default: "desktop")
enhanced_html - Get clean HTML (default: true)
raw_html - Get raw HTML (default: false)
markdown - Get Markdown format (default: true)
page_text - Get plain text (default: false)
auto_crawling - Auto-crawl result URLs (default: false)
Example:
Ask your AI: "Search for 'Python web scraping' and return results in Markdown"
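When calling web.search programmatically rather than through an assistant prompt, the defaults above can be encoded in a small argument builder. This helper is illustrative (its name is ours); the parameter names and defaults come from the list above:

```python
def build_search_args(query, **overrides):
    """Assemble a web.search tool-call arguments dict using the
    documented defaults; keyword overrides replace any default."""
    args = {
        "query": query,
        "max_pages": 1,
        "device": "desktop",
        "enhanced_html": True,
        "raw_html": False,
        "markdown": True,
        "page_text": False,
        "auto_crawling": False,
    }
    # Optional filters like setLang / cc are only sent when provided.
    args.update(overrides)
    return args
```

For example, `build_search_args("python web scraping", cc="US", setLang="en")` yields the defaults plus a US/English filter.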
Extract content from specific URLs.
Parameters:
urls (required) - List of URLs to crawl
rawHtml - Return raw HTML (default: false)
markdown - Convert to Markdown (default: false)
screenshot - Capture screenshot (optional)
country - Geographic location
Example:
Ask your AI: "Crawl https://example.com and extract the main content in Markdown"
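The same pattern works for web.crawl; a sketch under the documented parameter names (helper name is ours, and note this tool uses camelCase `rawHtml` unlike web.search):

```python
def build_crawl_args(urls, **overrides):
    """Assemble a web.crawl arguments dict with the documented
    defaults; accepts one URL or a list of URLs."""
    if isinstance(urls, str):
        urls = [urls]
    args = {"urls": list(urls), "rawHtml": False, "markdown": False}
    # screenshot and country are optional and omitted unless given.
    args.update(overrides)
    return args
```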
Troubleshooting:
Verify that your API key (the value beginning with sk_ from crawleo.dev) is supplied in the authorization field.
Make sure you're using the correct tool names:
web.search (not search_web)
web.crawl (not crawl_web)
"Search for recent developments in quantum computing and summarize the key findings"
"Search for competitor pricing pages and extract their pricing tiers"
"Find the official documentation for FastAPI and extract the quickstart guide"
"Search for today's news about artificial intelligence from US sources"
"Search for customer reviews of iPhone 15 and analyze sentiment"
Crawleo MCP uses the same affordable pricing as our API.
Check your usage and manage your subscription at crawleo.dev
✅ Zero data retention - We never store your search queries or results
✅ Secure authentication - API keys transmitted over HTTPS
✅ No tracking - Your usage patterns remain private