Server data from the Official MCP Registry
MCP server providing Google Search, web scraping, and multi-source research tools for AI assistants
Valid MCP server (3 strong, 1 medium validity signals). 2 known CVEs in dependencies (0 critical, 1 high severity). Package registry verified. Imported from the Official MCP Registry.
4 files analyzed · 3 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: GOOGLE_CUSTOM_SEARCH_API_KEY
Environment variable: GOOGLE_CUSTOM_SEARCH_ID
Add this to your MCP configuration file:

```json
{
  "mcpServers": {
    "io-github-zoharbabin-google-researcher": {
      "env": {
        "GOOGLE_CUSTOM_SEARCH_ID": "your-google-custom-search-id-here",
        "GOOGLE_CUSTOM_SEARCH_API_KEY": "your-google-custom-search-api-key-here"
      },
      "args": [
        "-y",
        "google-researcher-mcp"
      ],
      "command": "npx"
    }
  }
}
```

From the project's GitHub README.
Professional research tools for AI assistants - Google Search, web scraping, academic papers, patents, and more
| Tool | Description |
|---|---|
| google_search | Web search with site, date, and language filters |
| google_news_search | News search with freshness controls |
| google_image_search | Image search with type, size, color filters |
| scrape_page | Extract content from web pages, PDFs, DOCX |
| search_and_scrape | Combined search + content extraction |
| academic_search | Papers from arXiv, PubMed, IEEE, Springer |
| patent_search | Patent search with assignee/inventor filters |
| YouTube (via scrape_page) | Automatic transcript extraction |
| sequential_search | Multi-step research tracking |
| Feature | Google Researcher | Basic Web Search | Manual Research |
|---|---|---|---|
| Web Search | Yes | Yes | No |
| News Search | Yes | No | No |
| Image Search | Yes | No | No |
| Academic Papers | Yes | No | Yes |
| Patent Search | Yes | No | Yes |
| YouTube Transcripts | Yes | No | No |
| PDF Extraction | Yes | No | No |
| Citation Generation | Yes | No | Yes |
| Response Caching | Yes (30min) | No | N/A |
| Rate Limiting | Yes | No | N/A |
This is a Model Context Protocol (MCP) server that enables AI assistants like Claude, GPT, and other LLMs to search the web, extract content from pages and documents, and conduct multi-source research.
Built for production use with caching, quality scoring, and enterprise security.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "google-researcher": {
      "command": "npx",
      "args": ["-y", "google-researcher-mcp"],
      "env": {
        "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
        "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
      }
    }
  }
}
```
Add to %APPDATA%\Claude\claude_desktop_config.json:
```json
{
  "mcpServers": {
    "google-researcher": {
      "command": "npx",
      "args": ["-y", "google-researcher-mcp"],
      "env": {
        "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
        "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
      }
    }
  }
}
```
Download the latest .mcpb bundle from GitHub Releases and double-click to install in Claude Desktop. You'll be prompted to enter your Google API credentials.
Add to ~/.claude.json:
```json
{
  "mcpServers": {
    "google-researcher": {
      "command": "npx",
      "args": ["-y", "google-researcher-mcp"],
      "env": {
        "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
        "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
      }
    }
  }
}
```
Add to Cursor's MCP settings (Settings > MCP):
```json
{
  "mcpServers": {
    "google-researcher": {
      "command": "npx",
      "args": ["-y", "google-researcher-mcp"],
      "env": {
        "GOOGLE_CUSTOM_SEARCH_API_KEY": "YOUR_API_KEY_HERE",
        "GOOGLE_CUSTOM_SEARCH_ID": "YOUR_SEARCH_ID_HERE"
      }
    }
  }
}
```
Use the same JSON configuration above in your MCP client's settings. Any MCP-compatible client that supports STDIO transport will work.
Need API keys? See the API Setup Guide for step-by-step instructions to get your Google API credentials.
```shell
git clone https://github.com/zoharbabin/google-researcher-mcp.git && cd google-researcher-mcp
npm install          # Installs deps + Chromium browser automatically
cp .env.example .env # Then add your Google API keys to .env
npm run dev          # Server is now running on STDIO transport
```
Note: This starts the server in STDIO mode, which is all you need for local AI assistant integrations. HTTP transport with OAuth is only required for web-based or multi-client setups — see Choosing a Transport.
Once configured, ask your AI assistant:
"Search for the latest news about AI regulations"
The assistant will use the google_news_search tool and return current articles. If you see search results, the server is working correctly.
If you are an AI assistant reading this, here's how to use this MCP server:
| Your Task | Use This Tool |
|---|---|
| Research a topic, answer a question | search_and_scrape — searches AND retrieves content in one call (recommended) |
| Complex multi-step investigation | sequential_search — tracks progress across 3+ searches, supports branching |
| Find academic papers | academic_search — searches arXiv, PubMed, IEEE with citations (APA, MLA, BibTeX) |
| Search patents | patent_search — Google Patents for prior art, FTO analysis |
| Find recent news | google_news_search — with freshness filtering and date sorting |
| Find images | google_image_search — with size/type/color filtering |
| Get a list of URLs only | google_search — when you need URLs but will process pages yourself |
| Read a specific URL | scrape_page — also extracts YouTube transcripts and parses PDF/DOCX/PPTX |
```jsonc
// Research a topic (RECOMMENDED for most queries)
{ "name": "search_and_scrape", "arguments": { "query": "climate change effects 2024", "num_results": 5 } }

// Multi-step research with tracking (for complex investigations)
{ "name": "sequential_search", "arguments": { "searchStep": "Starting research on quantum computing", "stepNumber": 1, "totalStepsEstimate": 4, "nextStepNeeded": true } }

// Find academic papers (peer-reviewed sources with citations)
{ "name": "academic_search", "arguments": { "query": "transformer neural networks", "num_results": 5 } }

// Search patents (prior art, FTO analysis)
{ "name": "patent_search", "arguments": { "query": "machine learning optimization", "search_type": "prior_art" } }

// Get recent news
{ "name": "google_news_search", "arguments": { "query": "AI regulations", "freshness": "week" } }

// Find images
{ "name": "google_image_search", "arguments": { "query": "solar panel installation", "type": "photo" } }

// Read a specific page
{ "name": "scrape_page", "arguments": { "url": "https://example.com/article" } }

// Get YouTube transcript
{ "name": "scrape_page", "arguments": { "url": "https://www.youtube.com/watch?v=VIDEO_ID" } }
```
search_and_scrape ranks sources by relevance, freshness, authority, and content quality. scrape_page auto-detects PDFs, DOCX, PPTX and extracts text.

| Tool | Best For | Use When... |
|---|---|---|
| search_and_scrape | Research (recommended) | You need to answer a question using web sources. Most efficient — searches AND retrieves content in one call. Sources are quality-scored. |
| sequential_search | Complex investigations | 3+ searches needed with different angles, or research you might abandon early. Tracks progress, supports branching. You reason; it tracks state. |
| academic_search | Peer-reviewed papers | Research requiring authoritative academic sources. Returns papers with citations (APA, MLA, BibTeX), abstracts, and PDF links. |
| patent_search | Patent research | Prior art search, freedom to operate (FTO) analysis, patent landscaping. Returns patents with numbers, assignees, inventors, and PDF links. |
| google_search | Finding URLs only | You only need a list of URLs (not their content), or want to process pages yourself with custom logic. |
| google_image_search | Finding images | You need visual content — photos, illustrations, graphics. For text research, use search_and_scrape. |
| google_news_search | Current news | You need recent news articles. Use scrape_page on results to read full articles. |
| scrape_page | Reading a specific URL | You have a URL and need its content. Auto-handles YouTube transcripts and documents (PDF, DOCX, PPTX). |
search_and_scrape (Recommended for research)

Searches Google and retrieves content from top results in one call. Returns quality-scored, deduplicated text with source attribution. Includes size metadata (estimatedTokens, sizeCategory, truncated) in the response.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 3 | Number of results (1-10) |
| include_sources | boolean | true | Append source URLs |
| deduplicate | boolean | true | Remove duplicate content |
| max_length_per_source | number | 50KB | Max content per source in chars |
| total_max_length | number | 300KB | Max total combined content in chars |
| filter_by_query | boolean | false | Filter to only paragraphs containing query keywords |
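The limits in the table above can be enforced client-side before making a call. The helper below is an illustrative sketch (not part of google-researcher-mcp); its clamp values and defaults come straight from the parameter table:

```typescript
// Illustrative only: clamp search_and_scrape arguments to the documented ranges.
interface SearchAndScrapeArgs {
  query: string;          // required, 1-500 chars
  num_results?: number;   // 1-10, default 3
  include_sources?: boolean;
  deduplicate?: boolean;
}

function buildSearchAndScrapeArgs(query: string, numResults = 3): SearchAndScrapeArgs {
  if (query.length < 1 || query.length > 500) {
    throw new Error("query must be 1-500 characters");
  }
  return {
    query,
    // Clamp to the documented 1-10 range rather than letting the call fail.
    num_results: Math.min(10, Math.max(1, Math.trunc(numResults))),
    include_sources: true,
    deduplicate: true,
  };
}

const args = buildSearchAndScrapeArgs("climate change effects 2024", 25);
console.log(args.num_results); // clamped to 10
```

The returned object is what you would pass as the `arguments` of the tool call.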
google_search

Returns ranked URLs from Google. Use when you only need links, not content.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 5 | Number of results (1-10) |
| time_range | string | - | day, week, month, year |
| site_search | string | - | Limit to domain |
| exact_terms | string | - | Required phrase |
| exclude_terms | string | - | Exclude words |
google_image_search

Searches Google Images with filtering options.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 5 | Number of results (1-10) |
| size | string | - | huge, large, medium, small |
| type | string | - | clipart, face, lineart, photo, animated |
| color_type | string | - | color, gray, mono, trans |
| file_type | string | - | jpg, gif, png, bmp, svg, webp |
google_news_search

Searches Google News with freshness and date sorting.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 5 | Number of results (1-10) |
| freshness | string | week | hour, day, week, month, year |
| sort_by | string | relevance | relevance, date |
| news_source | string | - | Filter to specific source |
scrape_page

Extracts text from any URL. Auto-detects: web pages (static/JS), YouTube (transcript), documents (PDF/DOCX/PPTX).

| Parameter | Type | Default | Description |
|---|---|---|---|
| url | string | required | URL to scrape (max 2048 chars) |
| max_length | number | 50KB | Maximum content length in chars. Content exceeding this is truncated at natural breakpoints. |
| mode | string | full | full returns content, preview returns metadata + structure only (useful to check size before fetching) |
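Preview mode lets a client check a page's size before committing to a full fetch. The helper below is a hypothetical sketch: the estimatedTokens field is borrowed from the size metadata documented for search_and_scrape and may not match scrape_page's actual preview response shape.

```typescript
// Sketch (assumed field names): decide whether a full fetch fits a token budget
// after a preview-mode scrape_page call.
interface PreviewMeta {
  estimatedTokens: number; // assumption: preview metadata reports a token estimate
}

function chooseMode(preview: PreviewMeta, tokenBudget: number): "full" | "preview-only" {
  // Only request mode: "full" when the page fits the caller's budget.
  return preview.estimatedTokens <= tokenBudget ? "full" : "preview-only";
}

// A ~12k-token page fits a 20k budget, so fetch it in full.
console.log(chooseMode({ estimatedTokens: 12000 }, 20000)); // "full"
```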
sequential_search

Tracks multi-step research state. Following the sequential_thinking pattern: you do the reasoning, the tool tracks state.

| Parameter | Type | Default | Description |
|---|---|---|---|
| searchStep | string | required | Description of current step (1-2000 chars) |
| stepNumber | number | required | Current step number (starts at 1) |
| totalStepsEstimate | number | 5 | Estimated total steps (1-50) |
| nextStepNeeded | boolean | required | true if more steps needed, false when done |
| source | object | - | Source found: { url, summary, qualityScore? } |
| knowledgeGap | string | - | Gap identified — what's still missing |
| isRevision | boolean | - | true if revising a previous step |
| revisesStep | number | - | Step number being revised |
| branchId | string | - | Identifier for branching research |
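To make the stepping parameters concrete, here is a sketch of the payloads a client might send across a short investigation, including one revision step. The field names come from the parameter table above; the research topic and step text are invented for illustration.

```typescript
// Illustrative sequential_search argument payloads for a 3-step investigation.
interface SequentialSearchArgs {
  searchStep: string;
  stepNumber: number;
  totalStepsEstimate: number;
  nextStepNeeded: boolean;
  knowledgeGap?: string;
  isRevision?: boolean;
  revisesStep?: number;
}

const steps: SequentialSearchArgs[] = [
  {
    searchStep: "Survey quantum error-correction approaches",
    stepNumber: 1,
    totalStepsEstimate: 3,
    nextStepNeeded: true,
  },
  {
    searchStep: "Drill into surface codes specifically",
    stepNumber: 2,
    totalStepsEstimate: 3,
    nextStepNeeded: true,
    knowledgeGap: "No recent experimental results found yet",
  },
  {
    // Revisions point back at the step they replace.
    searchStep: "Revisit step 2 with a narrower query",
    stepNumber: 3,
    totalStepsEstimate: 3,
    nextStepNeeded: false, // done after this step
    isRevision: true,
    revisesStep: 2,
  },
];

// Each object would be sent as the `arguments` of one sequential_search call.
console.log(steps.length); // 3
```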
academic_search

Searches academic papers via Google Custom Search API, filtered to academic sources (arXiv, PubMed, IEEE, Nature, Springer, etc.). Returns papers with pre-formatted citations.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 5 | Number of papers (1-10) |
| year_from | number | - | Filter by min publication year |
| year_to | number | - | Filter by max publication year |
| source | string | all | all, arxiv, pubmed, ieee, nature, springer |
| pdf_only | boolean | false | Only return results with PDF links |
| sort_by | string | relevance | relevance, date |
patent_search

Searches Google Patents for prior art, freedom to operate (FTO) analysis, and patent landscaping. Returns patents with numbers, assignees, inventors, and PDF links.

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | required | Search query (1-500 chars) |
| num_results | number | 5 | Number of results (1-10) |
| search_type | string | prior_art | prior_art, specific, landscape |
| patent_office | string | all | all, US, EP, WO, JP, CN, KR |
| assignee | string | - | Filter by assignee/company |
| inventor | string | - | Filter by inventor name |
| cpc_code | string | - | Filter by CPC classification code |
| year_from | number | - | Filter by min year |
| year_to | number | - | Filter by max year |
| Feature | Description |
|---|---|
| Web Scraping | Fast static HTML + automatic Playwright fallback for JavaScript-rendered pages |
| YouTube Transcripts | Robust extraction with retry logic and 10 classified error types |
| Document Parsing | Auto-detects and extracts text from PDF, DOCX, PPTX |
| Quality Scoring | Sources ranked by relevance (35%), freshness (20%), authority (25%), content quality (20%) |
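The quality-scoring weights in the row above compose as a simple weighted sum. The sketch below is illustrative only (the server computes the component scores itself); it just shows how the documented 35/20/25/20 split combines:

```typescript
// Illustrative only: combining the documented quality-score weights.
interface SourceSignals {
  relevance: number;      // 0-1, weight 35%
  freshness: number;      // 0-1, weight 20%
  authority: number;      // 0-1, weight 25%
  contentQuality: number; // 0-1, weight 20%
}

function qualityScore(s: SourceSignals): number {
  return 0.35 * s.relevance + 0.20 * s.freshness + 0.25 * s.authority + 0.20 * s.contentQuality;
}

// A fresh, relevant source from a strong domain scores at the top of the range.
const top = qualityScore({ relevance: 1, freshness: 1, authority: 1, contentQuality: 1 });
console.log(top.toFixed(2)); // "1.00"
```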
| Feature | Description |
|---|---|
| Tools | 8 tools: search_and_scrape, google_search, google_image_search, google_news_search, scrape_page, sequential_search, academic_search, patent_search |
| Resources | Expose server state: stats://tools (per-tool metrics), stats://cache, search://recent, config://server |
| Prompts | Pre-built templates: comprehensive-research, fact-check, summarize-url, news-briefing |
| Annotations | Content tagged with audience, priority, and timestamps |
| Feature | Description |
|---|---|
| Caching | Two-layer (memory + disk) with per-tool namespaces, reduces API costs |
| Dual Transport | STDIO for local clients, HTTP+SSE for web apps |
| Security | OAuth 2.1, SSRF protection, granular scopes |
| Resilience | Circuit breaker, timeouts, graceful degradation |
| Monitoring | Admin endpoints for cache stats, event store, health checks |
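The caching row above describes a fast in-memory layer backed by a persistent one, keyed per tool namespace. A minimal sketch of that shape (illustrative, not the server's actual implementation — a second Map stands in for the disk layer):

```typescript
// Illustrative two-layer cache with per-tool namespaces.
class TwoLayerCache {
  private memory = new Map<string, string>();
  private disk = new Map<string, string>(); // stand-in for the persistent layer

  private key(namespace: string, k: string): string {
    return `${namespace}:${k}`; // namespace isolates each tool's entries
  }

  get(namespace: string, k: string): string | undefined {
    const key = this.key(namespace, k);
    const hit = this.memory.get(key);
    if (hit !== undefined) return hit;            // fast path: memory
    const diskHit = this.disk.get(key);
    if (diskHit !== undefined) this.memory.set(key, diskHit); // warm memory on disk hit
    return diskHit;
  }

  set(namespace: string, k: string, v: string): void {
    const key = this.key(namespace, k);
    this.memory.set(key, v);
    this.disk.set(key, v); // write-through to the persistent layer
  }
}

const cache = new TwoLayerCache();
cache.set("googleSearch", "mcp protocol", "...cached results...");
console.log(cache.get("googleSearch", "mcp protocol") !== undefined); // true
```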
For detailed documentation: YouTube Transcripts · Architecture · Testing
```mermaid
graph TD
    A[MCP Client] -->|local process| B[STDIO Transport]
    A -->|network| C[HTTP+SSE Transport]
    C --> L[OAuth 2.1 + Rate Limiter]
    L --> D
    C -.->|session replay| K[Event Store]
    B --> D[McpServer<br>MCP SDK routing + dispatch]
    D --> F[google_search]
    D --> G[scrape_page]
    D --> I[search_and_scrape]
    D --> IMG[google_image_search]
    D --> NEWS[google_news_search]
    I -.->|delegates| F
    I -.->|delegates| G
    I --> Q[Quality Scoring]
    G --> N[SSRF Validator]
    N --> S1[CheerioCrawler<br>static HTML]
    S1 -.->|insufficient content| S2[Playwright<br>JS rendering]
    G --> YT[YouTube Transcript<br>Extractor]
    F & G & IMG & NEWS --> J[Persistent Cache<br>memory + disk]
    D -.-> R[MCP Resources]
    D -.-> P[MCP Prompts]
    style J fill:#f9f,stroke:#333,stroke-width:2px
    style K fill:#ccf,stroke:#333,stroke-width:2px
    style L fill:#f99,stroke:#333,stroke-width:2px
    style N fill:#ff9,stroke:#333,stroke-width:2px
    style Q fill:#9f9,stroke:#333,stroke-width:2px
```
For a detailed explanation, see the Architecture Guide.
Chromium is installed automatically during npm install via a postinstall hook.

Clone the Repository:

```shell
git clone https://github.com/zoharbabin/google-researcher-mcp.git
cd google-researcher-mcp
```

Install Dependencies (includes Chromium browser automatically):

```shell
npm install
```

Configure Environment Variables:

```shell
cp .env.example .env
```

Open .env and add your Google API keys. All other variables are optional — see the comments in .env.example for detailed explanations.
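For reference, a minimal .env needs only the two required keys (the values below are placeholders):

```
# .env — minimal configuration; every other variable is optional
GOOGLE_CUSTOM_SEARCH_API_KEY=your-api-key-here
GOOGLE_CUSTOM_SEARCH_ID=your-search-engine-id-here
```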
Development (auto-reload on file changes):

```shell
npm run dev
```

Production:

```shell
npm run build
npm start
```
```shell
# Build the image
docker build -t google-researcher-mcp .

# Run in STDIO mode (default, for MCP clients)
docker run -i --rm --env-file .env google-researcher-mcp

# Run with HTTP transport on port 3000
# (MCP_TEST_MODE= overrides the Dockerfile default of "stdio" to enable HTTP)
docker run -d --rm --env-file .env -e MCP_TEST_MODE= -p 3000:3000 google-researcher-mcp
```

Docker Compose (quick HTTP transport setup):

```shell
cp .env.example .env # Fill in your API keys
docker compose up --build
curl http://localhost:3000/health
```
Docker with Claude Code (~/.claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "google-researcher": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--env-file", "/path/to/.env", "google-researcher-mcp"]
    }
  }
}
```
Security note: Never bake secrets into the Docker image. Always pass them at runtime via --env-file or -e flags.
| | STDIO | HTTP+SSE |
|---|---|---|
| Best for | Local MCP clients (Claude Code, Cline, Roo Code) | Web apps, multi-client setups, remote access |
| Auth | None needed (process-level isolation) | OAuth 2.1 Bearer tokens required |
| Setup | Zero config — just provide API keys | Requires OAuth provider (Auth0, Okta, etc.) |
| Scaling | One server per client process | Single server, many concurrent clients |
Recommendation: Use STDIO for local AI assistant integrations. Use HTTP+SSE only when you need a shared service or web application integration.
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/server.js"]
});

const client = new Client({ name: "my-client", version: "1.0.0" });
await client.connect(transport);

// Search Google
const searchResult = await client.callTool({
  name: "google_search",
  arguments: { query: "Model Context Protocol" }
});
console.log(searchResult.content[0].text);

// Extract a YouTube transcript
const transcript = await client.callTool({
  name: "scrape_page",
  arguments: { url: "https://www.youtube.com/watch?v=dQw4w9WgXcQ" }
});
console.log(transcript.content[0].text);
```
Requires a valid OAuth 2.1 Bearer token from your configured authorization server.
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:3000/mcp"),
  {
    getAuthorization: async () => `Bearer YOUR_ACCESS_TOKEN`
  }
);

const client = new Client({ name: "my-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "search_and_scrape",
  arguments: { query: "Model Context Protocol", num_results: 3 }
});
console.log(result.content[0].text);
```
Administrative and monitoring endpoints (HTTP transport only):
| Method | Endpoint | Description | Auth |
|---|---|---|---|
| GET | /health | Server health check (status, version, uptime) | Public |
| GET | /version | Server version and runtime info | Public |
| GET | /mcp/cache-stats | Cache performance statistics | mcp:admin:cache:read |
| GET | /mcp/event-store-stats | Event store usage statistics | mcp:admin:event-store:read |
| POST | /mcp/cache-invalidate | Clear specific cache entries | mcp:admin:cache:invalidate |
| POST | /mcp/cache-persist | Force cache save to disk | mcp:admin:cache:persist |
| GET | /mcp/oauth-config | Current OAuth configuration | mcp:admin:config:read |
| GET | /mcp/oauth-scopes | OAuth scopes documentation | Public |
| GET | /mcp/oauth-token-info | Token details | Authenticated |
All HTTP endpoints under /mcp/ (except public documentation) are protected by OAuth 2.1. Bearer tokens are validated against your authorization server's JWKS endpoint (${OAUTH_ISSUER_URL}/.well-known/jwks.json). Configure OAUTH_ISSUER_URL and OAUTH_AUDIENCE in .env. See .env.example for details.

STDIO users: OAuth is not used for STDIO transport. You can skip all OAuth configuration.
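As a sketch, an HTTP-transport client attaches its token as a standard Authorization header when calling a protected endpoint (the endpoint path comes from the table above; YOUR_ACCESS_TOKEN is a placeholder for a token issued by your OAuth provider with the matching scope):

```typescript
// Sketch: authenticated call to a protected admin endpoint.
const token = "YOUR_ACCESS_TOKEN"; // placeholder; needs mcp:admin:cache:read scope
const headers = { Authorization: `Bearer ${token}` };

// Against a running server you would then do:
//   const res = await fetch("http://localhost:3000/mcp/cache-stats", { headers });
console.log(headers.Authorization); // "Bearer YOUR_ACCESS_TOKEN"
```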
Tool Execution:

- mcp:tool:google_search:execute
- mcp:tool:google_image_search:execute
- mcp:tool:google_news_search:execute
- mcp:tool:scrape_page:execute
- mcp:tool:search_and_scrape:execute

Administration:

- mcp:admin:cache:read
- mcp:admin:cache:invalidate
- mcp:admin:cache:persist
- mcp:admin:event-store:read
- mcp:admin:config:read

The server exposes state via the MCP Resources protocol. Use resources/list to discover available resources and resources/read to retrieve them.
| URI | Description |
|---|---|
| search://recent | Last 20 search queries with timestamps and result counts |
| config://server | Server configuration (version, start time, transport mode) |
| stats://cache | Cache statistics (hit rate, entry count, memory usage) |
| stats://events | Event store statistics (event count, storage size) |
Example (using MCP SDK):
```typescript
const resources = await client.listResources();
const recentSearches = await client.readResource({ uri: "search://recent" });
```
Pre-built research workflow templates are available via the MCP Prompts protocol. Use prompts/list to discover prompts and prompts/get to retrieve a prompt with arguments.
| Prompt | Arguments | Description |
|---|---|---|
| comprehensive-research | topic, depth (quick/standard/deep) | Multi-source research on a topic |
| fact-check | claim, sources (number) | Verify a claim against multiple sources |
| summarize-url | url, format (brief/detailed/bullets) | Summarize content from a single URL |
| news-briefing | topic, timeRange (day/week/month) | Get current news summary on a topic |
| Prompt | Arguments | Description |
|---|---|---|
| patent-portfolio-analysis | company, includeSubsidiaries | Analyze a company's patent holdings |
| competitive-analysis | entities (comma-separated), aspects | Compare companies/products |
| literature-review | topic, yearFrom, sources | Academic literature synthesis |
| technical-deep-dive | technology, focusArea | In-depth technical investigation |
Focus areas for technical-deep-dive: architecture, implementation, comparison, best-practices, troubleshooting
Example (using MCP SDK):
```typescript
const prompts = await client.listPrompts();

// Basic research
const research = await client.getPrompt({
  name: "comprehensive-research",
  arguments: { topic: "quantum computing", depth: "standard" }
});

// Advanced: Patent analysis
const patents = await client.getPrompt({
  name: "patent-portfolio-analysis",
  arguments: { company: "Kaltura", includeSubsidiaries: true }
});

// Advanced: Competitive analysis
const comparison = await client.getPrompt({
  name: "competitive-analysis",
  arguments: { entities: "React, Vue, Angular", aspects: "performance, learning curve, ecosystem" }
});
```
| Script | Description |
|---|---|
| npm test | Run all unit/component tests (Jest) |
| npm run test:e2e | Full end-to-end suite (STDIO + HTTP + YouTube) |
| npm run test:coverage | Generate code coverage report |
| npm run test:e2e:stdio | STDIO transport E2E only |
| npm run test:e2e:sse | HTTP transport E2E only |
| npm run test:e2e:youtube | YouTube transcript E2E only |
All NPM scripts:
| Script | Description |
|---|---|
| npm start | Run the built server (production) |
| npm run dev | Start with live-reload (development) |
| npm run build | Compile TypeScript to dist/ |
| npm run inspect | Open MCP Inspector for interactive debugging |
For testing philosophy and structure, see the Testing Guide.
The MCP Inspector is a visual debugging tool for MCP servers. Use it to interactively test tools, browse resources, and verify prompts.
Run the Inspector:

```shell
npm run inspect
```
This opens a browser interface at http://localhost:5173 connected to the server via STDIO.
What to Expect:
| Primitive | Count | Items |
|---|---|---|
| Tools | 8 | google_search, google_image_search, google_news_search, scrape_page, search_and_scrape, sequential_search, academic_search, patent_search |
| Resources | 6 | search://recent, config://server, stats://cache, stats://events, search://session/current, stats://resources |
| Prompts | 8 | comprehensive-research, fact-check, summarize-url, news-briefing, patent-portfolio-analysis, competitive-analysis, literature-review, technical-deep-dive |
Troubleshooting Inspector Issues:
- Run npm run build first — Inspector requires compiled JavaScript.
- Verify GOOGLE_CUSTOM_SEARCH_API_KEY and GOOGLE_CUSTOM_SEARCH_ID are set in your .env file.
- Make sure dependencies are installed (npm install). Chromium is installed automatically via the postinstall hook.

Other common issues:

- Missing API keys: ensure GOOGLE_CUSTOM_SEARCH_API_KEY and GOOGLE_CUSTOM_SEARCH_ID are set in .env. The server exits with a clear error if either is missing.
- Stale scrape results: delete storage/persistent_cache/namespaces/scrapePage/ and restart to force fresh scrapes.
- Chromium problems: the browser is installed during npm install. If it failed, re-run npx playwright install chromium. On Linux, also run npx playwright install-deps chromium for system dependencies. In Docker, these are pre-installed. If the browser is missing at runtime, scrape_page returns a clear error message instead of crashing.
- Port 3000 already in use: free the port (lsof -ti:3000 | xargs kill) or set PORT=3001 npm start.
- YouTube transcript failures return classified error codes (e.g. TRANSCRIPT_DISABLED, VIDEO_UNAVAILABLE). See the YouTube Transcript Documentation for all error codes.
- Cache issues: use /mcp/cache-stats to inspect cache health, or /mcp/cache-persist to force a save. See the Management API.
- OAuth issues: check OAUTH_ISSUER_URL and OAUTH_AUDIENCE in .env. Use /mcp/oauth-config to inspect the current configuration.
- The Docker health check hits /health on port 3000, which requires HTTP transport. In STDIO mode (MCP_TEST_MODE=stdio), the health check will fail — this is expected.

Feature requests and improvements are tracked as GitHub Issues. Contributions welcome.
We welcome contributions of all kinds! Please see the Contribution Guidelines for details.
If you find this project useful, please consider giving it a star — it helps others discover it.
This project is licensed under the MIT License. See the LICENSE file for details.