Server data from the Official MCP Registry
Stateless MCP server that proxies research queries to Gemini CLI, reducing agent context/model usage
Valid MCP server (2 strong, 1 medium validity signals). 1 code issue detected. 2 known CVEs in dependencies (0 critical, 1 high severity) ⚠️ Package registry links to a different repository than scanned source. Imported from the Official MCP Registry.
4 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: GEMINI_API_KEY
Environment variable: PROJECT_ROOT
Environment variable: RESPONSE_CHUNK_SIZE_KB
Environment variable: CACHE_TTL_MS
Environment variable: DEBUG
Environment variable: GOOGLE_APPLICATION_CREDENTIALS
Environment variable: GOOGLE_CLOUD_PROJECT
Environment variable: VERTEX_AI_PROJECT
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-capybearista-gemini-researcher": {
"env": {
"DEBUG": "your-debug-here",
"CACHE_TTL_MS": "your-cache-ttl-ms-here",
"PROJECT_ROOT": "your-project-root-here",
"GEMINI_API_KEY": "your-gemini-api-key-here",
"VERTEX_AI_PROJECT": "your-vertex-ai-project-here",
"GOOGLE_CLOUD_PROJECT": "your-google-cloud-project-here",
"RESPONSE_CHUNK_SIZE_KB": "your-response-chunk-size-kb-here",
"GOOGLE_APPLICATION_CREDENTIALS": "your-google-application-credentials-here"
},
"args": [
"-y",
"gemini-researcher"
],
"command": "npx"
}
}
}

From the project's GitHub README:
A lightweight, stateless MCP (Model Context Protocol) server that lets developer agents (Claude Code, GitHub Copilot) hand off deep repository analysis to the Gemini CLI. The server is read-only, returns structured JSON (as text content), and is designed to reduce the calling agent's context and model usage.
Status: v1 complete. Core features are stable, but still early days. Feedback welcome!
If this saved you tokens, ⭐ please consider giving it a star! :)
Why use this?
Instead of copying entire files into your agent's context (burning tokens and cluttering the conversation), this server lets Gemini CLI read files directly from your project. Your agent sends a research query, Gemini reads and synthesizes using its large context window, and returns structured results. You save tokens, your agent stays focused, and complex codebase analysis becomes practical.
Verified clients: Claude Code, Cursor, VS Code (GitHub Copilot)
[!NOTE] It should work with other clients too, but I haven't personally tested them yet. Please open an issue if you try it elsewhere!
Gemini Researcher accepts queries from your AI agent and uses Gemini CLI to analyze your local code files. Results are returned as formatted JSON for your agent to use.
The server runs Gemini CLI with safety restrictions enabled. See docs/runtime-contract.md for full technical details.
Default invocation pattern:
gemini [ -m <model> ] --output-format json --approval-mode default [--admin-policy <path>] -p "<prompt>"
Key safety points:
- `--approval-mode default` (not yolo mode) for controlled execution
- Write and shell tools are blocked: `write_file`, `replace`, `run_shell_command`
- Enforcement can be disabled with `GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0` (not recommended)

Run `health_check` with `includeDiagnostics: true` to see auth status and server health.
| authStatus | What it means | Impact |
|---|---|---|
| `configured` | Gemini CLI is authenticated | Server ready to use |
| `unauthenticated` | No valid authentication found | Server marked as degraded |
| `unknown` | Could not verify auth status | Server marked as degraded |
health_check.status values:
- `ok`: Gemini CLI is available, auth is working, and the safety policy is enforced
- `degraded`: Setup incomplete, auth unclear, or safety policy disabled

Prerequisites:

- Install Gemini CLI: `npm install -g @google/gemini-cli`
- Authenticate (run `gemini` → Login with Google) or set `GEMINI_API_KEY`

Quick checks:
node --version
gemini --version
Run the setup wizard to verify Gemini CLI is installed and authenticated:
npx gemini-researcher init
This standard config works in most tools:
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
[!NOTE] On native Windows, some MCP hosts use shell-less process spawning and may not resolve npm command shims (`npx`, `gemini`) reliably. If startup fails with launch errors (`spawn ... ENOENT` / `GEMINI_CLI_LAUNCH_FAILED`) despite working in PowerShell, prefer Docker or WSL for immediate reliability. See the full remediation tree in `docs/platforms/windows.md`.
Add to your VS Code MCP settings (create .vscode/mcp.json if needed):
{
"servers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
Option 1: Command line (recommended)
Local (user-wide) scope
# Add the MCP server via CLI
claude mcp add --transport stdio gemini-researcher -- npx gemini-researcher
# Verify it was added
claude mcp list
Project scope
Navigate to your project directory, then run:
# Add the MCP server via CLI
claude mcp add --scope project --transport stdio gemini-researcher -- npx gemini-researcher
# Verify it was added
claude mcp list
Option 2: Manual configuration
Add to .mcp.json in your project root (project scope):
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
Or add to ~/.claude/settings.json for local scope.
After adding the server, restart Claude Code and use /mcp to verify the connection.
Go to Cursor Settings -> Tools & MCP -> Add a Custom MCP Server. Add the following configuration:
{
"mcpServers": {
"gemini-researcher": {
"type": "stdio",
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
[!NOTE] The server automatically uses the directory where the IDE opened your workspace (or where your terminal is running) as the project root. To analyze a different directory, set `PROJECT_ROOT`:
Example
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
],
"env": {
"PROJECT_ROOT": "/path/to/your/project"
}
}
}
}
Ask your agent: "Use gemini-researcher to analyze the project."
All tools return structured JSON (as MCP text content). Large responses are chunked (~10KB per chunk) and cached for 1 hour.
| Tool | Purpose | When to use |
|---|---|---|
| `quick_query` | Fast analysis with the flash model | Quick questions about specific files or small code sections |
| `deep_research` | In-depth analysis with the pro model | Complex multi-file analysis, architecture reviews, security audits |
| `analyze_directory` | Map directory structure | Understanding unfamiliar codebases, generating project overviews |
| `validate_paths` | Pre-check file paths | Verify files exist before running expensive queries |
| `health_check` | Diagnostics | Troubleshooting server/Gemini CLI issues |
| `fetch_chunk` | Get chunked responses | Retrieve remaining parts of large responses |
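The chunk-and-cache behaviour mentioned above (~10KB chunks, returned via `fetch_chunk`) can be pictured with a small sketch. The chunk size, cache-key format, and field names here are assumptions for illustration, not the server's actual implementation:

```typescript
// Sketch: split a large JSON response into ~10KB chunks, keyed so a client
// can retrieve the rest with fetch_chunk. Illustrative only; the real server's
// chunk size is configurable via RESPONSE_CHUNK_SIZE_KB and may differ.
const CHUNK_SIZE = 10 * 1024; // ~10KB per chunk

interface ChunkedResponse {
  cacheKey: string;     // handle a client passes back to fetch_chunk
  totalChunks: number;  // how many chunks exist in the cache
  chunk: string;        // first chunk, returned inline
}

function chunkResponse(payload: string, cacheKey: string): ChunkedResponse {
  const chunks: string[] = [];
  for (let i = 0; i < payload.length; i += CHUNK_SIZE) {
    chunks.push(payload.slice(i, i + CHUNK_SIZE));
  }
  return { cacheKey, totalChunks: chunks.length, chunk: chunks[0] ?? "" };
}
```

A 25KB payload would come back as three chunks: two full 10KB chunks plus a 5KB remainder.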
Query tool fallback chains are family-aware:
- `quick_query`: flash -> flash_lite -> auto
- `deep_research`: pro -> flash -> flash_lite -> auto
- `analyze_directory`: flash -> flash_lite -> auto

When using API-key auth, fallback also handles model-unavailable/unsupported errors (not only quota/capacity errors).
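The family-aware fallback described above can be sketched roughly as follows. The chains are taken from this README; the retry logic, error classification, and function names are illustrative assumptions, not the server's code:

```typescript
// Sketch: try each model in a tool's fallback chain, moving to the next model
// only on retryable errors (quota/capacity, and under API-key auth also
// model-unavailable/unsupported). Illustrative, not the server's implementation.
const FALLBACK_CHAINS: Record<string, string[]> = {
  quick_query: ["flash", "flash_lite", "auto"],
  deep_research: ["pro", "flash", "flash_lite", "auto"],
  analyze_directory: ["flash", "flash_lite", "auto"],
};

// Failure kinds treated as retryable (hypothetical error-message convention).
const RETRYABLE = new Set(["quota", "capacity", "model_unavailable", "unsupported"]);

function runWithFallback(
  tool: keyof typeof FALLBACK_CHAINS,
  invoke: (model: string) => string, // throws an Error whose message names the failure kind
): string {
  let lastError: unknown;
  for (const model of FALLBACK_CHAINS[tool]) {
    try {
      return invoke(model);
    } catch (err) {
      if (!(err instanceof Error) || !RETRYABLE.has(err.message)) throw err;
      lastError = err; // retryable: fall through to the next model in the chain
    }
  }
  throw lastError; // chain exhausted (surfaces as e.g. QUOTA_EXCEEDED)
}
```

Non-retryable errors surface immediately; only the listed failure kinds advance the chain.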
Understanding a security vulnerability:
Agent: Use deep_research to analyze authentication flow across @src/auth and @src/middleware, focusing on security
Quick code explanation:
Agent: Use quick_query to explain the login flow in @src/auth.ts, be concise
Mapping an unfamiliar codebase:
Agent: Use analyze_directory on src/ with depth 3 to understand the project structure
quick_query
{
"prompt": "Explain @src/auth.ts login flow",
"focus": "security",
"responseStyle": "concise"
}
deep_research
{
"prompt": "Analyze authentication across @src/auth and @src/middleware",
"focus": "architecture",
"citationMode": "paths_only"
}
analyze_directory
{
"path": "src",
"depth": 3,
"maxFiles": 200
}
validate_paths
{
"paths": ["src/auth.ts", "README.md"]
}
health_check
{
"includeDiagnostics": true
}
fetch_chunk
{
"cacheKey": "cache_abc123",
"chunkIndex": 2
}
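A client-side loop for reassembling a chunked response might look like the sketch below. `callTool` stands in for whatever tool-call API your MCP client exposes, and the result shape (`chunk`, `totalChunks`) is an assumption for illustration:

```typescript
// Sketch: keep calling fetch_chunk until every chunk of a cached response is
// retrieved, then join them. The callTool signature and result fields are
// hypothetical stand-ins for your MCP client's actual API.
interface ChunkArgs { cacheKey: string; chunkIndex: number }
interface ChunkResult { chunk: string; chunkIndex: number; totalChunks: number }

function fetchAllChunks(
  callTool: (name: "fetch_chunk", args: ChunkArgs) => ChunkResult,
  cacheKey: string,
): string {
  // The first call also reports how many chunks exist in total.
  const first = callTool("fetch_chunk", { cacheKey, chunkIndex: 0 });
  const parts = [first.chunk];
  for (let i = 1; i < first.totalChunks; i++) {
    parts.push(callTool("fetch_chunk", { cacheKey, chunkIndex: i }).chunk);
  }
  return parts.join("");
}
```

In practice the first chunk usually arrives inline with the original tool response, so you may only need to fetch indices 1 onward before the 1-hour cache expires.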
A pre-built multi-platform Docker image is available on Docker Hub:
# Pull the image (works on Intel/AMD and Apple Silicon)
docker pull capybearista/gemini-researcher:latest
# Run the server (mount your project and provide API key)
docker run -i --rm \
-e GEMINI_API_KEY="your-api-key" \
-v /path/to/your/project:/workspace \
capybearista/gemini-researcher:latest
For MCP client configuration with Docker:
{
"mcpServers": {
"gemini-researcher": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "GEMINI_API_KEY",
"-v", "/path/to/your/project:/workspace",
"capybearista/gemini-researcher:latest"
],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
[!NOTE]
- The `-i` flag is required for stdio transport
- The container mounts your project to `/workspace` (the project root)
- Replace `/path/to/your/project` with your actual project path
- Replace `your-api-key` with your actual Gemini API key (required for Docker usage)
See `docs/platforms/windows.md` for Windows-specific details.

| Error / signal | Run this check first | Change this configuration next |
|---|---|---|
| `GEMINI_CLI_LAUNCH_FAILED` or `spawn ... ENOENT` | `gemini --help` and `npx --version` in the same terminal profile used by your MCP host | Prefer the Docker or WSL config. If staying native, point the host command to a stable shim/binary path and restart the host. |
| `health_check` warning: "resolves only through `cmd /c` fallback" | Run `health_check` with `includeDiagnostics: true` and inspect `diagnostics.resolution` | Update the host config to launch the reported `.cmd` shim directly instead of relying on the `cmd /c` fallback. |
| MCP host cannot launch server via `npx` | `npx --version` | Change the host server command from `npx gemini-researcher` to the installed binary path (or the Docker transport). |
| `ADMIN_POLICY_UNSUPPORTED` / output format unsupported | `gemini --help`; confirm `--admin-policy`, `json`, `stream-json` | Upgrade Gemini CLI to v0.36.0+ |
| `AUTH_MISSING` / `AUTH_UNKNOWN` | `gemini` interactive login, then rerun `health_check` | Authenticate Gemini CLI or set `GEMINI_API_KEY` |
- `GEMINI_CLI_NOT_FOUND`: Install Gemini CLI: `npm install -g @google/gemini-cli`
- `GEMINI_CLI_LAUNCH_FAILED`: This is a launch-path issue, not an auth/capability issue. On Windows, command shims can fail in shell-less spawn contexts. Validate `gemini --help` and `npx --version` interactively, then prefer Docker or WSL if the host launch mode is strict.
- `GEMINI_RESEARCHER_GEMINI_COMMAND`: Override the Gemini command name/path used by the server (for wrappers or pinned binary locations).
- `GEMINI_RESEARCHER_GEMINI_ARGS_PREFIX`: Prefix extra Gemini args for every invocation (for example `--config <file>`). `health_check` diagnostics redact sensitive token-like values in the configured args prefix output.
- `AUTH_MISSING`: Run `gemini` and authenticate, or set `GEMINI_API_KEY`
- `AUTH_UNKNOWN`: Auth could not be confirmed (often a network/CLI probe failure). If launch errors are present, fix the launch path first; otherwise verify `gemini` works interactively, then retry.
- `ADMIN_POLICY_MISSING`: Reinstall the package or verify `policies/read-only-enforcement.toml` exists in the installed package.
- `ADMIN_POLICY_UNSUPPORTED`: Upgrade Gemini CLI to v0.36.0+ (`gemini --help` should include `--admin-policy`).
- Capability errors (`ADMIN_POLICY_UNSUPPORTED`, output format unsupported) should be interpreted only after a successful `gemini --help` probe. If the probe launch fails, treat it as a launch-path failure first.
- `GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0`: Disables strict startup policy checks. This reduces safety guarantees.
- `.gitignore` blocking files: Gemini respects `.gitignore` by default; toggle `fileFiltering.respectGitIgnore` in gemini `/settings` if you intentionally want ignored files included (note: this changes Gemini behavior globally)
- `PATH_NOT_ALLOWED`: All `@path` references must resolve inside the configured project root (`process.cwd()` by default). Use `validate_paths` to pre-check paths.
- `QUOTA_EXCEEDED`: The server retries with fallback models; if all options are exhausted, reduce scope (use `quick_query`) or wait for the quota to reset.

Read the Contributing Guide to get started.