Server data from the Official MCP Registry
AI image generation with Gemini 3 Pro (4K) and 2.5 Flash. Smart model selection.
Valid MCP server (0 strong, 3 medium validity signals). 8 known CVEs in dependencies (1 critical, 5 high severity). ⚠️ Package registry links to a different repository than the scanned source. Imported from the Official MCP Registry.
5 files analyzed · 9 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Unverified package source
We couldn't verify that the installable package matches the reviewed source code. Proceed with caution.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-zhongweili-nanobanana-mcp-server": {
"args": [
"nanobanana-mcp-server"
],
"command": "uvx"
}
}
}
From the project's GitHub README.
A production-ready Model Context Protocol (MCP) server that provides AI-powered image generation capabilities through Google's Gemini models with intelligent model selection.
Nano Banana 2 (gemini-3.1-flash-image-preview) is now the default model — delivering Pro-level quality at Flash speed:
Option 1: From MCP Registry (Recommended) This server is available in the Model Context Protocol Registry. Search for "nanobanana" or use the MCP name below with your MCP client.
mcp-name: io.github.zhongweili/nanobanana-mcp-server
Option 2: Using uvx
uvx nanobanana-mcp-server@latest
Option 3: Using pip
pip install nanobanana-mcp-server
Nano Banana supports two authentication methods via NANOBANANA_AUTH_METHOD:
- api_key: Uses GEMINI_API_KEY. Best for local development and simple deployments.
- vertex_ai: Uses Google Cloud Application Default Credentials. Best for production on Google Cloud (Cloud Run, GKE, GCE).
- auto: Defaults to API Key if present, otherwise tries Vertex AI.

For the API Key method, set the GEMINI_API_KEY environment variable.
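The `auto` selection rule can be sketched as follows. This is illustrative only, assuming the server consults just the two environment variables named above; the actual implementation may weigh additional signals.

```python
import os

def pick_auth_method() -> str:
    """Sketch of the 'auto' auth selection rule (illustrative,
    not the server's actual code)."""
    method = os.environ.get("NANOBANANA_AUTH_METHOD", "auto")
    if method != "auto":
        return method  # explicit setting always wins
    # auto: prefer the API key if one is present, otherwise try Vertex AI
    return "api_key" if os.environ.get("GEMINI_API_KEY") else "vertex_ai"
```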
Required environment variables:
- NANOBANANA_AUTH_METHOD=vertex_ai (or auto)
- GCP_PROJECT_ID=your-project-id
- GCP_REGION=us-central1 (default)

Prerequisites:
- Enable the Vertex AI API: gcloud services enable aiplatform.googleapis.com
- Grant roles/aiplatform.user to the service account.

Add to your claude_desktop_config.json:
{
"mcpServers": {
"nanobanana": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
If you are running from source code, point to your local installation:
{
"mcpServers": {
"nanobanana-local": {
"command": "uv",
"args": ["run", "python", "-m", "nanobanana_mcp_server.server"],
"cwd": "/absolute/path/to/nanobanana-mcp-server",
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
To authenticate with Google Cloud Application Default Credentials (instead of an API Key):
{
"mcpServers": {
"nanobanana-adc": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"NANOBANANA_AUTH_METHOD": "vertex_ai",
"GCP_PROJECT_ID": "your-project-id",
"GCP_REGION": "us-central1"
}
}
}
}
Configuration file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json

Install and configure in VS Code:
Open the Command Palette (Cmd/Ctrl + Shift + P) and add the server configuration:
{
"name": "nanobanana",
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
Add to Cursor's MCP configuration:
{
"mcpServers": {
"nanobanana": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Add to ~/.codex/config.toml (global) or .codex/config.toml (project-scoped):
[mcp_servers.nanobanana]
command = "uvx"
args = ["nanobanana-mcp-server@latest"]
[mcp_servers.nanobanana.env]
GEMINI_API_KEY = "your-gemini-api-key-here"
Or add via the CLI:
codex mcp add
Codex supports both the CLI and VSCode extension using the same config.toml. Once added, Codex can call generate_image, edit_image, and upload_file tools directly in your coding sessions.
Note: The Codex config file is shared by the CLI and the IDE extension. A TOML syntax error will break both simultaneously, so validate your edits carefully.
Add to your config.json:
{
"mcpServers": [
{
"name": "nanobanana",
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
]
}
Configure in Open WebUI settings:
{
"mcp_servers": {
"nanobanana": {
"command": ["uvx", "nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
# Set environment variable
export GEMINI_API_KEY="your-gemini-api-key-here"
# Run server in stdio mode
uvx nanobanana-mcp-server@latest
# Or with pip installation
python -m nanobanana_mcp_server.server
Nano Banana supports three Gemini models with intelligent automatic selection:
- nb2 (or "auto" — NB2 is the auto default): Flash speed with Pro-level quality — the best of both worlds
- pro: Maximum reasoning depth for the most demanding compositions
- flash: Legacy model for high-volume rapid iteration

By default, the server uses AUTO mode, which routes to NB2 unless Pro's deeper reasoning is clearly needed.

Pro Model Selected When:
- thinking_level="HIGH" is requested

NB2 Model Selected When (default):
- All other cases, including batch generation (n > 2)

# Automatic selection (recommended) — routes to NB2 by default
"A cat sitting on a windowsill" # → NB2 (default)
"Quick sketch of a cat" # → NB2 (speed keyword, NB2 is fast enough)
"Professional 4K product photo" # → Pro (strong quality keywords)
# Explicit NB2 selection
generate_image(
prompt="Product photo on white background",
model_tier="nb2", # Nano Banana 2 (Flash speed + 4K)
resolution="4k",
enable_grounding=True
)
# Leverage Nano Banana Pro for complex reasoning
generate_image(
prompt="Cinematic scene: three characters in a tense standoff at dusk",
model_tier="pro", # Pro for deep reasoning
resolution="4k",
thinking_level="HIGH", # Enhanced reasoning
enable_grounding=True
)
# Legacy Flash for high-volume drafts
generate_image(
prompt="Simple icon",
model_tier="flash" # Fast 1024px generation
)
# Control aspect ratio for different formats ⭐ NEW!
generate_image(
prompt="Cinematic landscape at sunset",
aspect_ratio="21:9" # Ultra-wide cinematic format
)
generate_image(
prompt="Instagram post about coffee",
aspect_ratio="1:1" # Square format for social media
)
generate_image(
prompt="YouTube thumbnail design",
aspect_ratio="16:9" # Standard video format
)
generate_image(
prompt="Mobile wallpaper of mountain vista",
aspect_ratio="9:16" # Portrait format for phones
)
Control the output image dimensions with the aspect_ratio parameter:
Supported Aspect Ratios:
- 1:1 - Square (Instagram, profile pictures)
- 4:3 - Classic photo format
- 3:4 - Portrait orientation
- 16:9 - Widescreen (YouTube thumbnails, presentations)
- 9:16 - Mobile portrait (phone wallpapers, stories)
- 21:9 - Ultra-wide cinematic
- 2:3, 3:2, 4:5, 5:4 - Various photo formats

# Examples for different use cases
generate_image(
prompt="Product showcase for e-commerce",
aspect_ratio="3:4", # Portrait format, good for product pages
model_tier="pro"
)
generate_image(
prompt="Social media banner for Facebook",
aspect_ratio="16:9" # Landscape banner format
)
Note: Aspect ratio works with both Flash and Pro models. For best results with specific aspect ratios at high resolution, use the Pro model with resolution="4k".
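To estimate the pixel dimensions a given aspect ratio implies, you can fix the long edge and derive the short one. The 3840px long edge below is an assumption for illustration; actual Gemini output sizes may differ.

```python
def dims_for_aspect(aspect: str, long_edge: int = 3840) -> tuple[int, int]:
    """Approximate (width, height) for an aspect ratio such as "16:9",
    given the long edge. Illustrative only -- real output sizes may vary."""
    w, h = (int(part) for part in aspect.split(":"))
    if w >= h:  # landscape or square: width is the long edge
        return long_edge, round(long_edge * h / w)
    return round(long_edge * w / h), long_edge  # portrait: height is the long edge
```

For example, `dims_for_aspect("16:9")` gives (3840, 2160), and `dims_for_aspect("9:16")` gives (2160, 3840).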
Control where generated images are saved with the output_path parameter:
Three modes of operation:
generate_image(
prompt="A beautiful sunset",
output_path="/path/to/sunset.png" # Exact file location
)
generate_image(
prompt="Product photo",
output_path="/path/to/products/" # Trailing slash indicates directory
)
generate_image(
prompt="Random image"
# output_path defaults to None
)
Multiple images (n > 1): When generating multiple images with a file path, images are automatically numbered:
- /path/to/image.png
- /path/to/image_2.png
- /path/to/image_3.png

Precedence Rules:
1. output_path parameter (if provided) - highest priority
2. IMAGE_OUTPUT_DIR environment variable
3. ~/nanobanana-images (default fallback)

# Save to specific location with Pro model
generate_image(
prompt="Professional headshot",
model_tier="pro",
output_path="/Users/me/photos/headshot.png"
)
# Save multiple images to a directory
generate_image(
prompt="Product variations",
n=4,
output_path="/path/to/products/" # Each gets unique filename
)
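The numbering scheme described above (first image keeps the requested name, later ones get an `_2`, `_3`, ... suffix) can be sketched as follows. This is an illustration of the documented behavior, not the server's actual implementation.

```python
from pathlib import Path

def numbered_paths(base: str, n: int) -> list[Path]:
    """Sketch of the multi-image naming scheme: image.png, image_2.png, ...
    (Illustrative only -- not the server's actual code.)"""
    p = Path(base)
    paths = [p]
    for i in range(2, n + 1):
        paths.append(p.with_name(f"{p.stem}_{i}{p.suffix}"))
    return paths
```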
Configuration options:
# Authentication (Required)
# Method 1: API Key
GEMINI_API_KEY=your-gemini-api-key-here
# Method 2: Vertex AI (Google Cloud)
NANOBANANA_AUTH_METHOD=vertex_ai
GCP_PROJECT_ID=your-project-id
GCP_REGION=us-central1
# Model Selection (optional)
NANOBANANA_MODEL=auto # Options: flash, nb2, pro, auto (default: auto → nb2)
# Optional
IMAGE_OUTPUT_DIR=/path/to/image/directory # Default: ~/nanobanana-images
GEMINI_BASE_URL=https://custom-api.example.com # Custom API endpoint (for proxies/gateways)
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
LOG_FORMAT=standard # standard, json, detailed
"GEMINI_API_KEY not set"
- Export the key before starting the server.

"Server failed to start"
- Run uvx nanobanana-mcp-server@latest directly to inspect the error output.

"Permission denied" errors
- Images are saved to ~/nanobanana-images by default; make sure that directory is writable.

For local development:
# Clone repository
git clone https://github.com/zhongweili/nanobanana-mcp-server.git
cd nanobanana-mcp-server
# Install with uv
uv sync
# Set environment
export GEMINI_API_KEY=your-api-key-here
# Run locally
uv run python -m nanobanana_mcp_server.server
MIT License - see LICENSE for details.