Server data from the Official MCP Registry
Multi-model AI orchestration MCP server with code review, compare, and debate tools.
Valid MCP server (1 strong, 1 medium validity signal). 9 known CVEs in dependencies (2 critical, 6 high severity). Package registry verified. Imported from the Official MCP Registry.
5 files analyzed · 10 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Unverified package source
We couldn't verify that the installable package matches the reviewed source code. Proceed with caution.
Set these up before or after installing:
Environment variable: OPENAI_API_KEY
Environment variable: ANTHROPIC_API_KEY
Environment variable: GEMINI_API_KEY
Environment variable: OPENROUTER_API_KEY
Environment variable: DEFAULT_MODEL
Environment variable: DEFAULT_MODEL_LIST
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-religa-multi-mcp": {
"env": {
"DEFAULT_MODEL": "your-default-model-here",
"GEMINI_API_KEY": "your-gemini-api-key-here",
"OPENAI_API_KEY": "your-openai-api-key-here",
"ANTHROPIC_API_KEY": "your-anthropic-api-key-here",
"DEFAULT_MODEL_LIST": "your-default-model-list-here",
"OPENROUTER_API_KEY": "your-openrouter-api-key-here"
},
"args": [
"multi-mcp"
],
"command": "uvx"
}
}
}

From the project's GitHub README.
A multi-model AI orchestration MCP server for automated code review and LLM-powered analysis. Multi-MCP integrates with Claude Code CLI and OpenCode to orchestrate multiple AI models (OpenAI GPT, Anthropic Claude, Google Gemini) for code quality checks, security analysis (OWASP Top 10), and multi-agent consensus. Built on the Model Context Protocol (MCP), this tool enables Python developers and DevOps teams to automate code reviews with AI-powered insights directly in their development workflow.
Multi-MCP acts as an MCP server that Claude Code or OpenCode connects to, providing AI-powered code analysis tools and fast multi-model analysis across models such as mini, sonnet, and gemini.

Prerequisites:
# Clone and install
git clone https://github.com/religa/multi_mcp.git
cd multi_mcp
# Execute ./scripts/install.sh
make install
# The installer will:
# 1. Install dependencies (uv sync)
# 2. Generate your .env file
# 3. Automatically add to Claude Code / OpenCode config (requires jq)
# 4. Test the installation
If you prefer not to run make install:
# Install dependencies
uv sync
# Copy and configure .env
cp .env.example .env
# Edit .env with your API keys
Add to Claude Code (~/.claude.json) or OpenCode (~/.opencode/opencode.json), replacing /path/to/multi_mcp with your actual clone path:
Claude Code:
{
"mcpServers": {
"multi": {
"type": "stdio",
"command": "/path/to/multi_mcp/.venv/bin/python",
"args": ["-m", "multi_mcp.server"]
}
}
}
OpenCode:
{
"mcp": {
"multi": {
"type": "local",
"command": ["/path/to/multi_mcp/.venv/bin/python", "-m", "multi_mcp.server"],
"enabled": true
}
}
}
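Before wiring up a client, you can sanity-check that the server starts from your venv. A minimal sketch, assuming the stdio server from the config above (replace the path placeholder with your actual clone path):

# Start the server as a subprocess and confirm it survives startup
# instead of crashing on import; we terminate it ourselves afterwards.
import subprocess, time
proc = subprocess.Popen(
    ["/path/to/multi_mcp/.venv/bin/python", "-m", "multi_mcp.server"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
time.sleep(2)
print("still running" if proc.poll() is None else f"exited with {proc.returncode}")
proc.terminate()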
Multi-MCP loads settings from .env files in this order (highest priority first):
1. .env (current directory or project root)
2. ~/.multi_mcp/.env - fallback for pip installs

Edit .env with your API keys:
# API Keys (configure at least one)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-...
# Azure OpenAI (optional)
AZURE_API_KEY=...
AZURE_API_BASE=https://your-resource.openai.azure.com/
# AWS Bedrock (optional)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1
# Model Configuration
DEFAULT_MODEL=gpt-5-mini
DEFAULT_MODEL_LIST=gpt-5-mini,gemini-3-flash
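To confirm which keys actually loaded, a quick check from the project root. This is a sketch built on the settings object referenced in Troubleshooting below; the per-provider attribute names are an assumption by analogy with settings.openai_api_key:

# Report which API keys Multi-MCP picked up from .env.
# Attribute names assumed; adjust to the real settings fields.
from multi_mcp.settings import settings
for key in ("openai_api_key", "anthropic_api_key", "gemini_api_key", "openrouter_api_key"):
    print(key, "set" if getattr(settings, key, None) else "missing")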
Models are defined in YAML configuration files (user config wins):
1. multi_mcp/config/config.yaml (bundled with package)
2. ~/.multi_mcp/config.yaml (optional, takes precedence)

To add your own models, create ~/.multi_mcp/config.yaml (see config.yaml and config.override.example.yaml for examples):
version: "1.0"
models:
# Add a new API model
my-custom-gpt:
litellm_model: openai/gpt-4o
aliases:
- custom
notes: "My custom GPT-4o configuration"
# Add a custom CLI model
my-local-llm:
provider: cli
cli_command: ollama
cli_args:
- "run"
- "llama3.2"
cli_parser: text
aliases:
- local
notes: "Local LLaMA via Ollama"
# Override an existing model's settings
gpt-5-mini:
constraints:
temperature: 0.5 # Override default temperature
Merge behavior: entries in the user config are merged over the bundled defaults, so any field you set in ~/.multi_mcp/config.yaml overrides the packaged value (user config wins).
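As an illustration of that rule, a hypothetical merge sketch (not Multi-MCP's actual loader; requires PyYAML):

# Layer user-config model entries field-by-field over bundled defaults.
from pathlib import Path
import yaml
def merged_models() -> dict:
    bundled = yaml.safe_load(Path("multi_mcp/config/config.yaml").read_text())
    models = dict(bundled.get("models", {}))
    user_file = Path.home() / ".multi_mcp" / "config.yaml"
    if user_file.exists():
        user = yaml.safe_load(user_file.read_text()) or {}
        for name, spec in (user.get("models") or {}).items():
            models[name] = {**models.get(name, {}), **spec}  # user value wins
    return models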
Once installed in your MCP client (Claude Code or OpenCode), you can use these commands:
💬 Chat - Interactive development assistance:
Can you ask Multi chat what's the answer to life, universe and everything?
🔍 Code Review - Analyze code with specific models:
Can you multi codereview this module for code quality and maintainability using gemini-3 and codex?
📊 Compare - Get multiple perspectives (uses default models):
Can you multi compare the best state management approach for this React app?
🎭 Debate - Deep analysis with critique:
Can you multi debate the best project code name for this project?
Edit ~/.claude/settings.json and add the following entries to permissions.allow so Claude Code can use Multi-MCP tools without pausing for user approval:
{
"permissions": {
"allow": [
...
"mcp__multi__chat",
"mcp__multi__codereview",
"mcp__multi__compare",
"mcp__multi__debate",
"mcp__multi__models"
]
},
"env": {
"MCP_TIMEOUT": "300000",
"MCP_TOOL_TIMEOUT": "300000"
}
}
Use short aliases instead of full model names:
| Alias | Model | Provider |
|---|---|---|
| mini | gpt-5-mini | OpenAI |
| nano | gpt-5-nano | OpenAI |
| gpt | gpt-5.2 | OpenAI |
| codex | gpt-5.1-codex | OpenAI |
| sonnet | claude-sonnet-4.6 | Anthropic |
| haiku | claude-haiku-4.5 | Anthropic |
| opus | claude-opus-4.6 | Anthropic |
| gemini | gemini-3.1-pro-preview | Google |
| gemini-3 | gemini-3.1-pro-preview | Google |
| flash | gemini-3-flash | Google |
| azure-mini | azure-gpt-5-mini | Azure |
| bedrock-sonnet | bedrock-claude-4-5-sonnet | AWS |
Run multi:models to see all available models and aliases.
Multi-MCP can execute CLI-based AI models (like Gemini CLI, Codex CLI, or Claude CLI) alongside API models. CLI models run as subprocesses and work seamlessly with all existing tools.
Benefits:
- CLI models participate in compare and debate workflows alongside API models

Built-in CLI Models:
- gemini-cli (alias: gem-cli) - Gemini CLI with auto-edit mode
- codex-cli (alias: cx-cli) - Codex CLI with full-auto mode
- claude-cli (alias: cl-cli) - Claude CLI with acceptEdits mode

Adding Custom CLI Models:
Add to ~/.multi_mcp/config.yaml (see Model Configuration):
version: "1.0"
models:
my-ollama:
provider: cli
cli_command: ollama
cli_args:
- "run"
- "codellama"
cli_parser: text # "json", "jsonl", or "text"
aliases:
- ollama
notes: "Local CodeLlama via Ollama"
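To make the subprocess idea concrete, here is a sketch of how a text-parsed CLI model can be driven; run_cli_model is a hypothetical helper, not Multi-MCP's API:

# Send the prompt on stdin and, with cli_parser "text", treat raw
# stdout as the model's reply.
import asyncio
async def run_cli_model(command: str, args: list[str], prompt: str) -> str:
    proc = await asyncio.create_subprocess_exec(
        command, *args,
        stdin=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate(prompt.encode())
    return stdout.decode()
# e.g. asyncio.run(run_cli_model("ollama", ["run", "codellama"], "Review this function"))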
Prerequisites:
CLI models require the respective CLI tools to be installed:
# Gemini CLI
npm install -g @google/gemini-cli
# Codex CLI
npm install -g @openai/codex
# Claude CLI
npm install -g @anthropic-ai/claude-code
Multi-MCP includes a standalone CLI for code review without needing an MCP client.
⚠️ Note: The CLI is experimental and under active development.
# Review a directory
multi src/
# Review specific files
multi src/server.py src/config.py
# Use a different model
multi --model mini src/
# JSON output for CI/pipelines
multi --json src/ > results.json
# Verbose logging
multi -v src/
# Specify project root (for CLAUDE.md loading)
multi --base-path /path/to/project src/
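The JSON output lends itself to a CI gate. A hypothetical sketch; the top-level "issues" field is an assumed schema, so adjust it to whatever multi --json actually emits:

# Fail the build if the review reported any issues.
import json, sys
report = json.load(open("results.json"))
issues = report.get("issues", [])
print(f"{len(issues)} issue(s) reported")
sys.exit(1 if issues else 0)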
| Feature | Multi-MCP | Single-Model Tools |
|---|---|---|
| Parallel model execution | ✅ | ❌ |
| Multi-model consensus | ✅ | Varies |
| Model debates | ✅ | ❌ |
| CLI + API model support | ✅ | ❌ |
| OWASP security analysis | ✅ | Varies |
"No API key found"
- Check that your API keys are set in the .env file
- Verify they load: uv run python -c "from multi_mcp.settings import settings; print(settings.openai_api_key)"

Integration tests fail
- Integration tests are skipped unless the RUN_E2E=1 environment variable is set

Debug mode:
export LOG_LEVEL=DEBUG # INFO is default
uv run python -m multi_mcp.server
Check logs in logs/server.log for detailed information.
Q: Do I need all three AI providers? A: No, just one API key (OpenAI, Anthropic, or Google) is enough to get started.
Q: Does it truly run in parallel?
A: Yes! When you use the codereview, compare, or debate tools, all models run concurrently via Python's asyncio.gather(). This means you get responses from multiple models in roughly the time it takes the slowest model to respond, not the sum of all response times.
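A minimal sketch of that fan-out, where ask_model stands in for one provider call (not Multi-MCP's internal API):

# Both calls run concurrently, so this finishes in ~1s, not ~2s.
import asyncio
async def ask_model(model: str, prompt: str) -> str:
    await asyncio.sleep(1.0)  # stands in for one model's API round-trip
    return f"{model}: ..."
async def fan_out(models: list[str], prompt: str) -> list[str]:
    return await asyncio.gather(*(ask_model(m, prompt) for m in models))
print(asyncio.run(fan_out(["gpt-5-mini", "gemini-3-flash"], "Review this diff")))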
Q: How many models can I run at the same time? A: There's no hard limit! You can run as many models as you want in parallel. In practice, 2-5 models work well for most use cases. All tools use your configured default models (typically 2-3), but you can specify any number of models you want.
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick start:
git clone https://github.com/YOUR_USERNAME/multi_mcp.git
cd multi_mcp
uv sync --extra dev
make check && make test
MIT License - see LICENSE file for details
by Modelcontextprotocol · Developer Tools
Read, search, and manipulate Git repositories programmatically
by Toleno · Developer Tools
Toleno Network MCP Server — Manage your Toleno mining account with Claude AI using natural language.
by mcp-marketplace · Developer Tools
Create, build, and publish Python MCP servers to PyPI — conversationally.
by Microsoft · Content & Media
Convert files (PDF, Word, Excel, images, audio) to Markdown for LLM consumption
by mcp-marketplace · Developer Tools
Scaffold, build, and publish TypeScript MCP servers to npm — conversationally
by mcp-marketplace · Finance
Free stock data and market news for any MCP-compatible AI assistant.