Compare-first local MCP sidecar for browser AI workflows.
One prompt, many AI chats, one side panel.
Teach an agent or operator one browser-first compare workflow: check readiness in already-open AI tabs, ask once from the side panel, then retry or export from the same turn.
Prompt Switchboard is a compare-first, browser-native AI workspace. It lets you send one prompt to ChatGPT, Gemini, Perplexity, Qwen, and Grok, then compare the replies in one side panel instead of bouncing between tabs.
After the core compare flow is clear, the repo also exposes governed integrations for Codex and Claude Code browser workflows. OpenCode and OpenClaw remain later, packet-style follow-through lanes instead of the main product story.
Agent-facing truth comes after the product story is clear: the first thing this repo teaches is how to run one real compare turn inside the extension itself. Registry packs, host packets, and public bundles are supporting surfaces around that browser product, not the first install or first success door.
Trust boundary
Prompt Switchboard stays inside your browser, uses your existing sessions on supported sites, and does not add a hosted relay or account layer. The supported repo build also does not rely on OS-level desktop automation, Force Quit helpers, or host-wide process cleanup.
Install the latest build • Landing page • Install guide • First compare guide • Supported sites • Trust boundary • FAQ guide • Privacy • Security • Building locally

The shortest way to evaluate Prompt Switchboard is simple: install the latest packaged build, keep the AI tabs you already use open, then ask once from the side panel and compare the answers in one place.
| Surface | Current truth |
|---|---|
| Product install | GitHub Release zip is the supported install path today. |
| Core product | Browser extension + compare-first side panel with your existing signed-in tabs. |
| Optional integrations | Governed coding-agent integrations for Codex and Claude Code, plus repo-owned starter packets for OpenCode and OpenClaw, and an optional Docker wrapper for the same surface. |
| Official registry | The official MCP Registry already returns a live Prompt Switchboard MCP entry for the governed integration surface. |
| Not live yet | Browser store, host marketplaces, and any Glama listing. |
The supported install path today is the packaged GitHub Release zip. Browser-store submission materials are being kept ready, but they are not live yet. The official MCP Registry already returns a live Prompt Switchboard MCP entry for the same governed integration surface; that registry proof does not make the browser-store path live, and it does not mean every host marketplace is already published.
The optional integration lane stays one step lower in the information hierarchy. Reach for MCP starter kits, host packets, Docker integration docs, and distribution truth only after the first compare path is already clear.
Before the first compare run, make sure the supported AI tabs you want to use are already open and signed in inside the same browser profile. The side panel now includes a first-run checklist and readiness repair actions, so the shortest path to success lives inside the product instead of only in the docs.
If you only remember one route through this repo, remember this one:
Use these pages in that exact order:
Before you start:
Open chrome://extensions, enable Developer Mode, and use Load unpacked on the extracted folder. Today the public install path is the packaged GitHub Release zip. A lower-friction store distribution path is being prepared, but it is not live yet.
If you are validating the real Chrome proof lane, keep one extra rule in mind:
official Google Chrome branded builds 137+ / 139+ no longer reliably auto-load
unpacked extensions from command-line flags. Automated runtime proof should use
Chromium or Chrome for Testing. Real Chrome proof keeps the same signed-in
profile, then uses chrome://extensions -> Developer Mode -> Load unpacked
manually.
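The automated-proof caveat above can be captured in a small launcher sketch. This is illustrative Python, not repo code: the binary path, extension directory, and profile directory are placeholders, and only the standard Chromium flags shown (`--load-extension`, `--user-data-dir`, `--no-first-run`) are assumed.

```python
from pathlib import Path

def chromium_proof_args(binary: str, extension_dir: str, profile_dir: str) -> list[str]:
    """Build launch args for an automated unpacked-extension proof run.

    Chromium and Chrome for Testing still honor --load-extension; branded
    Chrome 137+/139+ does not reliably do so, which is why real-Chrome proof
    falls back to the manual Load unpacked route.
    """
    return [
        binary,
        f"--load-extension={Path(extension_dir).resolve()}",
        f"--user-data-dir={Path(profile_dir).resolve()}",
        "--no-first-run",  # skip first-run dialogs in a fresh automation profile
    ]
```

For real-Chrome proof, skip this helper entirely and load the folder by hand, exactly as described above.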
Need the local build path, release workflow, Docker integration lane, or front-door
maintenance steps? Read CONTRIBUTING.md and the dedicated
Docker integration page.
Maintainer-only cleanup and runtime hygiene commands stay in
CONTRIBUTING.md so this README can stay focused on the
public product surface.
If you want to see the value quickly, try one of these on three or more supported sites:
- Summarize the launch plan for a browser-native AI extension in three bullets.
- Compare the trade-offs between React and Vue for a browser extension UI.
- Rewrite this paragraph in a clearer, friendlier tone for a GitHub README.

If you already use MCP-capable coding agents, come here after the first compare works:
These builder surfaces are intentionally second-ring pages. They are real and useful, but they should not outrank install, first compare, supported sites, and trust boundary in the first impression.
If you need standalone skill folders for host-specific submission flows, use public-skills/README.md. Those packets are repo-owned submission materials for OpenHands/extensions and ClawHub-style publish flows; they are not proof that any public listing is already live.
The strongest product claim here is not abstract AI productivity. It is much simpler: Prompt Switchboard removes the messy part of side-by-side comparison.
| Manual multi-tab compare | Prompt Switchboard |
|---|---|
| Paste the same prompt into every site | Ask once from the side panel |
| Wait in separate tabs and windows | Watch status chips update in one board |
| Reconstruct which answer belongs to which model | Keep aligned model cards in one compare view |
| Copy results back into your own notes by hand | Copy the best-fit answer or reopen the original tab directly |
| Lose the comparison context after the session | Keep the run saved locally for export and restore |
Prompt Switchboard also includes governed coding-agent integrations for product-level workflows. Those integrations are real, but they are not the default first-stop story of the repo. The default story is still: install, run one compare, then export or retry from the same turn.
The transport is stdio, and local endpoints stay on 127.0.0.1. Current truthful split:
The ClawHub listing lives at https://clawhub.ai/xiaojiou176/prompt-switchboard-compare-workflows. The OpenCode submission is tracked in awesome-opencode/awesome-opencode#276 and is still review-pending; that receipt does not make the OpenCode plugin package published.

Use the repo-local operator helper for the main maintainer path:
```bash
npm run mcp:operator -- doctor
npm run mcp:operator -- server
npm run mcp:operator -- smoke
npm run mcp:operator -- live-probe
npm run mcp:operator -- live-diagnose
npm run mcp:operator -- live-support-bundle
```
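As a sketch of how those subcommands compose, here is a hypothetical wrapper (not part of the repo) that shells out to the operator helper; the triage order is an assumption, and the injectable `runner` keeps the command construction testable without npm installed.

```python
import subprocess

def run_operator(subcommand: str, runner=subprocess.run):
    """Invoke the repo-local operator helper, e.g. run_operator("doctor")."""
    return runner(["npm", "run", "mcp:operator", "--", subcommand], check=True)

def preflight(runner=subprocess.run):
    """Assumed triage order: verify the environment, then run the smoke check."""
    for step in ("doctor", "smoke"):
        run_operator(step, runner=runner)
```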
Use these links instead of keeping the full builder ledger duplicated in the README:

- mcp/integration-kits/support-matrix.json
- mcp/integration-kits/public-distribution-matrix.json

The machine-readable builder truth lives at prompt-switchboard://builder/support-matrix.
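The exact schema of those matrices is defined by the repo, not here. As a hypothetical consumer — assuming a top-level `hosts` object whose entries carry a `status` field — a reader could filter live hosts like this:

```python
import json

def live_hosts(matrix_json: str) -> list[str]:
    """Return host names whose entry reports status == "live".

    Assumes a shape like {"hosts": {"codex": {"status": "live"}, ...}};
    check mcp/integration-kits/support-matrix.json for the real schema.
    """
    matrix = json.loads(matrix_json)
    return sorted(
        host
        for host, entry in matrix.get("hosts", {}).items()
        if entry.get("status") == "live"
    )
```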
Quick placement map:
- Codex: config.toml
- Claude Code: .mcp.json
- OpenCode: opencode.jsonc
- OpenClaw: openclaw mcp set or mcp.servers

If host wiring looks correct but site behavior still feels brittle, read
prompt-switchboard://sites/capabilities next. That resource is the current
per-site DOM/readiness/private-API boundary map for the compare-first product
surface.
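For concreteness, here is what project-level `.mcp.json` wiring for a stdio MCP server might look like in Claude Code; the server name is illustrative, and the launch command simply reuses the operator helper's `server` subcommand rather than any documented repo value.

```json
{
  "mcpServers": {
    "prompt-switchboard": {
      "command": "npm",
      "args": ["run", "mcp:operator", "--", "server"]
    }
  }
}
```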
Native Messaging is not the shipped transport in this release. If you want
to explore that direction later, start from the scaffold notes in
mcp/native-messaging/README.md instead of
treating it as an already-wired runtime path.

The demo now shows the actual product rhythm: ready state, compare fan-out, workflow staging, and a completed comparison board.

This detail view highlights the compare-first design with the current next-step lane: one prompt header, WorkflowPanel, analyst guidance, clear model identity, delivery status chips, and direct links back to the original site.
The workflow map makes the runtime boundary explicit: Prompt Switchboard orchestrates the browser-side flow, while the supported AI websites remain the actual execution surfaces.

Settings keep the project honest as a real tool, not just a hero screenshot: export and import, language, theme, and keyboard preferences all live inside the extension.
These integrations depend on live DOM structure. When a supported site changes markup, Prompt Switchboard may need selector updates before the compare flow fully recovers.
Need the public-facing install and support detail page? Read docs/supported-sites.html.
Good fit
Not the goal
Use the public support pages for the shortest answers:
The short version is still:
Use the public issue tracker for non-sensitive bugs, setup questions, or product feedback:
https://github.com/xiaojiou176-open/multi-ai-sidepanel/issues
For security-sensitive reports, follow SECURITY.md instead of opening a detailed public issue.
For open-ended product ideas, workflow discussion, or compare-first feedback, use GitHub Discussions:
https://github.com/xiaojiou176-open/multi-ai-sidepanel/discussions
Track packaged builds and release notes on the Releases page.
If Prompt Switchboard makes multi-model comparison easier for you, star the repo so the latest packaged builds, selector drift fixes, and compare-first front-door updates stay easy to find.