Annotates any web page with hover labels for AI assistants — zero extensions, any browser
Valid MCP server (2 strong, 3 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry. Trust signals: trusted author (7/8 approved).
5 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-mcpware-ui-annotator-mcp": {
      "args": [
        "-y",
        "@mcpware/ui-annotator"
      ],
      "command": "npx"
    }
  }
}

From the project's GitHub README:
Bridge the gap between what you see and what AI can reference — in any browser, zero extensions.
The only tool that puts visible labels on every web element. Hover any element, see its name. Tell your AI assistant "make the sidebar wider" — it knows exactly which element you mean. No screenshots, no CSS selectors, no miscommunication.

Dramatically improves AI-driven UI design and iteration. The pain: telling AI "move that button next to the search bar" never works because the AI can't see your page. UI Annotator fixes this — hover over any element and its component name appears as a label. Now you say "move SearchButton below NavBar" and Claude edits the right component instantly. No browser extensions, works with any framework. The workflow becomes: open page → hover to identify elements → describe changes using real component names → Claude edits → refresh and repeat. Turns a frustrating back-and-forth into a fluid design loop.
When reviewing a web UI with an AI coding assistant, the hardest part isn't the code change — it's describing which element you want changed.
"That thing on the left... the second row... no, the one with the icon..."
You don't know what it's called. The AI doesn't know what you're pointing at. You waste time on miscommunication instead of shipping.
Open your page through the annotator proxy. Hover any element — instantly see its name, CSS selector, and dimensions. Now you both speak the same language.
# Start the MCP server
npx @mcpware/ui-annotator
# Open in ANY browser
http://localhost:7077/localhost:3847
That's it. No browser extensions. No code changes. No setup. Works in Chrome, Firefox, Safari, Edge — any browser.
Your app (localhost:3847)
│
▼
┌─────────────────────┐
│ UI Annotator Proxy │ ← Reverse proxy on port 7077
│ (MCP Server) │
└─────────────────────┘
│
▼
Proxied page with hover annotations injected
│
├──► User sees: hover overlay + tooltip with element names
└──► AI sees: structured element data via MCP tools
The proxy fetches your page, injects a lightweight annotation script, and serves it back. The script scans the DOM, identifies named elements, and reports them to the MCP server. Your AI assistant queries the server to understand what's on the page.
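The injection step can be sketched as a small HTML transform. This is a minimal sketch, not the package's actual implementation; the `/__annotator/client.js` path and the function name are illustrative assumptions:

```javascript
// Sketch: insert an annotation <script> tag just before </body>.
// The scriptSrc path is a hypothetical example, not the package's real endpoint.
function injectAnnotator(html, scriptSrc = "/__annotator/client.js") {
  const tag = `<script src="${scriptSrc}"></script>`;
  const idx = html.lastIndexOf("</body>");
  // Fall back to appending if the page has no closing </body> tag.
  if (idx === -1) return html + tag;
  return html.slice(0, idx) + tag + html.slice(idx);
}

const page = "<html><body><h1>App</h1></body></html>";
console.log(injectAnnotator(page));
```

The proxy would apply a transform like this to every HTML response before serving it back to the browser.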
Hover any element to see its name, CSS selector, and dimensions.
Click the Inspect button in the toolbar (or let your AI toggle it) to enter inspect mode.
The toolbar sits at the top center of the page.
| Tool | What it does |
|---|---|
| `annotate(url)` | Returns a proxy URL for the user to open in any browser |
| `get_elements()` | Returns all detected UI elements with names, selectors, positions |
| `highlight_element(name)` | Flash-highlights a specific element so the user can confirm |
| `rescan_elements()` | Forces a DOM rescan after page changes |
| `inspect_mode(enabled)` | Toggles inspect mode remotely |
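For illustration, an assistant consuming `get_elements()` might look up an element by its human-readable name before calling `highlight_element(name)`. The record shape below (`name`, `selector`, `rect`) is an assumption for the sketch, not the documented payload format:

```javascript
// Hypothetical element records, shaped as get_elements() might return them.
const elements = [
  { name: "NavBar", selector: "nav#main", rect: { x: 0, y: 0, w: 1280, h: 56 } },
  { name: "SearchButton", selector: "button.search", rect: { x: 1100, y: 12, w: 40, h: 32 } },
];

// Find an element by the name a human would use in conversation.
function byName(elements, name) {
  return elements.find((el) => el.name === name) ?? null;
}

console.log(byName(elements, "SearchButton").selector);
```

The point of the tool set is exactly this round trip: the human says "SearchButton", the AI resolves it to a concrete selector and position.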
| | Browser DevTools | UI Annotator |
|---|---|---|
| Target user | Frontend developers who know the DOM | Anyone — QA, PM, designer, junior dev |
| Learning curve | Need to understand DOM tree, CSS selectors, box model | Hover and read — zero learning |
| Communication | "The div.flex.gap-4 inside the second child of..." | "The sidebar" |
| Language | CSS/HTML technical terms | Human-readable names |
| Setup | Teach people to open DevTools + navigate the DOM | Open a URL |
| AI integration | None — AI can't see what you're inspecting | MCP — AI sees the same element names you do |
DevTools is for debugging. UI Annotator is for communication — giving humans and AI a shared vocabulary for UI elements.
None of these do what UI Annotator does — live visual labels on every element via reverse proxy:
| Tool | Approach | Why we're different |
|---|---|---|
| browser-use (82K⭐) | AI automation framework | Automates browsers, doesn't label elements for humans. Different use case entirely. |
| Chrome DevTools MCP (31K⭐) | DOM snapshot + element UIDs | AI can inspect, but humans don't see visual annotations. No shared vocabulary. |
| Playwright MCP (29K⭐) | Accessibility tree snapshot | Returns structured text, no visual overlay. Truncates important context. |
| OmniParser | Screenshot + CV detection | Screenshot-based, not live DOM. ~40% accuracy on hard benchmarks. |
| MCP Pointer (526 users) | Chrome extension + MCP | Requires Chrome extension. Human clicks to select — no hover overlay. |
| Agentation | npm embedded in your app | Requires code changes. React 18+ dependency. Not zero-config. |
| Vibe Annotations | Chrome extension | Extension-based, developer-only annotation workflow. |
| Feature | UI Annotator | MCP Pointer | Agentation | Cursor | Chrome DevTools MCP |
|---|---|---|---|---|---|
| Visual hover annotation | Yes | No | Partial | Yes (IDE only) | No |
| Shows element names | Yes | Yes | Yes | No (high-level) | Programmatic |
| Shows dimensions | Yes | Yes | Yes (Detailed) | Yes | Programmatic |
| MCP server | Yes | Yes | Yes | No | Yes |
| Zero browser extensions | Yes | No | Yes | N/A | No |
| Zero code changes | Yes | Yes | No | N/A | Yes |
| Any browser | Yes | Chrome only | Desktop only | Cursor only | Chrome only |
| Zero dependencies | Yes | Chrome ext | React 18+ | Cursor | Chrome |
| Click to copy element name | Yes | No | No | No | No |
Built on Node's built-in `http` module and `@modelcontextprotocol/sdk` (stdio transport). Under the hood:

- Proxy URLs: opening `localhost:7077/localhost:3847` serves your app at `http://localhost:3847`
- A `fetch()` / `XMLHttpRequest` interceptor rewrites API paths through the proxy
- Rewrites `href="/..."` and `src="/..."` attributes to route through the proxy
- Injects the annotation script before `</body>`
- Strips `Content-Security-Policy` headers to allow the injected script
- Identifies elements by `id`, `class`, semantic roles, or interactive roles
- The tooltip matches element styling (`border-radius`) and is positioned to stay within the viewport
- The injected script reports elements via `POST /__annotator/elements`
- Polls `GET /__annotator/commands` every second for server instructions (highlight, rescan, inspect toggle)
- A `MutationObserver` auto-rescans when the DOM changes

# Add as MCP server
claude mcp add ui-annotator -- npx @mcpware/ui-annotator
# Then in conversation:
# "Annotate my app at localhost:3847"
# → AI returns proxy URL, you open it, hover elements, discuss changes by name
npx @mcpware/ui-annotator
# Proxy starts on http://localhost:7077
# Open http://localhost:7077/localhost:YOUR_PORT
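The proxy-path scheme above can be sketched in a few lines. The function names here are illustrative, not the package's internals:

```javascript
// Sketch of the URL scheme: a request for /localhost:3847/some/path
// is forwarded to http://localhost:3847/some/path.
// toUpstream and rewriteRootLinks are illustrative names, not the real API.
function toUpstream(proxyPath) {
  const match = proxyPath.match(/^\/(localhost:\d+)(\/.*)?$/);
  if (!match) return null;
  return `http://${match[1]}${match[2] ?? "/"}`;
}

// Rewrite root-relative href/src attributes so navigation stays on the proxy.
function rewriteRootLinks(html, upstreamHost) {
  return html.replace(/(href|src)="\//g, `$1="/${upstreamHost}/`);
}

console.log(toUpstream("/localhost:3847/app/page"));
console.log(rewriteRootLinks('<a href="/about">', "localhost:3847"));
```

Link rewriting is what lets a single proxy origin serve the whole app without code changes: every root-relative link the page emits keeps routing back through port 7077.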
| Variable | Default | Description |
|---|---|---|
| `UI_ANNOTATOR_PORT` | 7077 | Port for the proxy server |
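To run the proxy on a different port, set the variable inline; 8088 below is an arbitrary example port:

```shell
# Run the proxy on port 8088 instead of the default 7077
UI_ANNOTATOR_PORT=8088 npx @mcpware/ui-annotator
# Then open http://localhost:8088/localhost:YOUR_PORT
```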
| Project | What it does | Install |
|---|---|---|
| Instagram MCP | 23 Instagram Graph API tools — posts, comments, DMs, stories, analytics | npx @mcpware/instagram-mcp |
| Claude Code Organizer | Visual dashboard for Claude Code memories, skills, MCP servers, hooks | npx @mcpware/claude-code-organizer |
| Pagecast | Record browser sessions as GIF or video via MCP | npx @mcpware/pagecast |
| LogoLoom | AI logo design → SVG → full brand kit export | npx @mcpware/logoloom |
MIT