Server data from the Official MCP Registry
Agent orchestration + org memory via MCP. Manage initiatives and decisions from any client.
Remote endpoints:
- streamable-http: https://mcp.useorgx.com/
- sse: https://mcp.useorgx.com/
Valid MCP server (2 strong, 1 medium validity signals). 1 code issue detected. 3 known CVEs in dependencies. Imported from the Official MCP Registry.
Endpoint verified · Requires authentication · 7 issues found
Remote Plugin
No local installation needed. Your AI client connects to the remote endpoint directly.
Add this to your MCP configuration to connect:

```json
{
  "mcpServers": {
    "com-useorgx-orgx-mcp": {
      "url": "https://mcp.useorgx.com/sse"
    }
  }
}
```

From the project's GitHub README.
A Cloudflare Workers deployment that exposes OrgX initiatives, milestones, tasks, org snapshots, and Stripe upgrades over the Model Context Protocol (MCP). The worker reuses the Next.js API routes inside this repo, so shipped business logic stays in one place.
OrgX MCP connects Claude and other MCP-capable clients to OrgX so users can:
Key tools (see server.json for the full surface):

| Tool | Purpose |
|---|---|
| get_pending_decisions | List decisions awaiting approval. |
| approve_decision / reject_decision | Resolve a pending decision inline. |
| get_initiative_pulse | Health, milestones, blockers, recent activity for one initiative. |
| get_agent_status | What every OrgX agent is doing right now. |
| scaffold_initiative | Create a full initiative → workstreams → milestones → tasks tree in one call. |
| create_entity / create_task / create_milestone / create_decision | Add individual entities without the full scaffold. |
| entity_action | Lifecycle transitions (launch, pause, complete, archive) on any entity. |
| query_org_memory | Search prior decisions, initiatives, artifacts. |
| spawn_agent_task | Delegate to a specialist agent (rate-limited, quality-gated). |
| get_morning_brief | Latest autonomous session brief with ROI deltas. |
| get_org_snapshot | Compact or detailed org readout for onboarding and status. |
Full tool contract: server.json at the repo root — 35+ tools with OAuth
scopes, input schemas, and OpenAI widget metadata. Call orgx_describe_tool
from any MCP client to inspect a live contract.
Every state/action tool ships a matching widget via MCP Apps (Claude) and
Skybridge (ChatGPT). Resources: ui://widget/decisions.html,
ui://widget/initiative-pulse.html, ui://widget/agent-status.html,
ui://widget/scaffolded-initiative.html, ui://widget/task-spawned.html,
ui://widget/morning-brief.html, plus their skybridge variants.
TBD — the orgx-mcp repo is currently unlicensed pending an organization-wide
decision. Reach out to reviewers@useorgx.com if you need terms before we
publish a LICENSE file.
This repository is the canonical source for the OrgX MCP worker.
The canonical public GitHub location is https://github.com/useorgx/orgx-mcp.
External listings, package metadata, review docs, and launch collateral should use
the useorgx organization and must not link to legacy OrgX-ai or orgx-ai
GitHub surfaces.
The copy inside useorgx/orgx at orgx/workers/orgx-mcp is a vendored mirror used for monorepo integration and verification. After worker changes land here, sync them into the monorepo with:
```shell
pnpm sync:orgx
```
Use pnpm sync:orgx:check to confirm the monorepo mirror is current before opening or merging a PR.
Prerequisites:
- pnpm (matches the repo's package manager)
- MCP_SERVICE_KEY (Vercel) / ORGX_SERVICE_KEY (Worker secret)
- ORGX_API_URL
- MCP_JWT_SECRET (Worker secret)
- Integration secrets (STRIPE_*, SUPABASE_*)

Note: OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET are NOT needed. OAuth clients (like ChatGPT) register dynamically via POST /register and get their credentials stored in the OAuthState Durable Object.
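For intuition, a dynamic registration request body (RFC 7591 style) might look like the sketch below. The field values are illustrative assumptions; only the POST /register endpoint itself is documented here.

```shell
# Hypothetical dynamic client registration payload (values are placeholders).
reg_body=$(cat <<'EOF'
{
  "client_name": "example-mcp-client",
  "redirect_uris": ["https://claude.ai/api/mcp/auth_callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_method": "none"
}
EOF
)
echo "$reg_body"
# A client would then POST it, roughly:
# curl -s -X POST https://mcp.useorgx.com/register \
#   -H "Content-Type: application/json" -d "$reg_body"
```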
```shell
# From the repo root
pnpm install
cp .dev.vars.example .dev.vars   # customize once, ignored by git
pnpm dev                         # runs wrangler dev on http://127.0.0.1:8787
```
wrangler.toml stays out of git; all local secrets live in .dev.vars (same format as wrangler secret put). Example contents:
```
ORGX_API_URL="http://localhost:3000"
ORGX_SERVICE_KEY="oxk-..."
MCP_JWT_SECRET="your-32-byte-secret"
```
When running pnpm dev, Wrangler automatically loads .dev.vars, so the worker can mint JWTs and proxy to the local Next.js API.
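For a sense of what "minting JWTs" with MCP_JWT_SECRET involves, here is a minimal HS256 signing sketch in shell. The secret and claim names are placeholders, not the worker's actual claim set.

```shell
# Minimal HS256 JWT sketch (claims and secret are illustrative assumptions).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

secret="your-32-byte-secret"
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"sub":"user-123","exp":%s}' "$(( $(date +%s) + 3600 ))" | b64url)
sig=$(printf '%s.%s' "$header" "$payload" | \
  openssl dgst -sha256 -hmac "$secret" -binary | b64url)
jwt="$header.$payload.$sig"
echo "$jwt"
```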
```shell
# From the repo root
pnpm install --frozen-lockfile
pnpm wrangler deploy                 # prod
pnpm wrangler deploy --env preview   # staging (uses [env.preview])
```
Before deploying, seed Cloudflare secrets once per environment:
```shell
pnpm wrangler secret put ORGX_SERVICE_KEY --env production
pnpm wrangler secret put MCP_JWT_SECRET --env production
```
These secrets are NOT overwritten by wrangler deploy (unlike vars in wrangler.toml).
CI expects matching GitHub Secrets:
- ORGX_SERVICE_KEY
- MCP_JWT_SECRET

The public MCP entrypoint is the bare root URL:
- POST / – streamable HTTP for MCP clients
- GET / – SSE when the client requests text/event-stream

The raw /mcp and /sse routes are still used internally, but they sit behind the OAuth provider and are not the recommended discovery URLs for external clients.
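As a sketch of what a client sends to that root URL, the JSON-RPC initialize request below is illustrative; the protocolVersion and clientInfo values are assumptions, not tied to this worker.

```shell
# Illustrative MCP initialize request body (values are assumptions).
init_req='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.1"}}}'
echo "$init_req"
# A client would POST it to the root endpoint, roughly:
# curl -s -X POST https://mcp.useorgx.com/ \
#   -H "Content-Type: application/json" \
#   -H "Accept: application/json, text/event-stream" \
#   -H "Authorization: Bearer $ACCESS_TOKEN" \
#   -d "$init_req"
```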
For local MCP clients like Cursor and Claude, point mcp-remote at the root MCP URL.
Hosted config discovery endpoints are metadata-only. Any local installer must prompt
before writing files, keep generated Cursor assets under .cursor/orgx/, and avoid
writing OrgX files under .cursor/commands/, .cursor/rules/, or .claude/.
Add the worker to Cursor's MCP config (macOS/Linux ~/.cursor/mcp.json):
```json
{
  "mcpServers": {
    "orgx": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.useorgx.com/",
        "--header",
        "Authorization: Bearer <access-token>"
      ]
    }
  }
}
```
Quick CLI test:
```shell
npx mcp-remote https://mcp.useorgx.com/ \
  --header "Authorization: Bearer <access-token>" \
  --health-check
```
The worker implements the full MCP OAuth 2.1 spec with PKCE:
- POST /register - clients like ChatGPT register and receive unique credentials
- GET /authorize - redirects to Clerk (OrgX web) for user authentication
- POST /token - exchanges authorization codes for JWT access tokens
- Refresh tokens are issued when the offline_access scope is requested

OAuth client credentials are stored in the OAuthState Durable Object (not environment variables).
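The PKCE half of that flow can be sketched locally. This is a generic S256 verifier/challenge derivation per RFC 7636, not code from this repo:

```shell
# Generate a PKCE code_verifier and derive its S256 code_challenge.
code_verifier=$(openssl rand -base64 48 | tr '+/' '-_' | tr -d '=\n')
code_challenge=$(printf '%s' "$code_verifier" | \
  openssl dgst -sha256 -binary | openssl base64 -A | tr '+/' '-_' | tr -d '=')
echo "code_verifier=$code_verifier"
echo "code_challenge=$code_challenge"
```

The client sends code_challenge with GET /authorize and later proves possession of code_verifier at POST /token.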
Durable Objects (OrgXMcp class) keep each MCP session isolated so both transports can run simultaneously.
Reviewers need:
Required callback URLs:
- http://localhost:6274/oauth/callback
- http://localhost:6274/oauth/callback/debug
- https://claude.ai/api/mcp/auth_callback
- https://claude.com/api/mcp/auth_callback

The reviewer environment is prepared inside the OrgX web app, not inside the MCP worker.
Authenticated OrgX routes for the dedicated reviewer account:
- GET https://useorgx.com/api/review/anthropic/status
- POST https://useorgx.com/api/review/anthropic/bootstrap
- POST https://useorgx.com/api/review/anthropic/reset

These routes operate only on the currently authenticated user's dedicated Anthropic Review Workspace. Use the reviewer runbook for the exact bootstrap/reset flow and the prompt matrix Anthropic should exercise.
- pnpm dev (uses .dev.vars)
- npx mcp-remote ... --health-check to verify the session can list tools
- Cursor config (~/.cursor/mcp.json)

This worker ships a deterministic E2E flow you can run live from any MCP client (real OrgX APIs, no mocks):
- thursday-e2e (primary). Scaffolds an initiative, creates a pending decision, approves it, spawns an agent task, and renders the widgets.
- thursday-e2e-demo (backwards-compat). Same flow as thursday-e2e.

Context survival notes:
Widget protocol notes:
- openai/outputTemplate + text/html+skybridge (Skybridge)
- ui.resourceUri + text/html;profile=mcp-app (MCP Apps)

User prompt: Show me the pending decisions that need approval today.
Expected behavior: The worker calls get_pending_decisions, returns seeded decisions for the authenticated workspace, and renders the decisions widget in compatible hosts.
User prompt: Give me the pulse for the Search Copilot Readiness initiative.
Expected behavior: The worker calls get_initiative_pulse, returns milestones, blockers, and activity, and renders the initiative pulse widget in compatible hosts.
User prompt: Scaffold a launch initiative with two workstreams, one milestone each, and two tasks per milestone.
Expected behavior: The worker calls scaffold_initiative, creates the nested hierarchy in OrgX, and returns the scaffold widget with the created initiative tree.
User prompt: Assign the engineering agent a task to audit the onboarding funnel.
Expected behavior: The worker calls spawn_agent_task, records the assignment in OrgX, and returns the task or handoff result.
### batch_create_entities: IDs + ref dependency resolution

batch_create_entities now returns created IDs in a machine-usable form (and includes them in the plain text response for LLM clients that drop structured payloads).
It also supports caller-provided ref keys and *_ref relationship fields so you can create a full hierarchy in a single call (initiative → workstream → milestone → task):
```json
{
  "entities": [
    {
      "type": "workstream",
      "ref": "ws-query",
      "title": "AI Query Discovery",
      "initiative_id": "e46bb475-..."
    },
    {
      "type": "milestone",
      "ref": "ms-queries",
      "title": "30+ Queries Mapped",
      "initiative_id": "e46bb475-...",
      "workstream_ref": "ws-query"
    },
    {
      "type": "task",
      "title": "Brainstorm 50 ICP queries",
      "initiative_id": "e46bb475-...",
      "workstream_ref": "ws-query",
      "milestone_ref": "ms-queries"
    }
  ]
}
```
Supported relationship refs (when the corresponding *_id is omitted): initiative_ref, workstream_ref, milestone_ref, command_center_ref, project_ref, objective_ref, run_ref.
### scaffold_initiative: Nested hierarchy in 1 call

For the common case of creating an initiative plus its full hierarchy, use scaffold_initiative:
```json
{
  "title": "AI Legibility Foundation",
  "auto_plan": false,
  "launch_after_create": true,
  "workstreams": [
    {
      "title": "AI Query Discovery",
      "milestones": [
        {
          "title": "30+ ICP Queries Mapped",
          "tasks": [
            { "title": "Brainstorm 50 ICP queries" },
            { "title": "Score + prioritize top 30" }
          ]
        }
      ]
    }
  ]
}
```
When workstreams are provided, scaffold_initiative now preserves that explicit hierarchy and disables initiative auto-planning by default (auto_plan: false) so OrgX does not generate a second overlapping structure on top of the scaffold. If you omit workstreams, auto-planning remains enabled by default so a planner can synthesize the hierarchy later.
launch_after_create still defaults to true, so stream dispatch can begin immediately after the scaffold is created. Set launch_after_create: false to keep the initiative in draft state after scaffold creation.
The tool returns a nested hierarchy with IDs (plus created[], failed[], ref_map, and launch outcome metadata for chaining).
### list_entities: hierarchy-scoped reads

list_entities supports hierarchy filters so clients can read one branch without reconstructing the tree client-side:
- initiative_id for workstream, milestone, task, stream, decision
- workstream_id for milestone, task, stream, decision
- milestone_id for task

The fields parameter also accepts generic aliases such as title and summary; OrgX maps them to the correct storage columns per entity type (for example, workstream uses name under the hood).
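As an illustration, a list_entities call scoped to one initiative might look like the sketch below; the envelope shape is an assumption beyond the documented filter and field names (the UUID placeholder is reused from earlier examples):

```json
{
  "type": "task",
  "initiative_id": "e46bb475-...",
  "fields": ["title", "summary"]
}
```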
Contract note: the canonical behavior for initiative creation and hierarchy reads lives in the OrgX API. This worker must mirror that contract, especially auto_plan defaults, supported hierarchy filters, and generic field alias handling.
### context[] pointers on core entities

The following entity types persist a context JSON array: initiative, workstream, milestone, task.
Each entry is a pointer with an optional relevance note (pointers, not payloads):
```json
{
  "type": "task",
  "title": "Write /use-cases/solo-technical-founders page",
  "context": [
    {
      "type": "url",
      "uri": "https://...",
      "label": "Research doc",
      "relevance": "Query targets + competitor gaps"
    },
    {
      "type": "entity",
      "entity_type": "milestone",
      "entity_id": "ab0e929c-...",
      "relevance": "Use audit output"
    },
    {
      "type": "plan_session",
      "session_id": "plan-abc123",
      "section": "## Content Strategy",
      "relevance": "Decision rationale"
    }
  ]
}
```
To hydrate these pointers for execution, use get_task_with_context (task-focused) or list_entities with id + hydrate_context=true (generic).
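A hydration read via list_entities might look like this sketch; the parameter shape is assumed from the names above:

```json
{
  "type": "task",
  "id": "task-xyz",
  "hydrate_context": true
}
```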
### complete_plan.attach_to

complete_plan supports attach_to to automatically add a plan_session pointer into target entities’ context[]:
```json
{
  "session_id": "plan-abc123",
  "implementation_summary": "Shipped batch scaffolding improvements",
  "attach_to": [
    { "entity_type": "initiative", "entity_id": "e46bb475-..." },
    {
      "entity_type": "task",
      "entity_id": "task-xyz",
      "section": "## Content Strategy"
    }
  ]
}
```
The MCP worker uses GitHub Actions for automated deployment and registry publishing.
Deployments are triggered automatically:
| Trigger | Environment | Registry Publish |
|---|---|---|
| Push to main | Production | No |
| GitHub Release published | Production | Yes |
| Manual workflow dispatch | Configurable | Optional |
Set these secrets in your GitHub repository settings:
| Secret | Description | How to Get |
|---|---|---|
| CLOUDFLARE_API_TOKEN | Cloudflare API token with Workers permissions | Cloudflare Dashboard |
| CLOUDFLARE_ACCOUNT_ID | Your Cloudflare account ID | Cloudflare Dashboard → Workers |
| ORGX_SERVICE_KEY | Service key for OrgX API | OrgX Admin Settings |
| MCP_JWT_SECRET | JWT signing secret (32+ bytes) | Generate with openssl rand -hex 32 |
| MCP_REGISTRY_PUBKEY | Ed25519 public key for registry | Generated below |
| MCP_REGISTRY_PRIVATE_KEY | Ed25519 private key (hex) for registry | Generated below |
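The MCP_JWT_SECRET recipe from the table can be run and sanity-checked directly:

```shell
# Generate a 32-byte (64 hex chars) JWT signing secret.
jwt_secret=$(openssl rand -hex 32)
echo "$jwt_secret"
```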
Use the release script to bump versions and create tags:
```shell
# From the repo root

# Patch release (1.0.0 -> 1.0.1)
pnpm release:patch

# Minor release (1.0.0 -> 1.1.0)
pnpm release:minor

# Major release (1.0.0 -> 2.0.0)
pnpm release:major

# Or specify exact version
pnpm release 2.0.0
```
Then push and create the GitHub release:
```shell
# Push commit and tag
git push && git push origin mcp-v1.0.0

# Create GitHub release (triggers deploy + registry publish)
gh release create mcp-v1.0.0 --generate-notes --title "OrgX MCP v1.0.0"
```
Trigger deployment manually from GitHub Actions:
OrgX MCP server is listed in the official MCP Registry at com.useorgx/orgx-mcp. This section documents how to update the registry listing.
mcp-publisher CLI - Install via:
# macOS/Linux
```shell
# macOS/Linux
curl -L "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher
sudo mv mcp-publisher /usr/local/bin/

# Or via Homebrew
brew install modelcontextprotocol/tap/mcp-publisher
```
Domain verification - Must verify ownership of useorgx.com
Generate Ed25519 keypair:
```shell
# From the repo root
./scripts/generate-registry-keys.sh
```
This creates files in keys/ (gitignored):
- mcp-registry.pem - Private key (keep secure!)
- http-well-known.txt - Public key for HTTP verification

Set up HTTP domain verification:
```shell
# Set the public key as a Cloudflare secret
wrangler secret put MCP_REGISTRY_PUBKEY
# Paste the base64 public key from keys/http-well-known.txt

# Deploy the worker
pnpm wrangler deploy

# Verify it works (must be reachable on apex for com.useorgx/*)
curl https://useorgx.com/.well-known/mcp-registry-auth
# Should return: v=MCPv1; k=ed25519; p=<your-pubkey>

# (Optional) Also available on:
# curl https://www.useorgx.com/.well-known/mcp-registry-auth
# curl https://mcp.useorgx.com/.well-known/mcp-registry-auth
```
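If you want to check the returned record programmatically, a small parsing sketch follows; the pubkey value here is a placeholder assumption:

```shell
# Parse the v=/k=/p= fields of an mcp-registry-auth record.
record='v=MCPv1; k=ed25519; p=AAAAC3NzaC1lZDI1NTE5'
ver=$(printf '%s' "$record" | sed -n 's/.*v=\([^;]*\).*/\1/p')
key_type=$(printf '%s' "$record" | sed -n 's/.*k=\([^;]*\).*/\1/p')
pubkey=$(printf '%s' "$record" | sed -n 's/.*p=\([^;]*\).*/\1/p')
echo "$ver $key_type $pubkey"
```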
Login to registry:
```shell
# Read private key hex
PRIVKEY=$(grep -v '^#' keys/private-key-hex.txt | tr -d '[:space:]')

# Login with HTTP verification
mcp-publisher login http --domain=useorgx.com --private-key="$PRIVKEY"
```
When updating server.json (e.g., adding new tools), publish to the registry:
```shell
# From the repo root

# Validate first (always do this!)
./scripts/publish-to-registry.sh --dry-run

# Publish for real
./scripts/publish-to-registry.sh
```
Run this after deploys (or metadata/auth changes) to verify core MCP + registry endpoints:
```shell
# From the repo root
pnpm smoke:endpoints
```
Checks include:
- /healthz
- /.well-known/oauth-authorization-server
- /.well-known/oauth-protected-resource
- /.well-known/mcp-registry-auth on both mcp.useorgx.com and useorgx.com

The server.json file describes OrgX MCP for the registry:
```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "com.useorgx/orgx-mcp",
  "description": "AI agent orchestration and organizational memory...",
  "version": "1.0.3",
  "remotes": [
    { "type": "streamable-http", "url": "https://mcp.useorgx.com/" },
    { "type": "sse", "url": "https://mcp.useorgx.com/" }
  ],
  "tools": [...],
  "resources": [...],
  "prompts": [...]
}
```
Key points:
- name uses the com.useorgx/* namespace (requires useorgx.com domain verification)
- Both streamable-http and sse transports are listed
- Bump version when making changes

"Domain verification failed"
- Check that the MCP_REGISTRY_PUBKEY secret is set correctly
- Test with curl https://useorgx.com/.well-known/mcp-registry-auth
- https://useorgx.com/.well-known/mcp-registry-auth must return 200 directly (no 3xx to www).

"Schema validation failed"
- Run mcp-publisher validate to see detailed errors
- Check server.json against the schema

"Rate limited"
See docs/privacy-policy.md for the repository-level policy covering the hosted MCP worker. Public link: https://github.com/useorgx/orgx-mcp/blob/main/docs/privacy-policy.md
See docs/security-data-handling.md for the operational security summary, OAuth callback allowlist requirements, and reviewer handling guidance.
Submission and reviewer checklist: docs/anthropic-directory.md Reviewer runbook: docs/anthropic-reviewer-runbook.md Release manager checklist: docs/anthropic-release-manager-checklist.md
Pre-submit repo check:
```shell
pnpm directory:preflight
```
Operational reviewer check:
- GET https://useorgx.com/api/review/anthropic/status
- account_upgrade returns a checkout or contact URL; it does not silently purchase a plan.