Server data from the Official MCP Registry
Go from natural language to verified finite state machines — topology bugs caught before code runs.
Valid MCP server (2 strong, 2 medium validity signals). No known CVEs in dependencies. ⚠️ Package registry links to a different repository than the scanned source. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
13 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: ANTHROPIC_API_KEY
Environment variable: ORCA_PROVIDER
Environment variable: ORCA_MODEL
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-jascal-orca-mcp-server": {
"env": {
"ORCA_MODEL": "your-orca-model-here",
"ORCA_PROVIDER": "your-orca-provider-here",
"ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
},
"args": [
"-y",
"@orcalang/orca-mcp-server"
],
"command": "npx"
}
}
}

From the project's GitHub README:
Orchestrated State Machine Language — a two-layer architecture for reliable LLM code generation.
The core insight: LLMs generate flat transition tables reliably, but they struggle to guarantee topology correctness on their own. Orca separates program structure (state machine topology) from computation (action functions), then verifies the structure automatically before any code runs.
Machines are written in plain Markdown — a format LLMs can read and write natively.
# machine PaymentProcessor
## context
| Field | Type | Default |
|-------------|---------|---------|
| order_id | string | |
| amount | decimal | |
| retry_count | int | 0 |
## events
- submit_payment
- payment_authorized
- payment_declined
- retry_requested
- settlement_confirmed
## state idle [initial]
> Waiting for a payment submission
## state authorizing
> Waiting for payment gateway response
- on_entry: send_authorization_request
## state declined
> Payment was declined
## state settled [final]
> Payment fully settled
## transitions
| Source | Event | Guard | Target | Action |
|-------------|----------------------|------------|-------------|------------------|
| idle | submit_payment | | authorizing | |
| authorizing | payment_authorized | | settled | |
| authorizing | payment_declined | | declined | |
| declined | retry_requested | can_retry | authorizing | increment_retry |
| declined | retry_requested | !can_retry | settled | record_failure |
## guards
| Name | Expression |
|-----------|-------------------------|
| can_retry | `ctx.retry_count < 3` |
## actions
| Name | Signature | Effect |
|----------------------------|------------------------------------------|-------------|
| send_authorization_request | `(ctx) -> Context` | AuthRequest |
| increment_retry | `(ctx) -> Context` | |
| record_failure | `(ctx) -> Context` | |
## effects
| Name | Input | Output |
|-------------|------------------------------------|--------------------------|
| AuthRequest | `{ order_id: string, amount: decimal }` | `{ token: string }` |
The verifier checks this before anything runs: reachability, deadlocks, guard determinism, orphan declarations, and effect consistency.
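These structural checks are plain graph algorithms. A minimal sketch of the reachability and deadlock checks over a flat transition table (illustrative only, not Orca's actual implementation; the type and function names are invented):

```typescript
type Transition = { source: string; event: string; target: string };

// Reachability: BFS from the initial state over the transition graph.
function unreachableStates(states: string[], initial: string, transitions: Transition[]): string[] {
  const seen = new Set([initial]);
  const queue = [initial];
  while (queue.length > 0) {
    const s = queue.shift()!;
    for (const t of transitions) {
      if (t.source === s && !seen.has(t.target)) {
        seen.add(t.target);
        queue.push(t.target);
      }
    }
  }
  return states.filter((s) => !seen.has(s));
}

// Deadlock: a non-final state with no outgoing transitions can never progress.
function deadlockedStates(states: string[], finals: Set<string>, transitions: Transition[]): string[] {
  return states.filter((s) => !finals.has(s) && !transitions.some((t) => t.source === s));
}

// The PaymentProcessor topology from the example above:
const states = ["idle", "authorizing", "declined", "settled"];
const transitions: Transition[] = [
  { source: "idle", event: "submit_payment", target: "authorizing" },
  { source: "authorizing", event: "payment_authorized", target: "settled" },
  { source: "authorizing", event: "payment_declined", target: "declined" },
  { source: "declined", event: "retry_requested", target: "authorizing" },
  { source: "declined", event: "retry_requested", target: "settled" },
];

console.log(unreachableStates(states, "idle", transitions)); // []
console.log(deadlockedStates(states, new Set(["settled"]), transitions)); // []
```

Each check visits every state and transition a bounded number of times, so it terminates in O(states + transitions).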
Language
- `[initial]` / `[final]` markers, state descriptions, `on_entry` / `on_exit` actions
- `all-final` / `any-final` / custom sync strategies for parallel regions
- Timeouts: `timeout: 30s -> state_name`
- Ignored events: `ignore: EVENT_NAME`
- Multiple machines per `.orca.md` file, separated by `---`
- `## effects` section: named I/O schemas for external side effects

Verifier
- Property checks: `reachable`, `unreachable`, `passes_through`, `live`, `responds`, `invariant`
- Effect checks: `ORPHAN_EFFECT` (declared but unused) and `UNDECLARED_EFFECT` (referenced but not declared)

Compilers
- XState: `createMachine()` config
- Mermaid: `stateDiagram-v2`

Runtimes (standalone — no XState dependency)
- TypeScript (`@orcalang/orca-runtime-ts`)
- Python (`orca-runtime-python`)
- Go (`orca-runtime-go`)

All three runtimes share the same feature set: guard evaluation, action registration, event bus (pub/sub + request/response), timeouts, parallel regions, snapshot/restore, machine invocation, persistence, and structured logging.
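The event bus pattern the runtimes share (pub/sub plus request/response) can be sketched in a few lines. This is a toy illustration, not the real `EventBus` API; the `TinyBus` name and reply-topic convention are invented:

```typescript
type Handler = (payload: unknown) => void;

// Minimal pub/sub bus; request/response is layered on top of publish/subscribe.
class TinyBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    for (const handler of this.handlers.get(topic) ?? []) handler(payload);
  }

  // Request/response: publish a request carrying a unique reply topic,
  // then resolve with the first message published to that topic.
  request(topic: string, payload: unknown): Promise<unknown> {
    return new Promise((resolve) => {
      const replyTopic = `${topic}:reply:${Math.random().toString(36).slice(2)}`;
      this.subscribe(replyTopic, resolve);
      this.publish(topic, { payload, replyTopic });
    });
  }
}

// A responder answers auth requests on each request's reply topic.
const bus = new TinyBus();
bus.subscribe("auth.request", (msg) => {
  const { replyTopic } = msg as { replyTopic: string };
  bus.publish(replyTopic, { token: "tok_123" });
});
bus.request("auth.request", { amount: 99.99 }).then((reply) => console.log(reply));
```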
packages/
orca-lang/ Core: parser, verifier, XState/Mermaid compiler, CLI
runtime-ts/ TypeScript runtime
runtime-python/ Python async runtime
runtime-go/ Go runtime
demo-ts/ Text adventure game (uses runtime-ts)
demo-python/ Agent framework scenarios (uses runtime-python)
demo-go/ Ride-hailing coordinator — 5 machines (uses runtime-go)
demo-nanolab/ nanoGPT training orchestrator — 5 machines (uses runtime-python)
mcp-server/ MCP server exposing Orca tools to Claude and other agents
# TypeScript packages
pnpm install
pnpm build
# Python packages (runtime + demos, requires Python >= 3.11)
pnpm run setup:python
# Go packages
pnpm run setup:go
pnpm run build:demo-go
cd packages/orca-lang
# Verify a machine
npx tsx src/index.ts verify examples/payment-processor.orca.md
# Compile to XState
npx tsx src/index.ts compile xstate examples/payment-processor.orca.md
# Compile to Mermaid
npx tsx src/index.ts compile mermaid examples/text-adventure.orca.md
# Convert legacy .orca to .orca.md
# npx tsx src/index.ts convert <path-to-legacy.orca>
## state processing [parallel]
> Payment and notification run concurrently
- on_done: -> completed
### region payment_flow
#### state charging [initial]
#### state paid [final]
### region notification_flow
#### state sending_email [initial]
#### state notified [final]
The machine transitions to completed when both regions reach their final state (all-final sync, the default).
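Under those semantics, the sync check itself is tiny: a parallel state is done when every region (all-final) or any region (any-final) currently sits in one of its final states. A sketch, with the `Region` shape invented for illustration:

```typescript
type Region = { current: string; finals: Set<string> };

// all-final: done only when every region has reached a final state.
const allFinal = (regions: Region[]): boolean =>
  regions.every((r) => r.finals.has(r.current));

// any-final: done as soon as one region finishes.
const anyFinal = (regions: Region[]): boolean =>
  regions.some((r) => r.finals.has(r.current));

const regions: Region[] = [
  { current: "paid", finals: new Set(["paid"]) },              // payment_flow: done
  { current: "sending_email", finals: new Set(["notified"]) }, // notification_flow: still running
];

console.log(allFinal(regions)); // false — notification_flow has not reached `notified`
console.log(anyFinal(regions)); // true
```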
---
# machine OrderCoordinator
## state processing_payment
- invoke: PaymentProcessor
- on_done: payment_confirmed
- on_error: payment_failed
---
# machine PaymentProcessor
## state idle [initial]
## state settled [final]
...
The parent owns the child's lifecycle: starts it on entry, stops it on exit. The child's context is isolated from the parent's.
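That ownership rule can be sketched as follows. The `Child` interface and `InvokingState` class are hypothetical names for illustration, not the runtime's API:

```typescript
// Hypothetical child-machine interface, for illustration only.
interface Child {
  start(): void;
  stop(): void;
  onDone(cb: () => void): void;
}

class InvokingState {
  constructor(private child: Child, private send: (event: string) => void) {}

  // Parent enters processing_payment: start the child and map its
  // completion onto the parent's on_done event.
  enter(): void {
    this.child.onDone(() => this.send("payment_confirmed"));
    this.child.start();
  }

  // Parent leaves the state for any reason: the child is stopped with it.
  exit(): void {
    this.child.stop();
  }
}
```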
## state waiting_for_response
> LLM call in progress
- timeout: 30s -> timed_out
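A sketch of the assumed timeout semantics: arm a timer on state entry, disarm it on exit, and deliver a timeout event if the timer fires first. The function and event names here are invented for illustration:

```typescript
// Arm a timer on state entry; the returned function disarms it on exit.
function armTimeout(ms: number, send: (event: string) => void): () => void {
  const timer = setTimeout(() => send("__timeout__"), ms);
  return () => clearTimeout(timer); // call this when the state is exited
}

// Entering waiting_for_response arms a 30s timer; exiting disarms it.
const disarm = armTimeout(30_000, (event) => console.log("transition to timed_out on", event));
disarm(); // the LLM responded in time, so the timeout never fires
```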
All runtimes support saving and restoring machine state:
// Save
const snap = machine.snapshot();
persistence.save('run-id', snap);
// Resume later (without re-running on_entry)
const restored = persistence.load('run-id');
await machine.resume(restored);
import { MultiSink, FileSink, ConsoleSink, makeEntry } from '@orcalang/orca-runtime-ts';
const sink = new MultiSink(new ConsoleSink(), new FileSink('audit.jsonl'));
const m = new OrcaMachine(def, bus, {
onTransition: (oldState, newState) => {
sink.write(makeEntry({ runId, machine: def.name, from: oldState.toString(), to: newState.toString(), ... }));
}
});
import { parseOrcaAuto, OrcaMachine, EventBus } from '@orcalang/orca-runtime-ts';
const def = parseOrcaAuto(source);
const bus = new EventBus();
const machine = new OrcaMachine(def, bus);
machine.registerAction('send_authorization_request', (ctx, event) => {
return { ...ctx, payment_token: 'tok_123' };
});
machine.start();
machine.send({ type: 'submit_payment', payload: { order_id: 'ord_1', amount: 99.99 } });
from orca_runtime_python import parse_orca_auto, OrcaMachine, EventBus
def_ = parse_orca_auto(source)
bus = EventBus()
machine = OrcaMachine(def_, bus)
@machine.register_action('send_authorization_request')
async def send_auth(ctx, event):
return {**ctx, 'payment_token': 'tok_123'}
await machine.start()
await machine.send({'type': 'submit_payment', 'payload': {'order_id': 'ord_1', 'amount': 99.99}})
import "orca-runtime-go/orca_runtime_go"
def, _ := orca_runtime_go.ParseOrcaAuto(source)
bus := orca_runtime_go.NewEventBus()
machine := orca_runtime_go.NewOrcaMachine(def, bus, nil, nil)
machine.RegisterAction("send_authorization_request", func(ctx map[string]any, event orca_runtime_go.Event) map[string]any {
ctx["payment_token"] = "tok_123"
return ctx
})
machine.Start()
machine.Send(orca_runtime_go.Event{Type: "submit_payment"})
# Text adventure (TypeScript) — interactive CLI
cd packages/demo-ts && pnpm run cli
# Smoke test (non-interactive)
pnpm test:demo-ts
# Agent framework (Python)
pnpm run test:demo-python
# Ride-hailing coordinator (Go) — runs FareSettlement end-to-end
pnpm run test:demo-go
# With snapshot/resume:
cd packages/demo-go && ./trip --resume
# nanoGPT training orchestrator (Python, no torch required for tests)
pnpm run test:demo-nanolab
# nanoGPT training with PyTorch (GPU support)
# Install torch with GPU support, then run the full pipeline
.venv/bin/pip install torch torchvision torchaudio numpy requests
pnpm run run:demo-nanolab
# All TypeScript packages
pnpm test
# Core language only
pnpm test:lang
# Go runtime
cd packages/runtime-go && go test ./...
# Python runtime
cd packages/orca-lang && ../../.venv/bin/python -m pytest ../runtime-python/tests/ -v
# nanolab tests
pnpm run test:demo-nanolab
Test counts: 233 orca-lang · 63 runtime-ts · 87 runtime-python · 16 runtime-go · 47 demo-nanolab
All in packages/orca-lang/examples/:
| File | Description |
|---|---|
| simple-toggle.orca.md | Minimal 2-state machine |
| payment-processor.orca.md | Guards, retries, effects |
| text-adventure.orca.md | Multi-state game engine |
| hierarchical-game.orca.md | Nested compound states |
| parallel-order.orca.md | Parallel regions with sync |
| payment-with-properties.orca.md | Bounded model checking properties |
| key-exchange.orca.md | Multi-machine: client/server key exchange protocol |
| invocation-order.orca.md | Multi-machine: order processing with child invocations |
| saas-auth.orca.md | SaaS authentication and registration flow |
| health-check.orca.md | Health check machine used by the dogfood runner |
| simple-discount.orca.md | Minimal standalone decision table |
| payment-routing.orca.md | Payment gateway router decision table |
| shipping-rules.orca.md | Shipping cost calculator decision table |
| payment-with-routing.orca.md | Combined machine + decision table |
See DECISION_TABLES.md for a full guide to decision tables.
Orca ships six Claude Code skills backed by the @orcalang/orca-mcp-server MCP server. The skills call MCP tools directly — no shell or file access needed.
| Skill | Trigger | What it does |
|---|---|---|
| /orca-generate | `<spec>` | Generate a verified machine from a natural language spec |
| /orca-generate-multi | `<spec>` | Generate a coordinated multi-machine system |
| /orca-verify | `[file]` | Verify a machine for errors and warnings |
| /orca-refine | `[file]` | Auto-fix verification errors using an LLM |
| /orca-compile | `[xstate\|mermaid]` `[file]` | Compile to XState TypeScript or Mermaid |
| /orca-actions | `[typescript\|python\|go]` `[file]` | Generate action scaffold stubs |
Skills that use an LLM (/orca-generate, /orca-generate-multi, /orca-refine, and optionally /orca-actions --use-llm) call the MCP server, which calls your configured LLM provider. Skills that are purely structural (/orca-verify, /orca-compile, plain /orca-actions) never make LLM calls.
| Variable | Required | Description |
|---|---|---|
| ORCA_API_KEY | Yes | API key for your LLM provider |
| ORCA_PROVIDER | Yes | `anthropic`, `openai`, `ollama`, or `grok` |
| ORCA_BASE_URL | No | Override the provider's default base URL (for OpenAI-compatible APIs) |
| ORCA_MODEL | No | Model name (defaults to `claude-sonnet-4-6` for Anthropic) |
Use ORCA_PROVIDER=openai with ORCA_BASE_URL for any OpenAI-compatible provider (MiniMax, Together, local vLLM, etc.).
Add the orca server to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
"mcpServers": {
"orca": {
"command": "npx",
"args": ["-y", "@orcalang/orca-mcp-server"],
"env": {
"ORCA_API_KEY": "<your-api-key>",
"ORCA_PROVIDER": "anthropic"
}
}
}
}
For an OpenAI-compatible provider (e.g. MiniMax):
{
"mcpServers": {
"orca": {
"command": "npx",
"args": ["-y", "@orcalang/orca-mcp-server"],
"env": {
"ORCA_API_KEY": "<your-api-key>",
"ORCA_PROVIDER": "openai",
"ORCA_BASE_URL": "https://api.minimaxi.chat/v1",
"ORCA_MODEL": "MiniMax-M2.7"
}
}
}
}
Restart Claude Desktop after editing. Skills in .claude/skills/ are discovered automatically when you open this repo.
Node.js version — Claude Desktop uses its own bundled Node.js, which may be older than the Node 18+ required by this package (ESM). If the server fails to start, add a `PATH` entry to `env` that puts your system Node's `bin` directory first — this is the most reliable way to ensure the right `npx` is found:

"env": { "PATH": "/usr/local/bin:/usr/bin:/bin", "ORCA_API_KEY": "..." }

Run `dirname $(which npx)` to find the correct path. On nvm it will be something like `~/.nvm/versions/node/v22.x.x/bin`.
Claude Code reads MCP server config from .mcp.json at the project root. This file is gitignored because it contains credentials — each developer creates their own.
Option A — use the published package (same as Desktop, no rebuild needed):
{
"mcpServers": {
"orca": {
"command": "npx",
"args": ["-y", "@orcalang/orca-mcp-server"],
"type": "stdio",
"env": {
"ORCA_API_KEY": "<your-api-key>",
"ORCA_PROVIDER": "anthropic"
}
}
}
}
Option B — use the local build (recommended for development — changes take effect after rebuild):
{
"mcpServers": {
"orca": {
"command": "node",
"args": ["/absolute/path/to/orca-lang/packages/mcp-server/dist/server.js"],
"type": "stdio",
"env": {
"ORCA_API_KEY": "<your-api-key>",
"ORCA_PROVIDER": "anthropic"
}
}
}
}
Build (or rebuild after changes):
pnpm --filter @orcalang/orca-mcp-server build
# or from the package directory:
cd packages/mcp-server && npx tsc
Create .mcp.json at the project root (it is already in .gitignore), then restart Claude Code. Skills are auto-discovered from .claude/skills/ — no additional configuration needed.
Node.js version — Claude Code may use an older Node.js than the Node 18+ required by this package (ESM). If the server fails to start, add a `PATH` entry to `env` that puts your system Node's `bin` directory first:

"env": { "PATH": "/usr/local/bin:/usr/bin:/bin", "ORCA_API_KEY": "..." }

Run `dirname $(which npx)` to find the correct path. On nvm it will be something like `~/.nvm/versions/node/v22.x.x/bin`.
The name comes from Orchestrated (state machine language), but the whale was in mind too: orcas are highly coordinated, hunt in structured pods, and divide roles precisely — which maps well to a multi-machine system where a coordinator directs child machines through well-defined protocols.
Disambiguation: There is another project called Orca — a visual live-coding environment for sequencing MIDI and audio events, built by Hundred Rabbits. It's excellent, completely unrelated, and worth knowing about if you work in music or creative coding. This project is a different thing entirely: a state machine language for software orchestration.
Yes, deliberately — and that's the point.
The halting problem says you cannot decide in general whether an arbitrary program will terminate. That result applies to Turing-complete computations. Finite state machines are not Turing-complete: they have a finite, explicitly enumerated set of states and transitions declared upfront, with no unbounded loops or dynamic control flow in the topology itself. Reachability and deadlock analysis on an FSM is just graph traversal — it always terminates in O(states + transitions).
Orca's verifier exploits this by only verifying the topology layer — the state machine structure — where decidability is guaranteed. It does not attempt to verify the computation layer — the action functions you write inside each state. Those functions can be as complex as you like, and Orca makes no claims about them.
The practical consequence: the verifier can give you hard guarantees about your program's control flow (every state is reachable, no deadlocks, every event is handled, guards are mutually exclusive) without requiring your business logic to be formally specified. The two-layer separation is what makes this tractable. You get real structural correctness, scoped to the part of the program that can actually be checked.
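The guard-determinism check is equally mechanical. A sketch under an assumed (simplified) rule: two transitions on the same state/event pair conflict unless their guards are the literal complements `g` and `!g`. The function name and `Row` shape are invented for illustration:

```typescript
type Row = { source: string; event: string; guard: string };

// Flag (source, event) pairs whose guards could both be true at once.
function nondeterministicPairs(rows: Row[]): string[] {
  const issues: string[] = [];
  const byKey = new Map<string, Row[]>();
  for (const r of rows) {
    const key = `${r.source}/${r.event}`;
    byKey.set(key, [...(byKey.get(key) ?? []), r]);
  }
  for (const [key, group] of byKey) {
    if (group.length < 2) continue;
    const guards = group.map((r) => r.guard);
    // Accept the complementary pair g / !g; anything else is ambiguous.
    const complementary =
      guards.length === 2 &&
      (guards[0] === `!${guards[1]}` || guards[1] === `!${guards[0]}`);
    if (!complementary) issues.push(key);
  }
  return issues;
}

// The retry pair from the PaymentProcessor example is deterministic:
console.log(nondeterministicPairs([
  { source: "declined", event: "retry_requested", guard: "can_retry" },
  { source: "declined", event: "retry_requested", guard: "!can_retry" },
])); // []
```

Like the reachability check, this examines a finite table and always terminates; only the guard bodies themselves (arbitrary expressions) sit outside the decidable layer.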