Server data from the Official MCP Registry
CI-1T prediction stability engine. Detect ghosts, evaluate drift, monitor fleets. 20 tools.
Valid MCP server (2 strong, 4 medium validity signals). 3 known CVEs in dependencies (0 critical, 3 high severity) Package registry verified. Imported from the Official MCP Registry.
3 files analyzed · 4 issues found
Set these up before or after installing:
Environment variable: CI1T_API_KEY
Environment variable: CI1T_BASE_URL
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-collapseindex-ci1t-mcp": {
"env": {
"CI1T_API_KEY": "your-ci1t-api-key-here",
"CI1T_BASE_URL": "your-ci1t-base-url-here"
},
"args": [
"-y",
"@collapseindex/ci1t-mcp"
],
"command": "npx"
}
}
}
From the project's GitHub README.
Version: 1.7.0
Last Updated: February 27, 2026
License: Proprietary
MCP (Model Context Protocol) server for the CI-1T prediction stability engine. Lets AI agents — Claude Desktop, Cursor, Windsurf, VS Code Copilot, and any MCP-compatible client — evaluate model stability, manage fleet sessions, and control API keys directly.
One credential. One env var. That's it.
| Tool | Description | Auth |
|---|---|---|
| `evaluate` | Evaluate prediction stability (floats or Q0.16) | API key |
| `fleet_evaluate` | Fleet-wide multi-node evaluation (floats or Q0.16) | API key |
| `probe` | Probe any LLM for instability (3x same prompt). BYOM mode: bring your own model via OpenAI-compatible API | API key or BYOM |
| `health` | Check CI-1T engine status | API key |
| `fleet_session_create` | Create a persistent fleet session | API key |
| `fleet_session_round` | Submit a scoring round | API key |
| `fleet_session_state` | Get session state (read-only) | API key |
| `fleet_session_list` | List active fleet sessions | API key |
| `fleet_session_delete` | Delete a fleet session | API key |
| `list_api_keys` | List user's API keys | API key |
| `create_api_key` | Generate and register a new API key | API key |
| `delete_api_key` | Delete an API key by ID | API key |
| `get_invoices` | Get billing history (Stripe) | API key |
| `onboarding` | Welcome guide + setup instructions | None |
| `interpret_scores` | Statistical breakdown of scores | None |
| `convert_scores` | Convert between floats and Q0.16 | None |
| `generate_config` | Integration boilerplate for any framework | None |
| `compare_windows` | Compare baseline vs recent episodes for drift detection | None |
| `alert_check` | Check episodes against custom thresholds, return alerts | None |
| `visualize` | Interactive HTML visualization of evaluate results | None |
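The changelog notes that API keys are generated with `crypto.randomBytes()` rather than `Math.random()`. A minimal sketch of that pattern, assuming the documented `ci_...` key prefix (the exact length and encoding are assumptions, not the server's actual format):

```typescript
import { randomBytes } from "node:crypto";

// Sketch of cryptographically secure key generation in the style of the
// documented ci_... key format (prefix and length are assumptions).
function generateApiKey(): string {
  // randomBytes draws from the OS CSPRNG, unlike Math.random(),
  // which is predictable and unsuitable for credentials.
  return "ci_" + randomBytes(24).toString("hex");
}

const key = generateApiKey();
console.log(key.length); // "ci_" + 48 hex chars = 51
```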
| Resource | URI | Description |
|---|---|---|
| `tools_guide` | `ci1t://tools-guide` | Full usage guide: response schemas, chaining patterns, fleet workflow, thresholds, example pipelines |
New users get guided setup automatically. If no API key is configured:
- The `onboarding` tool returns a full welcome guide with account status, setup steps, config examples, available tools, and pricing
- Free tools (`interpret_scores`, `convert_scores`, `generate_config`) always work — no auth, no credits

Every new account gets 1,000 free credits (no credit card required), enough for 1,000 evaluation episodes.
| Variable | Required | Description |
|---|---|---|
| `CI1T_API_KEY` | Yes | Your `ci_...` API key — single credential for all tools |
| `CI1T_BASE_URL` | No | API base URL (default: `https://collapseindex.org`) |
Add to claude_desktop_config.json:
{
"mcpServers": {
"ci1t": {
"command": "docker",
"args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
"env": {
"CI1T_API_KEY": "ci_your_key_here"
}
}
}
}
Add to .cursor/mcp.json or equivalent:
{
"mcpServers": {
"ci1t": {
"command": "docker",
"args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
"env": {
"CI1T_API_KEY": "ci_your_key_here"
}
}
}
}
Add to .vscode/mcp.json:
{
"servers": {
"ci1t": {
"type": "stdio",
"command": "docker",
"args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
"env": {
"CI1T_API_KEY": "ci_your_key_here"
}
}
}
}
git clone https://github.com/collapseindex/ci1t-mcp.git
cd ci1t-mcp
npm install
npm run build
# Set env var and run
CI1T_API_KEY=ci_xxx node dist/index.js
docker build -t collapseindex/ci1t-mcp .
Once connected, an AI agent can:
"Evaluate these prediction scores: 45000, 32000, 51000, 48000, 29000, 55000"
The agent calls evaluate with scores: [45000, 32000, 51000, 48000, 29000, 55000] and gets back stability metrics per episode, including credits used and remaining.
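The float-vs-Q0.16 auto-detection mentioned later in the changelog can be pictured with a small heuristic like this (an illustrative sketch, not the server's actual `toQ16()` implementation):

```typescript
// Sketch of float-vs-Q0.16 auto-detection. Fractional values must be
// floats; all-integer arrays such as [0, 1] are treated as Q0.16, which
// is the "decimal heuristic" behavior described in the changelog.
function looksLikeFloats(scores: number[]): boolean {
  return scores.some((s) => !Number.isInteger(s));
}

console.log(looksLikeFloats([0.12, 0.45]));   // true  -> treated as floats
console.log(looksLikeFloats([45000, 32000])); // false -> treated as Q0.16
console.log(looksLikeFloats([0, 1]));         // false -> stays Q0.16
```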
"Create a fleet session with 4 nodes named GPT-4, Claude, Gemini, Llama"
"List my API keys"
"Probe this prompt for stability: What is the capital of France?"
"Probe my local Ollama llama3 model with: What is the meaning of life?"
The agent calls probe in BYOM mode — sends the prompt 3x to http://localhost:11434/v1 and scores the responses locally. No CI-1T credits used.
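The local scoring step can be imagined as comparing the three completions for agreement. The metric below is invented purely for illustration (CI-1T's actual scoring is proprietary), and the network calls are elided:

```typescript
// Toy sketch of BYOM-style probe scoring: given N completions of the
// same prompt, measure how much they disagree. The agreement metric
// here is invented for illustration, not CI-1T's real scoring.
function probeAgreement(responses: string[]): number {
  // Normalize whitespace and case so trivial formatting differences
  // don't count as disagreement.
  const normalized = responses.map((r) =>
    r.trim().toLowerCase().replace(/\s+/g, " ")
  );
  const distinct = new Set(normalized).size;
  // 1.0 = all identical (stable); lower values = more disagreement.
  return 1 - (distinct - 1) / Math.max(responses.length - 1, 1);
}

console.log(probeAgreement(["Paris", "paris", " Paris "])); // 1 (all agree)
console.log(probeAgreement(["Paris", "Lyon", "Paris"]));    // 0.5
```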
"Interpret these scores: 0.12, 0.45, 0.88, 0.03, 0.67"
The agent calls interpret_scores locally (no API call, no credits) and returns mean, std, min/max, and normalized values. For full stability classification, use evaluate.
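That kind of breakdown is easy to approximate locally. A sketch (the exact output shape of `interpret_scores` is assumed, not documented here):

```typescript
// Local statistics in the spirit of interpret_scores; the returned
// field names are assumptions for illustration.
function interpretScores(scores: number[]) {
  const n = scores.length;
  const mean = scores.reduce((a, b) => a + b, 0) / n;
  const std = Math.sqrt(
    scores.reduce((a, s) => a + (s - mean) ** 2, 0) / n
  );
  const min = Math.min(...scores);
  const max = Math.max(...scores);
  // Normalize each score into [0, 1] relative to the observed range.
  const normalized = scores.map((s) =>
    max === min ? 0 : (s - min) / (max - min)
  );
  return { mean, std, min, max, normalized };
}

const stats = interpretScores([0.12, 0.45, 0.88, 0.03, 0.67]);
console.log(stats.mean.toFixed(3), stats.min, stats.max); // 0.430 0.03 0.88
```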
"Convert these probabilities to Q0.16: 0.5, 0.95, 0.01"
"Generate a FastAPI integration for CI-1T with guardrail pattern"
| Metric | Description |
|---|---|
| CI (Collapse Index) | Primary stability metric (Q0.16: 0–65535). Lower = more stable |
| AL (Authority Level) | Engine trust level for the model (0–4) |
| Ghost | Model appears stable but may be silently wrong |
| Warn / Fault | Threshold and hard-failure flags |
Classification labels (Stable / Drift / Flip / Collapse) are determined by the engine. Use the evaluate tool to get exact classifications — thresholds are configurable via the API.
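To make the label scheme concrete, here is a toy classifier over the CI metric. The cutoffs are invented for demonstration only; the engine's actual thresholds are configurable via the API and may differ:

```typescript
// Illustrative classifier over CI (Q0.16: 0–65535, lower = more stable).
// The threshold values below are invented for demonstration.
type Label = "Stable" | "Drift" | "Flip" | "Collapse";

function classify(
  ci: number,
  thresholds = { drift: 16384, flip: 32768, collapse: 49152 }
): Label {
  if (ci < thresholds.drift) return "Stable";
  if (ci < thresholds.flip) return "Drift";
  if (ci < thresholds.collapse) return "Flip";
  return "Collapse";
}

console.log(classify(5000));  // "Stable" under the demo thresholds
console.log(classify(60000)); // "Collapse" under the demo thresholds
```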
┌──────────────────────┐ stdio ┌───────────────────────┐
│ Claude Desktop / │◄──────────────►│ ci1t-mcp server │
│ Cursor / VS Code │ │ (Node.js / Docker) │
└──────────────────────┘ └──────────┬────────────┘
│ HTTPS
│ X-API-Key
┌──────────────┼──────────────┐
│ │ │
┌────▼───┐ ┌─────▼────┐ ┌─────▼─────┐
│Evaluate│ │Fleet API │ │Dashboard │
│ API │ │Sessions │ │API Keys │
│ │ │ │ │Billing │
└────────┘ └──────────┘ └───────────┘
collapseindex.org
- `probe` tool now supports Bring Your Own Model mode: `base_url` + `model` (+ optional `model_api_key`) to probe any OpenAI-compatible endpoint directly
- `crypto.randomBytes()` used instead of `Math.random()`
- `toQ16()` decimal heuristic prevents integer arrays `[0, 1]` from being misclassified as floats
- `compare_windows` severity message
- `tools_guide` MCP resource (`ci1t://tools-guide`): comprehensive usage guide with response schemas, chaining patterns, fleet session workflow, classification thresholds, and example pipelines
- `compare_windows` tool: compare baseline vs recent episodes — drift delta, trend direction, degradation detection
- `alert_check` tool: check episodes against custom thresholds (CI, EMA, AL, ghost, fault) with severity levels
- `visualize` tool: generates self-contained interactive HTML with Canvas 2D bar charts
- Single `CI1T_API_KEY` — no Bearer token needed
- `CI1T_TOKEN` env var removed entirely
- `onboarding` tool: welcome guide with account status, setup steps, config examples, pricing, and available tools
- `interpret_scores`, `convert_scores`, `generate_config` (local, no auth, no credits)
- `evaluate` and `fleet_evaluate` now auto-detect floats (0–1) vs Q0.16 (0–65535) — no manual conversion needed

© 2026 Collapse Index Labs™ — Alex Kwon
collapseindex.org · ask@collapseindex.org