Server data from the Official MCP Registry
Persistent knowledge graph for AI workflows with context tiers and .kin inheritance.
Valid MCP server (1 strong, 3 medium validity signals). 1 known CVE in dependencies. Package registry verified. Imported from the Official MCP Registry.
6 files analyzed · 2 issues found
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-jmcentire-kindex": {
      "args": [
        "kindex-adapter-csv"
      ],
      "command": "uvx"
    }
  }
}
From the project's GitHub README.
The memory layer Claude Code doesn't have.
Kindex does one thing. It knows what you know.
It's a persistent knowledge graph for AI-assisted workflows. It indexes your conversations, projects, and intellectual work so that Claude Code never starts a session blind. Available as a free Claude Code plugin (MCP server) or standalone CLI.
Memory plugins capture what happened. Kindex captures what it means and how it connects. Most memory tools are session archives with search. Kindex is a weighted knowledge graph that grows intelligence over time — understanding relationships, surfacing constraints, and managing exactly how much context to inject based on your available token budget.
Two commands. Zero configuration.
pip install kindex[mcp]
claude mcp add --scope user --transport stdio kindex -- kin-mcp
kin init
Claude Code now has 31 native tools: search, add, context, show, ask, learn, link, list_nodes, status, suggest, graph_stats, graph_merge, dream, changelog, ingest, tag_start, tag_update, tag_resume, remind_create, remind_list, remind_snooze, remind_done, remind_check, remind_exec, mode_activate, mode_list, mode_show, mode_create, mode_export, mode_import, mode_seed.
Or add .mcp.json to any repo for project-scope access:
{ "mcpServers": { "kindex": { "command": "kin-mcp" } } }
pip install kindex
kin init
With LLM-powered extraction:
pip install kindex[llm]
With reminders (natural language time parsing):
pip install kindex[reminders]
With everything (LLM + vectors + MCP + reminders):
pip install kindex[all]
Five context tiers auto-select based on available tokens. When other plugins dump everything into context, Kindex gives you 200 tokens of executive summary or 4000 tokens of deep context — whatever fits. Your plugin doesn't eat the context window.
| Tier | Budget | Use Case |
|---|---|---|
| full | ~4000 tokens | Session start, deep work |
| abridged | ~1500 tokens | Mid-session reference |
| summarized | ~750 tokens | Quick orientation |
| executive | ~200 tokens | Post-compaction re-injection |
| index | ~100 tokens | Existence check only |
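One way the tier auto-selection described above could work, sketched as a hypothetical helper (the tier names and budgets come from the table; the selection logic is an assumption, not Kindex's actual implementation):

```python
# Budget-based tier selection sketch: pick the richest tier that fits.
TIERS = [              # (name, approximate token cost), richest first
    ("full", 4000),
    ("abridged", 1500),
    ("summarized", 750),
    ("executive", 200),
    ("index", 100),
]

def select_tier(available_tokens: int) -> str:
    for name, cost in TIERS:
        if cost <= available_tokens:
            return name
    return "index"  # always fall back to the cheapest existence check

print(select_tier(5000))  # full
print(select_tier(1000))  # summarized
print(select_tier(150))   # index
```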
Nodes have types, weights, domains, and audiences. Edges carry provenance and decay over time. The graph understands what matters — not just what was said.
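Edge decay over time can be pictured as exponential weight decay. A minimal sketch; the half-life value here is illustrative, not Kindex's actual parameter:

```python
import math

def decayed_weight(weight: float, days_idle: float,
                   half_life_days: float = 90.0) -> float:
    """Exponential decay: weight halves every `half_life_days` of inactivity.
    (Illustrative model; the real decay schedule may differ.)"""
    return weight * math.exp(-math.log(2) * days_idle / half_life_days)

# An edge untouched for one half-life keeps half its weight.
print(round(decayed_weight(0.8, 90), 3))  # 0.4
```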
Constraints block deploys. Directives encode preferences. Watches flag attention items. Checkpoints run pre-flight. No other memory plugin has this.
Three-tier prompt architecture with Anthropic prompt caching. Stable knowledge (codebook) is cached at 10% cost. Query-relevant context is predicted via graph expansion and cached per-topic. Only the question pays full price. Transparent — kin ask just works better and cheaper.
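The three-tier layout could be assembled as a Messages API request where the first two tiers carry `cache_control` markers. A sketch assuming the Anthropic API's ephemeral cache-control content blocks; `build_prompt` and its arguments are hypothetical names, not Kindex's internals:

```python
def build_prompt(codebook: str, topic_context: str, question: str) -> dict:
    """Sketch of a three-tier cached prompt for `kin ask` (hypothetical)."""
    return {
        "model": "claude-haiku-4-5-20251001",
        "max_tokens": 1024,
        "system": [
            # Tier 1: stable codebook, cached across every query
            {"type": "text", "text": codebook,
             "cache_control": {"type": "ephemeral"}},
            # Tier 2: query-relevant context, cached per topic
            {"type": "text", "text": topic_context,
             "cache_control": {"type": "ephemeral"}},
        ],
        # Tier 3: the question itself pays full price, but it is tiny
        "messages": [{"role": "user", "content": question}],
    }

request = build_prompt("NODE INDEX ...", "Context: weight decay ...",
                       "How does weight decay work?")
```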
.kin inheritance chains let a service repo inherit from a platform context, which inherits from an org voice. Private/team/org/public scoping with PII stripping on export. Enterprise-ready from day one.
A 162-file fantasy novel vault — characters, locations, magic systems, plot outlines — ingested in one pass. Cross-referenced by content mentions. Searched in milliseconds.
$ kin status
Nodes: 192
Edges: 11,802
Orphans: 3
$ time kin search "the Baker"
# Kindex: 10 results for "the Baker"
## [document] The Baker - Hessa's Profile and Message Broker System (w=0.70)
→ Thieves Guild, Five Marks, Thieves Guild Operations
## [person] Mia and The Baker (Hessa) -- Relationship (w=0.70)
→ Sebastian and Mia, Mia -- Motivations and Goals
0.142 total
$ kin graph stats
Nodes: 192
Edges: 11,802
Density: 0.3218
Components: 5
Avg degree: 122.94
192 nodes. 11,802 edges. 5 context tiers. Hybrid FTS5 + graph traversal in 142ms.
Installing the MCP plugin gives Claude the tools. But Claude won't use them proactively unless you tell it to. Kindex ships with a recommended CLAUDE.md block that turns passive tools into active habits:
# Print the recommended directives
kin setup-claude-md
# Or auto-append to your global CLAUDE.md
kin setup-claude-md --install
This adds session lifecycle rules (start/during/segment/end), explicit capture triggers (discoveries, decisions, key files, notable outputs), and search-before-add discipline. The difference between "Claude has a knowledge graph" and "Claude actively maintains a knowledge graph" is this block.
The SessionStart hook (kin setup-hooks) reinforces these directives at the start of every session with a "Session directives" block that reminds Claude to use kindex MCP tools throughout the session.
With the directives active, Claude uses these tools proactively throughout the session rather than waiting to be asked.
Reminders can carry shell commands and/or natural-language instructions. When due, the daemon executes them automatically — simple commands run directly, complex tasks launch headless claude -p. A Stop hook guard blocks Claude from exiting when actionable reminders are pending.
# Kill a cloud instance in 1 hour (but download results first)
kin remind create "Kill vast.ai instance" --at "in 1 hour" \
--action "vastai destroy instance 12345" \
--instructions "Download results from /workspace/ before killing"
# Manual trigger
kin remind exec --reminder-id <id>
Kindex dreams. After each Claude Code session, a detached background process runs fuzzy deduplication, auto-applies pending suggestions, and strengthens edges between nodes that share domains. Like memory consolidation during sleep — replay, strengthen important paths, prune noise.
# See what would happen (no changes)
kin dream --dry-run
# Run full consolidation
kin dream
# Fast path: dedup + suggestions only
kin dream --lightweight
# Include LLM-powered cluster summarisation
kin dream --deep
# Fork and return immediately (used by Stop hook)
kin dream --detach --lightweight
Three triggers: manual CLI, periodic cron (step 11 of kin cron), and automatic detached subprocess on Claude Code session exit. File locking prevents concurrent cycles. The detached process uses start_new_session=True to survive Claude Code's exit.
Modes are reusable conversation-priming artifacts that induce a processing mode in an AI session. Based on research showing that induced understanding outperforms direct instruction by 5.4x, and that 15 tokens of mode-setting capture 98.8% of achievable priming benefit.
Five built-in modes: collaborate, code, create, research, chat. Create custom modes from any session and export them for team sharing (PII-free).
# Seed default modes
kin mode seed
# Activate a mode — outputs the priming artifact
kin mode activate collaborate
# Create a custom mode
kin mode create debug-session \
--primer "We're hunting a bug. Precision over speed..." \
--boundary "Show your reasoning chain. Name assumptions." \
--permissions "Speculate about root causes freely."
# Export for team sharing (PII-stripped)
kin mode export collaborate > collaborate.json
# Import a teammate's mode
kin mode import their-mode.json
Modes are not instructions — they're state inductions. A primer establishes how to think, a boundary defines what quality means, and permissions state what's allowed. The AI shifts processing mode rather than following a checklist.
# Add knowledge (with optional tags)
kin add "Stigmergy is coordination through environmental traces" --tags biology,coordination
# Search with hybrid FTS5 + graph traversal
kin search stigmergy
kin search coordination --tags biology # filter results by tag
# Ask questions (with automatic classification)
kin ask "How does weight decay work?"
# Get context for AI injection
kin context --topic stigmergy --level full
# List and filter by tags
kin list --tags python,ml # nodes tagged with both
kin list --type concept --tags ai # combine type and tag filters
# Track operational rules
kin add "Never break the API contract" --type constraint --trigger pre-deploy --action block
# Check status before deploy
kin status --trigger pre-deploy
# Ingest from all sources
kin ingest all
# Session tags — named work context handles
kin tag start auth-refactor --focus "OAuth2 flow" --remaining "tokens,tests"
kin tag segment --focus "Token storage" --summary "Flow design done"
kin tag resume auth-refactor # context block for new session
kin tag end --summary "All done"
# Reminders — never forget, never nag
kin remind create "standup" --at "every weekday at 9am" --priority high
kin remind create "reply to Kevin" --at "in 30 minutes" --priority urgent
kin remind list
kin remind snooze --reminder-id <id> --duration 1h
kin remind done --reminder-id <id>
Projects use .kin/ directories that encode their communication style, engineering standards, and values. Teams inherit from orgs. Repos inherit from teams. The knowledge graph carries the voice forward.
~/.kindex/voices/acme.kin # Org voice (downloadable, public)
^
| inherits
~/Code/platform/.kin/config # Platform team context
^
| inherits
~/Code/payments-service/.kin/config # Service-specific context
# payments-service/.kin/config
name: payments-service
audience: team
domains: [payments, python]
inherits:
- ../platform/.kin/config
The .kin/ directory is the standard location for all kindex project artifacts:
.kin/config — project metadata (voice, domains, audience, inheritance)
.kin/index.json — graph snapshot for git tracking
The payments service gets Acme's voice principles, the platform's engineering standards, AND its own domain context. Local values override ancestors. Lists merge with dedup. Parent directories auto-walk when no explicit inherits is set.
Old-style .kin files (plain YAML) are auto-upgraded to .kin/config on first access.
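The merge rules (local values override ancestors, lists merge with dedup) can be sketched as a simple dict merge; `merge_contexts` is a hypothetical helper illustrating the behaviour, not Kindex's code:

```python
def merge_contexts(parent: dict, child: dict) -> dict:
    """Sketch of .kin inheritance: child scalars win; lists union in order."""
    merged = dict(parent)
    for key, value in child.items():
        if isinstance(value, list) and isinstance(merged.get(key), list):
            # Lists merge with dedup, parent entries first
            merged[key] = merged[key] + [v for v in value
                                         if v not in merged[key]]
        else:
            merged[key] = value  # local value overrides the ancestor
    return merged

platform = {"audience": "org", "domains": ["python", "infra"]}
service = {"name": "payments-service", "audience": "team",
           "domains": ["payments", "python"]}
# audience flips to "team"; domains become python, infra, payments
print(merge_contexts(platform, service))
```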
See examples/kin-voices/ for ready-to-use voice templates.
SQLite + FTS5 <- primary store and full-text search
nodes: id, title, content, type, weight, audience, domains, extra
edges: from_id, to_id, type, weight, provenance
fts5: content synced via triggers
Retrieval pipeline:
FTS5 BM25 --+
Graph BFS --+-- RRF merge -- tier formatter -- context block
(vectors) --+ |
| full | abridged | summarized | executive | index
|
Embedding providers (configurable):
local (sentence-transformers) | openai | gemini
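The RRF merge step in the pipeline above fuses the ranked lists from FTS5 BM25, graph BFS, and (optionally) vectors. A generic Reciprocal Rank Fusion sketch; k=60 is the conventional constant from the RRF literature, not necessarily Kindex's setting:

```python
from collections import defaultdict

def rrf_merge(*ranked_lists: list[str], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

fts = ["baker-profile", "mia-baker", "guild-ops"]     # BM25 ranking
bfs = ["mia-baker", "five-marks", "baker-profile"]    # graph ranking
print(rrf_merge(fts, bfs))
# ['mia-baker', 'baker-profile', 'five-marks', 'guild-ops']
```

A node ranked well in both retrievers beats a node ranked first in only one, which is why the fused list leads with mia-baker.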
LLM cache tiers (kin ask):
Tier 1: codebook (stable node index) <- cached @ 10% cost
Tier 2: query-relevant context <- cached per-topic @ 10% cost
Tier 3: user question <- full price, tiny
Reminders:
reminders table (SQLite) <- separate from knowledge graph
Time parsing: dateparser (NL) + dateutil.rrule (recurrence) + cronsim (cron)
Channels: system (macOS) | slack | email | claude (hook) | terminal
Daemon: launchd/cron adaptive interval -> check due -> notify -> auto-snooze
Scheduling: adaptive tiers (>7d=daily, >1d=hourly, >1h=10min, <1h=5min, none=disabled)
Actions: shell commands run directly | complex tasks launch claude -p
Stop guard: blocks session exit when actionable reminders pending
Dream (kin dream):
Modes: lightweight (<5s) | full (non-LLM) | deep (claude -p clusters)
Triggers: CLI | cron step 11 | Stop hook (detached, start_new_session=True)
Dedup: difflib.SequenceMatcher, 4-char title bucketing, 0.95 merge / 0.85 suggest
Consolidation: suggestion auto-apply, domain edge strengthening, cluster summarisation
Safety: fcntl.flock exclusion, protected types skip, provenance tracking
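The dedup classification above (difflib.SequenceMatcher with 0.95 merge / 0.85 suggest thresholds) looks roughly like this; the function name and lowercase normalisation are assumptions for illustration:

```python
from difflib import SequenceMatcher

def dedup_action(title_a: str, title_b: str) -> str:
    """Classify a candidate pair: auto-merge at >= 0.95 similarity,
    suggest at >= 0.85, otherwise leave both nodes alone."""
    ratio = SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio()
    if ratio >= 0.95:
        return "merge"
    if ratio >= 0.85:
        return "suggest"
    return "keep"

print(dedup_action("Weight decay", "Weight Decay"))        # merge
print(dedup_action("OAuth2 token flow", "Graph density"))  # keep
```

The 4-char title bucketing mentioned above would sit in front of this, so only titles sharing a prefix bucket are ever compared.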
Three integration paths:
MCP plugin --> Claude calls tools natively (search, add, learn, remind, ...)
CLI hooks --> SessionStart / PreCompact / Stop lifecycle events
Adapters --> Entry-point discovery for custom ingestion sources
Code --> ctags + cscope + tree-sitter structural analysis
Knowledge: concept, document, session, person, project, decision, question, artifact, skill
Ingest repository structure with kin ingest code --directory .:
Relationship edges: dependencies (depends_on), inheritance (implements), containment (context_of), call graph (relates_to).
Code structure lives in the same graph as your decisions, watches, and constraints. Search finds both what calls a function and what broke last time someone changed it.
Operational: constraint (invariants), directive (soft rules), checkpoint (pre-flight), watch (attention flags)
| Command | Description |
|---|---|
| kin search <query> | Hybrid FTS5 + graph search with RRF merging (--tags, --mine) |
| kin context | Formatted context block for AI injection (--level, --tokens) |
| kin add <text> | Quick capture with auto-extraction and linking (--tags, --type) |
| kin show <id> | Full node details with edges, provenance, and state |
| kin list | List nodes (--type, --status, --tags, --audience, --mine, --limit) |
| kin ask <question> | Question classification + LLM or context answer |
| Command | Description |
|---|---|
| kin learn | Extract knowledge from sessions and inbox |
| kin link <a> <b> | Create weighted edge between nodes |
| kin alias <id> [add\|remove\|list] | Manage AKA/synonyms for a node |
| kin register <id> <path> | Associate a file path with a node |
| kin orphans | Nodes with no connections |
| kin trail <id> | Temporal history and provenance chain |
| kin decay | Apply weight decay to stale nodes/edges |
| kin recent | Recently active nodes |
| kin tag [action] | Session tags: start, update, segment, pause, end, resume, list, show |
| kin remind [action] | Reminders: create, list, show, snooze, done, cancel, check, exec |
| kin mode [action] | Conversation modes: activate, list, show, create, export, import, seed |
| Command | Description |
|---|---|
| kin graph [mode] | Dashboard: stats, centrality, communities, bridges, trailheads |
| kin suggest | Bridge opportunity suggestions (--accept, --reject) |
| kin skills [person] | Skill profile and expertise for a person |
| kin embed | Index all nodes for vector similarity search |
| Command | Description |
|---|---|
| kin status | Graph health + operational summary (--trigger, --owner, --mine) |
| kin set-audience <id> <scope> | Set privacy scope (private/team/org/public) |
| kin set-state <id> <key> <value> | Set mutable state on directives/watches |
| kin export | Audience-aware graph export with PII stripping |
| kin import <file> | Import nodes/edges from JSON/JSONL (--mode merge/replace) |
| kin sync-links | Update node content with connection references |
| Command | Description |
|---|---|
| kin ingest <source> | Ingest from: projects, sessions, files, commits, github, linear, code, all |
| kin cron | One-shot maintenance cycle (for crontab/launchd) |
| kin dream | Knowledge consolidation: dedup, suggestions, edge strengthening (--deep, --detach) |
| kin watch | Watch for new sessions and ingest them (--interval) |
| kin analytics | Archive session analytics and activity heatmap |
| kin index | Write .kin/index.json for git tracking |
| Command | Description |
|---|---|
| kin init | Initialize data directory |
| kin config [show\|get\|set] | View or edit configuration |
| kin setup-hooks | Install lifecycle hooks into Claude Code |
| kin setup-cron | Install periodic maintenance (launchd/crontab) |
| kin setup-claude-md | Output/install recommended CLAUDE.md kindex directives |
| kin stop-guard | Stop hook guard for actionable reminders |
| kin doctor | Health check with graph enforcement (--fix) |
| kin migrate | Import markdown topics into SQLite |
| kin budget | LLM spend tracking |
| kin whoami | Show current user identity |
| kin changelog | What changed (--since, --days, --actor) |
| kin log | Recent activity log |
| kin git-hook [install\|uninstall] | Manage git hooks in a repository |
| kin prime | Generate context for SessionStart hook (--codebook) |
| kin compact-hook | Pre-compact knowledge capture |
Config is layered like git — global defaults, then global config, then local config. Each layer deep-merges over the previous, so you only set what you want to override.
| Layer | Path | Purpose |
|---|---|---|
| Global | ~/.config/kindex/kin.yaml | User-wide defaults |
| Local | .kin/config or kin.yaml in cwd | Project-specific overrides |
Use kin config set --global llm.enabled true for global settings, or kin config set llm.model claude-sonnet-4-6 for project-local.
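The layered deep-merge described above (each layer overriding the previous key-by-key, nested maps merging rather than replacing) can be sketched as:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` onto `base`: nested dicts merge
    key-by-key, everything else is replaced. Sketch of the layered-config
    behaviour, not Kindex's actual loader."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

global_cfg = {"llm": {"enabled": False, "model": "claude-haiku-4-5-20251001"}}
local_cfg = {"llm": {"enabled": True}}
# llm.enabled flips to True; llm.model keeps the global default
print(deep_merge(global_cfg, local_cfg))
```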
data_dir: ~/.kindex
llm:
enabled: false
model: claude-haiku-4-5-20251001
api_key_env: ANTHROPIC_API_KEY
cache_control: true # Prompt caching (90% savings on repeated prefixes)
codebook_min_weight: 0.5 # Min node weight for codebook inclusion
tier2_max_tokens: 4000 # Token budget for query-relevant context
embedding:
provider: local # local, openai, or gemini
# model: "" # empty = provider default
# api_key_env: "" # empty = provider default (OPENAI_API_KEY / GEMINI_API_KEY)
# dimensions: 0 # 0 = provider default (384 / 1536 / 3072)
budget:
daily: 0.50
weekly: 2.00
monthly: 5.00
project_dirs:
- ~/Code
- ~/Personal
defaults:
hops: 2
min_weight: 0.1
mode: bfs
reminders:
enabled: true
check_interval: 300 # 5 min base interval
adaptive_scheduling: true # adjust interval based on nearest reminder
min_interval: 300 # floor for adaptive scheduling
default_channels: [system] # system, slack, email, claude, terminal
snooze_duration: 900 # 15 min default snooze
auto_snooze_timeout: 300 # auto-snooze after 5 min inaction
idle_suppress_after: 600 # suppress if idle > 10 min
channels:
slack:
enabled: false
webhook_url: ""
email:
enabled: false
smtp_host: ""
to_addr: ""
make dev # install with dev + LLM dependencies
make test # run 980 tests
make check # lint + test combined
make clean # remove build artifacts
MIT