Server data from the Official MCP Registry
Persistent knowledge graph MCP server for neurodivergent thinking. BM25 search, no cloud LLM.
Valid MCP server (2 strong, 2 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-jmeyer1980-neurodivergent-memory": {
"args": [
"-y",
"neurodivergent-memory"
],
"command": "npx"
}
}
}

From the project's GitHub README:
# Download and install Chocolatey:
powershell -c "irm https://community.chocolatey.org/install.ps1|iex"
# Download and install Node.js:
choco install nodejs --version="24.14.1"
# Verify the Node.js version:
node -v # Should print a Node.js 24.x version.
# Verify npm version:
npm -v # Should print an npm 11.x version.
# Run the packaged neurodivergent-memory CLI without a global install
npx neurodivergent-memory@latest init-agent-kit
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.4/install.sh | bash
# in lieu of restarting the shell
. "$HOME/.nvm/nvm.sh"
# Download and install Node.js:
nvm install 24
# Verify the Node.js version:
node -v # Should print a Node.js 24.x version.
# Verify npm version:
npm -v # Should print an npm 11.x version.
# Run the packaged neurodivergent-memory CLI without a global install
npx neurodivergent-memory@latest init-agent-kit
flowchart LR
A[Client MCP Request] --> B[MCP Server Stdio Transport]
B --> C{Request Type}
C -->|Tools| D[Tool Handler]
C -->|Resources| E[Resource Handler]
C -->|Prompts| F[Prompt Handler]
D --> G[NeurodivergentMemory Core]
E --> G
F --> G
G --> H[Memory Graph Store]
G --> I[BM25 Index]
H --> J[Persisted JSON Snapshot]
D --> K[MCP JSON Response]
E --> K
F --> K
K --> A
Memories are organized by cognitive domain:
Memories are exposed as resources via `memory://` URIs.

- `store_memory` — Create new memory nodes with optional emotional valence and intensity
- `retrieve_memory` — Fetch a specific memory by ID
- `update_memory` — Modify content, tags, district, emotional_valence, intensity, or project attribution
- `delete_memory` — Remove a memory and all its connections
- `connect_memories` — Create bidirectional edges between memory nodes
- `search_memories` — BM25-ranked semantic search with optional goal context, recency bias, and filters (district, project_id, tags, epistemic status, emotional valence, intensity, min_score)
- `traverse_from` — Graph traversal up to N hops from a starting memory
- `related_to` — Find memories by graph proximity + BM25 semantic blend, with optional goal context and epistemic-status filters
- `list_memories` — Paginated listing with optional district/archetype/project_id/epistemic-status filters
- `memory_stats` — Aggregate statistics (totals, per-district/per-project counts, most-accessed, orphans) with optional project scope
- `server_handshake` — Return runtime server identity/version details for explicit client-side version confirmation
- `storage_diagnostics` — Show the resolved snapshot path, WAL path, and effective persistence source in one response
- `import_memories` — Bulk-import from inline JSON entries or a snapshot file_path, with dry_run, dedupe policies, and explicit snapshot migration flags
- `prepare_memory_city_context` — Tool mirror of explore_memory_city for clients that support tools but do not invoke MCP prompts
- `prepare_synthesis_context` — Tool mirror of synthesize_memories for prompt-limited clients
- `prepare_packetized_synthesis_context` — Tool mirror of synthesize_memory_packets for prompt-limited or attachment-constrained clients
- `explore_memory_city` — Guided exploration of districts and memory organization
- `synthesize_memories` — Create new insights by connecting existing memories
- `synthesize_memory_packets` — Packetized synthesis prompt for attachment-constrained clients; emits one coverage manifest plus bounded memory slices that summarize the broader graph

Use `synthesize_memories` when the MCP client can comfortably consume many raw memory resources. Use `synthesize_memory_packets` when the caller path is attachment-constrained or when you need broader graph coverage in a small number of structured resources.
For maximum interoperability across MCP clients, the server exposes the same synthesis/exploration context in two forms:
- `prompts/list` + `prompts/get` for clients that implement MCP prompt invocation.
- `prepare_*_context` tools for clients that support MCP tools but ignore or under-support prompts.

Some clients, such as Cline, expose MCP prompts as namespaced slash commands in the form `/mcp:<server-name>:<prompt-name>` rather than `/<prompt-name>`.
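For clients that do implement prompt invocation, the request is ordinary MCP JSON-RPC. A minimal sketch of building a `prompts/get` request for this server's `synthesize_memories` prompt (stdio framing and response handling omitted; the request shape follows the MCP specification):

```typescript
// Build an MCP prompts/get request as a plain JSON-RPC 2.0 message.
interface PromptsGetRequest {
  jsonrpc: "2.0";
  id: number;
  method: "prompts/get";
  params: { name: string; arguments?: Record<string, string> };
}

function buildPromptsGet(
  id: number,
  name: string,
  args?: Record<string, string>
): PromptsGetRequest {
  return { jsonrpc: "2.0", id, method: "prompts/get", params: { name, arguments: args } };
}

// The serialized message is what a stdio client would write to the server.
const req = buildPromptsGet(1, "synthesize_memories");
console.log(JSON.stringify(req));
```

Prompt-limited clients would instead call the `prepare_synthesis_context` tool via `tools/call` to obtain the same context.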
Each memory is assigned an archetype tied to its district:
Search uses Okapi BM25 ranking (k1=1.5, b=0.75) without requiring embeddings or cloud calls. Results are normalized to 0–1 score range.
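The scoring described above can be sketched in a few lines. This is an illustrative Okapi BM25 implementation with the stated parameters (k1=1.5, b=0.75) and min-max normalization to 0–1; the tokenizer and idf variant here are assumptions, not the server's actual code:

```typescript
// Illustrative Okapi BM25 scorer with score normalization to [0, 1].
function bm25Scores(docs: string[], query: string, k1 = 1.5, b = 0.75): number[] {
  const tokenize = (s: string) => s.toLowerCase().split(/\W+/).filter(Boolean);
  const tokenDocs = docs.map(tokenize);
  const avgLen = tokenDocs.reduce((sum, d) => sum + d.length, 0) / tokenDocs.length;
  const N = docs.length;
  const scores = tokenDocs.map((doc) => {
    let score = 0;
    for (const term of new Set(tokenize(query))) {
      const df = tokenDocs.filter((d) => d.includes(term)).length; // document frequency
      if (df === 0) continue;
      const idf = Math.log(1 + (N - df + 0.5) / (df + 0.5));
      const tf = doc.filter((t) => t === term).length; // term frequency in this doc
      score += (idf * tf * (k1 + 1)) / (tf + k1 * (1 - b + (b * doc.length) / avgLen));
    }
    return score;
  });
  // Normalize to the 0-1 range, matching the documented result scores.
  const max = Math.max(...scores);
  return max > 0 ? scores.map((s) => s / max) : scores;
}
```

Because scoring is pure term statistics, no embedding model or network call is ever needed.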
Each memory can optionally carry:
Memories can optionally carry epistemic_status to distinguish tentative planning from validated knowledge.
- `draft` — provisional or planning-oriented
- `validated` — confirmed and safe to treat as established
- `outdated` — superseded but retained for history

When `store_memory` or `import_memories` creates a new `practical_execution` memory without an explicit `epistemic_status`, the server defaults it to `draft` if the memory has a task tag. The canonical task tag is `kind:task`, and the server also accepts the compatibility synonyms `type:task` and bare `task`. This keeps planning notes from silently presenting as settled fact.
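The defaulting rule above can be sketched as a small pure function. The function name and signature are hypothetical; only the rule itself (district, task-tag synonyms, `draft` default) comes from the documentation:

```typescript
// Sketch of the documented epistemic_status defaulting rule.
type EpistemicStatus = "draft" | "validated" | "outdated";

function defaultEpistemicStatus(
  district: string,
  tags: string[],
  explicit?: EpistemicStatus
): EpistemicStatus | undefined {
  if (explicit) return explicit; // caller-provided status always wins
  // Canonical tag plus documented compatibility synonyms.
  const taskTags = new Set(["kind:task", "type:task", "task"]);
  if (district === "practical_execution" && tags.some((t) => taskTags.has(t))) {
    return "draft";
  }
  return undefined; // no default applied for other districts or untagged memories
}
```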
Memories can optionally include a first-class project_id for attribution and scoped retrieval across multi-project graphs.
- `project_id` is optional on writes (`store_memory`, `update_memory`, `import_memories`).
- `update_memory` accepts `project_id: null` to clear existing project attribution.
- `search_memories`, `list_memories`, and `memory_stats` accept an optional `project_id` filter.
- `search_memories`, `list_memories`, and `related_to` accept optional `epistemic_statuses` filters so callers can avoid stale planning memories when appropriate.
- `search_memories` accepts optional `context` and `recency_weight` parameters. Context is blended into ranking as a lightweight BM25 boost; `recency_weight` must be between 0 and 1 and adds a recency boost without replacing semantic relevance.
- `search_memories` accepts `min_intensity` / `max_intensity` as the preferred intensity filter names. The legacy `intensity_min` / `intensity_max` aliases remain supported for compatibility.
- `related_to` accepts an optional `context` parameter to bias related-memory ranking toward the caller's current goal.
- `memory_stats` includes a `perProject` breakdown.
- `memory_stats` reports `totalConnections` only for edges where both endpoints are in scope.
- `list_memories` includes a `project: ...` segment in each line (unset when no project attribution exists).
- `project_id` must match `^[A-Za-z0-9][A-Za-z0-9._:-]{0,63}$` (max length 64); invalid values are rejected with `NM_E020` and recovery guidance.
- `storage_diagnostics` reports the resolved snapshot path, the WAL path, and which configuration source won the persistence-path precedence check.
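The stated `project_id` constraint is easy to check client-side before issuing a write, avoiding an `NM_E020` round trip. A minimal sketch using the documented pattern:

```typescript
// Validate a project_id against the documented constraint:
// first character alphanumeric, then up to 63 of [A-Za-z0-9._:-], max length 64.
const PROJECT_ID_RE = /^[A-Za-z0-9][A-Za-z0-9._:-]{0,63}$/;

function isValidProjectId(id: string): boolean {
  return PROJECT_ID_RE.test(id);
}
```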
import_memories supports two source modes:
- `entries` for ordinary bulk seeding.
- `file_path` for server snapshot imports, avoiding large MCP payloads.

Import validation flags:

- `dry_run: true` validates the request without writing data and returns deterministic `would_import`, `would_skip`, and `would_fail` counts.
- `dedupe` accepts `none`, `content_hash`, or `content_plus_tags`; skipped duplicates are reported as `DEDUPE_CONTENT_HASH` or `DEDUPE_CONTENT_PLUS_TAGS`.
- `file_path` imports accept `.json` files under the resolved persistence directory by default. Set `NEURODIVERGENT_MEMORY_IMPORT_ALLOW_EXTERNAL_FILE=true` only when importing external snapshot files intentionally.

Snapshot migration flags:

- `preserve_ids` is only valid with `file_path`; any ID collision with the live store is rejected deterministically.
- `merge_connections` is only valid with `file_path`; every referenced connection target must exist either in the imported snapshot or the live store, or the row fails validation with `INVALID_CONNECTION_TARGET`.
- Run with `dry_run: true` first to inspect the failure list before retrying.

Memories are persisted with a write-ahead journal (WAL) plus snapshot model:

- New writes are appended to `memories.json.wal.jsonl` first.
- The compacted snapshot lives in `memories.json`.
- On startup, the server loads `memories.json`, replays WAL entries, compacts to a fresh snapshot, then truncates the WAL.

This improves crash recovery behavior compared to snapshot-only persistence.
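The startup recovery path above can be sketched as a pure replay step. The entry shape here is a toy assumption (the real WAL format is the server's own); what matters is the order: load snapshot, apply WAL entries in sequence, compact, truncate:

```typescript
// Toy sketch of snapshot + WAL recovery.
type WalEntry =
  | { op: "put"; id: string; content: string }
  | { op: "delete"; id: string };

function replay(
  snapshot: Record<string, string>,
  wal: WalEntry[]
): Record<string, string> {
  const store = { ...snapshot }; // start from the last compacted snapshot
  for (const entry of wal) {
    if (entry.op === "put") store[entry.id] = entry.content;
    else delete store[entry.id];
  }
  return store; // this result is compacted back to the snapshot, then the WAL is truncated
}
```

A crash between a WAL append and the next compaction loses nothing: the append is already durable and is simply replayed on the next startup.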
For explicit control, set one of these environment variables:
- `NEURODIVERGENT_MEMORY_DIR` to choose the directory that contains `memories.json`
- `NEURODIVERGENT_MEMORY_FILE` to point at a specific snapshot file
- `NEURODIVERGENT_MEMORY_MAX` to cap total memories (integer; default unlimited)
- `NEURODIVERGENT_MEMORY_EVICTION` to choose the eviction policy when the max is reached:
  - `lru` (default)
  - `access_frequency`
  - `district_priority`

Mounts at `/home/node/.neurodivergent-memory` continue to work without any env override — that is the container's `node` user home and is checked automatically.
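To make the default policy concrete, here is an illustrative sketch of `lru` eviction against a `NEURODIVERGENT_MEMORY_MAX`-style cap. The function and data shape are assumptions for illustration; the server also offers the `access_frequency` and `district_priority` policies, which would rank candidates differently:

```typescript
// Illustrative LRU eviction: drop least-recently-accessed ids until under the cap.
function evictLru(
  lastAccess: Map<string, number>, // memory id -> last-access timestamp
  max: number
): string[] {
  const evicted: string[] = [];
  while (lastAccess.size > max) {
    let oldest: string | undefined;
    for (const [id, ts] of lastAccess) {
      if (oldest === undefined || ts < lastAccess.get(oldest)!) oldest = id;
    }
    lastAccess.delete(oldest!); // evict the least-recently-used memory
    evicted.push(oldest!);
  }
  return evicted;
}
```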
⚠️ Breaking change (v0.2.0): The image runs as the `node` user and cannot read `/root`, so previous mounts at `/root/.neurodivergent-memory` are silently skipped. Agents may appear to have lost all memories. See Recovering memories after upgrade below.
If you previously mounted data at /root/.neurodivergent-memory, your snapshot is still intact on the host volume. Re-mount it using one of these options:
Option A — explicit /data mount (recommended):
"-e", "NEURODIVERGENT_MEMORY_DIR=/data",
"-v", "mydata:/data"
Option B — mount at the path the node user already owns:
"-v", "mydata:/home/node/.neurodivergent-memory"
No NEURODIVERGENT_MEMORY_DIR override is needed for option B — the server finds the existing snapshot automatically.
For agents: if memories appear missing after upgrading the container, use import_memories to reload from a backup export, or ask your AI assistant to re-run memory_stats after the volume is remounted correctly to confirm restoration.
The server supports a three-tier memory architecture for agents that work across multiple projects. Each tier lives in its own directory and can be synced independently.
| Tier | Purpose | Typical path | Env var |
|---|---|---|---|
| project | Repo-scoped memories — ephemeral, CI-friendly | .github/agent-kit/memories | NEURODIVERGENT_MEMORY_PROJECT_DIR |
| user | Cross-project personal knowledge — durable, per-developer | ~/.neurodivergent-memory | NEURODIVERGENT_MEMORY_USER_DIR |
| org | Shared organisational knowledge — optional, team-wide | any shared mount | NEURODIVERGENT_MEMORY_ORG_DIR |
The primary server still reads its active snapshot from NEURODIVERGENT_MEMORY_DIR (or the auto-discovered
default). Tier variables are used exclusively by the sync-memories helper.
Add a persistence:durable tag to any memory that should be promoted to the user or org tier. Memories
without this tag are treated as ephemeral and stay in the project tier.
["topic:typescript", "scope:global", "kind:pattern", "layer:architecture", "persistence:durable"]
Use persistence:ephemeral as an explicit opt-out for memories you never want promoted.
After a build, milestone, or session — promote durable memories from the project tier to the user tier:
NEURODIVERGENT_MEMORY_PROJECT_DIR=.github/agent-kit/memories \
NEURODIVERGENT_MEMORY_USER_DIR=~/.neurodivergent-memory \
npm run sync-memories -- --from project --to user
Or use explicit paths:
node build/scripts/sync-memories.js \
--from .github/agent-kit/memories \
--to ~/.neurodivergent-memory
Full option reference:
--from <path|tier> Source snapshot directory, or tier name: project | user | org
--to <path|tier> Target snapshot directory, or tier name: project | user | org
--tags <tag1,tag2,...> Promote only memories matching ALL listed tags (default: persistence:durable)
--any-tag Match memories that have ANY of the listed tags (OR logic)
--dry-run Report counts without writing any data
Safety note: stop the MCP server for the target tier before running sync — the script writes directly to the snapshot file and will warn if it detects an open WAL for the target directory.
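The `--tags` / `--any-tag` selection above reduces to a simple set check. A minimal sketch of that filter logic (function name and signature are illustrative, not the script's actual internals):

```typescript
// Sketch of the sync tag filter: by default a memory is promoted only when it
// carries ALL listed tags; with --any-tag, ANY single match suffices.
function matchesTags(memoryTags: string[], wanted: string[], anyTag = false): boolean {
  const have = new Set(memoryTags);
  return anyTag
    ? wanted.some((t) => have.has(t)) // OR logic (--any-tag)
    : wanted.every((t) => have.has(t)); // AND logic (default)
}
```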
npm publish --provenance --access public

Pushes to the `development` branch publish release candidates using the same npm package name (`neurodivergent-memory`) and container repositories.
- npm packages are published as `0.x.x-rc.N` with npm dist-tag `rc`; `N` uses `run_number.run_attempt` to avoid collisions on workflow re-runs.
- Container images are pushed with `rc-0.x.x-rc.N` tags only, where `N` is derived from `run_number.run_attempt`.

These builds are intentionally less stable than the research preview line and should be used only for validation and early integration testing.
Use the deterministic live smoke harness to validate project_id attribution/scoped retrieval end-to-end:
npm run smoke:project-id
$rc = (Invoke-RestMethod -Uri "https://hub.docker.com/v2/repositories/twgbellok/neurodivergent-memory/tags?page_size=25").results |
Where-Object { $_.name -match '^rc-' } |
Sort-Object { $_.last_updated } -Descending |
Select-Object -First 1 -ExpandProperty name
node test/live-project-id-smoke.mjs "docker run --rm -i twgbellok/neurodivergent-memory:$rc"
The smoke harness exits non-zero on failed assertions and is suitable as a release-readiness gate.
Mutating and lookup tool failures are returned with a stable operator-facing shape embedded in the text response:
❌ <summary>
Code: NM_EXXX
Message: Human-readable failure summary
Recovery: Suggested next action
The leading summary line is contextual, while the Code/Message/Recovery block remains stable for operators to parse and search. This keeps MCP responses readable in chat clients while giving operators a stable code they can search in logs and release notes. Structured logs are written with Pino to stderr and include the same code field on known failure paths.
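Because the `Code`/`Message`/`Recovery` block is stable, operators can extract it mechanically. A minimal sketch of such a parser (the interface name is an assumption; the line format comes from the documented shape above):

```typescript
// Parse the stable operator-facing block out of a tool error response.
// The leading contextual summary line (the "❌ ..." line) is intentionally skipped.
interface OperatorError {
  code: string;
  message: string;
  recovery: string;
}

function parseOperatorError(text: string): OperatorError | null {
  const code = text.match(/^Code:\s*(NM_E\d+)\s*$/m);
  const message = text.match(/^Message:\s*(.+)$/m);
  const recovery = text.match(/^Recovery:\s*(.+)$/m);
  if (!code || !message || !recovery) return null; // not an operator-facing failure
  return { code: code[1], message: message[1], recovery: recovery[1] };
}
```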
Mutating tools are serialized through an async mutex to prevent concurrent write races when multiple agents call the server at the same time.
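The serialization idea is the classic promise-chained async mutex. A minimal sketch, assuming each mutating handler wraps its store access in `runExclusive` (illustrative only, not the server's actual implementation):

```typescript
// Minimal async mutex: each task is chained after the previous one settles,
// so at most one exclusive task runs at a time.
class AsyncMutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task);
    // Chain the next waiter after this task settles, success or failure.
    this.tail = result.then(
      () => undefined,
      () => undefined
    );
    return result;
  }
}
```

A mutating handler would then do something like `await mutex.runExclusive(async () => writeToStore(...))`, so two agents calling `store_memory` simultaneously never interleave their writes.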
Write queue behavior:
- Queue depth is capped by `NEURODIVERGENT_MEMORY_QUEUE_DEPTH` (default: 50).
- When the queue is saturated, writes fail with `NM_E010` and a retry-oriented recovery message.

WIP guardrail behavior:

- `store_memory` checks practical in-progress task saturation per `agent_id` when task tags include in-progress markers.
- The limit is configured by `NEURODIVERGENT_MEMORY_WIP_LIMIT` (default: 1; set 0 to disable).
- Limit violations are reported with `NM_E011` for operator visibility.

The server tracks loop signals and can surface targeted guardrail responses:

- `store_memory` compares incoming content against the 10 most recent memories (same `agent_id` when provided) using tokenizer-consistent token-overlap scoring with an exact-match fast path.
- High-similarity writes set `repeat_detected: true`, increment `repeat_write_count` on the matched memory, and add a "No net-new info" warning to the tool response.
- Repeated `logical_analysis` reads of `emotional_processing` memories add a `distill_memory` suggestion once the configured threshold is crossed.
- The server increments a `ping_pong_counter` when threshold conditions are met, and can optionally start a temporary cross-district write cooldown.

`memory_stats` now includes a `loop_telemetry` block with:

- `repeat_write_candidates` (top 5)
- `ping_pong_candidates` (top 5)
- `recent_high_similarity_writes` (last 5)

Configuration:

- `NEURODIVERGENT_MEMORY_REPEAT_THRESHOLD` (default: 0.85)
- `NEURODIVERGENT_MEMORY_LOOP_WINDOW` (default: 20)
- `NEURODIVERGENT_MEMORY_PING_PONG_THRESHOLD` (default: 3)
- `NEURODIVERGENT_MEMORY_DISTILL_SUGGEST_THRESHOLD` (default: 3)
- `NEURODIVERGENT_MEMORY_CROSS_DISTRICT_COOLDOWN_MS` (default: 0, disabled)

Issue #19 adds a deterministic benchmark harness for end-to-end MCP stdio measurements against the built server.
Run it with:
npm run benchmark
The benchmark:
- Measures `store_memory` throughput across 100 writes at the target tier.
- Measures `search_memories` and `list_memories` latency over 100 iterations at 1k, 5k, and 10k memories.
- Measures `traverse_from` latency at depths 2, 3, and 5 on a connected graph of 500 memories.
- Writes results to `benchmark-results/`:
  - `benchmark-results/memory-benchmark-latest.json`
  - `benchmark-results/memory-benchmark-latest.md`

There is also a convenience alias:
npm run bench
The committed baseline is intended as a relative regression reference for RC vs stable comparisons, not as a universal absolute performance guarantee across machines.
To intentionally refresh the committed baseline files in place:
npm run benchmark -- --update-baseline
Install dependencies:
npm install
Build the server:
npm run build
For development with auto-rebuild:
npm run watch
To use with Claude Desktop, add the server config:
On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
For npm:
{
"mcpServers": {
"neurodivergent-memory": {
"command": "npx",
"args": ["neurodivergent-memory"]
}
}
}
For Docker:
{
"mcpServers": {
"neurodivergent-memory": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"neurodivergent-memory-data:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
]
}
}
}
Fully auto-approved tools:
{
"mcpServers": {
"neurodivergent-memory": {
"autoApprove": [
"store_memory",
"retrieve_memory",
"connect_memories",
"search_memories",
"update_memory",
"delete_memory",
"traverse_from",
"related_to",
"list_memories",
"memory_stats",
"storage_diagnostics",
"import_memories",
"distill_memory",
"prepare_memory_city_context",
"prepare_synthesis_context",
"prepare_packetized_synthesis_context",
"register_district"
],
"disabled": false,
"timeout": 120,
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"neurodivergent-memory-data:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
],
"env": {}
}
}
}
To use the MCP server in GitHub Copilot Agent Workflows (GitHub spins up a new VM for every run, so cross-workflow memory does not persist; session memory works but is wiped when the job completes):
{
"mcpServers": {
"neurodivergent-memory": {
"type": "stdio",
"command": "npx",
"args": [
"neurodivergent-memory@0.3.0"
],
"env": {
"NEURODIVERGENT_MEMORY_DIR": ".neurodivergent-memory"
},
"tools": [
"retrieve_memory",
"connect_memories",
"update_memory",
"delete_memory",
"traverse_from",
"related_to",
"import_memories",
"storage_diagnostics",
"distill_memory",
"prepare_memory_city_context",
"prepare_synthesis_context",
"prepare_packetized_synthesis_context",
"register_district",
"list_memories",
"store_memory",
"search_memories",
"memory_stats"
]
}
}
}
If you want per-project isolation instead of a shared global memory file, mount a project-specific host directory and keep the same container-side target. Use the path separator for your OS:
- Windows: `${workspaceFolder}\.neurodivergent-memory:/data`
- macOS/Linux: `${workspaceFolder}/.neurodivergent-memory:/data`

{
"mcpServers": {
"neurodivergent-memory": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"${workspaceFolder}/.neurodivergent-memory:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
]
}
}
}
Note: Replace `/` with `\` on Windows: `${workspaceFolder}\.neurodivergent-memory:/data`
Use an explicit version tag. The published Docker images intentionally do not maintain a floating latest tag.
You can also run the packaged server image directly:
docker run --rm -i twgbellok/neurodivergent-memory:0.3.0
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
npm run inspector
The Inspector will provide a URL to access debugging tools in your browser.
This repository ships a reusable agent customization kit whose authoring source lives at .github/agent-kit/.
Use the packaged installer to materialize those templates into a consumer repository's .github/... folders instead of tracking a live generated agent file in this repo.
| File | Purpose |
|---|---|
templates/neurodivergent-agent.agent.md | Full-featured Memory-Driven Development Coordinator agent. Five-phase workflow: pull context → research → improve memories → plan → act & hand off. |
templates/memory-driven-template.agent.md | Minimal generic agent template — a lighter starting point if you want to build your own workflow on top. |
templates/nd-memory-workflow.instructions.md | Shared instruction file that reinforces memory-driven habits in day-to-day coding sessions without requiring explicit agent invocation. |
templates/setup-nd-memory.prompt.md | Guided setup prompt that asks the user to choose an install policy before anything is installed. |
templates/copilot-instructions.md | Bootstrap reference for GitHub Copilot sessions — tag schema, district table, tool quick-reference, and session checklist in one file. |
templates/explore_memory_city.prompt.md | Prompt for guided exploration of memory districts and graph structure. |
templates/memory-driven-issue-execution.prompt.md | Prompt for executing a tracked issue with full memory-driven context (pull → plan → act → update). |
Install the current packaged kit into the repo you are standing in:
npx neurodivergent-memory@latest init-agent-kit
Useful options:
- `--target <path>` installs into a different repository root.
- `--dry-run` shows what would be copied without writing files.
- `--force` overwrites existing destination files.
- `--mode prompt-first|auto-setup` records the intended install policy in command output while leaving template wording unchanged.

The installer copies templates into standard customization locations such as `.github/agents/`, `.github/instructions/`, `.github/prompts/`, and `.github/copilot-instructions.md`.
The source of truth remains under .github/agent-kit/templates/. The installed live agent file .github/agents/neurodivergent-agent.agent.md is intentionally treated as generated consumer state rather than a tracked repo artifact, so remote Copilot updates cannot keep wiping it out in this repository.
Copy the files you need into your project's standard customization locations — do not move them, so the originals remain available as a reference for future agents or contributors.
The right target directories vary by agent platform. Use whatever location your agent natively reads from. Common examples:
- `.github/agents/` for agent definitions
- `.github/instructions/` for shared instructions
- `.github/prompts/` for prompts
- `.github/` root for `copilot-instructions.md`

Before installing neurodivergent-memory MCP in any project, ask the user which policy to apply:
- `prompt-first` (default) — Ask for explicit approval before installing.
- `auto-setup` — Install automatically without prompting.

Update the imported agent file's installation section to reflect the chosen policy. If no preference is stated, default to prompt-first.
Here is an example `copilot-instructions.md`:
# neurodivergent-memory — Agent Bootstrap Instructions
This file is automatically read by GitHub Copilot and compatible agents at the start of every session.
It replaces the need to fetch the governance memory (`memory_11`) before working with this MCP server.
---
## What this server is
`neurodivergent-memory` is a **Model Context Protocol (MCP) server** that stores and retrieves memories as a
knowledge graph. It is designed for neurodivergent thinking patterns: non-linear, associative, tag-rich.
Memories are organised into five **districts** (knowledge domains) and connected via bidirectional edges.
Search uses **BM25 semantic ranking** — no embedding model or cloud LLM required.
---
## Canonical Tag Schema
Always apply tags from the five namespaces below when calling `store_memory`.
Multiple tags from different namespaces are expected on every memory.
When storing execution-heavy memories, include the reasoning behind the action and, when possible, connect the entry to a durable principle in `logical_analysis` or `creative_synthesis` so retrieval preserves understanding and not just activity.
| Namespace | Purpose | Examples |
|---|---|---|
| `topic:X` | Subject matter / domain | `topic:unity-ecs`, `topic:adhd-strategies`, `topic:rust-async` |
| `scope:X` | Breadth of the memory | `scope:concept`, `scope:project`, `scope:session`, `scope:global` |
| `kind:X` | Type of knowledge | `kind:insight`, `kind:decision`, `kind:pattern`, `kind:reference`, `kind:task` |
| `layer:X` | Abstraction level | `layer:architecture`, `layer:implementation`, `layer:debugging`, `layer:research` |
| `persistence:X` | Sync-tier eligibility | `persistence:durable`, `persistence:ephemeral` |
**Example tag set for a Unity ECS memory:**
```json
["topic:unity-ecs", "topic:dots", "scope:project", "kind:pattern", "layer:architecture"]
```

**Example tag set for a durable cross-project memory:**

```json
["topic:typescript", "scope:global", "kind:pattern", "layer:architecture", "persistence:durable"]
```
| Key | Purpose |
|---|---|
| `logical_analysis` | Structured thinking, analysis, research findings |
| `emotional_processing` | Feelings, emotional states, affective responses |
| `practical_execution` | Tasks, plans, implementations, action items |
| `vigilant_monitoring` | Risks, warnings, constraints, safety concerns |
| `creative_synthesis` | Novel connections, creative ideas, cross-domain insights |
| Tool | Purpose |
|---|---|
| `store_memory` | Create a new memory node |
| `retrieve_memory` | Fetch one memory by ID |
| `update_memory` | Modify content, tags, district, valence, or intensity |
| `delete_memory` | Remove a memory and all its connections |
| `connect_memories` | Add an edge between two memory nodes |
| `search_memories` | BM25-ranked search with optional context, recency_weight, min_score, district, tag, valence, and intensity filters |
| `traverse_from` | BFS graph walk from a node up to N hops |
| `related_to` | Hop-proximity + BM25 blend for a given memory ID, with optional goal-context boost |
| `list_memories` | Paginated enumeration of all stored memories |
| `memory_stats` | Totals, per-district/per-project counts, most-accessed, and orphans |
| `storage_diagnostics` | Resolved snapshot path, WAL path, and effective persistence source |
| `import_memories` | Bulk import from inline entries or a snapshot file with dry-run and migration controls |
| `distill_memory` | Translate an emotional_processing memory into a structured logical artifact |
| `prepare_memory_city_context` | Tool mirror of explore_memory_city for prompt-limited clients |
| `prepare_synthesis_context` | Tool mirror of synthesize_memories for prompt-limited clients |
| `prepare_packetized_synthesis_context` | Tool mirror of synthesize_memory_packets for attachment-constrained clients |
| `register_district` | Register a custom district with LUCA ancestry validation |
Memories are automatically saved to ~/.neurodivergent-memory/memories.json on every write.
The graph is restored on server startup — no data is lost between restarts.
- Use `practical_execution` as the action log, then pair it with `logical_analysis` or `creative_synthesis` when the deeper rationale should survive longer than the implementation details.
- Use `distill_memory` or an explicit follow-up memory to preserve the signal while stripping incidental detail.
- Run `memory_stats` to see how many memories exist.
- Use `search_memories` with a broad query to locate relevant prior context.
- Store new insights with `store_memory`.
- Link related memories with `connect_memories`.
- Prefer `traverse_from` or `related_to` for associative retrieval rather than repeated searches.