Server data from the Official MCP Registry
SQLite-backed MCP server for persistent memory, full-text retrieval, and graph traversal.
Valid MCP server (2 strong, 2 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry. Trust signals: trusted author (9/9 approved); 4 highly-trusted packages.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-j0hanz-memory-mcp": {
"args": [
"-y",
"@j0hanz/memory-mcp"
],
"command": "npx"
}
}
}
From the project's GitHub README.
A SQLite-backed MCP server for persistent memory storage, full-text retrieval, and relationship graph traversal.
Memory MCP provides a local, persistent memory layer for MCP-enabled assistants. It stores SHA-256-addressed memory items in SQLite with FTS5-powered full-text search, a directed relationship graph, BFS recall traversal, and token-budget-aware context retrieval — all accessible over stdio transport with no external dependencies.
Key features:
- Token-budget retrieval (`retrieve_context`) selects memories that fit a caller-specified token budget — no manual pagination needed.
- Resources: `internal://instructions` (Markdown guide) and a `memory://memories/{hash}` URI template with hash auto-completion.
- Graceful shutdown on signals (`SIGINT`, `SIGTERM`) and no HTTP endpoints.
- Requires Node.js `>=24`.

Use the npm package directly with npx — no installation required:
{
"mcpServers": {
"memory-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/memory-mcp@latest"]
}
}
}
[!TIP] The server uses stdio transport only; no HTTP endpoint is exposed. Stdout must not be polluted by custom logging.
Or run with Docker:
docker run --rm -i ghcr.io/j0hanz/memory-mcp:latest
Workspace file .vscode/mcp.json:
{
"servers": {
"memory-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/memory-mcp@latest"]
}
}
}
CLI:
code --add-mcp '{"name":"memory-mcp","command":"npx","args":["-y","@j0hanz/memory-mcp@latest"]}'
CLI:
code-insiders --add-mcp '{"name":"memory-mcp","command":"npx","args":["-y","@j0hanz/memory-mcp@latest"]}'
~/.cursor/mcp.json:
{
"mcpServers": {
"memory-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/memory-mcp@latest"]
}
}
}
claude_desktop_config.json:
{
"mcpServers": {
"memory-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/memory-mcp@latest"]
}
}
}
CLI:
claude mcp add memory-mcp -- npx -y @j0hanz/memory-mcp@latest
MCP config:
{
"mcpServers": {
"memory-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/memory-mcp@latest"]
}
}
}
# Pull and run (stdio mode)
docker run --rm -i \
-e MEMORY_DB_PATH=/data/memory.db \
-v memory-data:/data \
ghcr.io/j0hanz/memory-mcp:latest
MCP client config:
{
"mcpServers": {
"memory-mcp": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"-e",
"MEMORY_DB_PATH=/data/memory.db",
"-v",
"memory-data:/data",
"ghcr.io/j0hanz/memory-mcp:latest"
]
}
}
}
Contributing notes:
- Changes under `src/` must update `README.md` and affected `mcp/` mirror pages in the same PR.
- Use `https://modelcontextprotocol.io/specification/2025-11-25/...` links for protocol references; avoid `latest` and mixed legacy targets.
- Server wiring lives in `src/server.ts`; tools, resources, and prompts are registered in `src/tools/index.ts`, `src/resources/index.ts`, and `src/prompts/index.ts`.
- Keep `src/instructions.md` in sync with runtime behavior.
- Validate changes with `npm run type-check`, `npm run test:fast`, and `npm run build`.

| Tool | Category | Notes |
|---|---|---|
| `store_memory` | Write | Idempotent by content+sorted tags hash |
| `store_memories` | Write | Batch (1–50), transaction-wrapped |
| `get_memory` | Read | Hash lookup |
| `update_memory` | Write | Returns old_hash + new_hash |
| `delete_memory` | Write | Cascades relationship deletion |
| `delete_memories` | Write | Batch (1–50), transaction-wrapped |
| `search_memories` | Read | FTS5 + importance/type filters + cursor |
| `create_relationship` | Write | Idempotent directed edge creation |
| `delete_relationship` | Write | Deletes exact directed edge |
| `get_relationships` | Read | Direction filter + linked memory fields |
| `recall` | Read | FTS5 seed + BFS traversal (depth 0–3) |
| `retrieve_context` | Read | Token-budget-aware context retrieval |
| `memory_stats` | Read | Store aggregates and type breakdown |
`store_memory`: Store a new memory with content, tags, and optional type/importance. Idempotent — storing the same content+tags returns the existing hash with `created: false`.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `content` | string | Yes | — | Memory content (1–100000 chars) |
| `tags` | string[] | Yes | — | 1–100 tags, each max 50 chars, no whitespace |
| `memory_type` | enum | No | general | general, fact, plan, decision, reflection, lesson, error, gradient |
| `importance` | integer | No | 0 | Priority 0–10 |
Returns: { hash, created }
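The idempotency contract can be sketched in Python. This is illustrative only: the server documents that the hash covers content plus sorted tags, but the exact serialization below (compact JSON) is an assumption, and `MemoryStore` is a stand-in for the SQLite table.

```python
import hashlib
import json


def memory_hash(content: str, tags: list[str]) -> str:
    """64-char lowercase SHA-256 hex over content + sorted tags (serialization assumed)."""
    payload = json.dumps({"content": content, "tags": sorted(tags)}, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class MemoryStore:
    """In-memory stand-in for the SQLite table, showing the idempotent contract."""

    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def store_memory(self, content: str, tags: list[str]) -> dict:
        h = memory_hash(content, tags)
        created = h not in self._rows
        if created:
            self._rows[h] = {"content": content, "tags": sorted(tags)}
        return {"hash": h, "created": created}


store = MemoryStore()
first = store.store_memory("Paris is the capital of France", ["geo", "fact"])
again = store.store_memory("Paris is the capital of France", ["fact", "geo"])  # tag order ignored
```

Because the id is derived from content, updating a memory necessarily produces a new id, which is why `update_memory` returns both `old_hash` and `new_hash`.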
`store_memories`: Store multiple memories in one transaction (max 50 items).
| Name | Type | Required | Description |
|---|---|---|---|
| `items` | Array<StoreMemoryItem> | Yes | 1–50 items, each with content, tags, optional memory_type, optional importance |
Returns: { items, succeeded, failed }
`get_memory`: Retrieve one memory by its SHA-256 hash.
| Name | Type | Required | Description |
|---|---|---|---|
| `hash` | string | Yes | 64-char lowercase SHA-256 hex |
Returns: Memory or { ok: false, error } on E_NOT_FOUND.
`update_memory`: Update content and optionally tags for an existing memory. Returns both hashes.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `hash` | string | Yes | — | Existing memory hash |
| `content` | string | Yes | — | Replacement content |
| `tags` | string[] | No | Existing tags | Replacement tags |
Returns: { old_hash, new_hash }
`delete_memory`: Delete one memory by hash. Cascades to related relationship rows.
| Name | Type | Required | Description |
|---|---|---|---|
| `hash` | string | Yes | Memory hash |
Returns: { hash, deleted }
`delete_memories`: Delete multiple memories by hash in one transaction.
| Name | Type | Required | Description |
|---|---|---|---|
| `hashes` | string[] | Yes | 1–50 memory hashes |
Returns: { items, succeeded, failed }
`search_memories`: Full-text search over memory content and tags using FTS5. Supports importance and type filters with cursor pagination.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | — | Search text (1–1000 chars) |
| `limit` | integer | No | 20 | Results per page (1–100) |
| `cursor` | string | No | — | Pagination cursor from previous response |
| `min_importance` | integer | No | — | Only return memories with importance >= this value (0–10) |
| `max_importance` | integer | No | — | Only return memories with importance <= this value (0–10) |
| `memory_type` | enum | No | — | Filter by memory type |
Returns: { memories, total_returned, nextCursor? }
`create_relationship`: Create a directed relationship edge between two memories. Idempotent.
Suggested `relation_type` values: `related_to`, `causes`, `depends_on`, `parent_of`, `child_of`, `supersedes`, `contradicts`, `supports`, `references`.
| Name | Type | Required | Description |
|---|---|---|---|
| `from_hash` | string | Yes | Source memory hash |
| `to_hash` | string | Yes | Target memory hash |
| `relation_type` | string | Yes | Edge label (1–50 chars, no whitespace, free-form) |
Returns: { created }
`delete_relationship`: Delete one directed relationship edge.
| Name | Type | Required | Description |
|---|---|---|---|
| `from_hash` | string | Yes | Source hash |
| `to_hash` | string | Yes | Target hash |
| `relation_type` | string | Yes | Relationship type |
Returns: { deleted } or { ok: false, error } on E_NOT_FOUND.
`get_relationships`: Retrieve relationships for a memory, with optional direction filter.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `hash` | string | Yes | — | Memory hash |
| `direction` | enum | No | both | outgoing, incoming, or both |
Returns: { relationships, count }
Each relationship includes from_hash, to_hash, relation_type, created_at, linked_hash, linked_content, and linked_tags.
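The direction filter can be sketched in Python over a plain edge list standing in for the relationships table; the function name mirrors the tool, but the implementation is illustrative, not the server's code.

```python
def get_relationships(edges: list[dict], hash_: str, direction: str = "both") -> dict:
    """Return edges touching hash_, filtered by 'outgoing', 'incoming', or 'both'."""
    matches = []
    for edge in edges:
        is_outgoing = edge["from_hash"] == hash_
        is_incoming = edge["to_hash"] == hash_
        if (direction in ("outgoing", "both") and is_outgoing) or (
            direction in ("incoming", "both") and is_incoming
        ):
            matches.append(edge)
    return {"relationships": matches, "count": len(matches)}
```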
`recall`: Search memories by full-text query, then traverse the relationship graph up to `depth` hops via BFS. Emits MCP progress notifications per hop.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | — | Seed search query (1–1000 chars) |
| `depth` | integer | No | 1 | BFS hops (0–3) |
| `limit` | integer | No | 10 | Seed memory count (1–50) |
| `cursor` | string | No | — | Pagination cursor from previous response |
| `min_importance` | integer | No | — | Seed filter: only memories with importance >= value (0–10) |
| `max_importance` | integer | No | — | Seed filter: only memories with importance <= value (0–10) |
| `memory_type` | enum | No | — | Seed filter: only memories of this type |
Returns: { memories, graph, depth_reached, aborted?, nextCursor? }
Each item in graph uses the shape:
{ "from_hash": "...", "to_hash": "...", "relation_type": "..." }
[!NOTE]
`aborted: true` indicates the traversal hit a safety limit (`RECALL_MAX_FRONTIER_SIZE`, `RECALL_MAX_EDGE_ROWS`, or `RECALL_MAX_VISITED_NODES`). Partial results are still returned.
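The seed-then-traverse behavior can be sketched as a bounded BFS in Python. This is a simplification: only the visited-node guard is modeled, whereas the real server also enforces frontier-size and edge-row limits and seeds from FTS5 search; following only outgoing edges is an assumption.

```python
from collections import deque


def recall_traverse(seeds: list[str], edges: list[dict], depth: int = 1,
                    max_visited: int = 5000) -> dict:
    """BFS up to `depth` hops from seed hashes; aborted=True when the guard trips."""
    adjacency: dict[str, list[str]] = {}
    for edge in edges:  # assumption: traverse outgoing edges only
        adjacency.setdefault(edge["from_hash"], []).append(edge["to_hash"])

    visited = set(seeds)
    frontier = deque(seeds)
    for hop in range(depth):
        next_frontier: deque[str] = deque()
        while frontier:
            for neighbor in adjacency.get(frontier.popleft(), []):
                if neighbor in visited:
                    continue
                if len(visited) >= max_visited:  # safety guard, cf. RECALL_MAX_VISITED_NODES
                    return {"memories": sorted(visited), "depth_reached": hop, "aborted": True}
                visited.add(neighbor)
                next_frontier.append(neighbor)
        frontier = next_frontier
    return {"memories": sorted(visited), "depth_reached": depth, "aborted": False}
```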
`retrieve_context`: Search memories and return relevance-ranked results that fit within a caller-specified token budget. Eliminates manual pagination and token counting for context window management.
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | — | Search query (1–1000 chars) |
| `token_budget` | integer | No | 4000 | Maximum estimated tokens to return (100–200000) |
| `strategy` | enum | No | relevance | Sort order: relevance (FTS rank), importance (highest first), recency (newest first) |
Returns: { memories, estimated_tokens, truncated }
[!TIP] Token estimation is approximate (content length ÷ 4). `truncated: true` means the budget was reached before all candidates were included.
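The budget packing can be sketched in Python using the documented length-divided-by-4 token estimate; the greedy stop-at-first-overflow policy below is an assumption, not the server's actual code.

```python
def retrieve_context(candidates: list[dict], token_budget: int = 4000) -> dict:
    """Pack ranked memories until the estimated token budget is reached.

    `candidates` is assumed to be pre-sorted by the chosen strategy
    (relevance, importance, or recency)."""
    selected: list[dict] = []
    estimated = 0
    truncated = False
    for memory in candidates:
        cost = len(memory["content"]) // 4  # documented approximation
        if estimated + cost > token_budget:
            truncated = True
            break
        selected.append(memory)
        estimated += cost
    return {"memories": selected, "estimated_tokens": estimated, "truncated": truncated}
```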
`memory_stats`: Return aggregate memory and relationship stats. Takes no input.
Returns:
{
"memories": {
"total": 0,
"oldest": null,
"newest": null,
"avg_importance": null
},
"relationships": { "total": 0 },
"by_type": {}
}
| URI | MIME | Description |
|---|---|---|
| `internal://instructions` | text/markdown | Markdown usage guide for all tools and workflows |
| `memory://memories/{hash}` | application/json | Returns one memory as JSON; hash completion supported |
| Name | Arguments | Purpose |
|---|---|---|
| `get-help` | none | Returns full usage instructions for all tools |
| Variable | Description | Default | Required |
|---|---|---|---|
| `MEMORY_DB_PATH` | SQLite database file path | memory_db/memory.db | No |
| `RECALL_MAX_FRONTIER_SIZE` | Max BFS frontier nodes per hop (100–50000) | 1000 | No |
| `RECALL_MAX_EDGE_ROWS` | Max relationship rows fetched per traversal (100–50000) | 5000 | No |
| `RECALL_MAX_VISITED_NODES` | Max visited nodes across entire traversal (100–50000) | 5000 | No |
[!IMPORTANT] If `MEMORY_DB_PATH` is relative (including the default `memory_db/memory.db`), it resolves from the process working directory.
[!TIP] Add `memory_db/` to your `.gitignore` to keep the database out of version control — it contains local session data and should not be shared or committed.
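Since a relative `MEMORY_DB_PATH` resolves from the client's working directory, one way to make the database location predictable is to pin an absolute path via the config's `env` block, which many MCP clients support for stdio servers (the path below is a placeholder):

```json
{
  "mcpServers": {
    "memory-mcp": {
      "command": "npx",
      "args": ["-y", "@j0hanz/memory-mcp@latest"],
      "env": { "MEMORY_DB_PATH": "/absolute/path/to/memory.db" }
    }
  }
}
```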
| Item | Value |
|---|---|
| Content length | 1–100000 chars |
| Tag count | 1–100 per memory |
| Tag length | 1–50 chars, no whitespace |
| Hash format | 64-char lowercase hex SHA-256 |
| Search query length | 1–1000 chars |
| `search_memories.limit` | 1–100 (default 20) |
| `recall.depth` | 0–3 (default 1) |
| `recall.limit` | 1–50 (default 10) |
| `retrieve_context.token_budget` | 100–200000 (default 4000) |
| Batch size | 1–50 items (store_memories, delete_memories) |
| Recall frontier guard | RECALL_MAX_FRONTIER_SIZE (default 1000 per hop) |
| SQLite busy timeout | 5000 ms |
[!NOTE] Cursor values are opaque base64url-encoded tokens; do not parse or construct them, simply pass them back unchanged.
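For intuition only, a cursor could be shaped like the sketch below; the server's actual payload is undocumented (the JSON state here is hypothetical), which is exactly why clients must echo the token back unchanged rather than build their own.

```python
import base64
import json


def encode_cursor(state: dict) -> str:
    """Pack pagination state as an unpadded base64url token (payload is hypothetical)."""
    raw = json.dumps(state, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")


def decode_cursor(token: str) -> dict:
    """Inverse of encode_cursor; restores stripped base64 padding."""
    padding = "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(token + padding))


token = encode_cursor({"offset": 20, "query": "paris"})
```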
Transport and security notes:
- Stdio transport only (`StdioServerTransport`) — no HTTP endpoints.
- Logging goes to stderr; stdout must remain clean for the MCP protocol.
- Queries are sanitized before FTS5 `MATCH` execution (non-alphanumeric characters act as delimiters, preventing FTS injection).

Install dependencies:
npm install
Core scripts:
| Script | Command | Purpose |
|---|---|---|
build | npm run build | Clean, compile, validate instructions, copy assets, chmod executable |
dev | npm run dev | TypeScript watch mode |
dev:run | npm run dev:run | Run built server with .env and file watch |
start | npm run start | Start built server |
test | npm run test | Full build + tests via task runner |
test:fast | npm run test:fast | Run TS tests directly with Node test runner |
lint | npm run lint | ESLint checks |
lint:fix | npm run lint:fix | ESLint auto-fix |
type-check | npm run type-check | Strict TypeScript checks |
format | npm run format | Prettier format |
inspector | npm run inspector | Build and open MCP Inspector against stdio server |
Inspect with MCP Inspector:
npx @modelcontextprotocol/inspector node dist/index.js
GitHub Actions release workflow (.github/workflows/release.yml) handles versioning, validation, and publishing via a single workflow_dispatch trigger:
workflow_dispatch (patch / minor / major / custom)
│
▼
release — bump package.json + server.json → lint → type-check → test → build → tag → GitHub Release
│
├──► publish-npm ──► publish-mcp (npm Trusted Publishing OIDC → MCP Registry)
│
└──► publish-docker (GHCR, linux/amd64 + linux/arm64)
Trigger a release:
gh workflow run release.yml -f bump=patch
Or use the GitHub UI: Actions → Release → Run workflow.
[!NOTE] npm publishing uses OIDC Trusted Publishing — no `NPM_TOKEN` secret required. MCP Registry uses GitHub OIDC. Docker uses the built-in `GITHUB_TOKEN`.
| Symptom | Cause | Fix |
|---|---|---|
| Startup fails with FTS5 error | Node.js build without FTS5 | Use Node.js 24+ with SQLite FTS5 support |
| `E_NOT_FOUND` on get_memory | Hash doesn't exist | Verify via `search_memories` first |
| `E_INVALID_CURSOR` | Stale or malformed cursor | Retry the request without the cursor parameter |
| MCP client can't connect | Custom stdout logging added | Ensure nothing writes to stdout in the server process |
| `aborted: true` in recall | Traversal hit a safety limit | Reduce depth, or tune `RECALL_MAX_*` env vars |
| Database locked errors | High concurrent write load | SQLite busy timeout is 5000 ms; reduce concurrent writes |
MIT