Local-first cross-agent memory MCP. 6-layer WHY structure + AST file diff cache (86% savings).
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-michielinksee-linksee-memory": {
"args": [
"-y",
"linksee-memory"
],
"command": "npx"
}
}
}
Local-first agent memory MCP. A cross-agent brain for Claude Code, Cursor, and ChatGPT Desktop — with a token-saving file diff cache that no other memory service ships.
v0.2.0 makes the package English-first for the global launch: the bundled auto-invocation skill is now bilingual (EN + JP), session-extractor patterns cover common English keywords (`let's go`, `pivot`, `doesn't work`, `same error again`, etc.), and the install CLI shows test examples in both languages. No API changes. See CHANGELOG.
Landing page: linksee-site.vercel.app (includes non-developer onboarding for Claude Desktop / Cursor / Claude Code)
Without linksee-memory — Monday morning, new Claude session:
You: We deployed last week but it crashed. How did we fix it?
Claude: I don't have access to previous sessions. Can you describe
what happened and walk me through the problem?
[30 minutes of log-spelunking and re-explanation]
With linksee-memory — same question, different outcome:
You: We deployed last week but it crashed. How did we fix it?
Claude: Let me check my caveats...
[caveat] NextAuth sessions invalidate when JWT_SECRET
rotates — redeploy all affected projects in parallel.
(from session 2026-04-13, importance: 0.9)
Is this the deploy you're asking about? We hit it when
we rotated secrets mid-flow.
You: Yes, exactly. Let's not repeat that.
That single caveat memory is what separates "flat fact storage" from "the agent actually remembers the WHY". linksee-memory stores it across six explicit layers so retrieval stays explainable.
┌─────────────────┬──────────────────────────────────────────┐
│ goal            │ what the user is working toward          │
├─────────────────┼──────────────────────────────────────────┤
│ context         │ why this, why now — constraints, people  │
├─────────────────┼──────────────────────────────────────────┤
│ emotion         │ user tone signals (frustration, etc.)    │
├─────────────────┼──────────────────────────────────────────┤
│ implementation  │ how it was done (+ what failed)          │
├─────────────────┼──────────────────────────────────────────┤
│ caveat          │ "never do this again" · auto-protected   │
├─────────────────┼──────────────────────────────────────────┤
│ learning        │ patterns distilled from cold memories    │
└─────────────────┴──────────────────────────────────────────┘
        │
        ▼
Ranked recall via relevance × heat × momentum × importance
Returns match_reasons explaining each hit
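The ranking step above can be sketched in a few lines of TypeScript. This is a minimal illustration — field names, thresholds, and the exact combination formula are assumptions, not the package's actual schema:

```typescript
// Hypothetical sketch of relevance × heat × momentum × importance ranking.
// Field names and thresholds are illustrative, not linksee-memory's schema.
interface ScoredMemory {
  text: string;
  relevance: number;   // FTS match strength, 0..1
  heat: number;        // recent-access score, 0..1
  momentum: number;    // trend of access frequency, 0..1
  importance: number;  // assigned weight, 0..1
}

function compositeScore(m: ScoredMemory): number {
  return m.relevance * m.heat * m.momentum * m.importance;
}

// match_reasons explain WHY a row surfaced, so recall stays explainable.
function matchReasons(m: ScoredMemory): string[] {
  const reasons: string[] = [];
  if (m.relevance > 0.5) reasons.push("content_match_fts");
  if (m.heat > 0.7) reasons.push("heat:hot");
  if (m.importance >= 0.9) reasons.push("pinned");
  return reasons;
}

function rank(memories: ScoredMemory[]): ScoredMemory[] {
  return [...memories].sort((a, b) => compositeScore(b) - compositeScore(a));
}
```

A multiplicative combination like this means a memory must score on every axis to rank highly; the real implementation may weight the axes differently.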
Every memory is tagged with exactly one layer. caveat-layer entries are protected from auto-forgetting. Cold low-importance memories get compressed into learning entries via consolidate().
Most "agent memory" services (Mem0, Letta, Zep) save a flat list of facts. Then the agent looks at "edited file X 30 times" and has no idea why. linksee-memory keeps the WHY.
It is a Model Context Protocol (MCP) server that gives any AI agent four superpowers:
| | Mem0 / Letta / Zep | Claude Code auto-memory | linksee-memory |
|---|---|---|---|
| Cross-agent | △ (cloud) | ✗ Claude only | ✓ single SQLite file |
| 6-layer WHY structure | ✗ flat | ✗ flat markdown | ✓ goal / context / emotion / impl / caveat / learning |
| File diff cache | ✗ | ✗ | ✓ AST-aware, 50–99% token savings on re-reads |
| Active forgetting | △ | ✗ | ✓ Ebbinghaus curve, caveat layer protected |
| Local-first / private | ✗ | ✓ | ✓ |
- read_smart — sha256 + AST/heading/indent chunking. Re-reads return only diffs. Measured 86% saved on a typical TS file edit, 99% saved on unchanged re-reads.
- Cross-agent memory — one SQLite file at ~/.linksee-memory/memory.db. Same brain for Claude Code, Cursor, ChatGPT Desktop.
- Six WHY layers (goal / context / emotion / implementation / caveat / learning). Solves "flat fact memory is useless without goals".

npm install -g linksee-memory
linksee-memory-import --help # bundled importer for Claude Code session history
Or use npx ad hoc:
npx linksee-memory # starts the MCP server on stdio
The default database lives at ~/.linksee-memory/memory.db. Override with the LINKSEE_MEMORY_DIR environment variable.
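The override behavior amounts to a simple path resolution. A sketch — LINKSEE_MEMORY_DIR is the documented variable, but the exact resolution logic here is assumed, not the package's code:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Resolve the database path: the LINKSEE_MEMORY_DIR env override wins,
// otherwise fall back to ~/.linksee-memory. Sketch only — the package's
// actual resolution logic may differ.
function resolveDbPath(env: Record<string, string | undefined>): string {
  const dir = env.LINKSEE_MEMORY_DIR ?? join(homedir(), ".linksee-memory");
  return join(dir, "memory.db");
}
```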
claude mcp add -s user linksee -- npx -y linksee-memory
Restart Claude Code. Tools appear as mcp__linksee__remember, mcp__linksee__recall, mcp__linksee__recall_file, mcp__linksee__read_smart, mcp__linksee__forget, mcp__linksee__consolidate.
Installing the MCP alone doesn't teach Claude Code when to call recall / remember. The bundled skill fixes that:
npx -y linksee-memory-install-skill
This copies a SKILL.md to ~/.claude/skills/linksee-memory/. Claude Code auto-discovers it and fires the skill on phrases like "前に…" ("previously…"), "また同じエラー" ("same error again"), "覚えておいて" ("remember this"), on new task starts, on file edits, and so on — no need to say "use linksee-memory".
Flags: --dry-run, --force, --help.
Add to ~/.claude/settings.json to record every Claude Code session to your local brain automatically:
{
"hooks": {
"Stop": [
{
"matcher": "",
"hooks": [
{ "type": "command", "command": "npx -y linksee-memory-sync" }
]
}
]
}
}
Each turn end takes ~100 ms. Failures are silent (Claude Code never blocks). Logs at ~/.linksee-memory/hook.log.
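The "failures are silent" contract can be sketched as a wrapper that swallows every error and appends it to the log instead. Hypothetical code — the shipped linksee-memory-sync entry point is the real implementation:

```typescript
import { appendFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// A Stop hook must never block Claude Code: catch every error, log it
// to ~/.linksee-memory/hook.log, and always report success.
// Sketch only — names and structure are illustrative.
async function runHook(
  sync: () => Promise<void>,
  logPath?: string,
): Promise<number> {
  try {
    await sync();
  } catch (err) {
    const log = logPath ?? join(homedir(), ".linksee-memory", "hook.log");
    try {
      appendFileSync(log, `${new Date().toISOString()} ${String(err)}\n`);
    } catch {
      // even logging failures are swallowed — the hook must not block
    }
  }
  return 0; // always exit 0 so Claude Code continues unimpeded
}
```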
| Tool | Purpose |
|---|---|
| remember | Store a memory in 1 of 6 layers for an entity. Rejects pasted assistant output / CI logs unless force=true. Set importance=1.0 to pin (survives auto-forget). |
| recall | FTS5 + heat × momentum × importance composite ranking with match_reasons explaining WHY each row matched. Supports pagination (offset/has_more), band filter, layer aliases (decisions/warnings/how/...), and mark_accessed=false for passive previews. |
| recall_file | Complete edit history of a file across all sessions, with per-edit user-intent context. |
| update_memory | v0.1.0 — Atomic edit of an existing memory. Preserves memory_id (session_file_edits links stay intact). Prefer over forget+remember. |
| list_entities | v0.1.0 — List what the memory knows about — the cheapest "what do I know?" primitive. Filter by kind/min_memories; returns layer breakdown per entity. |
| read_smart | Diff-only file read. Returns full content on first read, ~50 tokens on unchanged re-reads, only changed chunks on real edits. |
| forget | Explicit delete OR auto-sweep based on forgettingRisk. Pinned (importance>=1.0) and caveat-layer memories are always preserved. |
| consolidate | Sleep-mode compression: clusters cold low-importance memories into a protected learning-layer summary. Supports dry_run preview. |
| Command | Purpose |
|---|---|
| npx linksee-memory | MCP server (stdio) |
| npx linksee-memory-sync | Claude Code Stop-hook entry point |
| npx linksee-memory-import | Batch-import Claude Code session JSONL history |
| npx linksee-memory-install-skill | Install the Claude Code Skill that teaches the agent when to call recall/remember/read_smart |
| npx linksee-memory-stats | v0.1.0 — Summary of the local DB (entity count / layer breakdown / top entities / top edited files). Add --json for machine-readable output. |
Each entity (person / company / project / file / concept) can have memories across six layers. The layer encodes meaning, not category:
{
"goal": { "primary": "...", "sub_tasks": [], "deadline": "..." },
"context": { "why_now": "...", "triggering_event": "...", "when": "..." },
"emotion": { "temperature": "hot|warm|cold", "user_tone": "..." },
"implementation": {
"success": [{ "what": "...", "evidence": "..." }],
"failure": [{ "what": "...", "why_failed": "..." }]
},
"caveat": [{ "rule": "...", "reason": "...", "from_incident": "..." }],
"learning":[{ "at": "...", "learned": "...", "prior_belief": "..." }]
}
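In TypeScript terms, the exactly-one-layer rule and the caveat/pin protection might look like this. Illustrative types only — not the package's exported API:

```typescript
// The six WHY layers. Every memory is tagged with exactly one.
// Illustrative types — not linksee-memory's exported API.
type Layer =
  | "goal" | "context" | "emotion"
  | "implementation" | "caveat" | "learning";

const LAYERS: readonly string[] = [
  "goal", "context", "emotion", "implementation", "caveat", "learning",
];

interface Memory {
  entity: string;
  layer: Layer;
  content: unknown;
  importance: number; // 0..1; >= 0.9 is treated as pinned
}

// caveat entries and pinned memories survive the auto-forget sweep.
function isProtected(m: Memory): boolean {
  return m.layer === "caveat" || m.importance >= 0.9;
}

function validateLayer(layer: string): layer is Layer {
  return LAYERS.includes(layer);
}
```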
- caveat memories are auto-protected from forgetting (pain lessons, never lost).
- goal memories bypass decay while the goal is active.

A single SQLite file (better-sqlite3 + FTS5 trigram tokenizer for JP/EN) contains five layers:

- entities — facts: people / companies / projects / concepts / files
- edges — associations, graph adjacency
- memories — 6-layer structured meanings per entity
- events — time-series log for heat / momentum computation
- file_snapshots + session_file_edits — diff cache + conversation↔file linkage

The conversation↔file linkage is the key. Every file edit captured by the Stop hook is stored alongside the user message that drove the edit. So recall_file("server.ts") returns "this file was edited 30 times across 3 days, and here are the actual user instructions that motivated each change".

- memory.db is one portable artifact. Backup = file copy.
- heat_score / momentum_score are ported from a production sales-intelligence codebase. Rule-based, no LLM dependency in the hot path.
- Six MCP tools (remember / recall / recall_file / forget / consolidate / read_smart)

Planned:

- PreToolUse hook to auto-intercept Read (zero-config token savings)
- sqlite-vec once an embedding backend is chosen (Ollama / API / etc.)

Claude Code ships a built-in memory feature at ~/.claude/projects/<path>/memory/*.md — flat markdown notes for user preferences. linksee-memory complements it:
Use both.
linksee-memory ships with opt-in anonymous telemetry that helps us understand which MCP servers and workflows actually work in the wild. Nothing is sent unless you explicitly enable it. No conversation content, no file content, no entity names, no project paths — ever.
export LINKSEE_TELEMETRY=basic # opt in
export LINKSEE_TELEMETRY=off # opt out (or just unset the variable)
After each Claude Code session ends, the Stop hook sends one POST to https://kansei-link-mcp-production.up.railway.app/api/telemetry/linksee containing only these fields:
| Field | Example | What it is |
|---|---|---|
| anon_id | d7924ced-3879-… | Random UUID generated locally on first opt-in. Stored at ~/.linksee-memory/telemetry-id — delete the file to reset. |
| linksee_version | 0.0.3 | Package version |
| session_turn_count | 120 | How many turns the session had |
| session_duration_sec | 3600 | How long the session lasted |
| file_ops_edit/write/read | 12, 2, 40 | Counts only |
| mcp_servers | ["kansei-link","freee","slack"] | Names of MCP servers configured (from ~/.claude.json). Names only — never command paths. |
| file_extensions | {".ts":60,".md":30} | Percent distribution of file extensions touched |
| read_smart_*, recall_* | counts | Tool-usage counters |
What is NEVER sent: conversation content, file content, entity names, project paths.
Aggregated MCP-usage data helps the KanseiLink project rank which agent integrations actually work for real developers. If you're happy to contribute, LINKSEE_TELEMETRY=basic takes 1 second to set and helps the entire MCP ecosystem improve.
The full payload schema and validation logic are open-source — read src/lib/telemetry.ts if you want to verify exactly what leaves your machine.
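Conceptually the guarantee is an allowlist filter over the payload: any key not explicitly named is dropped. A sketch — field names come from the table above, but the filtering logic here is assumed, not the actual src/lib/telemetry.ts:

```typescript
// Only explicitly allowlisted keys ever leave the machine.
// Sketch of the idea; per-tool counters (read_smart_*, recall_*) elided.
const ALLOWED_FIELDS = new Set([
  "anon_id",
  "linksee_version",
  "session_turn_count",
  "session_duration_sec",
  "file_ops_edit", "file_ops_write", "file_ops_read",
  "mcp_servers",
  "file_extensions",
]);

function sanitizePayload(
  raw: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (ALLOWED_FIELDS.has(key)) out[key] = value; // everything else is dropped
  }
  return out;
}
```

An allowlist (rather than a blocklist) means a new field can never leak by accident: it is excluded until someone deliberately adds it.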
Free forever.
linksee-memory is local-first and runs entirely on your machine. There is no hosted component you need to pay for. The SQLite DB lives in your home directory; backup = file copy.
No account, no credit card, no API key. Just install and use.
ls ~/.claude/skills/linksee-memory/SKILL.md
If absent, run npx -y linksee-memory-install-skill.

Verify the server is registered as linksee (the skill expects mcp__linksee__* tool names):
claude mcp list | grep linksee
If it's registered as something else, either re-register or edit ~/.claude/skills/linksee-memory/SKILL.md to match.

Check the hook log: cat ~/.linksee-memory/hook.log

Test the sync entry point manually:

echo '{"session_id":"test","transcript_path":"/path/to/some.jsonl"}' | npx linksee-memory-sync

Make sure the Stop hook in ~/.claude/settings.json points to npx -y linksee-memory-sync (not the old -import).

v0.0.6+ fixed the entity-detection bug that collapsed all memories into the session's starting cwd. To re-index existing history with correct project attribution, run:
npx linksee-memory-import --all
The importer is idempotent (wipes existing session data before re-inserting). Typical runtime: a few minutes for hundreds of sessions. Expect a dramatic improvement in recall precision afterward.
Reduce max_tokens:
recall({ query: "...", max_tokens: 800 }) // default is 2000
Or narrow with entity_name and layer:
recall({ query: "...", entity_name: "my-project", layer: "caveat" })
rm -rf ~/.linksee-memory # nuke everything; next run creates a fresh DB
Or delete individual memories via the forget tool with a specific memory_id.
Run consolidate โ it clusters old cold memories into compressed learning-layer summaries:
consolidate({ scope: "all", min_age_days: 7 })
Caveat and active-goal layers are always preserved. Consider scheduling a weekly run via cron / Task Scheduler.
Three axes:
1. The six-layer WHY structure (goal/context/emotion/implementation/caveat/learning), so retrieval returns structured reasoning, not just data.
2. The read_smart tool saves 86–99% of tokens on file re-reads via AST-aware chunking. None of the memory services do this — it's a feature usually shipped in IDEs.
3. Local-first privacy: everything lives in a single SQLite file on your machine.

Claude Code's auto-memory is Claude-only (doesn't help if you switch to Cursor or ChatGPT Desktop) and stores flat markdown with no structure. linksee-memory follows the same local-first principle but adds cross-agent access and the six-layer structure.
Yes — see tools/bench-read-smart.ts in the repo. On unchanged re-reads the read_smart tool returns ~50 tokens of "unchanged" confirmation instead of re-sending the file. For a typical TypeScript file edit in an agentic loop, this cuts round-trip token costs by ~86%. On pure re-reads (user navigating back to a previously-read file), savings exceed 99%.
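The mechanism can be sketched as chunk-level hashing: hash each chunk of the file, and on re-read return only chunks whose hash changed. This is a simplification — it splits on blank lines where the real implementation is AST/heading/indent-aware:

```typescript
import { createHash } from "node:crypto";

// Diff-only re-read sketch. Splitting on blank lines stands in for
// linksee-memory's real AST/heading/indent-aware chunker.
function chunk(content: string): string[] {
  return content.split(/\n\s*\n/);
}

function hashes(chunks: string[]): string[] {
  return chunks.map(c => createHash("sha256").update(c).digest("hex"));
}

function diffRead(content: string, cached: string[] | undefined) {
  const chunks = chunk(content);
  const now = hashes(chunks);
  // First read: return everything, remember the hashes.
  if (!cached) return { changed: chunks, hashes: now };
  // Re-read: return only chunks whose hash differs from the cache.
  const changed = chunks.filter((_, i) => now[i] !== cached[i]);
  return { changed, hashes: now };
}
```

On an unchanged re-read `changed` is empty, so only a tiny "nothing changed" confirmation needs to reach the model.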
The default is no sync — the SQLite file lives at ~/.linksee-memory/memory.db and stays there. If you want multi-machine sync, put that directory under Syncthing / iCloud Drive / Dropbox / Google Drive — it's a single file, so any file-sync tool works. (Avoid simultaneous edits from two machines while the MCP server is running on both; SQLite's WAL mode handles single-writer well but multi-writer conflicts can corrupt.)
Two mechanisms:

- Auto-forgetting along an Ebbinghaus-style curve — the caveat layer and memories with importance >= 0.9 are always protected.
- consolidate(): compresses clusters of cold low-importance memories by entity into a single learning-layer summary, then deletes the originals. Run via the linksee-memory-consolidate CLI (or schedule weekly).

In practice a solo developer hits ~100MB after 6 months of heavy use. A year-old DB I tested with 80K memories still recalls in <10ms.
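The decay side can be sketched with an exponential retention curve. The stability constant and the forget cutoff below are assumptions for illustration, not the package's actual parameters:

```typescript
// Ebbinghaus-style retention: R = exp(-t / S), where t is days since
// last access and S is a stability constant. S = 30 and the 0.2 cutoff
// are illustrative — linksee-memory's real parameters may differ.
function retention(daysSinceAccess: number, stability = 30): number {
  return Math.exp(-daysSinceAccess / stability);
}

function shouldForget(
  m: { layer: string; importance: number },
  daysSinceAccess: number,
): boolean {
  // caveat-layer and pinned (importance >= 0.9) memories never decay away.
  if (m.layer === "caveat" || m.importance >= 0.9) return false;
  return retention(daysSinceAccess) < 0.2;
}
```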
Yes — any MCP-compatible client works:

- Claude Code: claude mcp add -s user linksee -- npx -y linksee-memory
- Claude Desktop / Cursor / others: add the server to claude_desktop_config.json (see onboarding on the landing page)

By default: zero network calls, zero telemetry. There's an optional Level-1 telemetry mode you can enable that sends anonymized aggregate metrics (tool call counts, error rates, latency percentiles — never memory content, never file paths, never queries). The exact payload schema is documented in the Telemetry section and you see every byte before opting in.
After install, in a new Claude session ask: "Can you remember that I prefer TypeScript over JavaScript?" Claude should confirm it called mcp__linksee__remember and stored this. Then in a different session ask: "What languages do I prefer?" It should recall via mcp__linksee__recall and return the preference with match_reasons showing why.
v0.2.0 prepares the package for a broader (primarily English-speaking) audience on Reddit, Hacker News, and the Anthropic Discord. No breaking API changes.

- SKILL.md (auto-invocation skill): the bundled skill that linksee-memory-install-skill copies into ~/.claude/skills/linksee-memory/SKILL.md was Japanese-first; it is now English-primary with Japanese trigger phrases preserved inline. English speakers now get the skill firing on natural English phrases ("how did we solve this before?", "same error again", "remember this") in addition to the existing JP triggers.
- Session extractor (linksee-memory-import): expanded regex patterns for decisions, failures, and caveats so English Claude Code session logs get auto-tagged correctly. Additions include let's go, pivot, switch to, settled on, approved, doesn't work, stuck, same error again, hit an error, debug, broke, revert.

No code changes to the MCP protocol surface; all existing MCP clients continue to work unchanged.
Based on real-world feedback that importance=0.95 memories were not being treated as pinned despite intent, the pinning threshold dropped from >= 1.0 to >= 0.9. Memories with importance >= 0.9 are now exempt from the auto-forget sweep and surface pinned: true in recall and remember responses. This matches the natural mental model ("0.9 = high importance = should survive cleanup") without requiring exact 1.0. All memories with importance >= 0.9 (including older ones set to 0.9 or 0.95) become pinned automatically — no migration needed.

Based on one week of dogfooding, here's what changed:
New tools

- update_memory — atomic edit with preserved memory_id. Solves the "forget+remember breaks session_file_edits links" bug.
- list_entities — fast "what do I know about?" primitive for session init. Supports kind/min_memories filters and returns a layer breakdown.
- npx linksee-memory-stats — local DB summary CLI.

recall enhancements

- match_reasons array on each memory: e.g. ["content_match_fts", "heat:hot", "pinned"].
- score_breakdown with per-dimension scores (relevance / heat / momentum / importance).
- Pagination: offset / has_more / stopped_by.
- limit parameter (hard cap, complements the max_tokens budget).
- band filter to request only hot/warm/cold/frozen memories.
- mark_accessed=false for preview queries that shouldn't bump heat.
- Layer aliases: decisions → learning, warnings → caveat, how → implementation, etc.

remember enhancements

- Rejects pasted assistant output / CI logs unless force=true.
- importance=1.0 now implicitly pins the memory (survives auto-forget).

forget changes

- Sweep previews now include sample_ids_to_drop.

consolidate changes

- dry_run: true preview mode — reports cluster count + candidates without writing.

Infra

- Fixed querying the meta table before it existed.

All changes are backward compatible — existing integrations continue to work. The server.ts version banner now reports v0.1.0.
See GitHub Releases.
MIT — Synapse Arrows PTE. LTD.