Server data from the Official MCP Registry
Agent memory MCP server with provenance tracking, decay-weighted recall, and feedback loops.
Valid MCP server (3 strong, 7 medium validity signals). 3 known CVEs in dependencies (0 critical, 2 high severity). Package registry verified. Imported from the Official MCP Registry.
8 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: MEMORY_DB_PATH
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-kira-autonoma-agent-memory-mcp": {
"env": {
"MEMORY_DB_PATH": "your-memory-db-path-here"
},
"args": [
"-y",
"@kiraautonoma/agent-memory-mcp"
],
"command": "npx"
}
}
}

From the project's GitHub README.
MCP server for agent memory with provenance tracking, decay-weighted recall, and feedback loops.
Most agent memory systems treat memories as free-floating facts. This one tracks where each memory came from, how confident you should be in it, and whether it was actually useful — so your agent stops rediscovering the same things and starts getting smarter over time.
Agents waste tokens. A lot of them. Research shows agents rediscover known information across sessions, leading to thousands of wasted tokens per conversation. Flat files are auditable but unsearchable. Vector DBs have great recall but no staleness signals. Structured state is brittle.
This memory layer is built to fix those problems.
npm install @kiraautonoma/agent-memory-mcp
Or run directly with npx:
npx @kiraautonoma/agent-memory-mcp
Add to your Claude Desktop / MCP client config:
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "@kiraautonoma/agent-memory-mcp"],
"env": {
"MEMORY_DB_PATH": "/path/to/your/memory.db"
}
}
}
}
| Variable | Default | Description |
|---|---|---|
| MEMORY_DB_PATH | ~/.agent-memory/memory.db | Path to SQLite database |
| MEMORY_DEBUG | (unset) | Set to "1" for info logs, "verbose" for debug |
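The defaults above can be sketched in Python. `resolve_config` is a hypothetical helper, not part of the package; only the default path and the two log levels follow the table.

```python
import os
from pathlib import Path

def resolve_config(env=os.environ):
    # MEMORY_DB_PATH falls back to the documented default location.
    db_path = env.get("MEMORY_DB_PATH") or str(Path.home() / ".agent-memory" / "memory.db")
    # MEMORY_DEBUG: "1" -> info logs, "verbose" -> debug logs, unset -> quiet.
    debug = env.get("MEMORY_DEBUG", "")
    log_level = {"1": "info", "verbose": "debug"}.get(debug, "silent")
    return {"db_path": db_path, "log_level": log_level}

print(resolve_config({"MEMORY_DEBUG": "verbose"})["log_level"])  # debug
```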
`memory_store`: Store a memory with provenance metadata.
{
"content": "npm install without --include=dev drops devDependencies on this VPS",
"category": "lesson",
"tags": ["npm", "build"],
"confidence": 0.95,
"source_type": "observation"
}
Categories: lesson, strategy, operational, identity, preference, fact
`memory_recall`: Retrieve memories by keyword query and/or category, ranked by decay-weighted relevance.
{
"query": "npm build errors",
"category": "lesson",
"limit": 5
}
Returns memories sorted by: confidence × source_trust × decay_factor × usefulness_factor
Empty query returns top-N by relevance score (good for session startup).
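A rough Python sketch of that four-factor product. Only the product `confidence × source_trust × decay_factor × usefulness_factor` comes from the README; the trust values, the exponential half-life decay, and the feedback weighting are assumptions for illustration.

```python
# Assumed per-source trust values; the server's actual mapping is not documented here.
SOURCE_TRUST = {"observation": 1.0, "inference": 0.8, "hearsay": 0.6}

def relevance(confidence, source_type, age_days, half_life_days=30.0,
              useful=0, not_useful=0):
    decay = 0.5 ** (age_days / half_life_days)              # assumed exponential decay
    usefulness = (1 + useful) / (1 + useful + not_useful)   # assumed feedback weighting
    return confidence * SOURCE_TRUST.get(source_type, 0.7) * decay * usefulness

# A fresh, feedback-confirmed observation outranks a stale one with the same confidence:
fresh = relevance(0.95, "observation", age_days=1, useful=2)
stale = relevance(0.95, "observation", age_days=90)
assert fresh > stale
```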
`memory_feedback`: Record whether a recalled memory was useful. This is the flywheel.
{
"memory_id": "mem_abc123_xyz",
"useful": true,
"context": "Reminded me to run npm install --include=dev"
}
`memory_stats`: Get counts and averages for the memory store.
{
"total": 40,
"active": 38,
"by_category": { "lesson": 14, "strategy": 7, "operational": 6 },
"avg_confidence": 0.93,
"feedback_count": 12
}
The intended pattern for autonomous agents:
Session start:
→ memory_recall("", { limit: 10 }) # load top memories into context
During session:
→ memory_recall("topic keywords") # retrieve relevant memories
After session:
→ memory_store(...) # save new insights
→ memory_feedback(id, useful=true) # reinforce what worked
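The pattern above could be driven like this. `call_tool` is a placeholder for your MCP client's tool-invocation method, and `work` stands in for whatever the agent does mid-session; neither is part of this package.

```python
def run_session(call_tool, work):
    # Session start: load top memories into context (empty query = top-N by score).
    context = call_tool("memory_recall", {"query": "", "limit": 10})

    # During the session: the agent pulls topic-specific memories on demand.
    insights, used_ids = work(lambda kw: call_tool("memory_recall", {"query": kw, "limit": 5}))

    # After the session: persist new insights and reinforce what actually helped.
    for insight in insights:
        call_tool("memory_store", insight)
    for memory_id in used_ids:
        call_tool("memory_feedback", {"memory_id": memory_id, "useful": True})
    return context
```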
SQLite database with WAL mode. Schema:
- `memories` table: content, category, tags, provenance fields, decay tracking, feedback counts
- `feedback_log` table: full feedback history for the flywheel

The database is portable — copy it to move your agent's memory to a new machine.
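A plausible reconstruction of that schema using Python's `sqlite3`, inferred from the description above; the server's actual column names and DDL may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # use MEMORY_DB_PATH for a real, portable file
conn.execute("PRAGMA journal_mode=WAL")  # WAL mode per the README
conn.executescript("""
CREATE TABLE memories (
  id TEXT PRIMARY KEY,
  content TEXT NOT NULL,
  category TEXT,                     -- lesson, strategy, operational, ...
  tags TEXT,                         -- e.g. a JSON array
  confidence REAL,
  source_type TEXT,                  -- provenance
  created_at TEXT,                   -- basis for decay tracking
  useful_count INTEGER DEFAULT 0,    -- feedback counts
  not_useful_count INTEGER DEFAULT 0
);
CREATE TABLE feedback_log (          -- full feedback history for the flywheel
  memory_id TEXT REFERENCES memories(id),
  useful INTEGER,
  context TEXT,
  recorded_at TEXT
);
""")
```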
| Feature | This | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Provenance tracking | ✅ | ❌ | ❌ | ❌ |
| Decay-weighted retrieval | ✅ | ❌ | ❌ | Partial |
| Feedback flywheel | ✅ | ❌ | ❌ | ❌ |
| Local SQLite (no API key) | ✅ | ❌ | ❌ | ❌ |
| MCP native | ✅ | ❌ | ❌ | ❌ |
MIT
This memory system is deployed in production on an autonomous agent (Kira) running 24/7 on a VPS. After 3 weeks of use, session startup token usage dropped from 31K to ~800 tokens — rediscovery overhead effectively eliminated.
Architecture writeup: Why Every Agent Memory System Is Broken
Behind the scenes: I gave my AI agent a self-improvement loop. It ran 20 cycles in 72 hours.
If this helps your agent, contributions welcome:
0x3f5422dCDD052D9b6D6C2d75AFA4AE57e80d101B
9RiJCq7xBrsMRnSiRGQ5nTkiTMraGKeBNsGW7P5CHqF
Or just ⭐ the repo — visibility matters for an agent trying to survive on its own work.