Neuro-symbolic memory for LLMs (POC)
Valid MCP server (1 strong and 1 medium validity signal). No known CVEs in dependencies. Imported from the Official MCP Registry.
5 files analyzed · No issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests a number of system permissions; most are normal for its category.
Add this to your MCP configuration file:
{
"mcpServers": {
"mcp-server": {
"args": [
"-y",
"@modelcontextprotocol/server-smart-memory"
],
"command": "npx"
}
}
}

From the project's GitHub README:
Give your LLM structured memory | Transform conversations into verified knowledge graphs
> [!CAUTION]
> Proof of Concept Only: This project is an experimental implementation of a Neuro-Symbolic architecture. It is designed to demonstrate how LLMs can interact with knowledge graphs for rule learning. It is NOT intended for production or professional use. Use it for research, experimentation, and learning purposes only.
New user? → 5-Minute Quick Start Guide
Having issues? → Troubleshooting Guide
Need to configure? → Configuration Reference
Want to understand how it works? → Neuro-Symbolic Architecture | Technical Architecture
Looking for specific docs? → Documentation Index
SmartMemory enables your favorite LLM (Claude, Gemini, etc.) to remember facts, learn business rules, and deduce new information.
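To make "structured memory" concrete: instead of keeping raw chat history, facts are stored as subject–predicate–object triples in a knowledge graph. Here is a minimal conceptual sketch of that idea; the `remember` function and triple layout are illustrative assumptions, not SmartMemory's actual data model:

```python
# Conceptual sketch of triple-based memory -- NOT SmartMemory's real data model.
# A fact like "Bob goes to work by car" becomes a structured, queryable triple.
facts: set[tuple[str, str, str]] = set()

def remember(subject: str, predicate: str, obj: str) -> None:
    """Store one fact as a (subject, predicate, object) triple."""
    facts.add((subject, predicate, obj))

remember("Bob", "is_a", "person")
remember("Bob", "commutes_by", "car")

# Unlike a free-form transcript, structured facts support precise queries:
print([f for f in facts if f[0] == "Bob"])
# [('Bob', 'is_a', 'person'), ('Bob', 'commutes_by', 'car')]  (order may vary)
```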
You can use it in two main ways: as an MCP server connected to your chat client, or as a standalone Web Dashboard (covered further below).

MCP server mode gives your LLM "long-term memory" and logical deduction capabilities.
Best for: Everyone! No Python installation required.
The SmartMemory Docker image is available on GitHub Container Registry.
Simply add to your MCP client configuration:
For Claude Desktop (macOS), edit ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"smart-memory": {
"command": "docker",
"args": ["run", "--rm", "-i", "ghcr.io/mauriceisrael/smart-memory:latest"]
}
}
}
For Gemini (Cline), edit ~/.cline/mcp_settings.json:
{
"mcpServers": {
"smart-memory": {
"command": "docker",
"args": ["run", "--rm", "-i", "ghcr.io/mauriceisrael/smart-memory:latest"]
}
}
}
Restart your client and you're done! ✅
Best for: Developers & Privacy-conscious users who want to run from source.
Clone & Install
git clone https://github.com/MauriceIsrael/SmartMemory
cd SmartMemory
python3 -m venv venv
source venv/bin/activate
pip install -e .
Connect to Claude Desktop
Edit your configuration file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"smartmemory": {
"command": "/absolute/path/to/SmartMemory/venv/bin/python",
"args": ["-m", "smart_memory.server"]
}
}
}
(Replace /absolute/path/... with your actual path)
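Optionally, before restarting Claude, you can verify that the module the config points at actually resolves inside the venv. A small sanity check; the module path smart_memory.server comes from the config above, while check_install.py is just an illustrative file name:

```python
# check_install.py -- run with: venv/bin/python check_install.py
# Confirms the module referenced by the MCP config above is importable.
import importlib.util

try:
    spec = importlib.util.find_spec("smart_memory.server")
except ModuleNotFoundError:
    spec = None

print("OK: smart_memory.server found" if spec
      else "Not found: re-run 'pip install -e .' inside the venv")
```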
Chat! Restart Claude and try:
"I know Bob. He goes to work by car. Can he vote?"
See Interactive Demo below for what to expect.
This mode runs the Web Dashboard and API server. You don't need Python installed. Just Docker.
Run the container
For Dashboard mode (web interface):
For Ollama (local):
docker run -p 8080:8080 \
-e LLM_PROVIDER=ollama \
-e LLM_MODEL=llama3 \
-e LLM_BASE_URL=http://172.17.0.1:11434 \
-v $(pwd)/brain:/app/data \
ghcr.io/mauriceisrael/smart-memory:latest dashboard
For OpenAI:
docker run -p 8080:8080 \
-e LLM_PROVIDER=openai \
-e LLM_MODEL=gpt-4 \
-e LLM_API_KEY=your-api-key \
-v $(pwd)/brain:/app/data \
ghcr.io/mauriceisrael/smart-memory:latest dashboard
(Note: append `dashboard` at the end of the command to start the web server; without it, the container starts in MCP mode.)
(The `-v` volume mount persists your knowledge graph and rules across container restarts.)
Open the Dashboard
Go to http://localhost:8080
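If the page doesn't load, you can check whether the container is actually serving. A minimal reachability check, assuming only that the dashboard answers plain HTTP on port 8080 (no SmartMemory-specific API is used):

```python
# Quick reachability check for the dashboard on localhost:8080.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:8080", timeout=5) as resp:
        print(f"Dashboard is up (HTTP {resp.status})")
except OSError as exc:
    print(f"Dashboard unreachable: {exc}")
```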
SmartMemory uses an LLM to extract business rules from documents. Configure it in two ways:
Option 1: Via Dashboard (Local Development)
Option 2: Via Environment Variables (Docker)
Already shown above! Pass -e LLM_PROVIDER=... when starting Docker.
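For reference, here is a hypothetical sketch of how settings like these typically resolve. The variable names (LLM_PROVIDER, LLM_MODEL, LLM_BASE_URL, LLM_API_KEY) match the docker run examples above, but the code and defaults are illustrative assumptions, not SmartMemory's source:

```python
# Illustrative resolution of the LLM_* environment variables shown above.
# The defaults here are assumptions; consult the Configuration Reference.
import os

LLM_PROVIDER = os.environ.get("LLM_PROVIDER", "ollama")   # e.g. "ollama" or "openai"
LLM_MODEL = os.environ.get("LLM_MODEL", "llama3")
LLM_BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:11434")  # Ollama default
LLM_API_KEY = os.environ.get("LLM_API_KEY")               # required for hosted providers

if LLM_PROVIDER != "ollama" and not LLM_API_KEY:
    raise SystemExit(f"LLM_API_KEY is required for provider '{LLM_PROVIDER}'")
```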
Supported Providers:
- Ollama (local): llama3, qwen2.5-coder, mistral
- OpenAI: gpt-4, gpt-3.5-turbo
- Anthropic: claude-3-5-sonnet
- Google: gemini-1.5-pro

We support deploying to Google Cloud Run, GitHub, etc. → Read the Deployment Guide (DEPLOY.md) for full instructions on hosting it online.
Once your dashboard is running (or via CLI):
Here is what happens when you use Conversational Mode:
> I know Bob
LLM: … I've processed your statement "I know Bob".
> He goes to work by car
LLM: … I've noted that Bob goes to work by car.
> Can Bob vote?
LLM: … I can't answer yet. But since he drives, he is likely an adult.
May I add a rule: "Drivers are adults"?
> yes
LLM: ✨ Rule 'drivers_are_adults' added.
Now, may I add a rule: "Adults can vote"?
> yes
LLM: ✨ Rule 'adults_can_vote' added.
… Therefore, yes, Bob can vote.
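The deduction at the end of the transcript is classic forward chaining: learned rules are applied to known facts until no new facts can be derived. A minimal sketch of that loop; the rule format and `infer` function are illustrative, not SmartMemory's actual inference engine:

```python
# Forward-chaining sketch of the demo's reasoning -- illustrative only.
facts = {("Bob", "drives")}

# Each rule: if (subject, premise) is a fact, then (subject, conclusion) holds.
rules = [
    ("drives", "is_adult"),    # learned rule: "Drivers are adults"
    ("is_adult", "can_vote"),  # learned rule: "Adults can vote"
]

def infer(facts: set, rules: list) -> set:
    """Apply rules repeatedly until no new facts appear (a fixpoint)."""
    derived = set(facts)
    while True:
        new = {(s, concl) for (s, p) in derived
               for (premise, concl) in rules if p == premise}
        if new <= derived:
            return derived
        derived |= new

print(("Bob", "can_vote") in infer(facts, rules))  # True -- Bob can vote
```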
MIT License - see LICENSE
by Modelcontextprotocol · Developer Tools
Read, search, and manipulate Git repositories programmatically
by Toleno · Developer Tools
Toleno Network MCP Server – Manage your Toleno mining account with Claude AI using natural language.
by mcp-marketplace · Developer Tools
Create, build, and publish Python MCP servers to PyPI – conversationally.
by Microsoft · Content & Media
Convert files (PDF, Word, Excel, images, audio) to Markdown for LLM consumption
by mcp-marketplace · Developer Tools
Scaffold, build, and publish TypeScript MCP servers to npm – conversationally
by mcp-marketplace · Finance
Free stock data and market news for any MCP-compatible AI assistant.