Server data from the Official MCP Registry
MCP Server for SlimContext - AI chat history compression tools
Valid MCP server (2 strong, 4 medium validity signals). 4 known CVEs in dependencies (0 critical, 4 high severity). Package registry verified. Imported from the Official MCP Registry.
6 files analyzed · 5 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: YOUR_API_KEY
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-agentailor-slimcontext-mcp-server": {
      "command": "npx",
      "args": [
        "-y",
        "slimcontext-mcp-server"
      ],
      "env": {
        "YOUR_API_KEY": "your-api-key-here"
      }
    }
  }
}

From the project's GitHub README:
A Model Context Protocol (MCP) server that wraps the SlimContext library, providing AI chat history compression tools for MCP-compatible clients.
SlimContext MCP Server exposes two powerful compression strategies as MCP tools:
trim_messages - Token-based compression that removes oldest messages when exceeding token thresholds
summarize_messages - AI-powered compression using OpenAI to create concise summaries
Installation:
npm install -g slimcontext-mcp-server
# or
pnpm add -g slimcontext-mcp-server
# Clone and setup
git clone <repository>
cd slimcontext-mcp-server
pnpm install
# Build
pnpm build
# Run in development
pnpm dev
# Type checking
pnpm typecheck
Add to your MCP client configuration:
{
  "mcpServers": {
    "slimcontext": {
      "command": "npx",
      "args": ["-y", "slimcontext-mcp-server"]
    }
  }
}
Environment variables:
OPENAI_API_KEY: OpenAI API key for summarization (optional, can be passed as tool parameter)
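If you prefer to configure the key once instead of passing it per call, most MCP clients forward an env block to the spawned server process. A sketch following the same shape as the configuration example above (the key value is a placeholder):
{
  "mcpServers": {
    "slimcontext": {
      "command": "npx",
      "args": ["-y", "slimcontext-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}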
trim_messages
Compresses chat history using token-based trimming strategy.
Parameters:
messages (required): Array of chat messages
maxModelTokens (optional): Maximum model token context window (default: 8192)
thresholdPercent (optional): Percentage threshold to trigger compression, 0-1 (default: 0.7)
minRecentMessages (optional): Minimum recent messages to preserve (default: 2)
Example:
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" },
    { "role": "assistant", "content": "Hi there! How can I help you today?" },
    { "role": "user", "content": "Tell me about AI." }
  ],
  "maxModelTokens": 4000,
  "thresholdPercent": 0.8,
  "minRecentMessages": 2
}
Response:
{
  "success": true,
  "original_message_count": 4,
  "compressed_message_count": 3,
  "messages_removed": 1,
  "compression_ratio": 0.75,
  "compressed_messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "assistant", "content": "Hi there! How can I help you today?" },
    { "role": "user", "content": "Tell me about AI." }
  ]
}
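Beyond static configuration, the tool can also be called programmatically from any MCP client. A minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the import paths and callTool shape come from that SDK, not from this project's README:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, mirroring the npx-based config above.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "slimcontext-mcp-server"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke trim_messages with the same arguments as the example request.
const result = await client.callTool({
  name: "trim_messages",
  arguments: {
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
      { role: "assistant", content: "Hi there! How can I help you today?" },
      { role: "user", content: "Tell me about AI." },
    ],
    maxModelTokens: 4000,
    thresholdPercent: 0.8,
    minRecentMessages: 2,
  },
});

console.log(result.content); // carries the JSON response shown above
await client.close();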
summarize_messages
Compresses chat history using AI-powered summarization strategy.
Parameters:
messages (required): Array of chat messages
maxModelTokens (optional): Maximum model token context window (default: 8192)
thresholdPercent (optional): Percentage threshold to trigger compression, 0-1 (default: 0.7)
minRecentMessages (optional): Minimum recent messages to preserve (default: 4)
openaiApiKey (optional): OpenAI API key (can also use OPENAI_API_KEY env var)
openaiModel (optional): OpenAI model for summarization (default: 'gpt-4o-mini')
customPrompt (optional): Custom summarization prompt
Example:
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "I want to build a web scraper." },
    {
      "role": "assistant",
      "content": "I can help you build a web scraper! What programming language would you prefer?"
    },
    { "role": "user", "content": "Python please." },
    {
      "role": "assistant",
      "content": "Great choice! For Python web scraping, I recommend using requests and BeautifulSoup..."
    },
    { "role": "user", "content": "Can you show me a simple example?" }
  ],
  "maxModelTokens": 4000,
  "thresholdPercent": 0.6,
  "minRecentMessages": 2,
  "openaiModel": "gpt-4o-mini"
}
Response:
{
  "success": true,
  "original_message_count": 6,
  "compressed_message_count": 4,
  "messages_removed": 2,
  "summary_generated": true,
  "compression_ratio": 0.67,
  "compressed_messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    {
      "role": "system",
      "content": "The user expressed interest in building a web scraper and requested help with Python. The assistant recommended using requests and BeautifulSoup libraries for Python web scraping."
    },
    {
      "role": "assistant",
      "content": "Great choice! For Python web scraping, I recommend using requests and BeautifulSoup..."
    },
    { "role": "user", "content": "Can you show me a simple example?" }
  ]
}
Both tools expect messages in SlimContext format:
interface SlimContextMessage {
  role: 'system' | 'user' | 'assistant' | 'tool' | 'human';
  content: string;
}
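If your application stores history in another shape, a thin adapter is usually all that's needed. A sketch; OpenAIChatMessage here is a hypothetical source type for illustration:
// Hypothetical source shape; adjust to your own message store.
interface OpenAIChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | null;
}

// Map to the SlimContext format both tools expect; the roles overlap
// directly here, and null content becomes an empty string.
function toSlimContext(msgs: OpenAIChatMessage[]): SlimContextMessage[] {
  return msgs.map((m) => ({ role: m.role, content: m.content ?? '' }));
}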
All tools return structured error responses:
{
  "success": false,
  "error": "Error message description",
  "error_type": "SlimContextError" | "OpenAIError" | "UnknownError"
}
Common error scenarios correspond to the error_type values above: SlimContextError for failures inside the compression library, OpenAIError for OpenAI API failures such as a missing or invalid API key, and UnknownError for anything unexpected.
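A client can branch on these fields when unwrapping a tool result. A sketch that assumes the server returns the JSON above as a text content item (a common MCP convention, not something the README states):
// Shape of the structured error payload documented above.
interface ToolError {
  success: false;
  error: string;
  error_type: 'SlimContextError' | 'OpenAIError' | 'UnknownError';
}

function checkToolResult(jsonText: string): void {
  const payload = JSON.parse(jsonText);
  if (payload.success === false) {
    const err = payload as ToolError;
    if (err.error_type === 'OpenAIError') {
      // Possibly transient or a credentials problem; log and let the caller retry.
      console.warn(`OpenAI call failed: ${err.error}`);
    } else {
      throw new Error(`${err.error_type}: ${err.error}`);
    }
  }
}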
SlimContext uses a simple heuristic for token estimation: Math.ceil(content.length / 4) + 2. This provides a reasonable approximation for most use cases. For more accurate token counting, you would need to implement a custom token estimator in your client application.
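The heuristic is easy to reproduce client-side to predict whether a call will trigger compression. A sketch of the estimator as described above; the gating logic in wouldCompress is an assumption based on the parameter descriptions, not taken from the library's source:
// README heuristic: roughly 4 characters per token, plus 2 tokens
// of per-message overhead.
function estimateTokens(content: string): number {
  return Math.ceil(content.length / 4) + 2;
}

// Sum the estimate over a conversation and compare it to the threshold,
// mirroring how thresholdPercent appears to gate compression.
function wouldCompress(
  messages: { content: string }[],
  maxModelTokens = 8192,
  thresholdPercent = 0.7,
): boolean {
  const total = messages.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  return total > maxModelTokens * thresholdPercent;
}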
MIT