Server data from the Official MCP Registry
Academic paper search across 11 sources with AI curation, ranked push, Zotero and Obsidian support.
Valid MCP server (0 strong, 3 medium validity signals). 6 known CVEs in dependencies (1 critical, 3 high severity). Package registry verified. Imported from the Official MCP Registry.
3 files analyzed · 7 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: OPENALEX_EMAIL
Environment variable: CORE_API_KEY
Environment variable: TELEGRAM_BOT_TOKEN
Environment variable: TELEGRAM_CHAT_ID
Environment variable: DISCORD_WEBHOOK_URL
Environment variable: ZOTERO_LIBRARY_ID
Environment variable: ZOTERO_API_KEY
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-eclipse-cj-paper-distill-mcp": {
"env": {
"CORE_API_KEY": "your-core-api-key-here",
"OPENALEX_EMAIL": "your-openalex-email-here",
"ZOTERO_API_KEY": "your-zotero-api-key-here",
"TELEGRAM_CHAT_ID": "your-telegram-chat-id-here",
"ZOTERO_LIBRARY_ID": "your-zotero-library-id-here",
"TELEGRAM_BOT_TOKEN": "your-telegram-bot-token-here",
"DISCORD_WEBHOOK_URL": "your-discord-webhook-url-here"
},
"args": [
"paper-distill-mcp"
],
"command": "uvx"
}
}
}

From the project's GitHub README.
Academic paper search, intelligent curation, and multi-platform delivery, built on the Model Context Protocol.
Compatible with all MCP clients: Claude Desktop, Claude Code, Cursor, Trae, Codex CLI, Gemini CLI, OpenClaw, VS Code, Zed, and more.
⚠️ Early development stage. Many features are still being validated and may contain bugs or instabilities. Feedback and bug reports are warmly welcome!
uvx paper-distill-mcp
That's it. Your AI client will discover all tools automatically. No API keys required for basic paper search.
No uv? Install it with:
curl -LsSf https://astral.sh/uv/install.sh | sh
or:
brew install uv
pip:
pip install paper-distill-mcp
Homebrew:
brew tap Eclipse-Cj/tap
brew install paper-distill-mcp
Docker:
docker run -i --rm ghcr.io/eclipse-cj/paper-distill-mcp
From source (developers):
git clone https://github.com/Eclipse-Cj/paper-distill-mcp.git
cd paper-distill-mcp
python3 -m venv .venv && .venv/bin/pip install --upgrade pip && .venv/bin/pip install -e .
Add to claude_desktop_config.json (Settings → Developer → Edit Config):
{
"mcpServers": {
"paper-distill": {
"command": "uvx",
"args": ["paper-distill-mcp"]
}
}
}
claude mcp add paper-distill -- uvx paper-distill-mcp
Or add to .mcp.json:
{
"mcpServers": {
"paper-distill": {
"command": "uvx",
"args": ["paper-distill-mcp"]
}
}
}
Add to ~/.codex/config.toml:
[mcp_servers.paper-distill]
command = "uvx"
args = ["paper-distill-mcp"]
Add to ~/.gemini/settings.json:
{
"mcpServers": {
"paper-distill": {
"command": "uvx",
"args": ["paper-distill-mcp"]
}
}
}
mcporter config add paper-distill --command uvx --scope home -- paper-distill-mcp
mcporter list # verify
To remove:
mcporter config remove paper-distill
git clone https://github.com/Eclipse-Cj/paper-distill-mcp.git ~/.openclaw/tools/paper-distill-mcp
cd ~/.openclaw/tools/paper-distill-mcp
uv venv .venv && uv pip install .
mcporter config add paper-distill \
--command ~/.openclaw/tools/paper-distill-mcp/.venv/bin/python3 \
--scope home \
-- -m mcp_server.server
mcporter list
To remove:
rm -rf ~/.openclaw/tools/paper-distill-mcp && mcporter config remove paper-distill
Same JSON config, different config file paths:
| Client | Config path |
|---|---|
| Claude Desktop | claude_desktop_config.json |
| Trae | Settings → MCP → Add |
| Cursor | ~/.cursor/mcp.json |
| VS Code | .vscode/mcp.json |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| Zed | settings.json |
paper-distill-mcp --transport http --port 8765
After connecting your client, tell the agent "initialize paper-distill". It will call setup() and walk you through:
- pool_refresh() populates the paper pool

All settings can be updated at any time through conversation. All parameters are set via configure() or add_topic(); no manual file editing needed.
Topic parameters (add_topic / manage_topics)

| Parameter | Description | Default |
|---|---|---|
| key | Topic identifier (e.g. "llm-reasoning") | required |
| label | Display name (e.g. "LLM Reasoning") | required |
| keywords | Search keywords, 3–5 recommended | required |
| weight | Topic priority 0.0–1.0 (higher = more papers) | 1.0 |
| blocked | Temporarily disable without deleting | false |
Push settings (configure)

| Parameter | Options | Default | Description |
|---|---|---|---|
| paper_count_value | any integer | 6 | Papers per push |
| paper_count_mode | "at_most" / "at_least" / "exactly" | "at_most" | Count mode |
| picks_per_reviewer | any integer | 5 | Shortlist size per reviewer |
| review_mode | "single" / "dual" | "single" | Single AI or dual blind review |
| custom_focus | free text | "" | Custom selection criteria |
💡 Dual blind review: two independent AI reviewers each shortlist papers; a chief reviewer makes the final push/overflow/discard call. Papers that don't make the cut are held for the next cycle rather than discarded. Enable with configure(review_mode="dual").
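The chief-reviewer merge described above can be sketched as follows. This is an illustrative assumption about the decision flow, not the server's actual code; `chief_review` is a hypothetical helper name:

```python
def chief_review(shortlist_a, shortlist_b, paper_count=6):
    """Papers both reviewers shortlisted rank first; the rest follow.
    Anything past paper_count becomes overflow, held for the next cycle
    rather than discarded."""
    both = [p for p in shortlist_a if p in shortlist_b]
    rest = [p for p in shortlist_a + shortlist_b if p not in both]
    # de-duplicate while preserving order
    seen, merged = set(), []
    for p in both + rest:
        if p not in seen:
            seen.add(p)
            merged.append(p)
    return merged[:paper_count], merged[paper_count:]  # (push, overflow)
```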
Ranking weights (configure)

Controls paper scoring. The four weights should sum to approximately 1.0.

| Parameter | Measures | Default |
|---|---|---|
| w_relevance | Keyword and topic match | 0.55 |
| w_recency | How recently the paper was published | 0.20 |
| w_impact | Citation count (log-normalized) | 0.15 |
| w_novelty | Whether this is the first appearance | 0.10 |
Example: "Prioritize recent papers" → configure(w_recency=0.35, w_relevance=0.40)
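As a rough illustration of the four-dimensional weighted score, here is a hedged sketch. The field names (`relevance`, `recency`, `citations`, `first_seen`) and the log-normalization cap are assumptions, not the server's actual formula:

```python
import math

def score(paper, w_relevance=0.55, w_recency=0.20, w_impact=0.15, w_novelty=0.10):
    """Weighted sum over the four dimensions; each term is in [0, 1]."""
    # Log-normalized citation impact, saturating around 1000 citations (assumption).
    impact = min(math.log1p(paper["citations"]) / math.log1p(1000), 1.0)
    return (w_relevance * paper["relevance"]
            + w_recency * paper["recency"]
            + w_impact * impact
            + w_novelty * (1.0 if paper["first_seen"] else 0.0))
```

The "prioritize recent papers" example then corresponds to calling this with w_recency=0.35 and w_relevance=0.40.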
Summarizer (configure)

Abstract extraction is the most token-intensive step. It runs on the main agent by default, but can be delegated to a cheaper model to cut costs significantly.

| Parameter | Value | Description |
|---|---|---|
| summarizer | "self" | Main agent handles extraction (most expensive) |
| | agent name (e.g. "scraper") | Delegate to a low-cost sub-agent |
| | API URL | Call an external LLM API (DeepSeek, Ollama, etc.) |
🧠 Strongly recommended: for 30+ papers, frontier model costs add up fast. A $0.14/M-token model handles extraction just as well. Set this with configure(summarizer="scraper").
Batching (configure)

| Parameter | Description | Default |
|---|---|---|
| scan_batches | Split the paper pool into N batches, reviewed over N+1 days | 2 (3 days) |
pool_refresh() searches all 11 APIs and fills the pool. The pool is then split into batches for daily AI review, avoiding a single 60+ paper dump.
- scan_batches=2 (default): review first half on day 1, second half on day 2, finalize on day 3
- scan_batches=3: review one-third per day, finalize on day 4

When all batches are reviewed, the pool is exhausted and the next run triggers a fresh API search automatically.
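The schedule above can be sketched as a small helper: N batches of roughly equal size, reviewed one per day, plus a finalize day. `batch_plan` is a hypothetical name, not a server tool:

```python
def batch_plan(pool, scan_batches=2):
    """Split the pool into scan_batches chunks; one review day per chunk
    plus a final day to finalize (scan_batches + 1 days total)."""
    size = -(-len(pool) // scan_batches)  # ceiling division
    batches = [pool[i * size:(i + 1) * size] for i in range(scan_batches)]
    return batches, scan_batches + 1
```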
| Platform | Environment variables | platform value |
|---|---|---|
| Telegram | TELEGRAM_BOT_TOKEN + TELEGRAM_CHAT_ID | "telegram" |
| Discord | DISCORD_WEBHOOK_URL | "discord" |
| Feishu | FEISHU_WEBHOOK_URL | "feishu" |
| WeCom | WECOM_WEBHOOK_URL | "wecom" |
⚠️ Important: set environment variables in the MCP client config env field, not as system environment variables. Otherwise send_push() cannot access the webhook URL and the AI may generate scripts that call webhooks directly, causing encoding issues.
Config example (WeCom + Claude Desktop):
{
"mcpServers": {
"paper-distill": {
"command": "uvx",
"args": ["paper-distill-mcp"],
"env": {
"WECOM_WEBHOOK_URL": "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=YOUR_KEY"
}
}
}
}
Restart the MCP client after editing the config.
Push message format (fixed):
1. Paper Title (Year)
Journal Name
- One-sentence summary
- Why it was selected
https://doi.org/...
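A minimal sketch that renders this fixed format from paper dicts; the field names (`title`, `year`, `journal`, `summary`, `reason`, `doi`) are assumptions for illustration:

```python
def format_push(papers):
    """Render the fixed push message format shown above."""
    lines = []
    for i, p in enumerate(papers, 1):
        lines += [
            f"{i}. {p['title']} ({p['year']})",
            p["journal"],
            f"- {p['summary']}",
            f"- {p['reason']}",
            f"https://doi.org/{p['doi']}",
        ]
    return "\n".join(lines)
```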
Paper library site (configure)

Personal paper library website, auto-updated on every push. Built on Astro + Vercel (free tier).
| Parameter | Description |
|---|---|
site_deploy_hook | Vercel deploy hook URL (triggers site rebuild) |
site_repo_path | Local path to the paper-library repository |
Setup steps (the AI agent will guide you):
- configure(site_deploy_hook=...)

After setup, every finalize_review() call pushes the digest JSON to the site repo and triggers a Vercel rebuild. The site updates in ~30 seconds.
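Triggering a Vercel deploy hook amounts to an empty HTTP POST to the hook URL. A minimal sketch with hypothetical function names, not the server's actual code:

```python
import urllib.request

def build_rebuild_request(deploy_hook_url):
    """A deploy hook is a plain URL; an empty POST queues a site rebuild."""
    return urllib.request.Request(deploy_hook_url, method="POST")

def trigger_rebuild(deploy_hook_url):
    with urllib.request.urlopen(build_rebuild_request(deploy_hook_url), timeout=30) as resp:
        return resp.status  # 2xx means the rebuild was queued
```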
Save papers to Zotero with one command. Requires a Zotero account and API key.
Getting credentials:
Add to MCP client config:
{
"mcpServers": {
"paper-distill": {
"command": "uvx",
"args": ["paper-distill-mcp"],
"env": {
"ZOTERO_LIBRARY_ID": "your userID",
"ZOTERO_API_KEY": "your API key"
}
}
}
}
After setup, reply collect 1 3 after a push to save papers 1 and 3 to Zotero, automatically sorted into per-topic folders.
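Under the hood, saving a paper via the Zotero Web API (v3) amounts to POSTing a JSON item array to your user library with the API key in a header. A hedged sketch; the server's actual request and item fields may differ:

```python
import json
import os
import urllib.request

def zotero_request(title, doi):
    """Build (but do not send) a Zotero Web API request saving one paper."""
    library_id = os.environ["ZOTERO_LIBRARY_ID"]
    items = [{"itemType": "journalArticle", "title": title, "DOI": doi}]
    return urllib.request.Request(
        f"https://api.zotero.org/users/{library_id}/items",
        data=json.dumps(items).encode(),
        headers={
            "Zotero-API-Key": os.environ["ZOTERO_API_KEY"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Send the request with urllib.request.urlopen(); Zotero's response reports which items were created.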
| Variable | Description | Required |
|---|---|---|
| OPENALEX_EMAIL | Increases OpenAlex API rate limit; also used for Unpaywall | optional |
| CORE_API_KEY | CORE API key (free registration) | optional |
| DEEPSEEK_API_KEY | Enhanced search via DeepSeek | optional |
| ZOTERO_LIBRARY_ID + ZOTERO_API_KEY | Save papers to Zotero | optional |
| SITE_URL | Paper library website URL | optional |
| PAPER_DISTILL_DATA_DIR | Data directory | default: ~/.paper-distill/ |
| Tool | Description |
|---|---|
| setup() | First call: detects fresh install and returns guided initialization instructions |
| add_topic(key, label, keywords) | Add a research topic with search keywords |
| configure(...) | Update any setting: paper count, ranking weights, review mode, etc. |
| Tool | Description |
|---|---|
| search_papers(query) | Parallel search across 11 sources |
| rank_papers(papers) | 4-dimensional weighted scoring |
| filter_duplicates(papers) | Deduplicate against previously pushed papers |
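Deduplication against previously pushed papers can be pictured as DOI-keyed set membership. A sketch assuming a `doi` field and case-insensitive matching; the real tool may use additional keys:

```python
def filter_duplicates(papers, pushed_dois):
    """Drop papers whose DOI was already pushed (or repeats in this batch)."""
    seen = {d.lower() for d in pushed_dois}
    fresh = []
    for p in papers:
        doi = p.get("doi", "").lower()
        if doi and doi in seen:
            continue  # already pushed or duplicated within this batch
        if doi:
            seen.add(doi)
        fresh.append(p)
    return fresh
```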
| Tool | Description |
|---|---|
| pool_refresh(topic?) | Search all 11 APIs and build the paper pool |
| prepare_summarize(custom_focus?) | Generate AI abstract extraction prompt |
| prepare_review(dual?) | Generate review prompt: the AI makes push/overflow/discard decisions |
| finalize_review(selections) | Process AI decisions, update pool, output push message |
| pool_status() | Pool status: count, scan day, exhausted or not |
| collect(paper_indices) | Save papers to Zotero + generate Obsidian notes |
| Tool | Description |
|---|---|
| init_session | Detect delivery platform and load research context |
| load_session_context | Load historical research context |
| generate_digest(papers, date) | Generate output files (JSONL, site, Obsidian) |
| send_push(date, papers, platform) | Deliver to Telegram / Discord / Feishu / WeCom |
| collect_to_zotero(paper_ids) | Save to Zotero via DOI |
| manage_topics(action, topic) | List / disable / enable / reweight topics |
| ingest_research_context(text) | Inherit research context across sessions |
AI client (Claude Code / Codex CLI / Gemini CLI / Cursor / ...)
        │ MCP (stdio or HTTP)
paper-distill-mcp
├── search/        → 11-source academic search (with OA full-text enrichment)
├── curate/        → scoring + deduplication
├── generate/      → output (JSONL, Obsidian, site)
├── bot/           → push formatting (4 platforms)
└── integrations/  → Zotero API
The server does not call any LLM internally. Search, ranking, and deduplication are pure data operations. Intelligence comes from your AI client.
The system searches all papers by default (including subscription journals) and maximizes free full-text access through:
For papers with no free version, the system returns a DOI link. If you have institutional VPN access, clicking the DOI link while connected is usually enough: publishers identify your institution by IP.
open_access_url priority: arXiv > CORE > Unpaywall > OpenAlex > Semantic Scholar > Papers with Code
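The priority chain can be sketched as a first-match lookup over per-source URLs. The source keys below are illustrative assumptions about how the data is keyed:

```python
# Highest-priority source first, matching the chain described above.
PRIORITY = ["arxiv", "core", "unpaywall", "openalex",
            "semantic_scholar", "papers_with_code"]

def best_oa_url(oa_urls):
    """oa_urls: mapping of source name -> free full-text URL (or None)."""
    for source in PRIORITY:
        if oa_urls.get(source):
            return oa_urls[source]
    return None  # caller falls back to the DOI link
```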
Symptom: the review prompt generated by prepare_review() causes the AI client to hang or time out.
Cause: too many candidate papers in the pool (e.g. 80–100), making the prompt exceed the client's context window or output token limit. VS Code Copilot and some IDE plugins have limited context capacity.
Solutions (pick one):
Increase scan_batches (recommended): split the pool into more batches:
configure(scan_batches=5)
Requires-Python >=3.10

Python 3.10+ is required. macOS ships with Python 3.9 by default; install a newer version with brew install python@3.13 or use uv.
ghcr.io is blocked in mainland China. Use pip with a Chinese mirror:
pip install paper-distill-mcp -i https://pypi.tuna.tsinghua.edu.cn/simple
git clone https://github.com/Eclipse-Cj/paper-distill-mcp.git
cd paper-distill-mcp
python3 -m venv .venv && .venv/bin/pip install --upgrade pip && .venv/bin/pip install -e .
python tests/test_mcp_smoke.py # 9 tests, no network required
This project is licensed under AGPL-3.0. See LICENSE for details.
Unauthorized commercial use is prohibited. For commercial licensing inquiries, contact the author.
Bug reports and feature requests are welcome. The project is in active early development. Thank you for your patience and support 🙏