Server data from the Official MCP Registry
Multi-engine scholarly research server for search, traversal, full text, and reading lists.
Remote endpoints: streamable-http: https://laibniz-scholarfetch-web.hf.space/mcp/
Valid MCP server (1 strong and 1 medium validity signal). No known CVEs in dependencies. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
Endpoint verified · Open access · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Remote Plugin
No local installation needed. Your AI client connects to the remote endpoint directly.
Add this to your MCP configuration to connect:
{
"mcpServers": {
"io-github-laibniz-scholarfetch": {
"url": "https://laibniz-scholarfetch-web.hf.space/mcp/"
}
}
}

From the project's GitHub README.
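The JSON entry above can also be merged into an existing client configuration programmatically. A minimal sketch, assuming a JSON config file with a top-level "mcpServers" object (the helper function and config path are illustrative, not part of ScholarFetch):

```python
import json
from pathlib import Path

SERVER_NAME = "io-github-laibniz-scholarfetch"
SERVER_URL = "https://laibniz-scholarfetch-web.hf.space/mcp/"

def add_scholarfetch(config_path: Path) -> dict:
    # Load the existing client config (or start fresh), merge the
    # ScholarFetch remote entry, and write the result back.
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers[SERVER_NAME] = {"url": SERVER_URL}
    config_path.write_text(json.dumps(config, indent=2))
    return config
```

Existing entries in the file are preserved; only the ScholarFetch key is added or overwritten.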
ScholarFetch is a multi-engine academic research environment for search, traversal, full text, and reading lists. It combines a CLI with MCP servers over several transports (stdio, sse, streamable-http). The core idea is simple: start from keywords, DOI, or authors; traverse papers and references; inspect abstracts and full text; save what matters; then export a compact corpus for synthesis.
git clone https://github.com/laibniz/scholarfetch.git
cd scholarfetch
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
scholarfetch
Console scripts:
scholarfetch
scholarfetch-mcp
scholarfetch-fastmcp
Alternative:
python3 scholarfetch.py
ScholarFetch loads provider credentials from the environment, on both the server side and the client side.
Default env file: .scholarfetch.env
Typical variables:
ELSEVIER_API_KEY=...
ELSEVIER_INSTTOKEN=...
SPRINGER_META_API_KEY=...
SPRINGER_OPENACCESS_API_KEY=...
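ScholarFetch's actual loader is internal; the sketch below only illustrates how such a KEY=VALUE env file could be parsed and exported into the process environment (function name and exact behavior are assumptions):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".scholarfetch.env") -> dict:
    # Parse KEY=VALUE lines, skipping blanks and '#' comments, and
    # export them into os.environ without overwriting existing values.
    values = {}
    env_path = Path(path)
    if not env_path.exists():
        return values
    for line in env_path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
        os.environ.setdefault(key.strip(), value.strip())
    return values
```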
Notes:
ELSEVIER_INSTTOKEN is optional.

The ScholarFetch CLI is designed for research traversal.
Typical flow:
Example:
/search graph neural networks
/author Albert Einstein
/papers 1 has:abstract
/article 1
/refs 1
/saved
/export fulltext dummy corpus.txt
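Each line in the flow above is a slash command: a verb followed by argument tokens. A minimal sketch of how such input could be split before dispatch (the parser is illustrative, not ScholarFetch's implementation):

```python
def parse_command(line: str) -> tuple[str, list[str]]:
    # Split a slash command like "/papers 1 has:abstract" into
    # a verb ("papers") and its argument tokens (["1", "has:abstract"]).
    if not line.startswith("/"):
        raise ValueError("commands start with '/'")
    verb, *args = line[1:].split()
    return verb, args
```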
Actions: OPEN, ABSTRACT, TEXT, REFS, and AUTHOR
Backspace to go to the parent node
Esc to return to the prompt
S to save a paper from paper lists or reference lists
X to remove from the saved list
The AUTHOR action from a paper now lets you select:
ALL AUTHORS

open, abstract, text, refs, author

/search <keywords|doi|person name>
/author <name>
/papers <author name|index> [filters]
/doi <doi>
/open <index>
/abstract <doi|index>
/article <doi|index>
/refs <doi|index>
/ref <index>
/saved
/export [format style path ...]
/import [path]
/pick [mode]
/config
/engines
/help

Use with /papers:
year>=YYYY, year<=YYYY, year=YYYY
has:abstract, has:doi, has:pdf, has:fulltext
venue:<text>, title:<text>, doi:<text>
Examples:
/papers 1 year>=2020 has:abstract
/papers 1 has:fulltext
/papers andrea de mauro venue:marketing
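The filter tokens above could be evaluated roughly as follows — a sketch assuming papers are dict records with fields like "year", "abstract", and "venue" (field names and the predicate function are illustrative):

```python
def matches(paper: dict, token: str) -> bool:
    # Evaluate one /papers filter token against a paper record.
    if token.startswith("year"):
        op = token[4:6] if token[4:6] in (">=", "<=") else "="
        year = int(token.split(op)[-1])
        value = paper.get("year", 0)
        return {">=": value >= year, "<=": value <= year, "=": value == year}[op]
    if token.startswith("has:"):
        # has:abstract, has:doi, has:pdf, has:fulltext
        return bool(paper.get(token[4:]))
    if ":" in token:
        # venue:<text>, title:<text>, doi:<text> — substring match
        field, _, needle = token.partition(":")
        return needle.lower() in str(paper.get(field, "")).lower()
    return True
```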
ScholarFetch supports four export modes from the saved paper set:
bib
citations (styles: harvard, apa, or ieee)
abstracts
fulltext
This makes ScholarFetch useful as a corpus builder for downstream synthesis agents.
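As a rough illustration of the corpus-builder idea, the abstracts mode amounts to concatenating the saved records into one text artifact — a sketch under assumed field names, not ScholarFetch's actual export format:

```python
from pathlib import Path

def export_abstracts(saved: list[dict], path: str) -> int:
    # Write one "Title\nAbstract" block per saved paper, blank-line
    # separated, skipping papers without an abstract; return the count.
    blocks = [
        f"{p.get('title', 'Untitled')}\n{p['abstract']}".strip()
        for p in saved
        if p.get("abstract")
    ]
    Path(path).write_text("\n\n".join(blocks))
    return len(blocks)
```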
ScholarFetch exposes the same research model through MCP.
Modes:
python3 scholarfetch_mcp.py
python3 scholarfetch_fastmcp.py --transport stdio
python3 scholarfetch_fastmcp.py --transport sse --host 127.0.0.1 --port 8000
python3 scholarfetch_fastmcp.py --transport streamable-http --host 127.0.0.1 --port 8000 --http-path /mcp
Validation:
python3 scholarfetch_mcp.py --self-test
python3 scholarfetch_fastmcp.py --self-test
Public demo endpoints:
io.github.laibniz/scholarfetch

The MCP server is designed for agent workflows, not only one-off calls.
An agent can:
This lets an agent build a focused research set inside one MCP session and then hand off an export artifact to another synthesis step.
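The session-based workflow can be sketched with the official `mcp` Python SDK. This is only an outline: the tool name "search" and its arguments are placeholders, and the real tool model is documented in MCP_SERVER.md:

```python
async def research_session(url: str, query: str):
    # Sketch of the agent loop: open one MCP session against the remote
    # endpoint, discover the available tools, and run a single search.
    # Assumes the official `mcp` Python SDK (pip install mcp); the tool
    # name "search" is a placeholder, not a confirmed ScholarFetch tool.
    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            result = await session.call_tool("search", {"query": query})
            return tools.tools, result
```

Usage would look like `asyncio.run(research_session("https://laibniz-scholarfetch-web.hf.space/mcp/", "graph neural networks"))`, with the agent repeating call_tool across traversal, saving, and export steps inside the same session.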
See MCP_SERVER.md for the detailed tool model.
scholarfetch.py: CLI entrypoint
scholarfetch_cli.py: core CLI + retrieval logic
scholarfetch_mcp.py: classic MCP server
scholarfetch_fastmcp.py: FastMCP server
MCP_SERVER.md: MCP usage guide
AGENTS.md: agent-facing workflow guide
SKILL.md: structured research skill guide
SKILLS.md: index for agent-facing skill docs
CONTRIBUTING.md: contributor notes

If you are running ScholarFetch from an MCP-compatible system, read:
These documents explain how to use ScholarFetch as a literature-research environment rather than as a flat search API.
See CONTRIBUTING.md.
See SECURITY.md.
MIT License. See LICENSE.