Server data from the Official MCP Registry
Local FAISS vector database for RAG with document ingestion, semantic search, and MCP prompts.
Valid MCP server (3 strong, 1 medium validity signals). 9 known CVEs in dependencies (0 critical, 3 high severity). Package registry verified. Imported from the Official MCP Registry.
5 files analyzed · 10 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
{
"mcpServers": {
"mcp-server": {
"args": [
"local-faiss-mcp"
],
"command": "uvx"
}
}
}
From the project's GitHub README:
A Model Context Protocol (MCP) server that provides local vector database functionality using FAISS for Retrieval-Augmented Generation (RAG) applications.

local-faiss command for standalone indexing and search
# Install
pip install local-faiss-mcp
# Index documents
local-faiss index document.pdf
# Search
local-faiss search "What is this document about?"
Or use with Claude Code - configure MCP client (see Configuration) and try:
Use the ingest_document tool with: ./path/to/document.pdf
Then use query_rag_store to search for: "How does FAISS perform similarity search?"
Claude will retrieve relevant document chunks from your vector store and use them to answer your question.
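The retrieval step can be sketched in a few lines. This is a minimal, illustrative brute-force search in plain Python (not the server's actual code): it uses the same L2 metric as FAISS's IndexFlatL2, with toy 3-dimensional vectors standing in for the 384-dimensional embeddings the default model produces.

```python
import math

def l2_search(index_vectors, query_vector, top_k=3):
    """Brute-force L2 nearest-neighbor search over stored vectors,
    the same metric FAISS's IndexFlatL2 uses. Plain Python here,
    purely to illustrate the retrieval step."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, query_vector)))
    ranked = sorted(range(len(index_vectors)), key=lambda i: dist(index_vectors[i]))
    return ranked[:top_k]

# Toy "index": one vector per ingested chunk (3-d for readability).
chunks = ["intro", "methods", "faiss overview", "appendix"]
vectors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
hits = l2_search(vectors, [1.0, 0.0, 0.0], top_k=2)
print([chunks[i] for i in hits])  # → ['intro', 'faiss overview']
```

In the real server, FAISS performs this search over the full embedding space far more efficiently; the chunk texts returned this way are what Claude uses to ground its answer.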
⚡️ Upgrading? Run pip install --upgrade local-faiss-mcp
pip install local-faiss-mcp
For DOCX, HTML, EPUB, and 40+ additional formats, install pandoc:
# macOS
brew install pandoc
# Linux
sudo apt install pandoc
# Or download from: https://pandoc.org/installing.html
Note: PDF, TXT, and MD work without pandoc.
git clone https://github.com/nonatofabio/local_faiss_mcp.git
cd local_faiss_mcp
pip install -e .
After installation, you can run the server in three ways:
1. Using the installed command (easiest):
local-faiss-mcp --index-dir /path/to/index/directory
2. As a Python module:
python -m local_faiss_mcp --index-dir /path/to/index/directory
3. For development/testing:
python local_faiss_mcp/server.py --index-dir /path/to/index/directory
Command-line Arguments:
--index-dir: Directory to store FAISS index and metadata files (default: current directory)
--embed: Hugging Face embedding model name (default: all-MiniLM-L6-v2)
--rerank: Enable re-ranking with the specified cross-encoder model (default: BAAI/bge-reranker-base)
Using a Custom Embedding Model:
# Use a larger, more accurate model
local-faiss-mcp --index-dir ./.vector_store --embed all-mpnet-base-v2
# Use a multilingual model
local-faiss-mcp --index-dir ./.vector_store --embed paraphrase-multilingual-MiniLM-L12-v2
# Use any Hugging Face sentence-transformers model
local-faiss-mcp --index-dir ./.vector_store --embed sentence-transformers/model-name
Using Re-ranking for Better Results:
Re-ranking uses a cross-encoder model to reorder FAISS results for improved relevance. This two-stage "retrieve and rerank" approach is common in production search systems.
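The two-stage flow can be sketched as follows. This is an illustrative stub, not the server's code: overlap_score stands in for a real cross-encoder such as BAAI/bge-reranker-base, which scores the query and passage jointly and is far more accurate.

```python
def rerank(query, candidates, score_fn, top_k=3):
    """Second-stage rerank: score each (query, chunk) pair and reorder
    by descending relevance score."""
    ranked = sorted(candidates, key=lambda c: score_fn(query, c["text"]), reverse=True)
    return ranked[:top_k]

def overlap_score(query, text):
    """Stub scorer using token overlap; a real cross-encoder model
    would replace this function."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

# First-stage FAISS hits (distance = L2 distance, smaller is closer):
hits = [{"text": "FAISS builds vector indexes", "distance": 0.9},
        {"text": "similarity search with FAISS", "distance": 1.1}]
best = rerank("how does FAISS do similarity search", hits, overlap_score, top_k=1)
print(best[0]["text"])  # → similarity search with FAISS
```

Note that the reranker can promote a chunk the first stage ranked lower, which is exactly why the two-stage approach improves relevance.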
# Enable re-ranking with default model (BAAI/bge-reranker-base)
local-faiss-mcp --index-dir ./.vector_store --rerank
# Use a specific re-ranking model
local-faiss-mcp --index-dir ./.vector_store --rerank cross-encoder/ms-marco-MiniLM-L-6-v2
# Combine custom embedding and re-ranking
local-faiss-mcp --index-dir ./.vector_store --embed all-mpnet-base-v2 --rerank BAAI/bge-reranker-base
How Re-ranking Works:
Popular re-ranking models:
BAAI/bge-reranker-base - Good balance (default)
cross-encoder/ms-marco-MiniLM-L-6-v2 - Fast and efficient
cross-encoder/ms-marco-TinyBERT-L-2-v2 - Very fast, smaller model
On startup, the server will:
Load the FAISS index from {index-dir}/faiss.index (or create a new one)
Load metadata from {index-dir}/metadata.json (or create new)
The server provides two tools for document management:
Ingest a document into the vector store.
Parameters:
document (required): Text content OR file path to ingest
source (optional): Identifier for the document source (default: "unknown")
Auto-detection: If document looks like a file path, it will be automatically parsed.
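A path-detection heuristic along these lines is one plausible sketch of the auto-detection behavior; the function name, length cap, and extension set below are assumptions for illustration, not the server's documented rules.

```python
from pathlib import Path

def looks_like_path(document: str) -> bool:
    """Hypothetical sketch: treat a short, single-line string with a
    known document extension that exists on disk as a file path;
    anything else is ingested as raw text. Illustrative only."""
    if "\n" in document or len(document) > 260:
        return False
    p = Path(document)
    return p.suffix.lower() in {".pdf", ".txt", ".md", ".docx", ".html", ".epub"} and p.exists()
```

Under this sketch, "./documents/research_paper.pdf" would be parsed as a file, while a sentence like "FAISS is a library for efficient similarity search..." would be ingested as raw text.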
Supported formats:
PDF, TXT, and MD (built-in); DOCX, HTML, EPUB, and 40+ additional formats with pandoc installed
Examples:
{
"document": "FAISS is a library for efficient similarity search...",
"source": "faiss_docs.txt"
}
{
"document": "./documents/research_paper.pdf"
}
Query the vector store for relevant document chunks.
Parameters:
query (required): The search query text
top_k (optional): Number of results to return (default: 3)
Example:
{
"query": "How does FAISS perform similarity search?",
"top_k": 5
}
The server provides MCP prompts to help extract answers and summarize information from retrieved documents:
Extract the most relevant answer from retrieved document chunks with proper citations.
Arguments:
query (required): The original user query or question
chunks (required): Retrieved document chunks as a JSON array with fields: text, source, distance
Use Case: After querying the RAG store, use this prompt to get a well-formatted answer that cites sources and explains relevance.
Example workflow in Claude:
Use the query_rag_store tool to retrieve relevant chunks
Use the extract-answer prompt with the query and results
Create a focused summary from multiple document chunks.
Arguments:
topic (required): The topic or theme to summarize
chunks (required): Document chunks to summarize as a JSON array
max_length (optional): Maximum summary length in words (default: 200)
Use Case: Synthesize information from multiple retrieved documents into a concise summary.
Example Usage:
In Claude Code, after retrieving documents with query_rag_store, you can use the prompts like:
Use the extract-answer prompt with:
- query: "What is FAISS?"
- chunks: [the JSON results from query_rag_store]
The prompts will guide the LLM to provide structured, citation-backed answers based on your vector store data.
The local-faiss CLI provides standalone document indexing and search capabilities.
Index documents from the command line:
# Index single file
local-faiss index document.pdf
# Index multiple files
local-faiss index doc1.pdf doc2.txt doc3.md
# Index all files in folder
local-faiss index documents/
# Index recursively
local-faiss index -r documents/
# Index with glob pattern
local-faiss index "docs/**/*.pdf"
Configuration: The CLI automatically uses MCP configuration from:
./.mcp.json (local/project-specific)
~/.claude/.mcp.json (Claude Code config)
~/.mcp.json (fallback)
If no config exists, the CLI creates ./.mcp.json with default settings (index directory ./.vector_store).
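The lookup order above amounts to "first existing file wins". A minimal sketch, assuming the documented order (the function name is hypothetical, not part of the CLI):

```python
from pathlib import Path

def resolve_mcp_config(paths=None):
    """Return the first existing config file in the documented lookup
    order, or None if no config exists. Illustrative sketch only."""
    paths = paths or [
        Path("./.mcp.json"),                      # local/project-specific
        Path.home() / ".claude" / ".mcp.json",    # Claude Code config
        Path.home() / ".mcp.json",                # fallback
    ]
    for p in paths:
        if p.is_file():
            return p
    return None
```

A project-local ./.mcp.json therefore always shadows the user-wide configs, which is what lets each project keep its own vector store.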
Supported formats:
brew install pandoc (macOS) or apt install pandoc (Linux)
Search the indexed documents:
# Basic search
local-faiss search "What is FAISS?"
# Get more results
local-faiss search -k 5 "similarity search algorithms"
Results show:
Add this server to your Claude Code MCP configuration (.mcp.json):
User-wide configuration (~/.claude/.mcp.json):
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp"
}
}
}
With custom index directory:
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": [
"--index-dir",
"/home/user/vector_indexes/my_project"
]
}
}
}
With custom embedding model:
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": [
"--index-dir",
"./.vector_store",
"--embed",
"all-mpnet-base-v2"
]
}
}
}
With re-ranking enabled:
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": [
"--index-dir",
"./.vector_store",
"--rerank"
]
}
}
}
Full configuration with embedding and re-ranking:
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": [
"--index-dir",
"./.vector_store",
"--embed",
"all-mpnet-base-v2",
"--rerank",
"BAAI/bge-reranker-base"
]
}
}
}
Project-specific configuration (./.mcp.json in your project):
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": [
"--index-dir",
"./.vector_store"
]
}
}
}
Alternative: Using Python module (if the command isn't in PATH):
{
"mcpServers": {
"local-faiss-mcp": {
"command": "python",
"args": ["-m", "local_faiss_mcp", "--index-dir", "./.vector_store"]
}
}
}
Add this server to your Claude Desktop configuration:
{
"mcpServers": {
"local-faiss-mcp": {
"command": "local-faiss-mcp",
"args": ["--index-dir", "/path/to/index/directory"]
}
}
}
The embedding model is selected with the --embed flag (default: all-MiniLM-L6-v2, 384 dimensions).
The FAISS index is saved as faiss.index and metadata as metadata.json.
Different models offer different trade-offs:
| Model | Dimensions | Speed | Quality | Use Case |
|---|---|---|---|---|
| all-MiniLM-L6-v2 | 384 | Fast | Good | Default, balanced performance |
| all-mpnet-base-v2 | 768 | Medium | Better | Higher quality embeddings |
| paraphrase-multilingual-MiniLM-L12-v2 | 384 | Fast | Good | Multilingual support |
| all-MiniLM-L12-v2 | 384 | Medium | Better | Better quality at same size |
Important: Once you create an index with a specific model, you must use the same model for subsequent runs. The server will detect dimension mismatches and warn you.
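The mismatch the warning guards against is easy to see: an index built with 384-dimensional vectors cannot answer queries embedded into 768 dimensions. A sketch of such a check (the function name and message wording are illustrative, not the server's actual output):

```python
def check_dimensions(index_dim, model_dim, model_name):
    """Return a warning string on an embedding-dimension mismatch,
    or None when the model matches the existing index. Hypothetical
    sketch of the behavior the README describes."""
    if index_dim != model_dim:
        return (f"warning: index holds {index_dim}-d vectors but "
                f"{model_name} produces {model_dim}-d vectors; "
                f"re-index, or switch back to the original model")
    return None

# e.g. an index built with all-MiniLM-L6-v2 (384-d) queried
# after switching --embed to all-mpnet-base-v2 (768-d):
print(check_dimensions(384, 768, "all-mpnet-base-v2"))
```

In practice this means: pick a model before your first ingest, and if you later change --embed, rebuild the index from scratch.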
Test the FAISS vector store functionality without MCP infrastructure:
source venv/bin/activate
python test_standalone.py
This test:
Run the complete test suite:
pytest tests/ -v
Run specific test files:
# Test embedding model functionality
pytest tests/test_embedding_models.py -v
# Run standalone integration test
python tests/test_standalone.py
The test suite includes:
License: MIT