Server data from the Official MCP Registry
First-party Provenote MCP server for drafts, research threads, auditable runs, and knowledge search.
Valid MCP server (1 strong, 1 medium validity signal). 7 known CVEs in dependencies (1 critical, 4 high severity). Imported from the Official MCP Registry.
From the project's GitHub README.


This illustrated overview is a repo-authored summary of the shortest documented path. It is intentionally not presented as a live product recording.
Canonical product path:
messy long context -> structured insight -> note / research thread / draft -> inspectable outcome
That is the first door. MCP, starter bundles, distribution pages, and promotion assets are valuable second-ring surfaces, but they should not outrank the product path.
Agent-facing truth comes first: Provenote teaches an agent to read messy
context, structure it, move it into note / research-thread / draft lanes, and
only then carry that outcome workflow forward through the first-party
provenote-mcp server. Public skills, host bundles, and registry packs are
companion surfaces around that workbench, not the product root.
If you only want the shortest truthful filter before reading deeper, use this table first:
| What you need to know | Current answer |
|---|---|
| Product thesis | turn messy long context into structured insight and inspectable outcomes you can carry into notes, research threads, and drafts |
| Fastest result path | import one source -> run Auditable Markdown -> download one inspectable result |
| First proof | the quick-result overview plus the public proof page |
| Second ring only | MCP, host bundles, public skills, and distribution surfaces |
| What it must never be reduced to | a hosted one-minute trial or a generic chat wrapper |
If you only want the fastest honest map, use this:
| Question | Open this first | Why |
|---|---|---|
| "Can this help with messy long context?" | Long Context | This is the product center, not a side use case. |
| "Can I get one real result quickly?" | Quick Result Path | This is the shortest repo-documented local proof loop. |
| "Is this real or just copywriting?" | Public Proof | This is the evidence layer. |
| "What is still intentionally unclaimed?" | Project Status | This is the boundary page. |
Judge the workbench before you judge the side doors. MCP pages, starter bundles, distribution packs, and promotion assets matter, but they are second-layer surfaces around the main product path.
Most AI note tools make it easy to generate words and hard to verify where those words came from.
That gets even worse when the raw material is long and messy: a huge chat log, a copied forum thread, a meeting recap, or a web page pile you do not want to flatten into one more throwaway summary.
Provenote is built for the opposite direction:
Think of it like moving from a loose pile of research tabs to a workbench with labeled drawers, measuring tools, and a clean export lane.
If your problem starts as "I have too much messy context," the strongest current repo-backed answer is not another empty chat box. It is the path from long context to structured notes and then into reusable outcome objects.
If you want one visible outcome before learning the whole workbench, keep the order this short:
That is the shortest repo-documented result path today. MCP, starter bundles, podcasts, and distribution surfaces stay second ring until this outcome path already makes sense.
The strongest current first-entry path is:
Import the source
-> Run Chat Knowledgeization
-> Inspect the structured insight
-> Continue it as a note or notebook research thread
-> Keep draft work inside the notebook lane
In plain language: first turn the pile into labeled folders, then decide whether it belongs in your notes, your active research lane, or your notebook draft workflow.
Use the provenote CLI when you want notebook outcome inspection, auditable markdown, or research-thread-to-draft handoffs without treating MCP host setup as the only operator path.

If you only remember one first-entry rule, make it this:
messy long context
-> structured insight
-> note / seeded research lane / notebook research thread
-> draft-adjacent notebook work
That is the product center. MCP host pages come after that path, not before it.
Think of these surfaces like rooms around the main workshop:
| Surface | What it is today | What it is not |
|---|---|---|
| Claude Code / Codex / Cursor / OpenCode pages | repo-backed compatibility guides through the first-party MCP server, with public-ready starter bundles under examples/hosts/ | official partnership, bundled integration, plugin, or marketplace listing |
| provenote CLI | a first-party local operator surface for outcome inspection and research_thread -> draft -> verify/download workflows | a separate distributed product line or a renamed MCP server |
| public skills surface | tracked public-ready skill packets now exist under public-skills/ for host-specific submission flows | a public skills catalog, live marketplace listing, or host-specific skills program endorsement |
| OpenClaw | a live ClawHub skill listing now exists at https://clawhub.ai/xiaojiou176/provenote-mcp-outcome-workflows, and public-ready OpenClaw-compatible bundles still live under examples/hosts/openclaw | official OpenClaw partnership, vendor endorsement, or every other marketplace surface being live |
| Official MCP Registry | a live websiteUrl-backed provenote-mcp entry already points to the repo-owned MCP docs/install surface | a package-backed public artifact, official host marketplace listing, or vendor endorsement |
| release / listing / domain / trademark / partnership | external decision and publication work | completed repo-side truth |
If you want the shortest public page that keeps those boundaries honest, start with docs/project-status.md.
If you want repo-owned install artifacts instead of prose-only setup pages, start with examples/hosts/README.md. That index points to the public-ready Claude Code, Codex, Cursor, and OpenCode starter bundles plus the OpenClaw-compatible bundles and the ClawHub submission pack.
If you want the standalone host-facing skill folders used for OpenHands/extensions or ClawHub-style submissions, start with public-skills/README.md.
If you want the full claim ladder instead of inferring it from scattered host pages, use docs/distribution.md.
Once the product path is clear, use one second-ring page instead of reading every host, registry, and distribution page in order.
| If you need... | Open this first | Why it stays second-ring |
|---|---|---|
| a host-compatible MCP path | docs/mcp.md | coding-agent carry-forward matters, but it is not the product center |
| checked-in bundle artifacts | examples/hosts/README.md and examples/hosts/packet-index.json | bundle installs are packaging truth, not the first doorway |
| public claim ladder and listing boundary | docs/distribution.md | it is a truth ledger, not the main product story |
| what is still intentionally unclaimed | docs/project-status.md | this is the boundary page, not the first impression |
| terminal/operator workflows | docs/runbooks/operator-cli.md | the CLI is a carry-forward surface after the workbench story is clear |
The ordering is intentional: first the workbench, then the proof lane, then the carry-forward surfaces.
Ordinary chat and ask flows are a fast assistant surface for exploration and iteration.
When you need stronger traceability, the stricter product truth lane lives in auditable markdown and auditable-runs rather than ordinary chat alone.
This path optimizes for the first visible result, not full system mastery.
In plain language: it is a 5 to 10 minute repo-documented local proof loop, not a hosted one-minute trial.
Copy the local environment template and add the two fast-path values.
cp .env.example .env
Fast-path values:
OPEN_NOTEBOOK_ENCRYPTION_KEY=change-me-to-a-secret-string
GEMINI_API_KEY=your-google-ai-studio-key
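Before starting the stack, it can help to catch placeholder values early. A minimal sketch, assuming nothing beyond the two variable names shown above — the check_not_placeholder helper and the sample .env location are invented for illustration:

```shell
# Create a sample .env in a temp dir (stand-in for the real checkout's .env).
envfile="$(mktemp -d)/.env"
printf 'OPEN_NOTEBOOK_ENCRYPTION_KEY=change-me-to-a-secret-string\nGEMINI_API_KEY=abc123\n' > "$envfile"

# Warn when a fast-path key is missing or still set to its template placeholder.
check_not_placeholder() {
  key="$1"; placeholder="$2"
  val="$(grep "^${key}=" "$envfile" | cut -d= -f2-)"
  if [ -z "$val" ] || [ "$val" = "$placeholder" ]; then
    echo "WARN: $key still needs a real value"
  else
    echo "OK: $key is set"
  fi
}

check_not_placeholder OPEN_NOTEBOOK_ENCRYPTION_KEY change-me-to-a-secret-string  # WARN
check_not_placeholder GEMINI_API_KEY your-google-ai-studio-key                   # OK
```

The same pattern extends to any other required variable you add later.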
Start the default local stack.
docker compose -f ops/compose/docker-compose.yml up -d --build
On this Docker fast path, Compose injects the SurrealDB connection defaults and a known-good fast-path Gemini model for you. That is why the first visible result path only asks you to set the encryption key and Gemini key up front.
Open http://localhost:8502, let the app route you into the /sources workbench, then create or import a source and move into the source detail view.
Run Auditable Markdown to generate a downloadable markdown report with integrity counters.
Open a notebook and create a Draft if you want a reusable notebook-level outcome instead of a single-source artifact.
If you want the full walkthrough, go straight to docs/quickstart.md. If you want a zero-setup hosted product, this repository is not promising that experience today.
If you prefer a terminal-first operator path once the local stack is up, use the narrow first-party operator runbook in docs/runbooks/operator-cli.md. That surface is intentionally outcome-first and does not replace the workbench UI or the MCP compatibility pages.
If you want the ecosystem boundary before you read host pages, use docs/project-status.md.
Provenote treats disk cleanup as a governed operator path, not an ad-hoc "delete big folders" exercise.
The governed surfaces include:

- apps/web/node_modules (repo-local rebuildable; restore with cd apps/web && npm ci)
- ${HOME}/.cache/provenote/..., including ${HOME}/.cache/provenote/python/uv-cache, ${HOME}/.cache/provenote/playwright/ms-playwright, and ${HOME}/.cache/provenote/ci-host/npm-cache
- .runtime-cache/venv/default and .runtime-cache/ci-host/...
- ${HOME}/.cache/uv, ${HOME}/.npm, and ${HOME}/Library/Caches/ms-playwright
- ${HOME}/Library/Containers/com.docker.docker and ${HOME}/.docker
- open-notebook-ci:* images (inspect with docker system df -v)
- the cleanup_runtime_cache.sh lane

Use the explicit operator flow:
make cleanup-operator-audit
make cleanup-operator-apply
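Those targets enforce the repo's TTL/cap/root-cap cache policy. As a generic illustration of the TTL idea only — this sketch is not a repo script, and the temp directory and 7-day TTL are invented:

```shell
# Generic TTL sketch: audit first, then delete cache files older than a cutoff.
cache_root="$(mktemp -d)"   # stand-in for a real cache root
ttl_days=7

touch "$cache_root/fresh.bin"
touch -d '30 days ago' "$cache_root/stale.bin"   # GNU touch syntax

# Audit lane: print what would be removed, change nothing.
find "$cache_root" -type f -mtime +"$ttl_days" -print

# Apply lane: remove only files past the TTL; fresh.bin survives.
find "$cache_root" -type f -mtime +"$ttl_days" -delete
```

The audit/apply split mirrors the make targets above: inspect the plan before anything is deleted.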
That operator flow is intentionally split:
- cleanup_runtime_cache.sh handles repo-local runtime/cache surfaces
- cleanup_machine_cache.sh handles the remaining repo-related download caches under ~/.cache/provenote
- the managed wrappers (run_uv_managed.sh, run-playwright-managed.sh, and run_in_consistent_container.sh) invoke that machine-cache lane in apply mode so repo-specific external caches obey TTL/cap/root-cap policy by default
- docker-buildx-clean / docker-runtime-audit cover repo-related Docker builder and image surfaces

This is the fastest concrete proof path in the current repo surface:
| Evidence lane | What goes in | What comes out | What you can inspect |
|---|---|---|---|
| Auditable Markdown | one imported source from text, file, audio, or web content | a downloadable markdown report | coverage-oriented integrity counters plus a direct markdown download path |
| Notebook Drafts | multiple notebook-linked sources | a downloadable notebook-level markdown draft plus export bundle | draft versions, section/claim inspection, bundle export, and notebook outcome state |
The current UI and API evidence behind that lane is already public:
Start from docs/proof.md if you want the file-and-route evidence behind those claims.
If you want a fixed, reproducible local proof loop, use the public proof pack.
# Auditable Markdown Report
## Source
- title: Example imported source
- lane: auditable markdown
## Integrity Counters
- coverage_rate: 0.98
- missing_count: 0
- duplicate_count: 0
- uncited_claims_count: 0
## Output
- downloadable markdown report
- inspectable sections and claims
- stronger traceability than a plain chat reply
This is a sanitized result shape, not a copied user artifact. The concrete product evidence for the lane lives in the current UI panel and auditable-run API surface.
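Because the counters are plain key-value lines, scripts can gate on them mechanically. A hedged sketch — the field names come from the sanitized sample above, but the 0.95 threshold and the parsing approach are illustrative, not a documented API:

```shell
# Write the sanitized sample shape to a file, then extract and gate on its counters.
report="$(mktemp)"
cat > "$report" <<'EOF'
## Integrity Counters
- coverage_rate: 0.98
- missing_count: 0
- duplicate_count: 0
- uncited_claims_count: 0
EOF

coverage="$(sed -n 's/^- coverage_rate: //p' "$report")"
uncited="$(sed -n 's/^- uncited_claims_count: //p' "$report")"

# Illustrative gate: require high coverage and zero uncited claims.
awk -v c="$coverage" -v u="$uncited" 'BEGIN { exit !(c >= 0.95 && u == 0) }' \
  && echo "report passes the illustrative gate"
```

A real downloaded report may carry more fields; the point is only that the counters are inspectable without special tooling.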

If you want proof before commitment, start here:
| What you want to verify | Where to look |
|---|---|
| The product is wider than a single chat screen | docs/proof.md |
| The fastest path to a real result | docs/quickstart.md |
| How to use it from coding agents | docs/mcp.md |
| The current scope and readiness boundary | docs/project-status.md |
| The current release story | CHANGELOG.md and GitHub Releases |
| The runtime shape | docs/architecture.md |
| The support and security boundary | SUPPORT.md and SECURITY.md |
Release visibility is one public signal, not automatic proof that the latest release-event build and asset set are clean.
| If your goal is... | Start here | Then go deeper with... |
|---|---|---|
| Evaluate whether Provenote is worth your attention | README.md | docs/proof.md, docs/faq.md |
| Get to a first visible result quickly | docs/quickstart.md | docs/installation.md, docs/configuration.md |
| Connect it to coding agents through MCP | docs/mcp.md | docs/integrations/claude-code.md, docs/integrations/codex.md, docs/integrations/cursor.md, docs/integrations/opencode.md |
| Understand concrete outcome-first use cases | docs/use-cases/long-context-to-structured-notes.md | docs/use-cases/source-grounded-ai-research.md, docs/use-cases/ai-notes-with-receipts.md, docs/use-cases/source-grounded-drafts.md, docs/use-cases/source-to-verified-draft.md |
| Understand the system shape | docs/architecture.md | services/api/main.py, apps/web/src/components/layout/AppSidebar.tsx |
| Contribute safely | CONTRIBUTING.md | docs/development.md, MAINTAINERS.md |
| Understand the current naming and domain boundary | docs/brand-domain.md | docs/faq.md, docs/mcp.md |
Provenote ships a first-party MCP server so you can bring the same outcome objects you use in the web workbench into coding-agent hosts.
If your starting problem is messy long context, start with Long Context first and come back here when you want to carry those outcome objects into a host.
Current public fit:
Current boundary:
Start with docs/mcp.md if you want the MCP overview, then choose the host-specific page that matches your agent runtime.
If you want checked-in starter artifacts instead of only setup pages, go straight to examples/hosts/README.md. That index links the public-ready Claude Code, Codex, Cursor, and OpenCode starter bundles plus the OpenClaw-compatible bundles and the ClawHub submission pack.
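For orientation only: most MCP hosts accept an mcpServers-style JSON entry. Everything below is a placeholder sketch — the command name, args, and PROVENOTE_API_URL variable are invented, not the repo's actual install values; take real values from the starter bundles linked above.

```json
{
  "mcpServers": {
    "provenote": {
      "command": "provenote-mcp",
      "args": [],
      "env": {
        "PROVENOTE_API_URL": "http://localhost:8502"
      }
    }
  }
}
```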
The main workbench currently spans:

Provenote is a strong fit if you want to:
Provenote is probably not the best fit if you want:
Provenote is the public identity of this repository.
This repository is a deep, productized fork of the upstream Open Notebook project.
Upstream lineage still matters for provenance, but the current support, review, release, and collaboration surface is repository-local to this checkout.
That means two things can both be true at once:
The repo-local stewardship and trust boundary is anchored in NOTICE.md, MAINTAINERS.md, SUPPORT.md, SECURITY.md, and CONTRIBUTING.md.
When disk pressure comes from local Docker builders plus repo-local rebuildables, use one explicit operator path instead of guessing which command owns which surface:
make cleanup-operator-dry-run
make cleanup-operator-rebuildable
make cleanup-operator-aggressive
That split is intentional:
- docker-buildx-clean handles local Buildx/builder residue
- tooling/scripts/ops/cleanup_runtime_cache.sh handles repo-local runtime and rebuildable surfaces
- apps/web/node_modules remains a repo-local rebuildable dependency root, not a machine cache

This fork continues to distribute upstream-licensed material under LICENSE.