Server data from the Official MCP Registry
Structured thinking + steel-manning verification for AI agents. Backed by 43+ papers.
Valid MCP server (5 strong, 4 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry.
10 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-stabgan-steelmind": {
"args": [
"-y",
"@stabgan/steelmind-mcp"
],
"command": "npx"
}
}
}

From the project's GitHub README.
The research-grounded reasoning MCP server for AI agents. Combines step-by-step sequential thinking with steel-manning verification — backed by 43+ cognitive science and AI research papers.
Steelmind gives your AI agent two tools:
- think — Record structured reasoning steps with sequential decomposition. Embeds Socratic self-questioning and Polya's problem-solving method.
- verify — Challenge conclusions with steel-manning before committing. Embeds dialectical evaluation from MetaCrit and SIEV research.

The code is minimal. The descriptions do the heavy lifting — tool descriptions account for ~80% of reasoning improvement per Anthropic τ-bench research.
| Feature | Think MCP | Sequential Thinking | Steelmind |
|---|---|---|---|
| Step tracking | ✗ | ✓ | ✓ |
| Adjustable step count | ✗ | ✓ | ✓ |
| Cognitive mode separation | ✗ | ✗ | ✓ |
| Steel-manning verification | ✗ | ✗ | ✓ |
| Socratic self-questioning | ✗ | ✗ | ✓ |
| Research-grounded descriptions | ✗ | ✗ | ✓ |
| Verify nudge on completion | ✗ | ✗ | ✓ |
| Tool count | 1 | 1 | 2 |
Key research insight: MetaCrit (arxiv 2507.15015) found that separating reasoning generation from reasoning evaluation prevents self-bias and improves accuracy by up to 76%. Sequential Thinking uses one tool for both. Steelmind separates them.
{
"mcpServers": {
"steelmind": {
"command": "npx",
"args": ["-y", "@stabgan/steelmind-mcp"]
}
}
}
{
"mcpServers": {
"steelmind": {
"command": "docker",
"args": ["run", "--rm", "-i", "stabgan/steelmind-mcp"]
}
}
}
npm install -g @stabgan/steelmind-mcp
{
"mcpServers": {
"steelmind": {
"command": "steelmind-mcp"
}
}
}
think tool

Records a structured reasoning step with sequential tracking.
Input:
{
"thought": "What are the dependencies? Need to check imports before refactoring.",
"thoughtNumber": 1,
"totalThoughts": 3,
"nextThoughtNeeded": true
}
Output (mid-sequence):
[Thinking 1/3]
What are the dependencies? Need to check imports before refactoring.
Output (final step — includes verify nudge):
[Thinking 3/3]
My conclusion: use the adapter pattern for backward compatibility.
---
Thinking complete. Before acting on this conclusion, use the verify tool to challenge it.
The verify nudge appears in the tool result (not just the description), making it far more likely the model will actually call verify. Tool results get different attention treatment than descriptions — they're processed as fresh context.
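The behavior described above can be sketched in TypeScript. This is an illustrative reconstruction based on the example outputs, not the actual source; the function and interface names are hypothetical.

```typescript
// Hypothetical sketch of the think tool's result formatting.
// Mirrors the outputs shown above: a step header, the thought text,
// and a verify nudge appended only on the final step.
interface ThinkInput {
  thought: string;
  thoughtNumber: number;
  totalThoughts: number;
  nextThoughtNeeded: boolean;
}

function formatThinkResult(input: ThinkInput): string {
  const header = `[Thinking ${input.thoughtNumber}/${input.totalThoughts}]`;
  let result = `${header}\n${input.thought}`;
  if (!input.nextThoughtNeeded) {
    // Final step: the nudge lands in the tool result itself, where the
    // model processes it as fresh context rather than static description.
    result +=
      "\n---\nThinking complete. Before acting on this conclusion, use the verify tool to challenge it.";
  }
  return result;
}
```

The key design point is the conditional: mid-sequence results stay minimal, and only the terminal step carries the call to action.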
verify tool

Challenges your reasoning with steel-manning before you commit.
Input:
{
"concern": "The adapter pattern adds complexity. Is the simpler approach actually better?"
}
Output:
The adapter pattern adds complexity. Is the simpler approach actually better?
Pure identity function — returns your concern unchanged. The value is in the description, which prompts: "Steel-man the opposition: What is the strongest argument that your conclusion is wrong?"
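As a sketch, the handler reduces to a one-liner; the description string shown here is the prompt quoted above, while the function name is illustrative.

```typescript
// Hypothetical sketch: verify is a pure identity function. All of its
// leverage comes from the tool description, which prompts the model to
// steel-man the opposing view before it commits.
const VERIFY_DESCRIPTION =
  "Steel-man the opposition: What is the strongest argument that your conclusion is wrong?";

function verify(concern: string): string {
  // Echo the concern back unchanged; the model does the dialectical work.
  return concern;
}
```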
think(step 1/3) → think(step 2/3) → think(step 3/3) → [verify nudge] → verify → act

(totalThoughts can be adjusted at any think step if more or fewer steps are needed)
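The call sequence above can be traced with a small driver. This is a hypothetical illustration of the intended agent-side flow, not part of the server; the types and the `planWorkflow` helper are invented for the example.

```typescript
// Hypothetical walk-through of the workflow: N think steps with sequential
// numbering, then a verify call before acting on the conclusion.
type ToolCall = { tool: "think" | "verify"; payload: Record<string, unknown> };

function planWorkflow(totalThoughts: number): ToolCall[] {
  const calls: ToolCall[] = [];
  for (let n = 1; n <= totalThoughts; n++) {
    calls.push({
      tool: "think",
      payload: {
        thoughtNumber: n,
        totalThoughts,
        // False on the last step, which triggers the verify nudge.
        nextThoughtNeeded: n < totalThoughts,
      },
    });
  }
  // The final think result nudges the model toward this verify call.
  calls.push({
    tool: "verify",
    payload: { concern: "strongest argument that the conclusion is wrong" },
  });
  return calls;
}
```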
Steelmind's design is grounded in 43+ research papers. Key findings:
| Paper | Finding | How Steelmind Uses It |
|---|---|---|
| MetaCrit (arxiv 2507.15015) | Separating generation from evaluation prevents self-bias | Two separate tools: think (generate) + verify (evaluate) |
| Anthropic τ-bench | Optimized tool descriptions yield 54% improvement | Descriptions are the primary scaffold, not code |
| Think2 (arxiv 2602.18806) | Structured metacognition yields 3x self-correction | Sequential step tracking + Socratic questioning |
| SIEV (ICML) | Models lose 40+ points under dialectical evaluation | Steel-manning prompt in verify description |
| Scaling TTC (arxiv 2408.03314) | Difficulty-adaptive compute improves efficiency 4x | Adjustable totalThoughts |
| EasyTool (NAACL 2025) | Concise descriptions outperform verbose ones | ~100 word descriptions |
| ToolACE | "When NOT to use" improves irrelevance detection 6→84% | Negative guidance in both descriptions |
| Cognitive Foundations (arxiv 2511.16660) | External scaffolding improves performance up to 72% | Research-grounded cognitive frameworks |
Works with any MCP-compatible client:
Designed for frontier models but works across families:
npm install # Install dependencies
npm run build # Compile TypeScript
npm test # Run 90 tests
npm run lint # ESLint
npm run format # Prettier
npm start # Run the server
MIT