Server data from the Official MCP Registry
Feedback collection for AI agents. Create surveys, collect responses, get results.
Valid MCP server (2 strong, 3 medium validity signals). No known CVEs in dependencies. ⚠️ Package registry links to a different repository than the scanned source. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
7 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: MTS_API_KEY
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-sunsiyuan-human-survey": {
"env": {
"MTS_API_KEY": "your-mts-api-key-here"
},
"args": [
"-y",
"web"
],
"command": "npx"
}
}
}

From the project's GitHub README:
Feedback collection infrastructure for AI agents.
HumanSurvey lets an agent doing long-horizon work collect structured feedback from a group of people:
Agent is doing a job
→ needs structured feedback from a group
→ creates survey from JSON schema
→ shares /s/{id} URL with respondents
→ humans respond over hours or days
→ agent retrieves structured JSON results and acts on them
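The loop above can be sketched in a few lines. This is a hypothetical illustration, not an official client: the fetch function is injected so the polling logic can run without a live server, and the `responses` field name is an assumption about the results payload.

```python
import time

def poll_results(fetch, survey_id, min_responses=3, interval_s=60, max_wait_s=3600):
    """Poll a survey until enough responses arrive or the deadline passes.

    `fetch` is any callable returning the parsed JSON of
    GET /api/surveys/{id}/responses -- injected here so the loop
    can be exercised without network access.
    """
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        results = fetch(survey_id)
        if len(results.get("responses", [])) >= min_responses:
            return results
        time.sleep(interval_s)
    # Deadline passed: return whatever arrived by then.
    return fetch(survey_id)
```

Because respondents answer over hours or days, a real agent would likely prefer the `webhook_url` option over polling.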
HumanSurvey is a minimal API and MCP server for one narrow job: let agents collect structured feedback from groups of humans and get machine-usable results back.
It is designed for:
It is not designed for:
Question types: choice, text, scale, and matrix, with showIf conditional logic, authored in Markdown or JSON schema. The JSON type names are single_choice, multi_choice, text, scale, and matrix.

Example survey in Markdown:

# Survey Title
**Description:** Instructions for the respondent.
## Section Name
**Q1. Your question here?**
- ☐ Option A
- ☐ Option B
- ☐ Option C
**Q2. Multi-select question?** (select all that apply)
- ☐ Choice 1
- ☐ Choice 2
- ☐ Choice 3
**Q3. Open-ended question:**
> _______________
Matrix questions:
| # | Item | Rating |
|---|------|--------|
| 1 | Item A | ☐Good ☐OK ☐Bad |
| 2 | Item B | ☐Good ☐OK ☐Bad |
Scale questions:
**Q4. How severe is this issue?**
[scale 1-5 min-label="Low" max-label="Critical"]
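A regex sketch of how the scale shorthand above might be parsed. The returned dict's field names are illustrative, not the project's actual parser output:

```python
import re

SCALE_RE = re.compile(
    r'\[scale\s+(\d+)-(\d+)'            # numeric range, e.g. 1-5
    r'(?:\s+min-label="([^"]*)")?'      # optional min label
    r'(?:\s+max-label="([^"]*)")?\s*\]'
)

def parse_scale(line):
    """Extract a scale question from its shorthand, or return None."""
    m = SCALE_RE.search(line)
    if not m:
        return None
    lo, hi, min_label, max_label = m.groups()
    return {"type": "scale", "min": int(lo), "max": int(hi),
            "min_label": min_label, "max_label": max_label}
```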
Conditional logic:
**Q1. Did the deploy fail?**
- ☐ Yes
- ☐ No
**Q2. Which step failed?**
> show if: Q1 = "Yes"
> _______________________________________________
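One way the `show if: Q1 = "Yes"` condition could be evaluated against collected answers. A sketch only; the project's parser may support richer operators than equality:

```python
import re

COND_RE = re.compile(r'show if:\s*(\w+)\s*=\s*"([^"]*)"')

def is_visible(condition, answers):
    """Return True when a question's show-if condition is met.

    `condition` is a string like 'show if: Q1 = "Yes"'; an empty
    condition means the question is always shown.
    """
    if not condition:
        return True
    m = COND_RE.search(condition)
    if not m:
        raise ValueError(f"unrecognized condition: {condition!r}")
    question_id, expected = m.groups()
    return answers.get(question_id) == expected
```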
Add to your Claude Code config (~/.claude.json):
{
"mcpServers": {
"survey": {
"command": "npx",
"args": ["-y", "humansurvey-mcp"],
"env": {
"HUMANSURVEY_API_KEY": "hs_sk_your_key_here"
}
}
}
}
Then in Claude Code:
> Create a post-event feedback survey with a 1-5 rating, open text, and a yes/no question
Available tools:
- create_key — self-provision an API key; no human setup required
- create_survey — create from JSON schema; optional max_responses, expires_at, webhook_url
- get_results — aggregated results + raw responses
- list_surveys — list surveys owned by your key
- close_survey — close a survey immediately

First, self-provision a key:

curl -X POST https://www.humansurvey.co/api/keys \
-H "Content-Type: application/json" \
-d '{
"name": "my claude agent",
"email": "you@example.com",
"wallet_address": "eip155:8453:0xabc..."
}'
All fields are optional. wallet_address uses the CAIP-10 format and will be used for agent-native payments in the future.
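A rough sanity check for the CAIP-10 account-ID shape (`namespace:reference:address`), based on the character classes in the CAIP-10 spec. Treat it as a sketch, not canonical validation:

```python
import re

# CAIP-10: chain namespace, chain reference, account address
CAIP10_RE = re.compile(
    r"^[-a-z0-9]{3,8}"          # namespace, e.g. eip155
    r":[-_a-zA-Z0-9]{1,32}"     # chain reference, e.g. 8453 (Base)
    r":[-.%a-zA-Z0-9]{1,128}$"  # account address
)

def is_caip10(value):
    """Check that a string matches the CAIP-10 account-ID grammar."""
    return bool(CAIP10_RE.match(value))
```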
Then create a survey:
curl -X POST https://www.humansurvey.co/api/surveys \
-H "Authorization: Bearer hs_sk_..." \
-H "Content-Type: application/json" \
-d '{
"schema": {
"title": "Post-Event Feedback",
"sections": [{
"questions": [
{ "type": "scale", "label": "How would you rate the event?", "min": 1, "max": 5 },
{ "type": "text", "label": "What should we improve?" }
]
}]
}
}'
Response:
{
"survey_url": "/s/abc123",
"question_count": 1
}
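The same request body can be assembled programmatically. The valid type names below come from the types listed earlier in this README; the helper itself is a sketch, not an official client:

```python
import json

VALID_TYPES = {"single_choice", "multi_choice", "text", "scale", "matrix"}

def build_survey(title, questions):
    """Assemble a create-survey payload, rejecting unknown question types."""
    for q in questions:
        if q["type"] not in VALID_TYPES:
            raise ValueError(f"unknown question type: {q['type']}")
    return {"schema": {"title": title, "sections": [{"questions": questions}]}}

payload = build_survey("Post-Event Feedback", [
    {"type": "scale", "label": "How would you rate the event?", "min": 1, "max": 5},
    {"type": "text", "label": "What should we improve?"},
])
body = json.dumps(payload)  # send as the POST body with the Bearer key
```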
Read results:
curl https://www.humansurvey.co/api/surveys/abc123/responses \
-H "Authorization: Bearer hs_sk_..."
Docs: https://www.humansurvey.co/docs · OpenAPI: https://www.humansurvey.co/api/openapi.json · llms.txt: https://www.humansurvey.co/llms.txt

| Component | Technology |
|---|---|
| Framework | Next.js (App Router) |
| Database | Neon (serverless Postgres) |
| Parser | remark (unified ecosystem) |
| Frontend | React + Tailwind CSS |
| MCP Server | @modelcontextprotocol/sdk |
| Deployment | Vercel |
├── apps/web/ # Next.js app (API + frontend)
├── packages/parser/ # Markdown → Survey JSON parser
├── packages/mcp-server/ # MCP server for Claude Code
└── docs/ # Architecture docs
Read CONTRIBUTING.md before opening a PR. The most important rule is scope discipline: new UI variants, analytics dashboards, and human-operator features are usually out of scope.
pnpm install
pnpm dev # Start Next.js dev server
pnpm --filter @mts/parser test
pnpm build # Build all packages
MIT