Server data from the Official MCP Registry
Personalised AI augmentation system — makes you better at your work, not dependent on AI
Remote endpoints: streamable-http: https://proworker-hosted.onrender.com/mcp
Valid MCP server (3 strong, 1 medium validity signals). 14 known CVEs in dependencies (2 critical, 5 high severity). Imported from the Official MCP Registry.
15 tools verified · Open access · 14 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Remote Plugin
No local installation needed. Your AI client connects to the remote endpoint directly.
Add this to your MCP configuration to connect:
{
"mcpServers": {
"io-github-angelo-leone-talent-augmenting-layer": {
"url": "https://proworker-hosted.onrender.com/mcp"
}
}
}

From the project's GitHub README.
Works with: ChatGPT | Claude | Gemini | Cursor | Windsurf | Any LLM
Make workers better, not dependent. A personalised AI augmentation system that follows you across every platform.
Talent-Augmenting Layer (TAL) is a personalised AI augmentation layer that transforms how AI interacts with you. It works with any LLM, on any platform, through a 4-tier architecture designed for cross-platform portability. Instead of treating you as a generic user, TAL calibrates every interaction to your individual skills, growth areas, and goals.
The core insight: AI that does everything for you makes you worse over time. AI that knows WHEN to help, WHEN to coach, WHEN to challenge, and WHEN to step back makes you permanently better.
Current AI tools have one mode: maximum helpfulness. This creates three failure patterns:
| Pattern | What Happens | Research Evidence |
|---|---|---|
| De-skilling | Workers lose skills they stop practicing | Clinicians using AI for 3 months performed WORSE after removal than before (2024-25 studies) |
| Over-reliance | Workers accept AI output without critical evaluation | Humans with AI perform better than humans alone but WORSE than AI alone — because they blindly accept wrong suggestions (Buçinca 2021) |
| Autopilot | Workers disengage from cognitive work | Junior employees who "just hand in" AI work perform worse than those who engage critically (Mollick 2023) |
Talent-Augmenting Layer exists to prevent all three.
Talent-Augmenting Layer is a layer, not a product tied to one platform. It works through 4 tiers, from zero-dependency copy-paste to a full hosted web app:
┌─────────────────────────────────────────────────────────────────┐
│ Tier 4: Hosted Web App │
│ Browser-based · Google OAuth · LLM assessment · email check-ins│
├─────────────────────────────────────────────────────────────────┤
│ Tier 3: MCP Server │
│ 14 tools · 5 resources · 4 prompts · automatic tracking │
│ Claude Code · Cursor · Windsurf · Claude Desktop │
├─────────────────────────────────────────────────────────────────┤
│ Tier 2: Platform-Native │
│ Custom GPTs · Gemini Gems · Claude Projects │
│ Persistent context · conversation starters │
├─────────────────────────────────────────────────────────────────┤
│ Tier 1: Universal System Prompt │
│ Any LLM · zero dependencies · copy-paste setup │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ profiles/pro-{name}.md │
│ Portable markdown · same format across all tiers │
│ Identity · Expertise Map · TALQ Scores · Task Classification │
│ Growth Trajectory · Contrast Libraries · Red Lines │
└─────────────────────────────────────────────────────────────────┘
All tiers share: same TALQ instrument, same scoring,
same profile format, same behavioural rules.
Every task gets classified into one of five AI interaction modes:
| Mode | AI Role | Friction | Example |
|---|---|---|---|
| Automate | Execute + annotate | Low | Data cleanup, formatting, boilerplate |
| Augment | Accelerate + challenge | Low-Med | Research in expert domains, code in proficient areas |
| Coach | Scaffold + question | Med-High | Skills you're actively developing |
| Protect | Force cognition + teach | High | Skills at risk of atrophying from AI over-use |
| Hands-off | Don't touch | N/A | Tasks that are core to your human identity and judgment |
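The five modes can be thought of as a simple decision function over the user's profile. The sketch below is purely illustrative: the mode names come from the table above, but the `classify_task` heuristic, its parameters, and the 0.7 skill threshold are assumptions, not TAL's actual implementation.

```python
# Hypothetical sketch of TAL's five-mode task classification.
# Parameter names and thresholds are illustrative assumptions.

MODES = ("automate", "augment", "coach", "protect", "hands-off")

def classify_task(skill_level: float, is_growth_area: bool,
                  is_protected: bool, is_identity_core: bool) -> str:
    """Map a task onto one of the five AI interaction modes."""
    if is_identity_core:      # core to human identity and judgment
        return "hands-off"
    if is_protected:          # skill at risk of atrophy from AI over-use
        return "protect"
    if is_growth_area:        # skill the user is actively developing
        return "coach"
    if skill_level >= 0.7:    # expert or proficient domain
        return "augment"
    return "automate"         # routine, low-stakes work

print(classify_task(0.9, False, False, False))  # augment
```

Note the ordering: identity and protection checks override raw skill level, so a task you are expert at can still land in "protect" if the skill is flagged as at-risk.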
| Technique | Source | When Used |
|---|---|---|
| Cognitive Forcing | Buçinca et al. (2021) | Novice domains, high-stakes decisions — ask for user's hypothesis first |
| Contrastive Explanations | Buçinca et al. (2024) | Learning moments — explain the DELTA between user's mental model and reality |
| Adaptive Support | Buçinca et al. (2024) | All interactions — adjust friction based on user state |
| Expert Augmentation | Mollick (2023) | Expert domains — skip basics, challenge assumptions, accelerate |
| De-skilling Protection | Multiple (2024-25) | Protected skills — add friction, require human-first attempts |
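As a concrete illustration of the cognitive-forcing technique (Buçinca et al. 2021), a prompt wrapper can withhold the AI's answer until the user commits to a hypothesis. The function below is a hypothetical sketch, not TAL's actual API.

```python
# Illustrative cognitive-forcing wrapper: elicit the user's own
# hypothesis before revealing the AI's analysis. Hypothetical helper,
# not part of the TAL codebase.

def cognitive_forcing_prompt(question: str) -> str:
    """Return a prompt that asks for the user's hypothesis first."""
    return (
        f"Before I share my analysis of: {question!r}\n"
        "1. What is your own hypothesis?\n"
        "2. What evidence supports it?\n"
        "Reply with both, then I'll show my answer and explain the delta."
    )
```

The "delta" wording follows the contrastive-explanations technique in the same table: the AI explains the difference between the user's mental model and its own, rather than just the answer.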
New to Claude Code and TAL MCP? Start with the first-time guide: docs/CLAUDE_CODE_FIRST_TIME_SETUP.md.
Pick the option that matches your setup:
| Option | Time | What You Need |
|---|---|---|
| Any LLM | 2 min | Access to any LLM with custom instructions |
| Custom GPT / Gem / Project | 5 min | ChatGPT Plus, Gemini, or Claude account |
| MCP Server | 10 min | Python + an MCP client (Claude Code, Cursor, Windsurf) |
| Hosted Web App | 15 min | Docker or Python + Google Cloud OAuth |
Option 1 (Any LLM): Paste universal-prompt/ASSESSMENT_PROMPT.md into a conversation. Answer the questions. Save the generated profile. Then paste universal-prompt/SYSTEM_PROMPT.md + your profile into your LLM's custom instructions.

Option 2 (Custom GPT / Gem / Project): Pre-configured instances with persistent context and conversation starters:
- Import platform-configs/chatgpt-gpt.json as a Custom GPT
- Use platform-configs/gemini-gem.md to create a Gem
- Use platform-configs/claude-project.md to set up a Project

Option 3 (MCP Server): Full tool integration with automatic tracking:
cd mcp-server && pip install -e .
Add to your MCP client config (Claude Code, Cursor, Windsurf, Claude Desktop):
{
"mcpServers": {
"talent-augmenting-layer": {
"command": "python",
"args": ["-m", "src.server"],
"cwd": "/path/to/talent-augmenting-layer/mcp-server",
"env": {
"TALENT_AUGMENTING_LAYER_PROFILES_DIR": "/path/to/talent-augmenting-layer/profiles"
}
}
}
}
Run talent-assess as an MCP prompt to create your profile. If you want the Claude Code slash command /talent-assess, open this repository in Claude Code so it loads .claude/commands/, or copy those command files into ~/.claude/commands/.
Browser-based app with Google login, LLM-powered assessment, and email check-in reminders:
cd hosted && docker build -t talent-augmenting-layer . && docker run -p 5000:5000 --env-file .env talent-augmenting-layer
See hosted/README.md for full setup (OAuth credentials, LLM API key, SMTP config).
- /talent-assess — Run initial assessment or full re-assessment
- /talent-update — Update profile based on recent interactions
- /talent-coach — Start a targeted coaching session on a specific skill

These slash commands are separate from the MCP server prompts. The MCP server exposes talent-assess, talent-coach, and talent-update as prompts. In MCP usage, the conversation is powered by your selected client model (for example, your Claude Code model), while the server provides tools and profile storage.
See docs/integration-guide.md for detailed platform-specific instructions.
Talent-Augmenting Layer is designed as a layer -- not tied to any specific tool, LLM, or platform. The 4-tier architecture means it works everywhere:
| Tier | Platforms | Setup |
|---|---|---|
| Tier 1: Universal prompt | ChatGPT, Claude, Gemini, Copilot, Perplexity, any LLM API | Copy-paste (2 min) |
| Tier 2: Platform-native | ChatGPT Custom GPTs, Gemini Gems, Claude Projects | Pre-configured instance (5 min) |
| Tier 3: MCP Server | Claude Code, Cursor, Windsurf, Claude Desktop | pip install + config (10 min) |
| Tier 4: Hosted web app | Any browser | Docker deploy (15 min) |
The profile is portable markdown -- it works anywhere you can inject system context. Take your profile from Claude Code to ChatGPT to Cursor and back. Your AI calibration follows you.
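Because the profile is plain markdown, any tier can load it with a few lines of parsing. The sketch below splits a profile into its named sections; the section titles follow the diagram above (Identity, TALQ Scores, ...), but the `## heading` layout and the `parse_profile` helper are assumptions about the format, not the project's actual parser.

```python
# Minimal sketch of reading a portable pro-{name}.md profile.
# Assumes sections are marked with "## " headings (an assumption).
import re

def parse_profile(markdown: str) -> dict:
    """Split a profile into {section_title: body} pairs."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in markdown.splitlines():
        m = re.match(r"^##\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

profile = "## Identity\nAngelo\n## TALQ Scores\ncoach: 3.2"
print(parse_profile(profile)["Identity"])  # Angelo
```

A format this simple is what makes the portability claim work: any client that can inject system context can also read and update the same file.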
Built on empirical research, not opinions:
| Source | Key Finding | How We Use It |
|---|---|---|
| Buçinca et al. (2021) | Cognitive forcing reduces over-reliance by ~30% | Ask for hypothesis before revealing AI's answer |
| Buçinca et al. (2024) | Contrastive explanations improve skills +8% (d=0.35) | Explain delta between user's model and AI's |
| Buçinca et al. (2024) | Optimal AI support depends on individual state | Personalize via profile, adapt dynamically |
| Mollick et al. (2023) | AI: +40% quality, +26% speed — but juniors who "just hand in" do worse | Protect against autopilot, especially in growth areas |
| Drago & Laine (2025) | The Intelligence Curse: humans must stay complementary | Build skills that maintain human economic relevance |
| Acemoglu | Pro-worker AI should increase human marginal product | Every interaction should make the user more valuable |
| Vygotsky | Zone of Proximal Development | Scaffold just beyond current ability |
| Ericsson | Deliberate Practice | Practice at edge of ability with feedback |
| Deci & Ryan | Self-Determination Theory | Protect autonomy, build competence |
| Dweck | Growth Mindset | Frame friction as opportunity |
talent-augmenting-layer/
├── CLAUDE.md # Core system prompt (the brain)
├── README.md # This file
├── CITATION.cff # Machine-readable citation metadata
├── LICENSE # CC BY-NC-SA 4.0
├── COPYRIGHT # Attribution notice
├── .claude/
│ ├── commands/
│ │ ├── talent-assess.md # /talent-assess slash command
│ │ ├── talent-update.md # /talent-update slash command
│ │ └── talent-coach.md # /talent-coach slash command
│ └── settings.local.json # Claude Code permissions
├── universal-prompt/ # Tier 1: Works with any LLM
│ ├── SYSTEM_PROMPT.md # Full system prompt (~4k tokens)
│ ├── SYSTEM_PROMPT_COMPACT.md # Compact version for token-limited platforms
│ ├── ASSESSMENT_PROMPT.md # Self-contained assessment prompt
│ └── QUICK_START.md # Step-by-step setup instructions
├── platform-configs/ # Tier 2: Pre-configured platform instances
│ ├── chatgpt-gpt.json # ChatGPT Custom GPT configuration
│ ├── gemini-gem.md # Gemini Gem setup guide
│ └── claude-project.md # Claude Project setup guide
├── mcp-server/ # Tier 3: Cross-platform MCP server
│ ├── pyproject.toml # Package config
│ ├── README.md # MCP server docs
│ └── src/
│ ├── server.py # MCP tools, resources, prompts (14 tools)
│ ├── profile_manager.py # Profile CRUD, parsing, interaction logging
│ └── assessment.py # Embedded assessment engine (questions, scoring)
├── hosted/ # Tier 4: Standalone web application
│ ├── app.py # Flask application (routes, OAuth, assessment)
│ ├── config.py # Environment configuration
│ ├── database.py # Database models and persistence
│ ├── llm_client.py # LLM integration for conversational assessment
│ ├── scoring.py # TALQ scoring engine
│ ├── auth.py # Google OAuth authentication
│ ├── email_service.py # 2-week check-in email reminders
│ ├── scheduler.py # Background task scheduling
│ ├── templates/ # HTML templates (login, assessment, dashboard, checkin)
│ ├── static/ # CSS and JavaScript
│ ├── requirements.txt # Python dependencies
│ ├── Dockerfile # Container deployment
│ └── README.md # Hosted app setup guide
├── assessment/
│ ├── framework.md # Assessment methodology
│ ├── scoring-instrument.md # TALQ psychometric instrument
│ ├── coaching-modules.md # Structured coaching sessions (5 modules, 13 sessions)
│ ├── ab-testing-framework.md # A/B testing design for outcomes research
│ └── literature-foundations.md # Research backing
├── dashboard/
│ └── app.py # Streamlit org-level analytics dashboard
├── web-ui/
│ └── index.html # Standalone web assessment UI
├── docs/
│ └── integration-guide.md # 4-tier integration guide
├── profiles/
│ ├── TEMPLATE.md # Blank profile template
│ └── pro-angelo.md # Example: Angelo's profile
└── context/ # Research papers (Buçinca, Acemoglu, Mollick)
Related project: Talent-Augmenting Layer Benchmark -- a 3-layer evaluation framework for measuring whether LLMs augment or replace human intelligence.
Good question. Memory stores facts. Talent-Augmenting Layer is how memory is used.
| Feature | Plain Memory | Talent-Augmenting Layer |
|---|---|---|
| Stores user info | Yes | Yes |
| Adapts AI behaviour | No — just recalls | Yes — systematically calibrates every interaction |
| Protects skills | No | Yes — cognitive forcing, de-skilling prevention |
| Coaches growth | No | Yes — targeted scaffolding in growth areas |
| Classifies tasks | No | Yes — automate/augment/coach/protect/hands-off |
| Evolves over time | Appends facts | Tracks skill progression, adjusts calibration |
| Research-backed | No | Yes — grounded in HCI and workforce learning literature |
Memory is the database. TAL is the operating system.
This is an open-source personalised AI augmentation layer, currently at version 0.2.0.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
You are free to share and adapt this work for non-commercial purposes, as long as you give appropriate credit and distribute contributions under the same license.
See LICENSE for the full text.
If you use Talent-Augmenting Layer in research or publications, please cite:
@software{leone2026talentaugmentinglayer,
author = {Leone, Angelo},
title = {Talent-Augmenting Layer: A Personalised AI Augmentation Layer for Workforce Development},
version = {0.2.0},
year = {2026},
url = {https://github.com/angelo-leone/talent-augmenting-layer},
license = {CC-BY-NC-SA-4.0}
}
Or see CITATION.cff for machine-readable citation metadata.
Built by Angelo Leone at PUBLIC. Powered by research from Buçinca, Acemoglu, Mollick, Drago & Laine. Every interaction should leave you more capable, not more dependent.
Copyright (c) 2026 Angelo Leone. Released under CC BY-NC-SA 4.0.