Server data from the Official MCP Registry
Live webcast transcription + vocal stress analysis (F0 jitter, hesitation) for earnings calls.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-ykshah1309-live-audio-intelligence-mcp": {
"args": [
"live-audio-intelligence-mcp"
],
"command": "uvx"
}
}
}

From the project's GitHub README.
MCP server for live financial webcast transcription and heuristic vocal stress analysis.
Turns any live webcast URL (earnings calls, CNBC, investor days) into a real-time pipeline that feeds an LLM two things simultaneously:
- a rolling transcript, produced by faster-whisper (CPU, int8)
- a heuristic vocal stress score from prosody analysis

Built on the Model Context Protocol. Exposes 4 tools over stdio; drop it into Claude Desktop, Claude Code, or any MCP client.
Sell-side analysts and hedge-fund PMs don't just want to read the earnings transcript after the fact — they want a real-time signal about how confident the CFO sounds when asked about Q4 guidance. This server wires a Whisper pipeline and a pYIN-based prosody analyzer directly into an LLM's tool loop, so the model can ask "what did the CEO just say about China?" and "how stressed did they sound saying it?" in the same conversation.
FFmpeg is a system binary, not a Python package. The ffmpeg-python
wrapper is not a dependency here — we drive the binary directly via
subprocess. You must install it yourself.
macOS (Homebrew):
brew install ffmpeg
Linux (Debian / Ubuntu):
sudo apt-get update && sudo apt-get install -y ffmpeg
Linux (Fedora / RHEL):
sudo dnf install -y ffmpeg
Windows — choose one:
# Option A — winget (Windows 10/11)
winget install --id=Gyan.FFmpeg -e
# Option B — Chocolatey
choco install ffmpeg
# Option C — Scoop
scoop install ffmpeg
Confirm it's on your PATH:
ffmpeg -version
If the command errors with "not found", reopen the terminal (PATH changes
don't propagate to already-open shells) or add the ffmpeg bin/ directory
to your PATH manually.
Requires Python ≥ 3.10.
pip install live-audio-intelligence-mcp
Or run it directly, without installing, using uv:
uvx live-audio-intelligence-mcp
The first run will download the faster-whisper base.en model (~140 MB) from
Hugging Face and cache it under ~/.cache/huggingface/.
Stdio MCP server:
live-audio-intelligence-mcp
Or equivalently:
python -m live_audio_intelligence_mcp
Add to claude_desktop_config.json:
{
"mcpServers": {
"live-audio-intelligence": {
"command": "live-audio-intelligence-mcp"
}
}
}
claude mcp add live-audio-intelligence -- live-audio-intelligence-mcp
| Tool | Purpose |
|---|---|
| `monitor_live_stream(url, disable_vad=False)` | Resolve the audio URL, spawn ffmpeg, start chunking + transcription. Returns a `stream_id`. |
| `get_rolling_transcript(stream_id, minutes_back=10)` | Get the last N minutes of concatenated transcript text. |
| `analyze_speaker_stress(stream_id, time_window_seconds=60)` | Run prosody analysis over the last N seconds of audio. Returns stress score, pitch jitter, hesitation ratio, pause stats, and a human-readable interpretation. |
| `stop_monitor(stream_id)` | Kill ffmpeg, clean up temp files, drop the transcript buffer. |
| Score | Interpretation |
|---|---|
| 0–20 | Confident, fluent delivery |
| 20–45 | Normal variation |
| 45–75 | Elevated stress — worth monitoring |
| 75–100 | High stress — potential market-moving signal |
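As a sketch of the band mapping above (the function name and exact boundary handling are illustrative, not the package's API):

```python
def interpret_stress_score(score: float) -> str:
    """Map a 0-100 composite stress score to the interpretation bands above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in [0, 100]")
    if score < 20:
        return "Confident, fluent delivery"
    if score < 45:
        return "Normal variation"
    if score < 75:
        return "Elevated stress - worth monitoring"
    return "High stress - potential market-moving signal"
```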
Composite of three features:

- pitch (F0) jitter
- hesitation ratio
- pause statistics
The three features are literature-backed correlates of speaker arousal (see pYIN for F0 tracking, and the broad "disfluency is a correlate of cognitive load" line of work). The weights and saturation points are hand-picked defaults, chosen so that a calm speaker scores in the 0–20 band on clean studio audio and visibly stressed speech scores ≥ 45 — they are not fit to any labeled dataset. Consumers who care about absolute numbers should recalibrate thresholds against their own recordings.
A synthetic-audio calibration harness lives at scripts/validate_stress_score.py. It generates controlled audio (smooth sine, jittered pitch, silence-padded speech) and asserts that the score responds in the expected direction. This is calibration evidence, not market-outcome validation.
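To make the "weighted, saturating features" idea concrete, here is a hypothetical sketch; the weights (0.5/0.3/0.2) and saturation points are invented for illustration and are not the package's actual defaults:

```python
def composite_stress_score(jitter: float, hesitation_ratio: float,
                           pause_ratio: float) -> float:
    """Illustrative composite score on a 0-100 scale.

    Each feature is normalized against a saturation point (beyond which it
    contributes no more), then weighted. All constants here are hypothetical.
    """
    def saturate(x: float, sat: float) -> float:
        return min(x / sat, 1.0)

    score = (0.5 * saturate(jitter, 0.05)            # F0 jitter saturates at 5%
             + 0.3 * saturate(hesitation_ratio, 0.3)  # filled-pause fraction
             + 0.2 * saturate(pause_ratio, 0.5))      # silent-pause fraction
    return 100.0 * score
```

With defaults shaped like this, clean fluent speech (near-zero jitter and pauses) lands in the 0–20 band, and saturated features push the score toward 100, matching the calibration goal described above.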
For speakerphone audio (most earnings Q&A), pass disable_vad=true to
monitor_live_stream. Silero VAD tends to aggressively classify muddy
conference-call speech as silence; disabling it preserves more of the speech
at the cost of transcribing a bit more ambient noise.
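For example, an MCP client's invocation of the tool for a speakerphone stream might carry arguments like this (a sketch; the URL is a placeholder):

```json
{
  "name": "monitor_live_stream",
  "arguments": {
    "url": "https://example.com/live/earnings-webcast",
    "disable_vad": true
  }
}
```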
By default the server caps concurrent streams at 4 (each stream holds an ffmpeg subprocess, a yt-dlp subprocess, a thread, and a temp directory). Override via env var for high-throughput deployments:
LAI_MAX_CONCURRENT_STREAMS=16 live-audio-intelligence-mcp
Exceeding the cap raises StreamLimitExceededError rather than silently
queuing.
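A minimal sketch of how such a fail-fast cap might be enforced; the `StreamManager` internals here are assumptions — only the env var and exception name come from this README:

```python
import os


class StreamLimitExceededError(RuntimeError):
    """Raised when the concurrent-stream cap is exceeded."""


class StreamManager:
    def __init__(self) -> None:
        # Cap defaults to 4; LAI_MAX_CONCURRENT_STREAMS overrides it.
        self.max_streams = int(os.environ.get("LAI_MAX_CONCURRENT_STREAMS", "4"))
        self.active: dict[str, object] = {}

    def register(self, stream_id: str) -> None:
        # Fail fast rather than queuing, matching the documented behavior.
        if len(self.active) >= self.max_streams:
            raise StreamLimitExceededError(
                f"cap of {self.max_streams} concurrent streams reached")
        self.active[stream_id] = object()
```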
┌──────────────────┐
URL ─────▶ │ yt-dlp resolve │
└────────┬─────────┘
│ audio URL
▼
┌──────────────────┐ ┌────────────────┐
│ ffmpeg (bg) │ ───▶ │ 15s WAV chunk │
│ 16kHz mono PCM │ │ queue │
└──────────────────┘ └───────┬────────┘
│
┌──────────────────┴────────────────┐
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ faster-whisper │ │ librosa.pyin │
│ (int8 / CPU) │ │ + pause detect │
└────────┬─────────┘ └────────┬─────────┘
│ rolling transcript │ stress score
▼ ▼
┌────────────── MCP stdio ───────────────┐
│ LLM (Claude) — calls tools freely │
└────────────────────────────────────────┘
All blocking work (Whisper inference, ffmpeg I/O, librosa DSP) is dispatched
to threads via asyncio.to_thread so the MCP event loop stays responsive.
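A minimal illustration of that pattern, with a stand-in for the blocking Whisper call (function names here are hypothetical, not the server's actual code):

```python
import asyncio
import time


def transcribe_chunk(path: str) -> str:
    """Stand-in for a blocking, CPU-bound Whisper inference call."""
    time.sleep(0.05)  # simulate blocking work
    return f"transcript of {path}"


async def handle_tool_call(chunk_path: str) -> str:
    # Blocking work runs in a worker thread so the stdio event loop
    # keeps servicing other tool calls in the meantime.
    return await asyncio.to_thread(transcribe_chunk, chunk_path)


async def main() -> list[str]:
    # Two chunks processed concurrently without blocking the loop.
    return await asyncio.gather(
        handle_tool_call("chunk_001.wav"),
        handle_tool_call("chunk_002.wav"),
    )
```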
git clone https://github.com/ykshah1309/live-audio-intelligence-mcp
cd live-audio-intelligence-mcp
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -e ".[dev]"
pytest
live-audio-intelligence-mcp
The pytest suite in tests/ covers the pure-Python logic that doesn't require network or ffmpeg (for example, StreamManager error handling that raises ValueError / RuntimeError). Run it with:
pytest -q
python scripts/validate_stress_score.py
This generates synthetic audio with known acoustic properties and verifies the stress score responds in the expected direction. It's a sanity check for the weighting heuristics — not a replacement for empirical validation against real earnings-call outcomes.
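The direction-check idea can be illustrated with a toy jitter feature; this is an assumed sketch, not the script's actual code:

```python
import numpy as np


def f0_jitter(f0: np.ndarray) -> float:
    """Mean absolute relative change between consecutive voiced F0 frames."""
    f0 = f0[f0 > 0]  # keep voiced frames only
    diffs = np.abs(np.diff(f0)) / f0[:-1]
    return float(np.mean(diffs))


# A flat pitch contour vs. the same contour with added frame-to-frame noise:
# the jitter feature should respond in the expected direction (increase).
smooth = np.full(200, 120.0)                     # flat 120 Hz contour
rng = np.random.default_rng(0)
jittered = 120.0 + rng.normal(0.0, 5.0, 200)     # same mean, noisy
assert f0_jitter(jittered) > f0_jitter(smooth)
```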
- ffmpeg: command not found — ffmpeg isn't on PATH. See the install section above. On Windows, reopen your terminal after installing.
- yt-dlp could not resolve URL — The site isn't supported by yt-dlp or the URL is malformed. Test with yt-dlp -F <url> from the command line; if that fails, the server will too.
- Whisper downloads hang on first run — The ~140 MB model download goes to ~/.cache/huggingface/. Check your network and Hugging Face access.
- "Insufficient voiced frames" in stress output — The audio window is mostly silence or noise. Usually means the stream is still buffering; wait 30s and retry. For speakerphone Q&A, start the monitor with disable_vad=true.
See CONTRIBUTING.md.
See CHANGELOG.md.
MIT — see LICENSE.