MCP Server

Chamade runs its own hosted MCP server. You configure your client once with the URL and your API key — no local server to install, no process to keep running.

Overview

The Chamade MCP server is hosted at https://mcp.chamade.io/mcp/ and speaks the Streamable HTTP transport from the MCP spec. It exposes 15 tools that let an AI agent join voice meetings, speak, read transcripts, handle DMs, and upload files across every supported platform. Auth is either a standard Authorization: Bearer chmd_* header (same API key as the REST API, best for config-file clients) or the MCP OAuth 2.1 flow with Dynamic Client Registration (used by claude.ai Custom Connectors — paste the URL, sign in, done).

The default URL runs in stateless mode: each tool call is self-contained, chamade restarts are transparent, and DMs + call events are read by long-polling chamade_inbox. Opt into stateful mode by appending ?stateful to the URL — that enables persistent sessions and real-time push via notifications/claude/channel for Claude Code launched with the channel flag. See Push events and Channel Mode for details.

For voice calls, the recommended path is hosted STT/TTS with BYOK: drop your own ElevenLabs / Deepgram / OpenAI / Cartesia key in dashboard → Voice providers, and Chamade runs the full speech pipeline with it — free on Chamade's side, you only pay your provider. Transcripts arrive as call_transcript events and chamade_call_say speaks into the meeting. See Voice providers for setup. If you prefer to run your own speech stack in-process, the raw-PCM audio WebSocket stays available.

Setup

You need an API key from the Dashboard (starts with chmd_). Pick the config that matches your client.

Option A — HTTP direct (recommended)

For clients that support the Streamable HTTP MCP transport: Claude Desktop (recent), Claude Code, Cursor, Windsurf, and a growing list of others. Zero dependencies, zero install, one endpoint.

Add this block to your client's MCP config file:

```json
{
  "mcpServers": {
    "chamade": {
      "type": "http",
      "url": "https://mcp.chamade.io/mcp/",
      "headers": { "Authorization": "Bearer chmd_your_key_here" }
    }
  }
}
```

Option B — stdio shim (legacy stdio-only clients)

For MCP clients that only support the older stdio transport, use the @chamade/mcp-server npm package. Since v3 it's a thin wrapper that bridges stdio to our hosted HTTP endpoint under the hood — same tools, same push events, same latency. Zero drift with the hosted surface because there is no parallel stdio implementation to drift against.

```json
{
  "mcpServers": {
    "chamade": {
      "command": "npx",
      "args": ["-y", "@chamade/mcp-server@3"],
      "env": { "CHAMADE_API_KEY": "chmd_your_key_here" }
    }
  }
}
```

Requires Node.js 18+. The shim pins mcp-remote as a dependency so the first run installs it locally — subsequent launches are instant.

Option C — mcp-remote direct

If you prefer the community stdio proxy with no Chamade branding, you can invoke mcp-remote directly. Equivalent to Option B but with a more verbose config and a community-maintained package:

```json
{
  "mcpServers": {
    "chamade": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.chamade.io/mcp/",
        "--transport", "http-only",
        "--header", "Authorization:Bearer chmd_your_key_here"
      ]
    }
  }
}
```

Option D — OAuth sign-in (claude.ai Custom Connectors)

For claude.ai web and other hosts that add MCP servers through an in-browser sign-in flow rather than a config file: paste the server URL alone — no API key needed. Chamade runs an OAuth 2.1 authorization server (Dynamic Client Registration + PKCE) so the client registers itself on the fly, the user approves once in a consent screen at chamade.io/mcp-auth/consent, and an opaque access token is issued. Token rotates on refresh; the consent grant can be revoked at any time from the dashboard.

claude.ai → Settings → Connectors → Add custom connector
https://mcp.chamade.io/mcp/

Works the same way on any MCP client that follows the MCP OAuth 2.1 authorization spec (RFC 9728 Protected Resource Metadata + RFC 7591 DCR + RFC 8414/OIDC discovery). The chmd_* static bearer (Options A–C above) remains supported in parallel — pick whichever fits your client.

Polling loop (non-Claude-Code clients)

Claude Code is the only client that currently handles Chamade's push channel mode natively. On every other MCP client (Claude Desktop, Cursor, Windsurf, custom harnesses), the agent must poll chamade_inbox to pick up inbound DMs in near real-time. The loop is one tool call with long-polling — drop the paragraph below into your agent's system prompt and you're done. Copy it from /llms-full.txt if you'd rather point the client there than paste by hand.

System prompt snippet
You have access to the Chamade MCP tools. Chamade delivers inbound DMs via polling on this client. On startup, call chamade_account once to get the `last_message_cursor` field from the response and remember it. Then, between user turns (or in a dedicated background loop if your harness supports one), call chamade_inbox with that cursor AND wait=55 — the request blocks server-side up to 55 seconds and returns as soon as a new DM arrives (or 55s expires). Save the new `last_message_cursor` from each response for the next call. For every conversation in `.conversations` with new messages, reply via chamade_dm_chat(platform=…, text=…) within 60 seconds — Chamade auto-sends a typing indicator while you think, and a fallback "no response received" message if you miss the window. When a `whatsapp_window` block reports `open=false`, your send will return HTTP 202 queued and a re-engagement template is fired automatically; nothing extra to do on your side. Start the poll loop immediately on startup and keep it running indefinitely — only stop if the user explicitly asks you to. Every idle moment should be spent in a chamade_inbox long-poll.

For Claude Code with the channel flag, this is unnecessary — events arrive as push notifications and the agent reacts to each one. See Channel Mode.
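For harnesses where you drive the loop from host code rather than the system prompt, the same cursor-and-long-poll pattern can be sketched in Python. This is a sketch, not a client: `call_tool` is a stub standing in for whatever tool-call method your MCP client library exposes, and the response shapes (`last_message_cursor`, `conversations`) are assumed from the snippet above.

```python
def call_tool(name, **args):
    """Stub standing in for your MCP client's tool-call method.

    Simulates one chamade_account bootstrap and one chamade_inbox
    long-poll that returns a single new DM.
    """
    if name == "chamade_account":
        return {"last_message_cursor": "cur_0"}
    if name == "chamade_inbox":
        return {
            "last_message_cursor": "cur_1",
            "conversations": [
                {"platform": "telegram", "messages": ["hi there"]},
            ],
        }
    raise ValueError(f"unknown tool: {name}")


def poll_once(cursor):
    """One long-poll iteration: blocks server-side up to 55 s and
    returns the new cursor after handling conversations with new DMs."""
    resp = call_tool("chamade_inbox", last_message_cursor=cursor, wait=55)
    for convo in resp.get("conversations", []):
        for text in convo["messages"]:
            # In a real agent: reply via chamade_dm_chat within 60 s.
            print(f"[{convo['platform']}] {text}")
    return resp["last_message_cursor"]


# Bootstrap the cursor once, then long-poll between user turns.
cursor = call_tool("chamade_account")["last_message_cursor"]
cursor = poll_once(cursor)  # production: while True: cursor = poll_once(cursor)
```

The point of the delta pattern: each response's cursor feeds the next request, so a crash or restart loses at most one in-flight poll, never a message.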

Example conversation

Once configured, your AI agent can handle meetings through natural language:

You: Join this Teams meeting: https://teams.microsoft.com/l/meetup-join/...
Tool: chamade_call_join → call_id: "abc123", state: "connecting"
Your STT: [Alice] OK let's start the sprint review.
Agent: Alice started the sprint review. Let me confirm in chat.
Tool: chamade_call_chat → "Got it, I'm noting this down."

Tools reference

| Tool | Description |
| --- | --- |
| `chamade_call_join` | Join a voice meeting. Returns a call_id, capabilities, and — for voice calls — an audio block describing the raw-PCM WebSocket endpoint you can wire to your own STT/TTS stack as the alternative to hosted STT/TTS. With an enabled provider preset, transcripts start flowing automatically and chamade_call_say works. |
| `chamade_call_chat` | Send a text chat message in the meeting. Works on all platforms, no audio involved. |
| `chamade_call_status` | Get call status. In BYO audio mode, transcripts come from your own STT client — this only returns transcript deltas when hosted STT is enabled (BYOK: add a provider key in dashboard → Voice providers). Polling fallback: call in a loop (delta pattern, only new lines each time) when your client doesn't support channel mode. |
| `chamade_call_accept` | Answer a ringing inbound call (SIP, Teams DM call, etc.). |
| `chamade_call_refuse` | Refuse/reject a ringing inbound call. |
| `chamade_call_typing` | Send a typing indicator in meeting chat. |
| `chamade_call_leave` | Hang up and leave the meeting. |
| `chamade_call_list` | List all active calls. Polling fallback: call periodically to detect new ringing inbound calls when your client doesn't support channel mode. |
| `chamade_call_say` | [BYOK] Speak text via hosted TTS. Requires a TTS preset in dashboard → Voice providers (BYOK — you bring the ElevenLabs / Deepgram / … key, Chamade runs the pipeline for free). Returns 400 when no preset is configured — in that case synthesize locally and push PCM on the call's audio WebSocket. |
| `chamade_call_stop_speaking` | [BYOK] Cancel any in-flight hosted-TTS utterance (barge-in / self-interrupt). No-op if nothing is currently being spoken. Use when the user starts talking over the agent, or when the agent decides to abandon what it was saying. |
| `chamade_inbox` | Check DM conversations (Discord, Telegram, Teams, WhatsApp, Slack, NC Talk). Three modes: snapshot, per-platform detail, delta with optional long-poll. Shows the WhatsApp 24h window state inline. Polling fallback: pass last_message_cursor to get only new messages since last call, and wait=55 to long-poll server-side up to 55 s for near-real-time latency without channel mode. |
| `chamade_dm_chat` | Send a DM message by platform. On WhatsApp outside the 24h window, returns HTTP 202 with {status: "queued"} and auto-fires a re-engagement template; queued messages flush automatically when the user replies. |
| `chamade_dm_typing` | Send a typing indicator in DM by platform. |
| `chamade_file_upload_url` | Mint a short-lived (5 min) pre-signed URL your agent can curl -F file=@path to — no auth header required. Returns a file_id you then pass in attachments: [{file_id}] on chamade_dm_chat or chamade_call_chat. The correct pattern for any non-trivial local file: a shell-streamed upload keeps the tool call tiny (~80 chars) instead of forcing the model to emit every base64 character of the payload. |
| `chamade_account` | Check account status — plan, the features block (which features are ready vs byok), per-platform readiness + capabilities, and the identity map (agent handle + operator handle per platform). Call this first on cold start to bootstrap. |

Resources

| URI template | Description |
| --- | --- |
| `chamade://calls/{call_id}/transcript` | [BYOK] Live transcript from Chamade's hosted STT. Only populated when hosted STT is enabled for your account (BYOK — add an ElevenLabs / Deepgram key in dashboard → Voice providers). In BYO audio mode (no preset), your own STT client is the source of truth — this resource will return empty. |
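Reading that resource is a standard MCP resources/read request. A sketch of the JSON-RPC message, assuming a call_id of abc123 returned by an earlier chamade_call_join (the id field is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "resources/read",
  "params": { "uri": "chamade://calls/abc123/transcript" }
}
```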

Audio streaming (BYO)

MCP is a control plane, not a data plane — it cannot stream audio. This is architectural, not a Chamade limitation: JSON-RPC 2.0 has no primitive for continuous binary streams, so every voice infrastructure in the ecosystem follows the same pattern — raw WebSocket (or WebRTC) for audio, a text protocol for control.

Chamade follows the same split: MCP tools drive the call (chamade_call_join, chamade_call_chat, etc.), and the raw PCM flows over a separate WebSocket described in the REST API reference. Your agent's host code connects to both and pipes bytes between the call WebSocket and your own STT/TTS stack (OpenAI Realtime, LiveKit Agents, Pipecat, Deepgram Voice Agent, or a manual cascade). The audio block returned by chamade_call_join tells you exactly what stream_url, sample rate, and frame format to expect. For the hosted STT/TTS alternative, see Voice providers.
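The pacing math for that pipe is simple but easy to get wrong. A sketch with illustrative defaults (16 kHz, 16-bit mono, 20 ms frames) — the real values come from the audio block that chamade_call_join returns, so treat these constants as placeholders:

```python
# Frame size for raw PCM: sample_rate x bytes_per_sample x channels x duration.
SAMPLE_RATE = 16_000   # Hz — illustrative; read the real value from the audio block
BYTES_PER_SAMPLE = 2   # 16-bit PCM
CHANNELS = 1           # mono
FRAME_MS = 20          # one binary WebSocket message per 20 ms frame

frame_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * FRAME_MS // 1000
frames_per_second = 1000 // FRAME_MS

print(frame_bytes)        # bytes per frame
print(frames_per_second)  # frames per second
```

With these numbers each frame is 640 bytes and your host code must ship 50 of them per second; falling behind produces audible gaps, racing ahead overflows the server-side jitter buffer.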

Push events (channel mode)

Chamade's MCP server supports real-time push events for channel-aware clients (currently Claude Code): new DMs, incoming calls, and call state changes arrive as notifications/claude/channel on the open MCP session, no polling required. Two opt-ins required, in order:

  1. Append ?stateful to your MCP URL in .mcp.json (so chamade issues a persistent session).
  2. Launch Claude Code with --dangerously-load-development-channels server:chamade (so Claude Code consumes the push notifications).

Without both, the default is restart-proof polling via chamade_inbox — which is the right default for any client that doesn't specifically need push. See the dedicated Channel Mode page for the full setup, flag details, event format, and workflow patterns.
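For reference, a minimal sketch of step 1 applied to the Option A config — the only change is the query parameter on the URL:

```json
{
  "mcpServers": {
    "chamade": {
      "type": "http",
      "url": "https://mcp.chamade.io/mcp/?stateful",
      "headers": { "Authorization": "Bearer chmd_your_key_here" }
    }
  }
}
```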

Prefer the REST API?

The MCP server is a thin wrapper around the REST API. Every MCP tool maps one-to-one to an HTTP endpoint. If your agent doesn't speak MCP (or you want direct control), use https://chamade.io/api/* with the same X-API-Key or Authorization: Bearer header.

Troubleshooting

Connector doesn't appear in Claude after adding it

Quit the client completely and relaunch (on macOS Claude Desktop, "close window" is not enough — use ⌘Q). Then open the MCP / tools menu in a new conversation; chamade should appear with 15 tools. If it still doesn't, re-check the config block's JSON syntax and the endpoint URL, then work through the sections below.

401 Unauthorized on every tool call

The bearer token was rejected. Check that the header reads exactly `Authorization: Bearer chmd_…` (the `Bearer ` prefix is easy to drop), that the key matches the one shown in the Dashboard, and that it hasn't been revoked there.

403 “Invalid Origin header”

The transport layer rejects unknown browser origins as a DNS-rebinding defence. Accepted origins: claude.ai, claude.com, chatgpt.com, chat.openai.com, chat.mistral.ai, www.opera.com, neon.opera.com, and mcp.chamade.io itself. Native clients (Claude Desktop, Claude Code CLI, Cursor, Windsurf, stdio shim) don't send an Origin header and are always accepted. If you're building a new browser client and hit this, email [email protected] to add your origin.

Session drops after a few minutes / client stuck on “Connection lost”

You're running in stateful mode (?stateful) against a server that restarted or cycled, and your client isn't auto-reinitialising on HTTP 404 for the stale session. Two fixes: drop ?stateful from the URL to fall back to the restart-proof stateless default, or restart your client so it initialises a fresh session.

OAuth popup fails / never redirects back

Claude.ai Custom Connectors and Claude Desktop "Add custom connector" both use the hosted OAuth 2.1 AS. If the consent page hangs or the popup fails to redirect, check that your browser isn't blocking popups for the site, then remove the connector and add it again to restart the flow.

I joined a call but the agent is silent / no transcripts

Voice calls have two modes: hosted STT/TTS (requires a BYOK provider preset in dashboard → Voice providers; without one, chamade_call_say returns 400 and no transcripts flow) and BYO audio (your own speech stack wired to the call's raw-PCM WebSocket). If neither is set up, the agent joins but can neither hear nor speak.

Google Meet specifically: audio_in relies on Google's Meet Media API (Developer Preview) and requires every meeting participant to be enrolled in Google's program. audio_out is not supported at all on Meet today.

DMs are received but my agent never sees them

Check `features.channel` in the chamade_account response. If your client declared experimental.claude/channel in the MCP handshake, DMs are pushed as channel notifications. Otherwise the agent has to poll chamade_inbox(last_message_cursor=…, wait=55) in a loop. If neither path runs, no events reach the model — the REST webhook on the Chamade side received the message, but nothing forwards it into your conversation.

WhatsApp message returns status: queued instead of delivered

Expected behaviour — the 24h WhatsApp customer-service window is closed for that conversation. Chamade automatically fires a Meta-approved re-engagement template; your queued messages flush as soon as the user replies, and you'll receive a dm_delivered event with the message IDs. No retry needed on your side.
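Host-side, the queued case needs no special handling. A minimal sketch, with `send_dm` as a stub standing in for chamade_dm_chat and the response shape assumed from the behaviour described above:

```python
def send_dm(platform, text):
    """Stub for chamade_dm_chat. Simulates a WhatsApp send outside
    the 24h window: HTTP 202 with a queued status."""
    return {"http_status": 202, "status": "queued"}

def dm_with_queue_handling(platform, text):
    resp = send_dm(platform, text)
    if resp.get("status") == "queued":
        # Expected when the 24h window is closed: Chamade has already
        # fired the re-engagement template, and the message flushes
        # when the user replies. Nothing to retry.
        return "queued"
    return "delivered"

print(dm_with_queue_handling("whatsapp", "Your order shipped!"))
```

The key point: a 202/queued response is a success, not an error, so retry logic here would only produce duplicate messages once the window reopens.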

Still stuck?

Email [email protected] with (a) the client you're using and its version, (b) the exact request URL, (c) the response status + body (if any), and (d) a timestamp — we can correlate against server logs.