Quick Start
Get your AI agent into a voice meeting in minutes. Pick the integration that fits your setup.
1. Get an API Key
Sign up at chamade.io/dashboard, then create an API key. It starts with chmd_ and is shown only once — store it securely.
2. Choose Your Integration
MCP Server (HTTP — recommended)
For Claude Desktop, Claude Code, Cursor, Windsurf, and any MCP client that supports the Streamable HTTP transport. Drop the snippet below in your MCP config file (.mcp.json, claude_desktop_config.json, .cursor/mcp.json…) and restart the client.
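A minimal Streamable HTTP entry might look like the following. The server URL and header-based auth shown here are assumptions — copy the exact snippet from your dashboard:

```json
{
  "mcpServers": {
    "chamade": {
      "type": "http",
      "url": "https://mcp.chamade.io/mcp",
      "headers": { "Authorization": "Bearer chmd_your_key_here" }
    }
  }
}
```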
Claude Code only: also launch with claude --dangerously-load-development-channels server:chamade --continue on every session to receive push events (incoming calls, DMs) in real time. No flag needed for the tools themselves — they work immediately.
Legacy stdio-only clients (older MCP clients without Streamable HTTP): use the @chamade/mcp-server@3 stdio shim — it's a thin wrapper around mcp-remote that bridges stdio to the same hosted HTTP endpoint. See the full MCP setup guide for the shim config.
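As a rough sketch only (the exact package arguments may differ; the full MCP setup guide is authoritative), a stdio shim entry typically takes this shape:

```json
{
  "mcpServers": {
    "chamade": {
      "command": "npx",
      "args": ["-y", "@chamade/mcp-server@3"]
    }
  }
}
```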
REST API
For any language or framework. Direct HTTP calls — no MCP client needed. Works great for pure-backend agents, non-LLM orchestration, webhooks, and anything that speaks HTTP.
Files & attachments: upload via POST /api/files (or mint a pre-signed, no-auth upload URL with chamade_file_upload_url), then pass attachments: [{file_id}] on chamade_dm_chat or chamade_call_chat. Sending and receiving are live on Discord, Telegram, Slack, WhatsApp (including voice notes), and Nextcloud Talk; attachments are capped at 25 MB. See Files API.
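A rough sketch of that upload-then-attach flow over plain HTTP. Only POST /api/files, the Bearer chmd_ key, and the attachments: [{file_id}] shape come from this page; the base URL, the filename query parameter, and the message field names are assumptions (real uploads may also be multipart):

```python
import json
import urllib.request

BASE = "https://chamade.io"     # assumed base URL; check your dashboard
API_KEY = "chmd_your_key_here"  # real keys start with chmd_

def upload_request(data: bytes, filename: str) -> urllib.request.Request:
    """Build (but don't send) a POST /api/files upload request."""
    return urllib.request.Request(
        f"{BASE}/api/files?filename={filename}",  # query param is an assumption
        data=data,
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="POST",
    )

def dm_payload(text: str, file_id: str) -> dict:
    """Message body attaching an uploaded file via attachments: [{file_id}]."""
    return {"message": text, "attachments": [{"file_id": file_id}]}

req = upload_request(b"hello", "notes.txt")
body = json.dumps(dm_payload("Here are the notes", "file_abc123"))
# urllib.request.urlopen(req) would perform the upload; the response's
# file_id then goes into the payload sent with chamade_dm_chat.
```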
Voice calls — hosted STT/TTS (recommended) or raw-PCM WebSocket
For voice, the recommended path is hosted STT/TTS with BYOK: add an ElevenLabs or Deepgram key once in dashboard → Voice providers, and Chamade runs the whole speech pipeline on its side using your key. Transcripts arrive as call_transcript events, chamade_call_say speaks into the meeting, and you never touch raw audio. There is no Chamade charge for this; you pay only your provider. See Voice providers.
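Whatever the transport, the hosted-voice loop reduces to: react to call_transcript events and answer through chamade_call_say. A minimal sketch, assuming events arrive as dicts with type and text fields — the handler signature and field names are illustrative, and how events are delivered (MCP push, webhook, polling) depends on your integration:

```python
def on_event(event: dict, say) -> None:
    """Handle one event; `say` stands in for the chamade_call_say tool."""
    if event.get("type") != "call_transcript":
        return  # ignore non-transcript events
    text = event.get("text", "")
    if "hello" in text.lower():
        say("Hi, I'm the meeting assistant.")

# Feed fake events through the handler to show the shape of the loop.
spoken = []
on_event({"type": "call_transcript", "text": "Hello everyone"}, spoken.append)
on_event({"type": "presence", "text": "someone joined"}, spoken.append)
```

In a real agent the branch on "hello" would be replaced by a call into your LLM, with its reply passed to chamade_call_say.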
The raw-PCM WebSocket is the alternative: use it when you'd rather run the speech layer in-process (OpenAI Realtime, LiveKit Agents, Pipecat, Deepgram Voice Agent, ElevenLabs, Cartesia, a Whisper cascade, …). POST /api/call returns an audio block either way.
In BYO-audio mode, your agent's host code (not the LLM itself) opens the stream_url in parallel with MCP/REST, pipes binary PCM frames in both directions between the call and your chosen STT/TTS, and lets the LLM drive the conversation via text.
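The host-side pump described above can be sketched like so, assuming 16-bit mono PCM at 16 kHz and the third-party websockets package. The sample rate, frame duration, and helper names are illustrative; only stream_url and the raw binary framing come from this page:

```python
import asyncio

SAMPLE_RATE = 16_000  # assumed; check the audio block returned by POST /api/call
FRAME_MS = 20         # assumed frame duration

def frame_size_bytes(sample_rate: int = SAMPLE_RATE, frame_ms: int = FRAME_MS) -> int:
    """Bytes per frame of 16-bit (2-byte) mono PCM."""
    return sample_rate * frame_ms // 1000 * 2

async def pump(stream_url: str, stt_sink, tts_source):
    """Pipe raw PCM both directions between the call and your speech stack."""
    import websockets  # third-party: pip install websockets

    async with websockets.connect(stream_url) as ws:
        async def downlink():
            # Call audio -> your STT (binary frames of raw PCM).
            async for frame in ws:
                await stt_sink(frame)

        async def uplink():
            # Your TTS -> call audio.
            async for frame in tts_source():
                await ws.send(frame)

        await asyncio.gather(downlink(), uplink())
```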
3. Connect Your Platforms
Some platforms require setup in the Dashboard before use:
| Platform | Setup needed |
|---|---|
| Discord | Works out of the box (shared bot) or add your own bot |
| Microsoft Teams | Connect Microsoft account (OAuth) |
| Google Meet | Connect Google account (OAuth) |
| Zoom | No setup — just pass a meeting URL |
| Telegram | Works out of the box or add your own bot |
| WhatsApp | No setup — invite the bot |
| Slack | Install the Slack app |
| Nextcloud Talk | Install addon + connect |
| SIP / Phone | Activate a phone number or bring your own trunk |
