MeetStream Guide: Configure Post‑Call Transcription

This guide explains how to enable post‑call transcription for your MeetStream bot, choose a provider, and fetch the transcript after the call ends.

Supported providers:

  • assemblyai
  • deepgram
  • jigsawstack
  • meeting_captions (native captions from Google Meet / Microsoft Teams)

1) Enable transcription in recording_config while creating a bot

When calling Create Bot / Create Agent, include a recording_config object with a transcript section.

At minimum, you’ll provide:

```json
{
  "recording_config": {
    "transcript": {
      "provider": { ... }
    }
  }
}
```

You can use exactly one provider under transcript.provider.
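The nesting above is easy to get wrong by hand. Here is a minimal Python sketch that builds the payload shape shown in this guide; how you send it (endpoint URL, auth header) depends on your MeetStream client and is not assumed here:

```python
import json

def build_recording_config(provider_name: str, provider_options: dict) -> dict:
    """Wrap a single provider config in recording_config.transcript.provider."""
    return {
        "recording_config": {
            "transcript": {
                "provider": {provider_name: provider_options},
            }
        }
    }

# Example: a Deepgram configuration (options from section 2B below).
payload = build_recording_config("deepgram", {"model": "nova-3", "language": "en"})
print(json.dumps(payload, indent=2))
```

Merge this dict into the rest of your Create Bot body before sending the request.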


2) Provider payload examples

A) JigsawStack (jigsawstack)

```json
"transcript": {
  "provider": {
    "jigsawstack": {
      "language": "auto",
      "translate": false,
      "by_speaker": true
    }
  }
}
```

B) Deepgram (deepgram)

```json
"transcript": {
  "provider": {
    "deepgram": {
      "model": "nova-3",
      "language": "en",
      "punctuate": true,
      "smart_format": true,
      "diarize": true,
      "paragraphs": true,
      "numerals": true,
      "filler_words": false,
      "keywords": ["MeetStream", "recording", "transcript"],
      "utterances": true,
      "utt_split": 0.8,
      "detect_language": false,
      "search": ["MeetStream", "recording"],
      "tag": ["custom"]
    }
  }
}
```

C) AssemblyAI (assemblyai)

```json
"transcript": {
  "provider": {
    "assemblyai": {
      "speech_models": ["universal-2"],
      "language_code": "en_us",
      "speaker_labels": true,
      "punctuate": true,
      "format_text": true,
      "filter_profanity": false,
      "redact_pii": false,
      "keyterms_prompt": ["MeetStream"],
      "auto_chapters": false,
      "entity_detection": false
    }
  }
}
```

D) Meeting captions (native Google Meet / Teams captions)

If you want the platform’s native captions (Google Meet / Microsoft Teams), use:

```json
"transcript": {
  "provider": {
    "meeting_captions": {}
  }
}
```

For meeting_captions, you do not use transcript_id to fetch captions.
Instead, you’ll fetch the bot details and download the caption file from the returned S3 link (see below).


3) Getting transcript_id after creating the bot

When you create a bot with one of these providers:

  • assemblyai
  • deepgram
  • jigsawstack

…the Create Bot response will include a transcript_id.

Store that transcript_id — you’ll use it to retrieve the transcript after the call ends.


4) Fetch the post‑call transcript using transcript_id

A) Standard transcript (formatted)

```
GET /api/v1/transcript/{{transcript_id}}/get_transcript
```

B) Raw transcript

```
GET /api/v1/transcript/{{transcript_id}}/get_transcript?raw=True
```

Replace {{transcript_id}} with the value returned in the Create Bot response.
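A small helper can build both URL variants. The base URL below is a placeholder, not the actual MeetStream host; the path and the `?raw=True` casing are taken verbatim from the endpoints above:

```python
def transcript_url(base_url: str, transcript_id: str, raw: bool = False) -> str:
    """Build the get_transcript URL; raw=True appends the query flag
    exactly as shown in this guide (?raw=True)."""
    url = f"{base_url}/api/v1/transcript/{transcript_id}/get_transcript"
    return f"{url}?raw=True" if raw else url

# Usage (placeholder host):
print(transcript_url("https://api.example.com", "abc123"))
print(transcript_url("https://api.example.com", "abc123", raw=True))
```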


5) Fetch native captions (meeting_captions provider)

For captions from Google Meet / Teams, you’ll retrieve the caption file using bot details.

Bot detail endpoint

```
GET /api/v1/bots/{{bot_id}}/detail
```

In this response, you will receive an S3 download link for the caption file (if captions are available).

Captions availability depends on the meeting platform and whether captions were enabled during the meeting.
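The caption flow above can be sketched in Python with the standard library. The auth header name and the response field holding the S3 link (`caption_url` here) are assumptions for illustration; check the actual bot detail response shape:

```python
import json
import urllib.request

def bot_detail_url(base_url: str, bot_id: str) -> str:
    """URL for the bot detail endpoint shown above."""
    return f"{base_url}/api/v1/bots/{bot_id}/detail"

def download_captions(base_url: str, bot_id: str, api_key: str, out_path: str) -> bool:
    """Fetch bot details, then download the caption file if a link is present.
    Returns False when captions are not available for this meeting."""
    req = urllib.request.Request(
        bot_detail_url(base_url, bot_id),
        headers={"Authorization": f"Token {api_key}"},  # assumed header format
    )
    with urllib.request.urlopen(req) as resp:
        detail = json.load(resp)
    link = detail.get("caption_url")  # hypothetical field name for the S3 link
    if not link:
        return False
    urllib.request.urlretrieve(link, out_path)
    return True
```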


6) End-to-end flow

  1. Create a bot with recording_config.transcript.provider set.
  2. Receive bot lifecycle webhooks (bot.joining, bot.inmeeting, bot.stopped).
  3. When you receive bot.stopped:
    • For assemblyai/deepgram/jigsawstack: fetch transcript using transcript_id.
    • For meeting_captions: call /api/v1/bots/{{bot_id}}/detail and download captions via the S3 link.
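The branch on bot.stopped can be expressed as a small dispatcher. The webhook field names used here ("type", "transcript_id") are illustrative assumptions about the event payload:

```python
def post_call_action(event: dict) -> str:
    """Map a lifecycle webhook event to the fetch path described above."""
    if event.get("type") != "bot.stopped":
        return "ignore"  # bot.joining / bot.inmeeting need no fetch
    if event.get("transcript_id"):
        return "fetch_transcript"  # assemblyai / deepgram / jigsawstack
    return "fetch_bot_detail"      # meeting_captions: download via the S3 link
```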

Troubleshooting tips

  • If the transcript is not ready immediately after bot.stopped, retry after a short delay.
  • Make sure your provider configuration values (like model/language) are valid for that provider.
  • If using meeting_captions, confirm captions were enabled and supported for the platform.
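For the first tip, a generic retry helper is enough; this sketch assumes your fetch callable returns None while the transcript is still processing:

```python
import time

def fetch_with_retry(fetch, attempts: int = 5, delay_s: float = 3.0):
    """Call fetch() until it returns a non-None result, sleeping between tries."""
    for attempt in range(attempts):
        result = fetch()
        if result is not None:
            return result
        if attempt < attempts - 1:
            time.sleep(delay_s)
    raise TimeoutError("transcript was not ready after retries")
```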
