# MeetStream Guide: Configure Post‑Call Transcription

This guide explains how to enable **post‑call transcription** for your MeetStream bot, choose a provider, and fetch the transcript after the call ends.

Supported providers:
- `assemblyai`
- `deepgram`
- `jigsawstack`
- `sarvam`
- `meetstream`
- `meeting_captions` (native captions from Google Meet / Microsoft Teams)

---
<div style="position: relative; padding-bottom: 56.25%; height: 0;"><iframe src="https://www.loom.com/embed/64eb54fa1a7d4e07b38450cf25495002" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>
---

## 1) Enable transcription in `recording_config` while creating a bot

When calling **Create Bot**, include a `recording_config` object with a `transcript` section.

At minimum, you’ll provide:

```json
{
  "recording_config": {
    "transcript": {
      "provider": { ... }
    }
  }
}
```

> You can use **exactly one provider** under `transcript.provider`.
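Since the provider object must contain exactly one key, it can help to assemble the payload programmatically. The sketch below is illustrative only; `build_recording_config` is a hypothetical helper, and the Deepgram options shown are just one example of a valid provider body:

```python
import json

def build_recording_config(provider_name: str, provider_options: dict) -> dict:
    """Return a recording_config with exactly one transcript provider."""
    return {
        "recording_config": {
            "transcript": {
                "provider": {provider_name: provider_options}
            }
        }
    }

# Example: a Deepgram configuration (any one supported provider key works here).
payload = build_recording_config("deepgram", {"model": "nova-3", "language": "en"})
print(json.dumps(payload, indent=2))
```

Merge this dict into the rest of your Create Bot request body before sending it.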

---

## 2) Provider payload examples

### A) JigsawStack (`jigsawstack`)

```json
"transcript": {
  "provider": {
    "jigsawstack": {
      "language": "auto",
      "translate": false,
      "by_speaker": true
    }
  }
}
```

---

### B) Deepgram (`deepgram`)

```json
"transcript": {
  "provider": {
    "deepgram": {
      "model": "nova-3",
      "language": "en",
      "punctuate": true,
      "smart_format": true,
      "diarize": true,
      "paragraphs": true,
      "numerals": true,
      "filler_words": false,
      "keywords": ["MeetStream", "recording", "transcript"],
      "utterances": true,
      "utt_split": 0.8,
      "detect_language": false,
      "search": ["MeetStream", "recording"],
      "tag": ["custom"]
    }
  }
}
```

---

### C) AssemblyAI (`assemblyai`)

```json
"transcript": {
  "provider": {
    "assemblyai": {
      "speech_models": ["universal-2"],
      "language_code": "en_us",
      "speaker_labels": true,
      "punctuate": true,
      "format_text": true,
      "filter_profanity": false,
      "redact_pii": false,
      "keyterms_prompt": ["MeetStream"],
      "auto_chapters": false,
      "entity_detection": false
    }
  }
}
```

---

### D) Sarvam AI (`sarvam`)

```json
"transcript": {
  "provider": {
    "sarvam": {
      "model": "saaras:v3",
      "language_code": "en-IN",
      "mode": "transcribe",
      "with_diarization": true
    }
  }
}
```

---

### E) MeetStream AI (`meetstream`)

```json
"transcript": {
  "provider": {
    "meetstream": {
      "language": "auto",
      "translate": false
    }
  }
}
```

---

### F) Meeting captions (native Google Meet / Teams captions)

If you want the **platform’s native captions** (Google Meet / Microsoft Teams), use:

```json
"transcript": {
  "provider": {
    "meeting_captions": {}
  }
}
```

> For `meeting_captions`, you **do not** use `transcript_id` to fetch captions.  
> Instead, you’ll fetch the bot details and download the caption file from the returned S3 link.

---

## 3) Getting `transcript_id` after creating the bot

When you create a bot with one of these providers:
- `assemblyai`
- `deepgram`
- `jigsawstack`
- `sarvam`
- `meetstream`

the **Create Bot** response will include a `transcript_id`.

Store that `transcript_id` — you’ll use it to retrieve the transcript after the call ends.
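As a minimal sketch, extracting and validating the ID from the response body might look like this (the example response is fabricated; only the `transcript_id` field is documented above, and the rest of the response shape is an assumption):

```python
import json

# Hypothetical Create Bot response body for illustration.
create_bot_response = json.loads('{"bot_id": "bot_123", "transcript_id": "tr_456"}')

transcript_id = create_bot_response.get("transcript_id")
if transcript_id is None:
    # transcript_id is only returned for the five providers listed above,
    # not for meeting_captions.
    raise ValueError("No transcript_id in response; check the configured provider")
```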

---

## 4) Fetch the post‑call transcript using `transcript_id`

### A) Standard transcript (formatted)

```http
GET /api/v1/transcript/{{transcript_id}}/get_transcript
```

### B) Raw transcript

```http
GET /api/v1/transcript/{{transcript_id}}/get_transcript?raw=True
```

Replace `{{transcript_id}}` with the value you received in the Create Bot response.
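A small helper can build either URL variant; the base URL here is a placeholder for your MeetStream API host, and `transcript_url` is a hypothetical name, not part of any SDK:

```python
def transcript_url(base_url: str, transcript_id: str, raw: bool = False) -> str:
    """Build the get_transcript URL, optionally requesting the raw form."""
    url = f"{base_url}/api/v1/transcript/{transcript_id}/get_transcript"
    return f"{url}?raw=True" if raw else url

# Example usage with a placeholder host:
url = transcript_url("https://api.meetstream.example", "tr_456", raw=True)
```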

---

## 5) Fetch native captions (`meeting_captions` provider)

For captions from Google Meet / Teams, you’ll retrieve the caption file using bot details.

### Bot detail endpoint

```http
GET /api/v1/bots/{{bot_id}}/detail
```

The response includes an S3 download link for the caption file (when captions are available).

> Caption availability depends on the meeting platform and whether captions were enabled during the meeting.
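A sketch of reading the link out of the bot detail response follows. The field name `caption_file_url` and the example URL are assumptions for illustration; check the actual response your account returns for the S3 link key:

```python
import json

# Hypothetical bot detail response body for illustration.
detail = json.loads(
    '{"bot_id": "bot_123", "caption_file_url": "https://s3.example.com/captions/bot_123.txt"}'
)

caption_url = detail.get("caption_file_url")  # assumed field name
if caption_url:
    print("Download captions from:", caption_url)
else:
    print("Captions not available for this bot")
```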

---

## 6) Recommended workflow (best practice)

1. Create bot with `recording_config.transcript.provider` set.
2. Receive bot lifecycle webhooks (`bot.joining`, `bot.inmeeting`, `bot.stopped`).
3. When you receive `bot.stopped`:
   - For `assemblyai/deepgram/jigsawstack/sarvam/meetstream`: fetch transcript using `transcript_id`.
   - For `meeting_captions`: call `/api/v1/bots/{{bot_id}}/detail` and download captions via the S3 link.
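The dispatch step in the workflow above can be sketched as follows. Tracking which provider each bot was created with is your responsibility (a plain dict stands in for a database here), and the webhook event shape is assumed to carry `type` and `bot_id` fields:

```python
PROVIDERS_WITH_TRANSCRIPT_ID = {
    "assemblyai", "deepgram", "jigsawstack", "sarvam", "meetstream",
}

def on_webhook(event: dict, bot_provider: dict) -> str:
    """Return which fetch path to take when a bot stops."""
    if event.get("type") != "bot.stopped":
        return "ignore"  # joining / inmeeting events need no transcript action
    provider = bot_provider.get(event["bot_id"])
    if provider in PROVIDERS_WITH_TRANSCRIPT_ID:
        return "fetch_transcript_by_id"
    if provider == "meeting_captions":
        return "fetch_captions_from_bot_detail"
    return "ignore"
```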

---

## Troubleshooting tips

- If the transcript is not ready immediately after `bot.stopped`, retry after a short delay.
- Make sure your provider configuration values (like model/language) are valid for that provider.
- If using `meeting_captions`, confirm captions were enabled and supported for the platform.
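The retry advice above can be implemented as a simple polling loop. This is a generic sketch, not a MeetStream SDK function: `fetch_fn` is any callable you supply that returns the transcript once it is ready and `None` while it is still processing:

```python
import time

def fetch_with_retry(fetch_fn, attempts: int = 5, delay_s: float = 5.0):
    """Poll fetch_fn until it returns a result or attempts are exhausted."""
    for _ in range(attempts):
        result = fetch_fn()
        if result is not None:
            return result
        time.sleep(delay_s)  # transcript may still be processing
    raise TimeoutError("Transcript not ready after retries")
```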

---

