# Live video streaming in MeetStream

This guide explains how to receive **live video** from a MeetStream bot over WebSocket while a meeting is in progress. You provide a URL when you create the bot; MeetStream connects to your server and streams fragmented MP4 (`fMP4`) data you can record or process.

---

## Platform support

Live video streaming is supported for **Google Meet** and **Microsoft Teams** meetings only. It is **not** available for other platforms (including Zoom) at this time.

---

## What you need

- A **create bot** request that sets `video_required` to `true` and includes `live_video_required.websocket_url` (see below).
- A **WebSocket server** you control that implements the message protocol in this document.
- For local development, a way to expose that server on the public internet with **`wss://`** (for example ngrok or cloudflared).

---

## What happens during a session

1. You create a bot with a `meeting_link` for Google Meet or Teams and pass your WebSocket URL in the payload.
2. After the bot joins and recording starts, MeetStream opens a WebSocket connection to your URL.
3. You receive JSON control messages (`video_stream_start`, periodic `video_latency_ping`, and `video_stream_end`) and **binary frames** containing fMP4 chunks in order.
4. You respond to each `video_latency_ping` with `video_latency_pong` so latency can be measured.
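Once pongs flow back, latency can be derived from the timestamps in them. A minimal sketch of the arithmetic (the helper name is ours, not part of any MeetStream SDK, and it assumes both clocks are reasonably in sync):

```python
def receive_latency_ms(ping_sent_at_ms: int, server_received_at_ms: int) -> int:
    """Approximate one-way delivery latency: the time between MeetStream
    sending a ping and your server handling it."""
    return server_received_at_ms - ping_sent_at_ms

# Using the example timestamps shown later in this guide:
latency = receive_latency_ms(1743500000123, 1743500000189)  # 66 ms
```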

---

## Create bot payload

Use this shape when creating a bot in MeetStream:

```json
{
  "meeting_link": "https://...",
  "video_required": true,
  "live_video_required": {
    "websocket_url": "wss://your-server.example.com/"
  }
}
```

You can also use camelCase keys if your client prefers: `liveVideoRequired` and `websocketUrl`.

### WebSocket URL rules

- Must be a string starting with `ws://` or `wss://`.
- For production, use **`wss://`** (TLS).
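You can check these rules client-side before creating the bot. A small validation sketch (the function name is our own illustration):

```python
def is_valid_websocket_url(url) -> bool:
    """Return True if the value looks like an acceptable websocket_url:
    a string beginning with ws:// or wss://."""
    return isinstance(url, str) and url.startswith(("ws://", "wss://"))

assert is_valid_websocket_url("wss://your-server.example.com/")
assert not is_valid_websocket_url("https://your-server.example.com/")
```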

---

## WebSocket protocol

### Messages from MeetStream (text / JSON)

#### `video_stream_start`

Sent once after the connection is established.

```json
{
  "type": "video_stream_start",
  "bot_id": "bot-123",
  "codec": "h264",
  "audio_codec": "aac",
  "container": "fmp4",
  "width": 1920,
  "height": 1080,
  "framerate": 25,
  "audio_sample_rate": 44100,
  "audio_bitrate": "128k"
}
```

#### `video_latency_ping`

Sent periodically.

```json
{
  "type": "video_latency_ping",
  "bot_id": "bot-123",
  "seq": 42,
  "sent_at_ms": 1743500000123
}
```

#### `video_stream_end`

Sent when the stream is stopping.

```json
{
  "type": "video_stream_end",
  "bot_id": "bot-123",
  "duration_seconds": 152.7
}
```

### Messages from MeetStream (binary)

- Raw fMP4 bytes from the recorder. Append chunks **in order** to build a continuous stream or file.

### Messages you send back (text / JSON)

For every `video_latency_ping`, reply with:

```json
{
  "type": "video_latency_pong",
  "seq": 42,
  "sent_at_ms": 1743500000123,
  "server_received_at_ms": 1743500000189,
  "bot_id": "bot-123"
}
```

Echo the same `seq`, `sent_at_ms`, and `bot_id` from the ping; set `server_received_at_ms` to a millisecond timestamp when your server handled the ping.

---

## Implementing your WebSocket server

You can use any stack that speaks WebSocket. Your server should:

1. Accept connections on `ws://` or `wss://`.
2. Parse text frames as JSON.
3. On `video_stream_start`, prepare your output (file, buffer, pipeline).
4. On each `video_latency_ping`, send a `video_latency_pong` as above.
5. On binary frames, append bytes in order to your output.
6. On `video_stream_end` or disconnect, finalize and close your output.

### Minimal handling loop

1. Track state per connection (for example `bot_id`, open output handle).
2. **Text JSON:** handle `video_stream_start`, `video_latency_ping`, `video_stream_end` as described.
3. **Binary:** append to the current output if `video_stream_start` was already received.
4. **Disconnect:** flush and close resources.

### Production tips

- Terminate TLS in front of your app and expose **`wss://`** to MeetStream.
- Allow large WebSocket frames if your platform has limits.
- Process writes sequentially per stream so chunk order is preserved.
- Isolate streams per bot or session.
- Plan storage and backpressure for long meetings.
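The "sequential writes" tip matters most when your handler is asynchronous. One way to serialize writes per stream is a per-connection queue drained by a single writer task; a sketch under that assumption (writing to an in-memory buffer here instead of a file, with `None` as an end-of-stream sentinel):

```python
import asyncio
from io import BytesIO

async def writer(queue: asyncio.Queue, out: BytesIO) -> None:
    """Drain chunks one at a time so output order matches arrival order."""
    while True:
        chunk = await queue.get()
        if chunk is None:  # sentinel: stream ended
            break
        out.write(chunk)

async def demo() -> bytes:
    queue: asyncio.Queue = asyncio.Queue()
    out = BytesIO()
    task = asyncio.create_task(writer(queue, out))
    # Enqueue chunks as they arrive (named after typical fMP4 box types):
    for chunk in (b"ftyp", b"moov", b"moof", b"mdat"):
        await queue.put(chunk)
    await queue.put(None)
    await task
    return out.getvalue()

result = asyncio.run(demo())  # chunks land in arrival order
```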

---

## Reference implementations

### Client side — creating a bot with live video

Send a POST to the MeetStream API to create a bot with live video streaming enabled. The response includes the `bot_id` you will see in WebSocket messages.

#### cURL

```bash
curl -X POST https://api.meetstream.ai/api/bots \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "meeting_link": "https://meet.google.com/abc-defg-hij",
    "video_required": true,
    "live_video_required": {
      "websocket_url": "wss://your-server.example.com/"
    }
  }'
```

#### Node.js

```javascript
const response = await fetch("https://api.meetstream.ai/api/bots", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    meeting_link: "https://meet.google.com/abc-defg-hij",
    video_required: true,
    live_video_required: {
      websocket_url: "wss://your-server.example.com/",
    },
  }),
});

const bot = await response.json();
console.log("Created bot:", bot.bot_id);
```

#### Python

```python
import requests

resp = requests.post(
    "https://api.meetstream.ai/api/bots",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "meeting_link": "https://meet.google.com/abc-defg-hij",
        "video_required": True,
        "live_video_required": {
            "websocket_url": "wss://your-server.example.com/",
        },
    },
)

bot = resp.json()
print("Created bot:", bot["bot_id"])
```

### Server side — Node.js WebSocket receiver

A complete receiver that writes incoming fMP4 chunks to `.mp4` files on disk. Each connection maps to one bot session.

```javascript
import { createWriteStream } from "node:fs";
import { mkdir } from "node:fs/promises";
import { join } from "node:path";
import { WebSocketServer } from "ws";

const PORT = Number(process.env.WS_PORT || 9876);
const HOST = process.env.WS_HOST || "0.0.0.0";
const OUT_DIR = process.env.OUTPUT_DIR || join(process.cwd(), "recordings");

await mkdir(OUT_DIR, { recursive: true });

const wss = new WebSocketServer({ host: HOST, port: PORT });

wss.on("connection", (ws) => {
  let writeStream = null;
  let botId = "bot";
  let bytesWritten = 0;

  ws.on("message", (data, isBinary) => {
    // Binary frames: append fMP4 chunks to the output file
    if (isBinary) {
      if (!writeStream) return;
      const buf = Buffer.isBuffer(data) ? data : Buffer.from(data);
      bytesWritten += buf.length;
      writeStream.write(buf);
      return;
    }

    const msg = JSON.parse(data.toString());

    if (msg.type === "video_stream_start") {
      botId = msg.bot_id || "unknown";
      const outPath = join(OUT_DIR, `${botId}_${Date.now()}.mp4`);
      writeStream = createWriteStream(outPath);
      bytesWritten = 0;
      console.log(`Recording started -> ${outPath}`);
    }

    if (msg.type === "video_latency_ping") {
      ws.send(JSON.stringify({
        type: "video_latency_pong",
        seq: msg.seq,
        sent_at_ms: msg.sent_at_ms,
        server_received_at_ms: Date.now(),
        bot_id: msg.bot_id || botId,
      }));
    }

    if (msg.type === "video_stream_end") {
      console.log(`Recording ended for ${botId} (${msg.duration_seconds}s, ${bytesWritten} bytes)`);
      if (writeStream) { writeStream.end(); writeStream = null; }
    }
  });

  ws.on("close", () => {
    if (writeStream) { writeStream.end(); writeStream = null; }
  });
});

console.log(`Listening on ws://${HOST}:${PORT}/`);
```

Run with:

```bash
npm install ws
node server.mjs
```

### Server side — Python WebSocket receiver

The same receiver in Python using the `websockets` library.

```python
import asyncio
import json
import os
import time

import websockets

PORT = int(os.environ.get("WS_PORT", 9876))
HOST = os.environ.get("WS_HOST", "0.0.0.0")
OUT_DIR = os.environ.get("OUTPUT_DIR", os.path.join(os.getcwd(), "recordings"))

os.makedirs(OUT_DIR, exist_ok=True)


async def handle(ws):
    out_file = None
    bot_id = "bot"
    bytes_written = 0

    try:
        async for message in ws:
            # Binary frames: append fMP4 chunks to the output file
            if isinstance(message, bytes):
                if out_file is None:
                    continue
                out_file.write(message)
                bytes_written += len(message)
                continue

            msg = json.loads(message)

            if msg["type"] == "video_stream_start":
                bot_id = msg.get("bot_id", "unknown")
                path = os.path.join(OUT_DIR, f"{bot_id}_{int(time.time())}.mp4")
                out_file = open(path, "wb")
                bytes_written = 0
                print(f"Recording started -> {path}")

            elif msg["type"] == "video_latency_ping":
                await ws.send(json.dumps({
                    "type": "video_latency_pong",
                    "seq": msg["seq"],
                    "sent_at_ms": msg["sent_at_ms"],
                    "server_received_at_ms": int(time.time() * 1000),
                    "bot_id": msg.get("bot_id", bot_id),
                }))

            elif msg["type"] == "video_stream_end":
                print(f"Recording ended for {bot_id} ({msg.get('duration_seconds')}s, {bytes_written} bytes)")
                if out_file:
                    out_file.close()
                    out_file = None
    finally:
        if out_file:
            out_file.close()


async def main():
    async with websockets.serve(handle, HOST, PORT):
        print(f"Listening on ws://{HOST}:{PORT}/")
        await asyncio.Future()

asyncio.run(main())
```

Run with:

```bash
pip install websockets
python server.py
```

---

## Reaching a server on your laptop

If MeetStream runs in the cloud and your receiver runs locally, expose the local WebSocket port with a tunnel (for example ngrok or cloudflared) and use the public **`wss://`** URL in `live_video_required.websocket_url`.

Point the tunnel at the port your video receiver listens on, and do not reuse a WebSocket endpoint that already serves a different purpose.

---

## Troubleshooting

| Symptom | What to check |
|--------|----------------|
| No connection or no data | Confirm `video_required` is `true`, the URL is correct, and the meeting is Google Meet or Teams. |
| No binary chunks | Confirm the tunnel or firewall allows inbound connections and TLS is valid for `wss://`. |
| TLS or certificate errors | Verify certificates, tunnel URL, and that you use `wss://` in production. |
| Scheduled or recurring bots missing live video | Include `live_video_required` in every create-bot payload your automation sends. |

Early in a session, recording may start slightly before your WebSocket connection is fully established, so the very first chunks can be missed; once connected, chunks should flow normally.

---

## Security

- Prefer **`wss://`** in production.
- Do not put secrets in URLs if you can avoid it; protect your receiver with auth, IP restrictions, or a private network where possible.
- Treat tunnel URLs as sensitive while testing.

---

## Quick test checklist

1. Start your WebSocket receiver.
2. Expose it with a public `wss://` URL if needed.
3. Create a bot with a Google Meet or Teams `meeting_link` and `live_video_required.websocket_url` set to that URL.
4. Confirm you receive `video_stream_start`, then growing binary traffic.
5. Confirm you send `video_latency_pong` for each `video_latency_ping`.
6. Confirm `video_stream_end` when the session ends.

If something still fails, note your `bot_id`, meeting platform, and timestamps when contacting MeetStream support.
