MeetStream Guide: Create an AI Agent & Bring It Into a Meeting
What is MIA?
MIA stands for MeetStream Infrastructure Agent — the platform layer that lets you create, configure, and deploy AI agents into live meetings. Through the MIA tab in the dashboard, you define how your agent listens, thinks, and acts during a call.
This guide walks you through creating an AI agent using MIA, connecting it to an MCP server for tool calling, and deploying it into a live meeting.
Applies to: Google Meet, Zoom, Microsoft Teams. Support: docs.meetstream.ai • API: api.meetstream.ai
1) Open the MeetStream dashboard
- Go to app.meetstream.ai.
- Navigate to the MIA tab.
- Click Create New Agent.
MIA tab — Create New Agent
2) Choose a mode: Realtime vs Pipeline
MeetStream offers two agent modes. Pick the one that fits your use case:
- Realtime — a single provider powers the entire pipeline (LLM, TTS, STT); fastest responses.
- Pipeline — choose a provider/model separately for LLM, TTS, and STT; more flexibility at slightly higher latency.
Tip: Start with Realtime mode if you want the fastest response times. Switch to Pipeline when you need specific provider combinations.
3) Select the agent response type
Choose how the agent should interact during the meeting: Voice, Chat, or Action.
In all three response types, the agent can perform tool actions if an MCP server is connected (see step 6).
4) Select provider and model
Pick the LLM provider and model that will power your agent. The available options depend on the mode you chose in step 2:
- Realtime mode — select one provider/model for the entire pipeline.
- Pipeline mode — select a provider/model individually for LLM, TTS, and STT.
5) Add a system prompt
The system prompt defines how your agent behaves — its personality, constraints, and instructions.
Write a clear prompt that tells the agent:
- What role it plays (e.g., “You are a meeting assistant that takes notes and creates action items”)
- What it should and shouldn’t do
- How it should respond (tone, length, format)
Example
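For instance, a note-taking assistant might use a prompt along these lines (illustrative only — adapt it to your use case):

```
You are a meeting assistant that takes notes and creates action items.
- Track decisions and action items with owners and due dates.
- Answer questions only when addressed directly; otherwise stay silent.
- Keep spoken responses short (one or two sentences), in a neutral,
  professional tone.
- Do not share information from outside this meeting.
```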
6) Connect an MCP server (tool calling)
MCP (Model Context Protocol) lets your agent call external tools — like creating tickets, querying databases, or triggering workflows. This section shows how to set up an MCP server locally using Docker and expose it to MeetStream.
Step 1: Install Docker
- Download and install the latest Docker Desktop from docker.com.
- Sign in to Docker Desktop.
Step 2: Add an MCP server from the Docker catalog
- In Docker Desktop, look for the MCP tab (new feature).
- Go to the Catalogue section.
- Search for an MCP server — for example, Linear, GitHub, or any other available server.
- Add the server and authorize it when prompted.
Docker MCP Catalogue
Step 3: Run the MCP gateway
Start the Docker MCP gateway with streaming transport on port 8080:
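Assuming the Docker MCP Toolkit CLI, the command looks like this (flag names may vary by Docker version — check `docker mcp gateway run --help`):

```
docker mcp gateway run --transport streaming --port 8080
```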
Docker will start the MCP gateway locally on port 8080. You will also receive a bearer token in the output — save it, you’ll need it in step 4.
Important: Copy and store the bearer token from the command output. You will need it to authenticate MeetStream with your MCP gateway.
Step 4: Tunnel with ngrok
Your MCP gateway is running on localhost:8080, but MeetStream needs a public URL. Use ngrok to create a tunnel:
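With ngrok installed and authenticated, forward port 8080:

```
ngrok http 8080
```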
ngrok will print a public HTTPS URL like `https://<random-id>.ngrok-free.app`. Your MCP endpoint is now reachable at that URL plus the `/mcp` path, e.g. `https://<random-id>.ngrok-free.app/mcp`.
Important: For the Docker MCP gateway, always append `/mcp` to the ngrok URL.
Step 5: Add the MCP server URL in the dashboard
Back in the MeetStream dashboard (agent creation screen):
- In the MCP Server URL field, enter your full URL (the ngrok HTTPS URL with `/mcp` appended).
- In the Header section, add the bearer token from step 3 (an `Authorization: Bearer <token>` header).
- Click Fetch — MeetStream will connect to your MCP server and retrieve a list of available tools/actions.
- Select the tools you want the agent to use (e.g., `create`, `list`, `edit`).
- Click Save to finalize the agent.
MCP Server Configuration
7) Bring the agent into a meeting
Now that your agent is created, you can deploy it into a live meeting using the API.
API endpoint
`POST https://api.meetstream.ai/api/v1/bots/create_bot`
Required fields
Include these three fields in your payload alongside the standard bot parameters: the `agent_config_id` of your saved agent, plus the two WebSocket URLs (see the API reference for the exact field names).
Example cURL
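A sketch of the request. The `agent_config_id` comes from your saved agent; the auth header format and the WebSocket field names shown here are placeholders — check the API reference for the exact parameter names:

```
curl -X POST https://api.meetstream.ai/api/v1/bots/create_bot \
  -H "Authorization: Token <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "meeting_link": "https://meet.google.com/xxx-xxxx-xxx",
    "bot_name": "AI Agent",
    "agent_config_id": "<your-agent-config-id>",
    "websocket_audio_url": "<hypothetical: audio WebSocket URL>",
    "websocket_chat_url": "<hypothetical: chat WebSocket URL>"
  }'
```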
Once the bot joins the meeting, your AI agent is live — you can start talking to it immediately.
8) End-to-end summary
Here’s the full flow at a glance:
- Dashboard → MIA tab → Create New Agent
- Mode → Realtime (fast, single provider) or Pipeline (flexible, multi-provider)
- Response type → Voice / Chat / Action
- Provider & model → Pick the LLM (and TTS/STT if Pipeline)
- System prompt → Define the agent’s behavior
- MCP server → Docker MCP gateway → ngrok tunnel → add URL + bearer token → fetch & select tools
- Deploy → `POST /api/v1/bots/create_bot` with `agent_config_id` + WebSocket URLs
- Talk → Agent is live in the meeting
Troubleshooting
MCP Fetch returns no tools
- Confirm the Docker MCP gateway is running (`docker mcp gateway run ...`).
- Confirm ngrok is tunneling to the correct port (8080).
- Make sure you appended `/mcp` to the ngrok URL.
- Verify the bearer token in the header is correct.
Agent doesn’t respond in the meeting
- Check that `agent_config_id` matches the saved agent in the dashboard.
- Ensure both WebSocket URLs are included in the `create_bot` payload.
- Verify the meeting link is valid and the bot has joined successfully (check webhook events).
ngrok tunnel expired
- Free ngrok tunnels rotate URLs on restart. Re-run `ngrok http 8080` and update the MCP Server URL in the dashboard.
For webhook event handling (bot lifecycle + post-call processing), see the Webhook Events Guide.
FAQ
What does MIA stand for?
MeetStream Infrastructure Agent. It’s the platform layer that powers agent creation, configuration, and deployment into meetings.
Can I use my own LLM API key?
Provider and model selection is handled through the MIA dashboard. See docs.meetstream.ai for details on custom provider configurations.
What’s the difference between Realtime and Pipeline mode?
Realtime uses a single provider for LLM, TTS, STT, and MCP — it’s faster because everything runs through one service. Pipeline lets you mix different providers for each component (e.g., one model for speech-to-text, another for the LLM), giving you more flexibility at the cost of slightly higher latency.
Can the agent perform actions in all response types (Voice, Chat, Action)?
Yes. As long as an MCP server is connected, the agent can execute tool actions regardless of whether its response type is set to Voice, Chat, or Action.
Do I need Docker to use MCP with MeetStream?
Not necessarily. Docker is one way to run an MCP server locally using the built-in MCP catalog and gateway. If you already have an MCP-compatible server hosted elsewhere, you can point MeetStream directly to its URL.
Why do I need to append /mcp to the ngrok URL?
The Docker MCP gateway exposes its MCP endpoint at the /mcp path. Without it, MeetStream won’t reach the correct route and the Fetch will fail.
Can I connect multiple MCP servers to one agent?
Currently, you configure one MCP server URL per agent. If you need tools from multiple sources, consider running a gateway that aggregates them behind a single endpoint.
What happens if my ngrok tunnel goes down during a meeting?
The agent loses access to MCP tools while the tunnel is down. It will still be in the meeting but won’t be able to execute tool calls. Restart ngrok, update the MCP URL in the dashboard, and redeploy if needed. For production use, host your MCP server on a stable endpoint instead of a tunnel.
Where do I find my agent_config_id?
After saving your agent in the MIA dashboard, the agent_config_id is shown in the agent details. You can also retrieve it via the MeetStream API.
Can I update an agent’s configuration after creating it?
Yes. Go to the MIA tab in the dashboard, select your agent, and edit its settings (mode, prompt, MCP server, tools, etc.). Changes apply to new deployments — agents already in a meeting will use the configuration they were launched with.
Which meeting platforms are supported?
Google Meet, Zoom, and Microsoft Teams.
