Summarizing virtual meetings

In this guide, we’ll show you how to use LLM Gateway to generate summaries of your virtual meetings. You’ll transcribe your audio and then use LLM Gateway to produce a custom summary.

The summarization, summary_model, and summary_type parameters on the transcription API are deprecated. Use LLM Gateway, as shown below, for more flexible and powerful summaries.

Get started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for a free account and get your API key from your dashboard.

Step-by-step instructions

Step 1

Create a new file and install the required packages.

pip install requests

Set up the API endpoint and headers.

import requests
import time

base_url = "https://api.assemblyai.com"

headers = {
    "authorization": "<YOUR_API_KEY>"
}
Step 2

Upload your local file to the AssemblyAI API and submit a transcription request.

with open("./my-audio.mp3", "rb") as f:
    response = requests.post(base_url + "/v2/upload",
                             headers=headers,
                             data=f)

upload_url = response.json()["upload_url"]

data = {
    "audio_url": upload_url,
    "speech_models": ["universal-3-pro", "universal-2"],
    "language_detection": True
}

response = requests.post(base_url + "/v2/transcript", json=data, headers=headers)

transcript_id = response.json()["id"]
polling_endpoint = base_url + "/v2/transcript/" + transcript_id

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()
    if transcription_result["status"] == "completed":
        break
    elif transcription_result["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")
    else:
        time.sleep(3)
Step 3

Send the transcript text to LLM Gateway with a summarization prompt.

prompt = """Provide a summary of this meeting transcript in bullet point format.
Focus on the key discussion points, decisions made, and any action items."""

llm_gateway_data = {
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "user", "content": f"{prompt}\n\nTranscript: {transcription_result['text']}"}
    ],
    "max_tokens": 1500
}

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json=llm_gateway_data
)

result = response.json()["choices"][0]["message"]["content"]
print(result)

Understanding the response

The LLM Gateway returns the summary in the choices[0].message.content field. You can customize the format and style of the summary by adjusting the prompt.
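Because the summary sits several levels deep in the response, a small helper with error handling keeps the happy path readable. This is a sketch, not part of the official API: it assumes failed requests return an error field in the JSON body, so check the actual error shape your responses use.

```python
def extract_summary(response_json):
    """Pull the summary text out of an LLM Gateway-style response.

    Raises RuntimeError with the API's message when the request failed,
    instead of crashing with a KeyError on the missing "choices" key.
    """
    if "error" in response_json:
        raise RuntimeError(f"LLM Gateway request failed: {response_json['error']}")
    return response_json["choices"][0]["message"]["content"]

# A sample shaped like the response described above, in place of a live call:
sample = {"choices": [{"message": {"role": "assistant", "content": "- Key point"}}]}
print(extract_summary(sample))
```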

Best practices

  • Customize the prompt to match your meeting type (e.g., standup, all-hands, client call).
  • For longer meetings, consider breaking the transcript into sections and summarizing each one separately.
  • Use Structured Outputs if you need the summary in a specific JSON format.
  • Try different LLM models to find the one that produces the best summaries for your use case.
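One way to apply the long-meeting advice above is a simple word-count splitter. The max_words value here is illustrative, not an official limit; tune it to the context window of the model you use.

```python
def chunk_transcript(text, max_words=3000):
    """Split a transcript into word-bounded chunks of roughly max_words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

# Each chunk would get its own summarization request, and the per-chunk
# summaries can then be combined in a final "summary of summaries" call.
chunks = chunk_transcript("word " * 7000, max_words=3000)
print(len(chunks))  # 3 chunks for a 7000-word transcript
```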

Advanced usage

You can define custom summary formats by adjusting the prompt:

prompt = """Summarize this meeting transcript using the following format:
## Meeting Summary
- **Key Discussion Points**: [list main topics discussed]
- **Decisions Made**: [list any decisions]
- **Action Items**: [list action items with owners if mentioned]
- **Next Steps**: [list next steps]"""
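If you instead need the summary as machine-readable JSON, you can ask for it in the prompt and parse the reply. The key names below are illustrative, and a sample string stands in for a live model reply; note that some models wrap JSON in a code fence, in which case you'd strip the fence before parsing.

```python
import json

json_prompt = """Summarize this meeting transcript as JSON with the keys
"discussion_points", "decisions", and "action_items" (each a list of strings).
Respond with JSON only, no surrounding prose."""

# Parse the model's reply; a hard-coded sample replaces a live response here.
content = '{"discussion_points": ["Q3 roadmap"], "decisions": [], "action_items": ["Send notes"]}'
summary = json.loads(content)
print(summary["discussion_points"])
```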

Next steps