Model selection

The speech_models parameter lets you specify which model to use for transcription. You can provide multiple models in priority order, and our system will automatically route to the best available model based on your request.

speech_models is required

You must include the speech_models parameter in every pre-recorded transcription request. There is no default model. If you omit speech_models, the request will fail.
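Because there is no default, it can help to validate the payload before sending the request. A minimal client-side sketch (validate_transcript_payload is a hypothetical helper, not part of the API):

```python
def validate_transcript_payload(payload: dict) -> None:
    """Raise locally before sending a request the API would reject."""
    models = payload.get("speech_models")
    if not isinstance(models, list) or not models:
        raise ValueError("speech_models is required and must be a non-empty list")

# Passes silently when speech_models is present and non-empty.
validate_transcript_payload({
    "audio_url": "https://assembly.ai/wildfires.mp3",
    "speech_models": ["universal-3-pro", "universal-2"],
})
```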

Model routing behavior: The system attempts the models in priority order, falling back to the next model when needed. For example, with ["universal-3-pro", "universal-2"], the system uses universal-3-pro for the languages it supports (English, Spanish, Portuguese, French, German, and Italian) and automatically falls back to universal-2 for all other languages. This ensures you get the best-performing transcription where available while maintaining the widest language coverage.
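The routing itself happens server-side, but the priority-order behavior described above can be illustrated with a small client-side sketch (route_model and U3_PRO_LANGUAGES are purely illustrative; the language set mirrors the list above):

```python
# Languages supported by universal-3-pro, per the list above.
U3_PRO_LANGUAGES = {"en", "es", "pt", "fr", "de", "it"}

def route_model(speech_models, language_code):
    """Return the first model in priority order that supports the language."""
    for model in speech_models:
        if model == "universal-3-pro" and language_code not in U3_PRO_LANGUAGES:
            continue  # unsupported language: fall back to the next model
        return model
    return None

print(route_model(["universal-3-pro", "universal-2"], "en"))  # universal-3-pro
print(route_model(["universal-3-pro", "universal-2"], "ja"))  # universal-2
```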

Recommended model

We recommend Universal-3 Pro as your primary model for pre-recorded transcription. It delivers the highest accuracy with support for fine-tuning and customization via prompting. Use ["universal-3-pro", "universal-2"] to get the best accuracy where available while maintaining the widest language coverage.

| Name | Parameter | Description | Best for |
| --- | --- | --- | --- |
| Universal-3 Pro | speech_models=['universal-3-pro'] | Our highest accuracy model with fine-tuning support and customization via prompting. | Highest-accuracy transcription, post-call analytics, meeting notetakers, medical transcription, domain-specific accuracy via prompting |
| Universal-2 | speech_models=['universal-2'] | Our highly accurate, fastest performing model with support across 99 languages. | High-volume batch transcription, 99-language coverage, price-sensitive workloads, fallback for unsupported U3 Pro languages |

Quickstart

You can change the model by setting speech_models in the POST request body:

import requests
import time

base_url = "https://api.assemblyai.com"

headers = {
    "authorization": "<YOUR_API_KEY>"
}

data = {
    "audio_url": "https://assembly.ai/wildfires.mp3",
    "speech_models": ["universal-3-pro", "universal-2"],
    "language_detection": True
}

url = base_url + "/v2/transcript"
response = requests.post(url, json=data, headers=headers)

transcript_id = response.json()['id']
polling_endpoint = base_url + "/v2/transcript/" + transcript_id

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()

    if transcription_result['status'] == 'completed':
        print(transcription_result['text'])
        break

    elif transcription_result['status'] == 'error':
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")

    else:
        time.sleep(3)

Identify the model used

After transcription completes, you can check which model was actually used to process your request by reading the speech_model_used field. This is useful when you provide multiple models in the speech_models array, as the system may fall back to a different model depending on language support.

import requests
import time

base_url = "https://api.assemblyai.com"

headers = {
    "authorization": "<YOUR_API_KEY>"
}

data = {
    "audio_url": "https://assembly.ai/wildfires.mp3",
    "speech_models": ["universal-3-pro", "universal-2"],
    "language_detection": True
}

url = base_url + "/v2/transcript"
response = requests.post(url, json=data, headers=headers)

transcript_id = response.json()['id']
polling_endpoint = base_url + "/v2/transcript/" + transcript_id

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()

    if transcription_result['status'] == 'completed':
        print(f"Model used: {transcription_result['speech_model_used']}")
        print(transcription_result['text'])
        break

    elif transcription_result['status'] == 'error':
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")

    else:
        time.sleep(3)
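One common use of speech_model_used is detecting when a fallback occurred. A short sketch (detect_fallback is a hypothetical helper built around the fields above):

```python
def detect_fallback(requested_models, model_used):
    """Return a message when the model used differs from the first choice."""
    if model_used != requested_models[0]:
        return f"Fell back from {requested_models[0]} to {model_used}"
    return None

# Example: the request listed universal-3-pro first, but universal-2 ran.
print(detect_fallback(["universal-3-pro", "universal-2"], "universal-2"))
# Fell back from universal-3-pro to universal-2
```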

Complete example

Here is the full working code that demonstrates model selection with error handling:

import requests
import time

base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}

data = {
    "audio_url": "https://assembly.ai/wildfires.mp3",
    "speech_models": ["universal-3-pro", "universal-2"],
    "language_detection": True
}

response = requests.post(base_url + "/v2/transcript", headers=headers, json=data)

if response.status_code != 200:
    print(f"Error: {response.status_code}, Response: {response.text}")
    response.raise_for_status()

transcript_json = response.json()
transcript_id = transcript_json["id"]
polling_endpoint = f"{base_url}/v2/transcript/{transcript_id}"

while True:
    transcript = requests.get(polling_endpoint, headers=headers).json()
    if transcript["status"] == "completed":
        print(f"Model used: {transcript['speech_model_used']}")
        print(f"\nTranscript:\n\n{transcript['text']}")
        break
    elif transcript["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcript['error']}")
    else:
        time.sleep(3)