Transcribe audio

Create a transcript from a media file that is accessible via a URL.

<Note>To use our EU server for transcription, replace `api.assemblyai.com` with `api.eu.assemblyai.com`.</Note>
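A minimal request sketch in Python, using only the standard library. The `/v2/transcript` path and the model name strings are assumptions to verify against your account; the request is shown but not sent.

```python
import json

# Base host; swap in api.eu.assemblyai.com to use the EU server.
API_HOST = "https://api.assemblyai.com"

headers = {
    "Authorization": "<YOUR_API_KEY>",  # API key authentication via header
    "Content-Type": "application/json",
}

# Params to create a transcript: audio_url and speech_models are required.
payload = {
    "audio_url": "https://example.com/meeting.mp3",  # must be reachable via URL
    "speech_models": ["universal-3-pro", "universal-2"],  # priority order; names are illustrative
}

body = json.dumps(payload)
# POST the body with your HTTP client of choice, e.g.:
# requests.post(f"{API_HOST}/v2/transcript", headers=headers, data=body)
```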

Authentication

`Authorization` string
API Key authentication via header

Request

Params to create a transcript
`audio_url` string, Required, format: url
The URL of the audio or video file to transcribe.
`speech_models` list of strings, Required

List multiple speech models in priority order, allowing our system to automatically route your audio to the best available option. See Model Selection for available models and routing behavior.

`audio_end_at` integer, Optional

The point in time, in milliseconds, to stop transcribing in your media file. See Set the start and end of the transcript for more details.

`audio_start_from` integer, Optional

The point in time, in milliseconds, to begin transcribing in your media file. See Set the start and end of the transcript for more details.
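Both offsets take milliseconds, which is easy to get wrong when thinking in seconds. A small sketch (the helper name is mine):

```python
def seconds_to_ms(seconds: float) -> int:
    """Convert seconds to the millisecond offsets the API expects."""
    return int(seconds * 1000)

# Transcribe only the window from 0:30 to 2:00 of the media file.
clip_params = {
    "audio_start_from": seconds_to_ms(30),
    "audio_end_at": seconds_to_ms(120),
}
```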

`auto_chapters` boolean, Optional, defaults to false

Enable Auto Chapters; can be true or false.

`auto_highlights` boolean, Optional, defaults to false

Enable Key Phrases; either true or false.

`content_safety` boolean, Optional, defaults to false

Enable Content Moderation; can be true or false.

`content_safety_confidence` integer, Optional, 25-100, defaults to 50

The confidence threshold for the Content Moderation model. Values must be between 25 and 100.

`custom_spelling` list of objects, Optional

Customize how words are spelled and formatted using `to` and `from` values. See Custom Spelling for more details.
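A sketch of the `custom_spelling` shape, assuming each object maps a list of `from` variants to a single `to` replacement; verify the exact schema in the Custom Spelling docs.

```python
# Each entry rewrites any of the `from` variants to the `to` spelling.
custom_spelling = [
    {"from": ["assembly ai", "assembly-ai"], "to": "AssemblyAI"},
    {"from": ["k8s"], "to": "Kubernetes"},
]

payload = {
    "audio_url": "https://example.com/talk.mp3",
    "speech_models": ["universal-2"],  # illustrative model name
    "custom_spelling": custom_spelling,
}
```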

`disfluencies` boolean, Optional, defaults to false

Transcribe Filler Words, like “umm”, in your media file; can be true or false.

`entity_detection` boolean, Optional, defaults to false

Enable Entity Detection; can be true or false.

`filter_profanity` boolean, Optional, defaults to false

Filter profanity from the transcribed text; can be true or false. See Profanity Filtering for more details.

`format_text` boolean, Optional, defaults to true

Enable Text Formatting; can be true or false.

`iab_categories` boolean, Optional, defaults to false

Enable Topic Detection; can be true or false.

`keyterms_prompt` list of strings, Optional

Improve accuracy with up to 200 (for Universal-2) or 1000 (for Universal-3-Pro) domain-specific words or phrases (maximum 6 words per phrase). See Keyterms Prompting for more details.
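The phrase limits are easy to enforce client-side before sending. A sketch built from the limits above (the validator is mine, not part of any SDK):

```python
def validate_keyterms(keyterms: list[str], max_terms: int = 200) -> list[str]:
    """Check keyterms against the documented limits: at most `max_terms`
    phrases (200 for Universal-2, 1000 for Universal-3-Pro) and at most
    6 words per phrase."""
    if len(keyterms) > max_terms:
        raise ValueError(f"too many keyterms: {len(keyterms)} > {max_terms}")
    for term in keyterms:
        if len(term.split()) > 6:
            raise ValueError(f"phrase has more than 6 words: {term!r}")
    return keyterms
```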

`language_code` enum or null, Optional

The language of your audio file. Possible values are found in Supported Languages. The default value is `en_us`.

`language_codes` list of enums or null, Optional

The language codes of your audio file. Used for Code Switching. One of the values specified must be `en`.

`language_confidence_threshold` double, Optional

The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. Defaults to 0. See Automatic Language Detection for more details.

`language_detection` boolean, Optional, defaults to false

Enable Automatic Language Detection; either true or false.

`language_detection_options` object, Optional

Specify options for Automatic Language Detection.

`multichannel` boolean, Optional, defaults to false

Enable Multichannel transcription; can be true or false.

`prompt` string, Optional

Provide natural language prompting of up to 1,500 words of contextual information to the model. See the Prompting Guide for best practices.

Note: This parameter is only supported for the Universal-3-Pro model.

`punctuate` boolean, Optional, defaults to true

Enable Automatic Punctuation; can be true or false.

`redact_pii` boolean, Optional, defaults to false

Redact PII from the transcribed text using the Redact PII model; can be true or false. See PII Redaction for more details.

`redact_pii_audio` boolean, Optional, defaults to false

Generate a copy of the original media file with spoken PII “beeped” out; can be true or false. See PII redaction for more details.

`redact_pii_audio_options` object, Optional

Specify options for PII redacted audio files.

`redact_pii_audio_quality` enum, Optional

Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details.

Allowed values: mp3, wav
`redact_pii_policies` list of enums, Optional

The list of PII Redaction policies to enable. See PII redaction for more details.

`redact_pii_sub` enum or null, Optional

The replacement logic for detected PII; can be entity_type or hash. See PII redaction for more details.

Allowed values: entity_type, hash
`sentiment_analysis` boolean, Optional, defaults to false

Enable Sentiment Analysis; can be true or false.
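Putting the PII options above together in one request sketch. The policy names below are assumed examples, not the full allowed list; see PII redaction for the real values.

```python
# PII redaction: redact text, substitute by entity type, and produce
# a "beeped" wav copy of the audio. Policy names are illustrative.
pii_params = {
    "redact_pii": True,
    "redact_pii_policies": ["person_name", "phone_number"],  # assumed policy names
    "redact_pii_sub": "entity_type",  # or "hash"
    "redact_pii_audio": True,
    "redact_pii_audio_quality": "wav",  # mp3 (default) or wav
}
```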

`speaker_labels` boolean, Optional, defaults to false

Enable Speaker diarization; can be true or false.

`speaker_options` object, Optional

Specify options for Speaker diarization. Use this to set a range of possible speakers.

`speakers_expected` integer or null, Optional

Tells the speaker label model how many speakers it should attempt to identify. See Set number of speakers expected for more details.
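A diarization sketch combining `speaker_labels` with a speaker-count hint. The field names inside `speaker_options` are hypothetical placeholders; see Speaker diarization for the real shape.

```python
diarization_params = {
    "speaker_labels": True,  # enable speaker diarization
    "speakers_expected": 2,  # hint: exactly two speakers expected
    # To give a range instead of an exact count, use speaker_options;
    # the keys below are hypothetical placeholders:
    # "speaker_options": {"min_speakers_expected": 2, "max_speakers_expected": 4},
}
```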

`speech_threshold` double or null, Optional

Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive. See Speech Threshold for more details.

`speech_understanding` object, Optional
Enable speech understanding tasks like [Translation](https://www.assemblyai.com/docs/speech-understanding/translation), [Speaker Identification](https://www.assemblyai.com/docs/speech-understanding/speaker-identification), and [Custom Formatting](https://www.assemblyai.com/docs/speech-understanding/custom-formatting). See the task-specific docs for available options and configuration.
`summarization` boolean, Optional, defaults to false

Enable Summarization; can be true or false.

`summary_model` enum, Optional

The model to summarize the transcript. See Summary models for available models and when to use each.

`summary_type` enum, Optional

The type of summary. See Summary types for descriptions of the available summary types.

`temperature` double, Optional, 0-1, defaults to 0

Control the amount of randomness injected into the model’s response. See the Prompting Guide for more details.

Note: This parameter can only be used with the Universal-3-Pro model.

`webhook_auth_header_name` string or null, Optional

The header name to be sent with the transcript completed or failed webhook requests.

`webhook_auth_header_value` string or null, Optional

The header value to send back with the transcript completed or failed webhook requests, for added security.

`webhook_url` string, Optional, format: url

The URL to which we send webhook requests.
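A webhook configuration sketch; the header name and secret are placeholders, and your server should compare the header value on every delivery before trusting it.

```python
webhook_params = {
    "webhook_url": "https://example.com/assemblyai/webhook",
    # Sent back with the transcript completed/failed webhook requests:
    "webhook_auth_header_name": "X-Webhook-Secret",
    "webhook_auth_header_value": "replace-with-a-random-secret",
}
```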

`custom_topics` boolean, Optional, defaults to false, Deprecated
This parameter does not currently have any functionality attached to it.
`speech_model` string or null, Optional, Deprecated

This parameter has been replaced by the `speech_models` parameter; see the `speech_models` entry above for details.

`topics` list of strings, Optional, Deprecated
This parameter does not currently have any functionality attached to it.

Response

Transcript created and queued for processing
`audio_url` string, format: url
The URL of the media that was transcribed.
`auto_highlights` boolean

Whether Key Phrases is enabled; either true or false.

`id` string, format: uuid
The unique identifier of your transcript.
`language_confidence` double or null, 0-1

The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence). See Automatic Language Detection for more details.

`language_confidence_threshold` double or null

The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. See Automatic Language Detection for more details.

`redact_pii` boolean

Whether PII Redaction is enabled; either true or false.

`status` enum
The status of your transcript. Possible values are queued, processing, completed, or error.
Allowed values: queued, processing, completed, error
`summarization` boolean

Whether Summarization is enabled; either true or false.
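Since a newly created transcript starts in queued, clients typically poll until the status is completed or error. A polling sketch where `fetch` stands in for a GET on the transcript (the helper is mine, not an official client):

```python
import time

def wait_for_transcript(fetch, poll_interval: float = 3.0, max_attempts: int = 100) -> dict:
    """Poll `fetch` (any callable returning the transcript JSON as a dict)
    until the status reaches a terminal state: completed or error."""
    for _ in range(max_attempts):
        transcript = fetch()
        if transcript["status"] == "completed":
            return transcript
        if transcript["status"] == "error":
            raise RuntimeError(transcript.get("error") or "transcription failed")
        time.sleep(poll_interval)  # still queued or processing
    raise TimeoutError("transcript did not reach a terminal status")
```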

`webhook_auth` boolean

Whether webhook authentication details were provided.

`acoustic_model` string, Deprecated
This parameter does not currently have any functionality attached to it.
`language_model` string, Deprecated
This parameter does not currently have any functionality attached to it.
`speech_model` string or null, Deprecated

This parameter has been replaced by the `speech_models` parameter; see the `speech_models` entry for details.

`audio_channels` integer or null

The number of audio channels in the audio file. This is only present when multichannel is enabled.

`audio_duration` integer or null
The duration of this transcript object's media file, in seconds.
`audio_end_at` integer or null

The point in time, in milliseconds, in the file at which the transcription was terminated. See Set the start and end of the transcript for more details.

`audio_start_from` integer or null

The point in time, in milliseconds, in the file at which the transcription was started. See Set the start and end of the transcript for more details.

`auto_chapters` boolean or null

Whether Auto Chapters is enabled; can be true or false.

`auto_highlights_result` object or null

An array of results for the Key Phrases model, if it is enabled. See Key Phrases for more information.

`chapters` list of objects or null

An array of temporally sequential chapters for the audio file. See Auto Chapters for more information.

`confidence` double or null, 0-1

The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)

`content_safety` boolean or null

Whether Content Moderation is enabled; can be true or false.

`content_safety_labels` object or null

An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.

`custom_spelling` list of objects or null

Customize how words are spelled and formatted using `to` and `from` values. See Custom Spelling for more details.

`disfluencies` boolean or null

Transcribe Filler Words, like “umm”, in your media file; can be true or false

`entities` list of objects or null

An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.

`entity_detection` boolean or null

Whether Entity Detection is enabled; can be true or false.

`error` string or null
Error message describing why the transcript failed.
`filter_profanity` boolean or null

Whether Profanity Filtering is enabled; either true or false.

`format_text` boolean or null

Whether Text Formatting is enabled; either true or false.

`iab_categories` boolean or null

Whether Topic Detection is enabled; can be true or false.

`iab_categories_result` object or null

The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.

`keyterms_prompt` list of strings or null

Improve accuracy with up to 200 (for Universal-2) or 1000 (for Universal-3-Pro) domain-specific words or phrases (maximum 6 words per phrase). See Keyterms Prompting for more details.

`language_code` enum or null

The language of your audio file. Possible values are found in Supported Languages. The default value is `en_us`.

`language_codes` list of enums or null

The language codes of your audio file. Used for Code Switching. One of the values specified must be `en`.

`language_detection` boolean or null

Whether Automatic Language Detection is enabled; either true or false.

`language_detection_options` object or null

Specify options for Automatic Language Detection.

`multichannel` boolean or null

Whether Multichannel transcription was enabled in the transcription request; either true or false.

`prompt` string or null

Provide natural language prompting of up to 1,500 words of contextual information to the model. See the Prompting Guide for best practices.

Note: This parameter is only supported for the Universal-3-Pro model.

`punctuate` boolean or null

Whether Automatic Punctuation is enabled; either true or false.

`redact_pii_audio` boolean or null

Whether a redacted version of the audio file was generated; either true or false. See PII redaction for more information.

`redact_pii_audio_quality` enum or null

The audio quality of the PII-redacted audio file, if redact_pii_audio is enabled. See PII redaction for more information.

Allowed values: mp3, wav
`redact_pii_policies` list of enums or null

The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.

`redact_pii_sub` enum or null

The replacement logic for detected PII; can be entity_type or hash. See PII redaction for more details.

Allowed values: entity_type, hash
`sentiment_analysis` boolean or null

Whether Sentiment Analysis is enabled; can be true or false.

`sentiment_analysis_results` list of objects or null

An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.

`speaker_labels` boolean or null

Whether Speaker diarization is enabled; can be true or false.

`speakers_expected` integer or null

Tell the speaker label model how many speakers it should attempt to identify. See Set number of speakers expected for more details.

`speech_model_used` string or null

The speech model that was actually used for the transcription. See Model Selection for available models.

`speech_models` list of strings or null

List multiple speech models in priority order, allowing our system to automatically route your audio to the best available option. See Model Selection for available models and routing behavior.

`speech_threshold` double or null

Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive. See Speech Threshold for more details.

`speech_understanding` object or null
Speech understanding tasks like [Translation](https://www.assemblyai.com/docs/speech-understanding/translation), [Speaker Identification](https://www.assemblyai.com/docs/speech-understanding/speaker-identification), and [Custom Formatting](https://www.assemblyai.com/docs/speech-understanding/custom-formatting). See the task-specific docs for available options and configuration.
`summary` string or null

The generated summary of the media file, if Summarization is enabled

`summary_model` string or null

The Summarization model used to generate the summary, if Summarization is enabled

`summary_type` string or null

The type of summary generated, if Summarization is enabled

`temperature` double or null, 0-1

The temperature that was used for the model’s response. See the Prompting Guide for more details.

Note: This parameter can only be used with the Universal-3-Pro model.

`text` string or null
The textual transcript of your media file.
`throttled` boolean or null
True while a request is throttled and false when a request is no longer throttled.
`utterances` list of objects or null

When multichannel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization and Multichannel transcription for more information.

`webhook_auth_header_name` string or null

The header name to be sent with the transcript completed or failed webhook requests

`webhook_status_code` integer or null

The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided

`webhook_url` string or null, format: url

The URL to which we send webhook requests.

`words` list of objects or null

An array of temporally-sequential word objects, one for each word in the transcript.

`translated_texts` object or null

Translated text keyed by language code. See Translation for more details.

`custom_topics` boolean or null, Deprecated
This parameter does not currently have any functionality attached to it.
`speed_boost` boolean or null, Deprecated
This parameter does not currently have any functionality attached to it.
`topics` list of strings or null, Deprecated
This parameter does not currently have any functionality attached to it.

Errors