Get transcript

<Note>To retrieve your transcriptions on our EU server, replace `api.assemblyai.com` with `api.eu.assemblyai.com`.</Note>

Get the transcript resource. The transcript is ready when the `status` is `"completed"`.
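
A minimal polling sketch using only the Python standard library (the API key is a placeholder and the helper names are illustrative, not part of an SDK; swap in the EU base URL if needed):

```python
import json
import time
import urllib.request

BASE_URL = "https://api.assemblyai.com/v2/transcript"  # EU server: https://api.eu.assemblyai.com/v2/transcript
API_KEY = "YOUR_API_KEY"  # placeholder: your AssemblyAI API key


def build_request(transcript_id: str, api_key: str = API_KEY,
                  base_url: str = BASE_URL) -> urllib.request.Request:
    """Build the GET request for a single transcript resource."""
    return urllib.request.Request(
        f"{base_url}/{transcript_id}",
        headers={"Authorization": api_key},  # API key authentication via header
    )


def wait_for_transcript(transcript_id: str, poll_interval: float = 3.0) -> dict:
    """Poll the transcript until it reaches a terminal status."""
    while True:
        with urllib.request.urlopen(build_request(transcript_id)) as resp:
            transcript = json.load(resp)
        if transcript["status"] == "completed":
            return transcript
        if transcript["status"] == "error":
            raise RuntimeError(transcript["error"])
        time.sleep(poll_interval)  # status is still "queued" or "processing"
```

The poll interval above is an arbitrary choice; in production you would typically use webhooks instead of polling.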

Authentication

`Authorization` string
API Key authentication via header

Path parameters

`transcript_id` string (Required)
ID of the transcript

Response

The transcript resource
`audio_url` string, format: "url"
The URL of the media that was transcribed
`auto_highlights` boolean

Whether Key Phrases is enabled, either true or false

`id` string, format: "uuid"
The unique identifier of your transcript
`language_confidence` double or null, 0-1

The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence). See Automatic Language Detection for more details.

`language_confidence_threshold` double or null

The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. See Automatic Language Detection for more details.
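
On the client side, the two fields can be checked together; a sketch under stated assumptions (the helper name and the 0.8 fallback threshold are hypothetical, not API defaults):

```python
def language_is_reliable(transcript: dict, fallback_threshold: float = 0.8) -> bool:
    """Return True when the detected language meets the confidence threshold.

    Uses the transcript's language_confidence_threshold when present,
    otherwise a hypothetical client-side fallback (0.8 is illustrative only).
    """
    confidence = transcript.get("language_confidence")
    if confidence is None:
        return False  # automatic language detection did not run
    threshold = transcript.get("language_confidence_threshold")
    if threshold is None:
        threshold = fallback_threshold
    return confidence >= threshold
```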

`redact_pii` boolean

Whether PII Redaction is enabled, either true or false

`status` enum
The status of your transcript. Possible values are queued, processing, completed, or error.
Allowed values: queued, processing, completed, error

`summarization` boolean

Whether Summarization is enabled, either true or false

`webhook_auth` boolean

Whether webhook authentication details were provided

`acoustic_model` string (Deprecated)
This parameter does not currently have any functionality attached to it.
`language_model` string (Deprecated)
This parameter does not currently have any functionality attached to it.
`speech_model` string or null (Deprecated)

This parameter has been replaced with the `speech_models` parameter; see `speech_models` below for more details.

`audio_channels` integer or null

The number of audio channels in the audio file. This is only present when multichannel is enabled.

`audio_duration` integer or null
The duration of this transcript object's media file, in seconds
`audio_end_at` integer or null

The point in time, in milliseconds, in the file at which the transcription was terminated. See Set the start and end of the transcript for more details.

`audio_start_from` integer or null

The point in time, in milliseconds, in the file at which the transcription was started. See Set the start and end of the transcript for more details.

`auto_chapters` boolean or null

Whether Auto Chapters is enabled, can be true or false

`auto_highlights_result` object or null

An array of results for the Key Phrases model, if it is enabled. See Key Phrases for more information.

`chapters` list of objects or null

An array of temporally sequential chapters for the audio file. See Auto Chapters for more information.

`confidence` double or null, 0-1

The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)

`content_safety` boolean or null

Whether Content Moderation is enabled, can be true or false

`content_safety_labels` object or null

An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.

`custom_spelling` list of objects or null

Customize how words are spelled and formatted using `to` and `from` values. See Custom Spelling for more details.

`disfluencies` boolean or null

Whether Filler Words, like “umm”, are transcribed in your media file; can be true or false

`entities` list of objects or null

An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.

`entity_detection` boolean or null

Whether Entity Detection is enabled, can be true or false

`error` string or null
Error message of why the transcript failed
`filter_profanity` boolean or null

Whether Profanity Filtering is enabled, either true or false

`format_text` boolean or null

Whether Text Formatting is enabled, either true or false

`iab_categories` boolean or null

Whether Topic Detection is enabled, can be true or false

`iab_categories_result` object or null

The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.

`keyterms_prompt` list of strings or null

Improve accuracy with up to 200 (for Universal-2) or 1000 (for Universal-3-Pro) domain-specific words or phrases (maximum 6 words per phrase). See Keyterms Prompting for more details.

`language_code` enum or null

The language of your audio file. Possible values are found in Supported Languages. The default value is `en_us`.

`language_codes` list of enums or null

The language codes of your audio file. Used for Code switching. One of the values specified must be `en`.

`language_detection` boolean or null

Whether Automatic language detection is enabled, either true or false

`language_detection_options` object or null

Specify options for Automatic Language Detection.

`multichannel` boolean or null

Whether Multichannel transcription was enabled in the transcription request, either true or false

`prompt` string or null

Provide natural language prompting of up to 1,500 words of contextual information to the model. See the Prompting Guide for best practices.

Note: This parameter is only supported for the Universal-3-Pro model.

`punctuate` boolean or null

Whether Automatic Punctuation is enabled, either true or false

`redact_pii_audio` boolean or null

Whether a redacted version of the audio file was generated, either true or false. See PII redaction for more information.

`redact_pii_audio_quality` enum or null

The audio quality of the PII-redacted audio file, if redact_pii_audio is enabled. See PII redaction for more information.

`redact_pii_policies` list of enums or null

The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.

`redact_pii_sub` enum or null

The replacement logic for detected PII, can be entity_type or hash. See PII redaction for more details.

`sentiment_analysis` boolean or null

Whether Sentiment Analysis is enabled, can be true or false

`sentiment_analysis_results` list of objects or null

An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.

`speaker_labels` boolean or null

Whether Speaker diarization is enabled, can be true or false

`speakers_expected` integer or null

Tell the speaker label model how many speakers it should attempt to identify. See Set number of speakers expected for more details.

`speech_model_used` string or null

The speech model that was actually used for the transcription. See Model Selection for available models.

`speech_models` list of strings or null

List multiple speech models in priority order, allowing our system to automatically route your audio to the best available option. See Model Selection for available models and routing behavior.

`speech_threshold` double or null

Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive. See Speech Threshold for more details.

`speech_understanding` object or null
Speech understanding tasks like [Translation](https://www.assemblyai.com/docs/speech-understanding/translation), [Speaker Identification](https://www.assemblyai.com/docs/speech-understanding/speaker-identification), and [Custom Formatting](https://www.assemblyai.com/docs/speech-understanding/custom-formatting). See the task-specific docs for available options and configuration.
`summary` string or null

The generated summary of the media file, if Summarization is enabled

`summary_model` string or null

The Summarization model used to generate the summary, if Summarization is enabled

`summary_type` string or null

The type of summary generated, if Summarization is enabled

`temperature` double or null, 0-1

The temperature that was used for the model's response. See the Prompting Guide for more details.

Note: This parameter can only be used with the Universal-3-Pro model.

`text` string or null
The textual transcript of your media file
`throttled` boolean or null
True while the request is throttled, false once it is no longer throttled
`utterances` list of objects or null

When multichannel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization and Multichannel transcription for more information.
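
These utterance objects can be walked directly; a minimal sketch (the helper name is hypothetical, and only the `speaker` and `text` fields of each utterance are used here):

```python
def format_dialogue(transcript: dict) -> str:
    """Render turn-by-turn utterances as a readable dialogue.

    Each utterance object carries, among other fields, a speaker label
    and the text spoken in that turn.
    """
    utterances = transcript.get("utterances") or []  # null when diarization is off
    return "\n".join(f"{u['speaker']}: {u['text']}" for u in utterances)
```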

`webhook_auth_header_name` string or null

The header name to be sent with the transcript completed or failed webhook requests

`webhook_status_code` integer or null

The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided

`webhook_url` string or null, format: "url"

The URL to which we send webhook requests.

`words` list of objects or null

An array of temporally-sequential word objects, one for each word in the transcript.
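
Word objects include the word's `text`, millisecond `start`/`end` timestamps, and a per-word `confidence` score; as an illustrative sketch, flagging low-confidence words for review (the helper name and the 0.5 threshold are hypothetical):

```python
def low_confidence_words(transcript: dict, threshold: float = 0.5) -> list:
    """Return (text, start_ms, end_ms) for words below the confidence threshold."""
    return [
        (w["text"], w["start"], w["end"])
        for w in (transcript.get("words") or [])
        if w["confidence"] < threshold
    ]
```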

`translated_texts` object or null

Translated text keyed by language code. See Translation for more details.

`custom_topics` boolean or null (Deprecated)
This parameter does not currently have any functionality attached to it.
`speed_boost` boolean or null (Deprecated)
This parameter does not currently have any functionality attached to it.
`topics` list of strings or null (Deprecated)
This parameter does not currently have any functionality attached to it.

Errors