Create a transcript from a media file that is accessible via a URL.
The URL of the audio or video file to transcribe.
The point in time, in milliseconds, to stop transcribing in your media file
The point in time, in milliseconds, to begin transcribing in your media file
Enable Auto Chapters, can be true or false
Enable Key Phrases, either true or false
How much to boost specified words
Enable Content Moderation, can be true or false
The confidence threshold for the Content Moderation model. Values must be between 25 and 100.
Customize how words are spelled and formatted using to and from values
Enable custom topics, either true or false
Transcribe Filler Words, like “umm”, in your media file; can be true or false
Enable Entity Detection, can be true or false
Filter profanity from the transcribed text, can be true or false
Enable Text Formatting, can be true or false
Enable Topic Detection, can be true or false
The language of your audio file. Possible values are found in Supported Languages. The default value is ‘en_us’.
The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. Defaults to 0.
Enable Automatic language detection, either true or false.
Enable Multichannel transcription, can be true or false.
Enable Automatic Punctuation, can be true or false
Redact PII from the transcribed text using the Redact PII model, can be true or false
Generate a copy of the original media file with spoken PII “beeped” out, can be true or false. See PII redaction for more details.
Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details.
The list of PII Redaction policies to enable. See PII redaction for more details.
The replacement logic for detected PII, can be “entity_name” or “hash”. See PII redaction for more details.
Enable Sentiment Analysis, can be true or false
Enable Speaker diarization, can be true or false
Tells the speaker label model how many speakers it should attempt to identify, up to 10. See Speaker diarization for more details.
The speech model to use for the transcription.
Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.
Enable Summarization, can be true or false
The model to summarize the transcript
The type of summary
The list of custom topics
The header name to be sent with the transcript completed or failed webhook requests
The header value to send back with the transcript completed or failed webhook requests for added security
The URL to which we send webhook requests. We send two different types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready if redact_pii_audio is enabled.
The list of custom vocabulary to boost transcription probability for
Enable Dual Channel transcription, can be true or false.
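The request parameters above are sent as a JSON body when creating a transcript. A minimal sketch in Python of assembling such a body and enforcing the documented ranges (confidence between 25 and 100, speech fraction in [0, 1], up to 10 speakers); the field names (`audio_url`, `content_safety_confidence`, `speech_threshold`, `speakers_expected`, `speaker_labels`) are assumptions for illustration, not taken from this reference:

```python
import json

def build_transcript_request(audio_url,
                             content_safety_confidence=None,
                             speech_threshold=None,
                             speakers_expected=None,
                             **extra):
    """Build a transcript-creation payload, enforcing documented ranges.

    Field names are assumed; only the constraints come from the
    reference: confidence 25-100, speech fraction [0, 1] inclusive,
    and up to 10 expected speakers.
    """
    body = {"audio_url": audio_url}
    if content_safety_confidence is not None:
        if not 25 <= content_safety_confidence <= 100:
            raise ValueError("content_safety_confidence must be between 25 and 100")
        body["content_safety"] = True  # Content Moderation on
        body["content_safety_confidence"] = content_safety_confidence
    if speech_threshold is not None:
        if not 0 <= speech_threshold <= 1:
            raise ValueError("speech_threshold must be in [0, 1]")
        body["speech_threshold"] = speech_threshold
    if speakers_expected is not None:
        if not 1 <= speakers_expected <= 10:
            raise ValueError("speakers_expected must be between 1 and 10")
        body["speaker_labels"] = True  # speaker diarization on
        body["speakers_expected"] = speakers_expected
    body.update(extra)  # any other boolean feature flags
    return json.dumps(body)

payload = build_transcript_request(
    "https://example.com/audio.mp3",
    content_safety_confidence=60,
    speakers_expected=2,
    punctuate=True,
)
```

Validating ranges client-side mirrors the server-side constraints stated above, so an out-of-range value fails before any request is made.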
The unique identifier of your transcript
The URL of the media that was transcribed
The status of your transcript. Possible values are queued, processing, completed, or error.
Whether webhook authentication details were provided
Whether Key Phrases is enabled, either true or false
Whether PII Redaction is enabled, either true or false
Whether Summarization is enabled, either true or false
The language model that was used for the transcript
The acoustic model that was used for the transcript
The language of your audio file. Possible values are found in Supported Languages. The default value is ‘en_us’.
Whether Automatic language detection is enabled, either true or false
The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold.
The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence)
The speech model to use for the transcription.
The textual transcript of your media file
An array of temporally sequential word objects, one for each word in the transcript. See Speech recognition for more information.
When dual_channel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization for more information.
The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)
The duration of this transcript object’s media file, in seconds
Whether Automatic Punctuation is enabled, either true or false
Whether Text Formatting is enabled, either true or false
Whether Filler Words, like “umm”, are transcribed in your media file; can be true or false
Whether Multichannel transcription was enabled in the transcription request, either true or false
The number of audio channels in the audio file. This is only present when multichannel is enabled.
The URL to which we send webhook requests. We send two different types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready if redact_pii_audio is enabled.
The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided
The header name to be sent with the transcript completed or failed webhook requests
An array of results for the Key Phrases model, if it is enabled. See Key phrases for more information.
The point in time, in milliseconds, in the file at which the transcription was started
The point in time, in milliseconds, in the file at which the transcription was terminated
The list of custom vocabulary to boost transcription probability for
The word boost parameter value
Whether Profanity Filtering is enabled, either true or false
Whether a redacted version of the audio file was generated, either true or false. See PII redaction for more information.
Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details.
The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.
The replacement logic for detected PII, can be “entity_name” or “hash”. See PII redaction for more details.
Whether Speaker diarization is enabled, can be true or false
Tells the speaker label model how many speakers it should attempt to identify, up to 10. See Speaker diarization for more details.
Whether Content Moderation is enabled, can be true or false
An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.
Whether Topic Detection is enabled, can be true or false
The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.
Customize how words are spelled and formatted using to and from values
Whether Auto Chapters is enabled, can be true or false
An array of temporally sequential chapters for the audio file
The type of summary generated, if Summarization is enabled
The Summarization model used to generate the summary, if Summarization is enabled
The generated summary of the media file, if Summarization is enabled
Whether custom topics is enabled, either true or false
The list of custom topics provided if custom topics is enabled
Whether Sentiment Analysis is enabled, can be true or false
An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.
Whether Entity Detection is enabled, can be true or false
An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.
Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.
True while a request is throttled, and false once it is no longer throttled
The error message explaining why the transcript failed
Whether Dual channel transcription was enabled in the transcription request, either true or false
Whether speed boost is enabled
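A client typically polls the transcript until its status leaves queued or processing, then reads the completed fields described above. A hypothetical sketch of handling a response dict; the key names (`status`, `text`, `confidence`, `error`, `audio_duration`) follow the descriptions in this reference but the exact JSON keys are assumptions:

```python
def summarize_transcript(resp):
    """Inspect a transcript response dict per the documented status values.

    status is one of: queued, processing, completed, error.
    Key names here are assumed for illustration.
    """
    status = resp["status"]
    if status in ("queued", "processing"):
        return None  # still running; poll again later
    if status == "error":
        # error responses carry a message explaining the failure
        raise RuntimeError(resp.get("error", "transcription failed"))
    # completed: transcript text plus a confidence in [0.0, 1.0]
    return {
        "text": resp["text"],
        "confidence": resp["confidence"],
        "duration_s": resp.get("audio_duration"),
    }

done = summarize_transcript({
    "status": "completed",
    "text": "Hello world.",
    "confidence": 0.97,
    "audio_duration": 12,
})
pending = summarize_transcript({"status": "processing"})
```

Returning None for in-flight statuses keeps the polling loop's "done yet?" check a simple truthiness test.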