Transcribe a pre-recorded audio file
Learn how to transcribe and analyze an audio file.
Overview
By the end of this tutorial, you’ll be able to:
- Transcribe a pre-recorded audio file.
- Enable Speaker Diarization to detect speakers in an audio file.
Here’s the full sample code for what you’ll build in this tutorial. Code examples are available in Python, TypeScript, Go, Java, C#, and Ruby.
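As a minimal Python sketch of the core flow (Step 4 adds Speaker Diarization on top of this). Method names follow the AssemblyAI Python SDK; the audio URL and the environment-variable name are placeholders, and the SDK import is kept inside the function so the sketch can be read without the package installed:

```python
import os

def transcribe(audio_url: str) -> str:
    # Local import so this sketch can be read (and syntax-checked)
    # without the third-party assemblyai package installed.
    import assemblyai as aai

    aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]  # placeholder variable name
    transcript = aai.Transcriber().transcribe(audio_url)
    return transcript.text

# Only call out to the API when a key is configured in the environment.
if os.environ.get("ASSEMBLYAI_API_KEY"):
    print(transcribe("https://example.com/audio.mp3"))  # placeholder URL
```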
Before you begin
To complete this tutorial, you need:
- Python, TypeScript, Go, Java, .NET, or Ruby installed.
- A free AssemblyAI account.
Step 1: Install the SDK
Install the AssemblyAI SDK for the language you’re using. For Python, install the package via pip:
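For example, assuming the package is published on PyPI as assemblyai (check the SDK’s README for the current package name):

```shell
pip install assemblyai
```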
Step 2: Configure the SDK
In this step, you’ll create an SDK client and configure it to use your API key.
Browse to Account, and then click the text under Your API key to copy it.
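One way to keep the key out of your source code is to read it from an environment variable and pass it to the SDK client. The variable name below is our own convention, not something the SDK requires:

```python
import os

# Fall back to a placeholder so the sketch runs anywhere; in real use,
# export ASSEMBLYAI_API_KEY before running (variable name is an assumption).
api_key = os.environ.get("ASSEMBLYAI_API_KEY", "your-api-key-here")

assert api_key, "an API key is required to configure the SDK client"
```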
Step 3: Submit audio for transcription
In this step, you’ll submit the audio file for transcription and wait until it completes. The time it takes to process an audio file depends on its duration and the enabled models. Most transcriptions complete within 45 seconds.
Specify a URL to the audio you want to transcribe. The URL needs to be accessible from AssemblyAI’s servers. For a list of supported formats, see FAQ.
Local audio files
If you want to transcribe a local file, you can specify a local file path instead of a URL.
YouTube
YouTube URLs are not supported. If you want to transcribe a YouTube video, you need to download the audio first.
To generate the transcript, pass the audio URL to client.Transcripts.Transcribe(). This may take a minute while we’re processing the audio.
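While you wait, the transcript moves through a short lifecycle that the SDK polls for you. Here is a minimal sketch of that polling loop, with a simulated status sequence standing in for real API responses (the status names are assumptions modeled on typical job-queue APIs):

```python
import time
from typing import Callable

def wait_for_transcript(get_status: Callable[[], str], interval: float = 0.0) -> str:
    """Poll until the transcript reaches a terminal status."""
    while True:
        status = get_status()
        if status in ("completed", "error"):
            return status
        time.sleep(interval)  # back off between polls

# Simulated status sequence standing in for real API responses.
statuses = iter(["queued", "processing", "processing", "completed"])
result = wait_for_transcript(lambda: next(statuses))
print(result)  # completed
```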
Select the speech model
You can select which class of speech models to use, making the cost-performance tradeoff that best suits your application. See Select the speech model.
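As a sketch, assuming the Python SDK exposes the model choice through TranscriptionConfig (the exact enum values may differ; check the SDK reference). The import is local so the sketch can be read without the package installed:

```python
def make_config():
    # Local import so this sketch can be read without the package installed.
    import assemblyai as aai

    # "nano" trades some accuracy for lower cost; "best" is the
    # highest-accuracy class. Names are assumptions from the Python SDK.
    return aai.TranscriptionConfig(speech_model=aai.SpeechModel.nano)

# The returned config would be passed to Transcriber().transcribe(...).
```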
You’ve successfully transcribed your first audio file. You can see all submitted transcription jobs in the Processing queue.
Step 4: Enable additional AI models
You can extract even more insights from the audio by enabling any of our AI models using transcription options. In this step, you’ll enable the Speaker diarization model to detect who said what.
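For instance, with the Python SDK the diarization model is enabled through a config flag. A sketch follows; speaker_labels and the utterances property are modeled on the Python SDK, but verify them against the current reference. The SDK import is local and the API call is guarded so the sketch stays readable and runnable without credentials:

```python
import os

def print_speakers(audio_url: str) -> None:
    # Local import so this sketch can be read without the package installed.
    import assemblyai as aai

    # speaker_labels=True enables the Speaker Diarization model.
    config = aai.TranscriptionConfig(speaker_labels=True)
    transcript = aai.Transcriber().transcribe(audio_url, config=config)

    # Each utterance carries the detected speaker plus its text.
    for utterance in transcript.utterances:
        print(f"Speaker {utterance.speaker}: {utterance.text}")

# Only call out to the API when a key is configured in the environment.
if os.environ.get("ASSEMBLYAI_API_KEY"):
    print_speakers("https://example.com/audio.mp3")  # placeholder URL
```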
Many of the properties in the transcript object only become available after you enable the corresponding model. For more information, see the models under Speech-to-Text and Audio Intelligence.
Next steps
In this tutorial, you’ve learned how to generate a transcript for an audio file and how to extract speaker information by enabling the Speaker diarization model.
Want to learn more?
- For more ways to analyze your audio data, explore our Audio Intelligence models.
- If you want to transcribe audio in real-time, see Transcribe streaming audio from a microphone.
- To search, summarize, and ask questions on your transcripts with LLMs, see LeMUR.
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.