Separating automatic language detection from transcription
In this guide, you’ll learn how to perform automatic language detection (ALD) separately from the transcription process. The file is then routed to either the Best or Nano model class for transcription, depending on whether the detected language is supported by Best.
This workflow is designed to be cost-effective: only the first 60 seconds of audio are run through Nano ALD, which can detect 99 languages. The language detection step costs $0.002 per file (not including the cost of the full transcription).
Performing ALD with this workflow has a few benefits:
- Cost-effective language detection
- Ability to detect 99 languages
- Ability to use Nano as a fallback if the language is not supported in Best
- Ability to enable Audio Intelligence models if the language is supported
- Ability to use LeMUR with LLM prompts in the audio’s language (for example, Spanish prompts for Spanish audio)
Before you begin
To complete this tutorial, you need:
- Python installed.
- A free AssemblyAI account.
The complete source code for this guide can be viewed here.
Step-by-step instructions
Install the Python SDK:
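```bash
pip install assemblyai
```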
Create a set with all supported languages for Best. You can find them in our documentation here.
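A sketch of what this set might look like. The language codes below are an illustrative subset; check the documentation for the current, complete list:

```python
# Languages supported by the Best model class.
# Illustrative subset -- see the AssemblyAI docs for the authoritative list.
supported_languages_for_best = {
    "en", "en_au", "en_uk", "en_us",
    "es", "fr", "de", "it", "pt", "nl",
    "hi", "ja", "zh", "fi", "ko", "pl",
    "ru", "tr", "uk", "vi",
}
```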
Define a Transcriber. Note that here we don’t pass in a global TranscriptionConfig, but later apply different ones during the transcribe() call.
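For example (replace the placeholder with your own API key):

```python
import assemblyai as aai

# Authenticate with your AssemblyAI API key (placeholder value).
aai.settings.api_key = "YOUR_API_KEY"

# No global TranscriptionConfig here; each transcribe() call
# below passes its own config.
transcriber = aai.Transcriber()
```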
Define two helper functions, as shown in the sketch after this list:
- detect_language() performs language detection on the first 60 seconds of the audio using Nano and returns the language code.
- transcribe_file() performs the transcription using Best or Nano, depending on the identified language.
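A sketch of both helpers, assuming the SDK’s TranscriptionConfig accepts audio_end_at (in milliseconds), language_detection, and speech_model, and that the detected code is exposed via transcript.json_response["language_code"]:

```python
def detect_language(audio_url):
    # Run Nano ALD on only the first 60 seconds (audio_end_at is in ms).
    config = aai.TranscriptionConfig(
        audio_end_at=60000,
        language_detection=True,
        speech_model=aai.SpeechModel.nano,
    )
    transcript = transcriber.transcribe(audio_url, config=config)
    return transcript.json_response["language_code"]


def transcribe_file(audio_url, language_code):
    # Route to Best when the detected language is supported, else to Nano.
    config = aai.TranscriptionConfig(
        language_code=language_code,
        speech_model=(
            aai.SpeechModel.best
            if language_code in supported_languages_for_best
            else aai.SpeechModel.nano
        ),
    )
    return transcriber.transcribe(audio_url, config=config)
```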
Test the code with different audio files. Apply both helper functions sequentially to each file to first identify the language and then transcribe the file.
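For example, with a list of publicly accessible audio files (the URLs below are placeholders; substitute your own):

```python
audio_urls = [
    "https://example.com/audio-english.mp3",  # placeholder URL
    "https://example.com/audio-spanish.mp3",  # placeholder URL
]

for audio_url in audio_urls:
    # Step 1: cheap language detection on the first 60 seconds.
    language_code = detect_language(audio_url)
    print("Identified language:", language_code)

    # Step 2: full transcription with the appropriate model class.
    transcript = transcribe_file(audio_url, language_code)
    print("Transcript:", transcript.text[:100], "...")
```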
Output:
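With the print statements in the sketch above, each file produces two lines shaped like this (the language codes and transcript text depend on your audio; the snippet below is illustrative only):

```
Identified language: en
Transcript: ...

Identified language: es
Transcript: ...
```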