Identifying speakers in audio recordings

When you apply the Speaker Diarization model, the transcription contains not only the text but also a speaker label for each utterance, making it clear who said what.

In this step-by-step guide, you’ll learn how to apply the model. In short, you set the speaker_labels parameter in your request and then read the results from a field called utterances in the response.

Get started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for a free account and get your API key from your dashboard.

The complete source code for this guide can be viewed here.

Here is an audio example for this guide:

https://assembly.ai/wildfires.mp3

Step-by-step instructions

1. Install the SDK.

```bash
pip install -U assemblyai
```
2. Import the assemblyai package and set the API key.

```python
import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"
```
3. Create a TranscriptionConfig with speaker_labels set to True.

```python
config = aai.TranscriptionConfig(speaker_labels=True)
```
4. Create a Transcriber object and pass in the configuration.

```python
transcriber = aai.Transcriber(config=config)
```
5. Call the Transcriber object’s transcribe method and pass in the audio file’s path or URL as a parameter. The method returns a Transcript object containing the results.

```python
FILE_URL = "https://assembly.ai/wildfires.mp3"

transcript = transcriber.transcribe(FILE_URL)
```
6. Access the speaker label results through the transcript object’s utterances attribute.

```python
# Extract all utterances from the response
utterances = transcript.utterances

# For each utterance, print its speaker and what was said
for utterance in utterances:
    speaker = utterance.speaker
    text = utterance.text
    print(f"Speaker {speaker}: {text}")
```

Understanding the response

The speaker label information is included in the utterances key of the response. Each utterance object in the list includes a speaker field, which contains a string identifier for the speaker (e.g., “A”, “B”, etc.). The utterances list also contains a text field for each utterance containing the spoken text, and confidence scores both for utterances and their individual words.
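As a sketch of that shape, the following uses plain Python dictionaries to mimic the documented fields (speaker, text, confidence, and per-word entries); the speaker IDs, text, and confidence values are invented for illustration:

```python
from collections import Counter

# Hypothetical sample mirroring the structure of the utterances list.
# All values below are invented; a real response comes from the API.
utterances = [
    {
        "speaker": "A",
        "text": "Smoke from hundreds of wildfires is triggering air quality alerts.",
        "confidence": 0.95,
        "words": [
            {"text": "Smoke", "speaker": "A", "confidence": 0.98},
            # ... one entry per word
        ],
    },
    {
        "speaker": "B",
        "text": "What is it about the conditions right now?",
        "confidence": 0.93,
        "words": [],
    },
]

# Example of what the structure enables: tally words spoken per speaker
word_counts = Counter()
for u in utterances:
    word_counts[u["speaker"]] += len(u["text"].split())

print(word_counts)  # e.g. Counter({'A': 10, 'B': 8})
```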

For more information, see the Speaker Diarization model documentation or see the API reference.

Specifying the number of speakers

You can use the optional speakers_expected parameter to specify the expected number of speakers in the audio file.
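For example, to hint that the recording contains two speakers, pass speakers_expected alongside speaker_labels when building the configuration (a config fragment; adjust the count to match your audio):

```python
import assemblyai as aai

# speakers_expected hints how many distinct speakers the model should find
config = aai.TranscriptionConfig(
    speaker_labels=True,
    speakers_expected=2,
)
```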

Conclusion

Automatically identifying different speakers from an audio recording, also called speaker diarization, is a multi-step process. It can unlock additional value from many genres of recording, including conference call transcripts, broadcast media, podcasts, and more. You can learn more about use cases for speaker diarization and the underlying research from the AssemblyAI blog.
