Migration guide: Google Speech-to-Text to AssemblyAI

This guide walks through the process of migrating from Google Speech-to-Text (STT) to AssemblyAI.

Get Started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for a free account and get your API key from your dashboard.

Side-by-side code comparison

Below is a side-by-side comparison of basic snippets that transcribe a file with Google Speech-to-Text and with AssemblyAI.

from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(
    uri="gs://cloud-samples-tests/speech/Google_Gnome.wav"
)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    model="video",  # Chosen model
)

operation = client.long_running_recognize(config=config, audio=audio)

print("Waiting for operation to complete...")
response = operation.result(timeout=90)

for i, result in enumerate(response.results):
    alternative = result.alternatives[0]
    print("-" * 20)
    print(f"First alternative of result {i}")
    print(f"Transcript: {alternative.transcript}")

Installation

from google.cloud import speech

client = speech.SpeechClient()

When migrating from Google Speech-to-Text to AssemblyAI, you’ll first need to handle authentication and SDK setup:

  • Get your API key from your AssemblyAI dashboard.
  • Check our documentation for the full list of available SDKs.

Things to know:

  • Store your API key securely in an environment variable
  • API key authentication works the same across all AssemblyAI SDKs
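
Putting that together, a minimal setup with the Python SDK looks like this; the ASSEMBLYAI_API_KEY variable name is just an illustrative choice:

# pip install assemblyai
import os

import assemblyai as aai

# Read the key from an environment variable instead of hardcoding it
aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]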

Audio File Sources

audio = speech.RecognitionAudio(uri="gs://cloud-samples-tests/speech/Google_Gnome.wav")

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    model="video",  # Chosen model
)

operation = client.long_running_recognize(config=config, audio=audio)

Here are helpful things to know when migrating your audio input handling:

  • There’s no need to specify the audio encoding format when using AssemblyAI: we run a transcoding pipeline under the hood that works on all supported file types, so you get the most accurate transcription.
  • You can submit a local file, URL, stream, buffer, blob, etc., directly to our transcriber, as shown in the sketch below. Check out some common ways you can host audio files here.
  • You can transcribe audio files up to 10 hours long, and you can transcribe multiple files in parallel. By default, you can run 200 transcription jobs at once on the pay-as-you-go (PAYG) plan.
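
Here's a sketch of submitting audio from different sources with the Python SDK; the URL and file path are placeholders:

import assemblyai as aai

transcriber = aai.Transcriber()

# Publicly accessible URL: no encoding, sample rate, or model needs to be specified
transcript = transcriber.transcribe("https://example.org/meeting.wav")

# Local file path: the SDK uploads the file for you
transcript = transcriber.transcribe("./meeting.wav")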

Basic Transcription

1print("Waiting for operation to complete...")
2response = operation.result(timeout=90)
3for i, result in enumerate(response.results):
4 alternative = result.alternatives[0]
5 print("-" * 20)
6 print(f"First alternative of result {i}")
7 print(f"Transcript: {alternative.transcript}")

Here are helpful things to know about our transcribe method:

  • The SDK handles polling under the hood.
  • The full transcript is directly accessible via transcript.text.
  • English is the default language if none is specified.
  • We have a cookbook for handling common errors when using our API.
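
For reference, a basic transcription call with the Python SDK looks roughly like this; the audio URL is a placeholder:

transcript = transcriber.transcribe("https://example.org/audio.wav")

# transcribe() polls until the job finishes, so the result is ready here
if transcript.status == aai.TranscriptStatus.error:
    print(transcript.error)
else:
    print(transcript.text)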

Adding Features

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,
    language_code="en-US",
    enable_speaker_diarization=True,  # Speaker diarization
    diarization_speaker_count=2,  # Specify the number of speakers
    profanity_filter=True,  # Remove profanity from transcript
)

Key differences:

  • Use aai.TranscriptionConfig to specify any extra features that you wish to use.
  • The results for Speaker Diarization are stored in transcript.utterances. To see the full transcript response object, refer to our API Reference.
  • Check our documentation for our full list of available features and their parameters.
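
As a sketch, the Google config above maps roughly onto aai.TranscriptionConfig like this; the audio URL is a placeholder:

config = aai.TranscriptionConfig(
    speaker_labels=True,    # Speaker diarization
    speakers_expected=2,    # Expected number of speakers
    filter_profanity=True,  # Remove profanity from the transcript
)

transcript = aai.Transcriber().transcribe("https://example.org/audio.wav", config)

# Diarization results are stored on transcript.utterances
for utterance in transcript.utterances:
    print(f"Speaker {utterance.speaker}: {utterance.text}")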