
Speech recognition with Ruby using Universal-1

Learn how to transcribe audio and video files in your Ruby applications with AssemblyAI's Universal-1 speech recognition model.

Speech-to-Text in Ruby using Universal-1

We recently announced our latest speech recognition model, Universal-1, which achieves state-of-the-art speech-to-text accuracy. Trained on millions of hours of audio data, Universal-1 demonstrates near-human accuracy, even with accented speech, background noise, and difficult phrases like flight numbers and email addresses.

Universal-1 is also an order of magnitude faster than our previous model, Conformer-2, and supports English, Spanish, French, and German, with more languages coming shortly.

Along with Universal-1, we’ve also introduced two new classes of models: Best and Nano. Best lets you take advantage of Universal-1 for applications where accuracy is paramount. Nano is our new cost-effective alternative with support for 99 different languages.

In this post, you’ll learn how to transcribe an audio file in your Ruby applications using Universal-1 and Nano.

Why Use Universal-1 for Speech-to-Text?

Universal-1 is AssemblyAI's best model for automatic speech recognition. Here's why you'd want to use Universal-1 over other ASR models:

  1. Superior Accuracy: Universal-1 achieves 10% higher accuracy in English, Spanish, and German compared to top commercial models.
  2. Reduced Errors: Universal-1 reduces hallucination rates by 30% over Whisper, offering more reliable transcriptions.
  3. Faster Processing: It provides a 5x speed increase over Whisper Large-v3, making it highly efficient for long audio files.

Learn more about Universal-1 and how it compares to other speech-to-text models.

Set up the AssemblyAI Ruby SDK

The easiest way to transcribe audio is by using one of our official SDKs.

To install the AssemblyAI Ruby SDK, add the gem to your bundle and install the bundle:

bundle add assemblyai
bundle install
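
If your project doesn't use Bundler, you can also install the gem directly with RubyGems:

gem install assemblyai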

Create a new file main.rb, and configure a new authenticated SDK client using your AssemblyAI API key from your account dashboard.

require 'assemblyai'

client = AssemblyAI::Client.new(
  api_key: ENV['ASSEMBLYAI_API_KEY']
)

You’ll find all the operations you need on this client instance.

Transcribe an audio file using Universal-1

All transcriptions use the Best model by default, so you’ll always get the highest accuracy without any extra configuration.

Use the following code to transcribe an audio file from a URL using Best:

transcript = client.transcripts.transcribe(
  audio_url: "https://storage.googleapis.com/aai-web-samples/5_common_sports_injuries.mp3"
)

raise transcript.error unless transcript.error.nil?

puts transcript.text

If you instead want to transcribe a local file, you can upload the file to AssemblyAI and pass the uploaded file URL to the transcribe method:

uploaded_file = client.files.upload(file: './audio.mp3')

transcript = client.transcripts.transcribe(audio_url: uploaded_file.upload_url)

raise transcript.error unless transcript.error.nil?

puts transcript.text
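
In both cases, transcript.text is a plain Ruby string, so you can use it like any other value. For example, to save the transcript to a text file (the filename here is just for illustration):

# Write the transcribed text to disk
File.write('transcript.txt', transcript.text)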

To run your application, configure your ASSEMBLYAI_API_KEY as an environment variable, and use the following command to execute the code:

ruby main.rb
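
If you prefer not to export the variable permanently, you can also set it just for a single run (assuming a Unix-like shell; replace the placeholder with your actual key):

ASSEMBLYAI_API_KEY=<YOUR_API_KEY> ruby main.rb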

Nano—a cost-effective alternative

Switching between Best and Nano is only a matter of setting the speech model parameter. To use Nano, set the speech_model to AssemblyAI::Transcripts::SpeechModel::NANO:

transcript = client.transcripts.transcribe(
  audio_url: "https://storage.googleapis.com/aai-web-samples/5_common_sports_injuries.mp3",
  speech_model: AssemblyAI::Transcripts::SpeechModel::NANO
)
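
Nano also supports many more languages than Best. If your audio isn’t in English, you can pass a language code along with the speech model. The sketch below assumes the SDK accepts language_code as a plain ISO code string, mirroring the underlying API, and uses a hypothetical spanish_audio_url variable; check the SDK reference for the exact parameter types and available language-code constants.

# Assumption: spanish_audio_url holds a URL to Spanish-language audio
transcript = client.transcripts.transcribe(
  audio_url: spanish_audio_url,
  speech_model: AssemblyAI::Transcripts::SpeechModel::NANO,
  language_code: 'es' # assumed to accept a plain ISO 639-1 code, as in the HTTP API
)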

Best, Nano and More with Audio Intelligence

We just transcribed audio with Universal-1 through the Best class of models, and with Nano as a cost-effective alternative.

Beyond transcription, AssemblyAI offers many more features to explore, such as:

  • Entity detection to automatically identify and categorize key information.
  • Content moderation for detecting inappropriate content in audio files to ensure that your content is safe for all audiences.
  • PII redaction to minimize sensitive information about individuals by automatically identifying and removing it from your transcript.
  • LeMUR for applying Large Language Models (LLMs) to audio data in a single line of code.
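
Most of these features are enabled with a single extra parameter on the transcription request. As a rough sketch, the example below assumes the SDK mirrors the HTTP API's entity_detection parameter and exposes the detected entities on the transcript; the exact field names may differ, so treat this as an outline rather than the definitive API:

# Request entity detection alongside the transcription (assumed parameter name)
transcript = client.transcripts.transcribe(
  audio_url: "https://storage.googleapis.com/aai-web-samples/5_common_sports_injuries.mp3",
  entity_detection: true
)

# Each detected entity is assumed to carry its type and the matched text
transcript.entities.each do |entity|
  puts "#{entity.entity_type}: #{entity.text}"
end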

You can also learn more about our approach to creating superhuman Speech AI models on our Research page.