Apply LLMs to audio files
Learn how to leverage LLMs for speech using LeMUR.
Overview
A Large Language Model (LLM) is a machine learning model trained on large amounts of text that can understand and generate natural language. LeMUR is a framework that lets you apply LLMs to audio transcripts, for example to ask questions about a call or to summarize a meeting.
By the end of this tutorial, you’ll be able to use LeMUR to summarize an audio file.
Here’s the full sample code for what you’ll build in this tutorial:
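Using the Python SDK, the full flow might look like the following sketch. The API key placeholder, the example audio URL, and the chosen model constant are assumptions; substitute your own values, and note that available model names can vary by SDK version.

```python
import assemblyai as aai

# Replace with your AssemblyAI API key (placeholder)
aai.settings.api_key = "<YOUR_API_KEY>"

# Example audio file; replace with your own file path or URL (assumption)
audio_url = "https://example.com/audio.mp3"

# Transcribe the audio file
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_url)

# Prompt LeMUR with instructions for the text output
prompt = "Provide a brief summary of the transcript."

# final_model selects the LLM that processes the task
result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```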
When you run the code, it prints a summary of the audio file.
Before you begin
To complete this tutorial, you need:
- Python, TypeScript, .NET, Ruby, or PHP installed.
- An AssemblyAI account with a credit card set up.
- Basic understanding of how to Transcribe an audio file.
Step 1: Install prerequisites
Install the package via pip:
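For the Python SDK, the package on PyPI is `assemblyai`:

```shell
pip install assemblyai
```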
Step 2: Transcribe an audio file
LeMUR uses one or more transcripts as input to generate text output. In this step, you'll transcribe an audio file to use as input for a prompt in a later step.
For more information about transcribing audio, see Transcribe an audio file.
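With the Python SDK, transcribing could look like this sketch. The API key placeholder and example URL are assumptions; replace them with your own values.

```python
import assemblyai as aai

# Replace with your AssemblyAI API key (placeholder)
aai.settings.api_key = "<YOUR_API_KEY>"

# Example file; replace with your own audio file path or URL (assumption)
audio_file = "https://example.com/audio.mp3"

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)

# Surface transcription failures early
if transcript.status == aai.TranscriptStatus.error:
    raise RuntimeError(transcript.error)
```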
Use existing transcript
If you’ve already transcribed an audio file you want to use, you can get an existing transcript using its ID. You can find the ID for previously transcribed audio files in the Processing queue.
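In the Python SDK, loading an existing transcript by ID might look like this; the ID placeholder is an assumption.

```python
import assemblyai as aai

# Replace with your AssemblyAI API key (placeholder)
aai.settings.api_key = "<YOUR_API_KEY>"

# Replace with the ID of a previously created transcript (placeholder)
transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")
```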
Step 3: Prompt LeMUR to generate text output
In this step, you’ll create a Custom task with LeMUR and use the transcript you created in the previous step as input.
The input to a custom task is called a prompt. A prompt is a text string that provides LeMUR with instructions on how to generate the text output.
For more techniques on how to build prompts, see Improving your prompt.
Write a prompt with instructions on how LeMUR should generate the text output.
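A prompt is just a string. For a summarization task, it could be as simple as the following; the wording is an example, so adjust it to the output you want.

```python
# Instructions for LeMUR on how to generate the text output
prompt = "Provide a brief summary of the transcript."
```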
Create a custom task with LeMUR, using the transcript and prompt as input. The final model defines the LLM to use to process the task. For available models to choose from, see Change the model type.
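With the Python SDK, running the custom task could look like the sketch below. The API key and transcript ID placeholders are assumptions, and the model constant is one example; check the model documentation for the options available in your SDK version.

```python
import assemblyai as aai

# Replace with your AssemblyAI API key (placeholder)
aai.settings.api_key = "<YOUR_API_KEY>"

# Reuse the transcript from the previous step (placeholder ID)
transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")

prompt = "Provide a brief summary of the transcript."

# final_model selects the LLM that processes the task
result = transcript.lemur.task(
    prompt, final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)
```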
Next steps
In this tutorial, you've learned how to generate LLM output based on your audio transcripts. The type of output depends on your prompt, so try exploring different prompts to see how they affect the output. Here are a few more prompts to try.
- “Provide an analysis of the transcript and offer areas to improve with exact quotes.”
- “What’s the main take-away from the transcript?”
- “Generate a set of action items from this transcript.”
To learn more about how to apply LLMs to your transcripts, see the following resources:
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.