Apply LLM Gateway to Audio Transcripts
Overview
A Large Language Model (LLM) is a machine learning model that uses natural language processing (NLP) to generate text. LLM Gateway is a unified API that provides access to 15+ models from the Claude, GPT, and Gemini families through a single interface. You can use LLM Gateway to analyze audio transcripts, for example, to ask questions about a call or to summarize a meeting.
By the end of this tutorial, you’ll be able to use LLM Gateway to summarize an audio file.
Here’s the full sample code for what you’ll build in this tutorial:
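The Python version below is a minimal sketch of that flow. It uses the AssemblyAI Python SDK for transcription and assumes LLM Gateway exposes an OpenAI-style chat completions endpoint; the endpoint URL, model name, and audio URL are placeholders, so check the LLM Gateway Overview for the exact values before running it.

```python
import requests
import assemblyai as aai

# Replace with your AssemblyAI API key.
aai.settings.api_key = "<YOUR_API_KEY>"

# Step 2: Transcribe a publicly accessible audio file (placeholder URL).
transcript = aai.Transcriber().transcribe("https://example.com/audio.mp3")
if transcript.status == aai.TranscriptStatus.error:
    raise RuntimeError(transcript.error)

# Step 3: Send the transcript text and a prompt to LLM Gateway.
# The endpoint, auth header, and model name below are assumptions;
# confirm them against the LLM Gateway Overview.
prompt = "Provide a brief summary of the transcript."

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers={"authorization": aai.settings.api_key},
    json={
        "model": "claude-3-5-sonnet",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": f"{prompt}\n\nTranscript:\n{transcript.text}",
            }
        ],
    },
)
response.raise_for_status()

# The gateway returns a chat-completions-style object; the generated text
# is in the first choice's message content.
print(response.json()["choices"][0]["message"]["content"])
```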
If you run the code above, the script prints the model's summary of the transcript to the console. The exact wording varies depending on the model and prompt you use.
Before you begin
To complete this tutorial, you need:
- Python, Node, .NET, Ruby, or PHP installed.
- An AssemblyAI account with a credit card set up.
- Basic understanding of how to Transcribe an audio file.
Step 1: Install prerequisites
Install the package via pip:
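For the Python snippets in this tutorial, a typical setup installs the AssemblyAI Python SDK for transcription and requests for the gateway call; the exact package list is an assumption, so adjust it to your environment:

```bash
pip install assemblyai requests
```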
Step 2: Transcribe an audio file
LLM Gateway uses transcript text as input to generate text output. In this step, you’ll transcribe an audio file that you can later use with LLM Gateway.
For more information about transcribing audio, see Transcribe an audio file.
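Here's a minimal Python sketch using the AssemblyAI Python SDK; the audio URL is a placeholder to replace with your own file:

```python
import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# Transcribe a publicly accessible audio file (placeholder URL).
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/audio.mp3")

if transcript.status == aai.TranscriptStatus.error:
    raise RuntimeError(transcript.error)

# transcript.text is what you'll send to LLM Gateway in Step 3.
print(transcript.text)
```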
Use existing transcript
If you’ve already transcribed an audio file you want to use, you can get an existing transcript using its ID. You can find the ID for previously transcribed audio files in the Processing queue.
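In Python, a previously created transcript can be loaded by ID through the SDK (the ID below is a placeholder):

```python
import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# Load an existing transcript by its ID instead of transcribing again.
transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")
print(transcript.text)
```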
Step 3: Send transcript to LLM Gateway
In this step, you’ll send the transcript text to LLM Gateway along with a prompt to generate text output.
The prompt is a text string that provides the LLM with instructions on how to generate the text output. You’ll combine the prompt with the transcript text and send it to LLM Gateway using the chat completions API.
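In Python, you can build the request with requests. The base URL and header format shown here are assumptions, so confirm them against the LLM Gateway Overview:

```python
import requests

# Assumed endpoint and authentication header for LLM Gateway;
# check the LLM Gateway Overview for the current values.
LLM_GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"
headers = {"authorization": "<YOUR_API_KEY>"}
```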
Write a prompt with instructions on how the LLM should generate the text output.
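For example, a summarization prompt:

```python
prompt = "Provide a brief summary of the transcript."
```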
Send the transcript text and prompt to LLM Gateway. The model parameter defines which LLM to use. For available models, see LLM Gateway Overview.
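Continuing from the snippets above, here's a Python sketch of the request. It combines the prompt and the transcript text into a single user message; the model name is a placeholder, so pick one from the LLM Gateway Overview:

```python
response = requests.post(
    LLM_GATEWAY_URL,
    headers=headers,
    json={
        "model": "claude-3-5-sonnet",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": f"{prompt}\n\nTranscript:\n{transcript.text}",
            }
        ],
    },
)
response.raise_for_status()

# The generated text is in the first choice of the chat-completions response.
print(response.json()["choices"][0]["message"]["content"])
```

If the request succeeds, the printed text is the model's summary of the transcript.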
Next steps
In this tutorial, you've learned how to generate LLM output based on your audio transcripts using LLM Gateway. The type of output depends on your prompt, so try exploring different prompts to see how they affect the output. Here are a few more prompts to try:
- “Provide an analysis of the transcript and offer areas to improve with exact quotes.”
- “What’s the main take-away from the transcript?”
- “Generate a set of action items from this transcript.”
To learn more about LLM Gateway and working with different models, see the LLM Gateway Overview.
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.