Apply LLM Gateway to Streaming
Overview
A Large Language Model (LLM) is a machine learning model that uses natural language processing (NLP) to generate text. LLM Gateway is a unified API that provides access to 20+ models, including Claude, GPT, and Gemini models, through a single interface. You can use LLM Gateway to analyze streaming audio transcripts in real time, for example to summarize a live conversation or extract action items as they happen.
By the end of this tutorial, you’ll be able to use LLM Gateway to analyze a streaming audio transcript from your microphone.
Here’s the full sample code for what you’ll build in this tutorial:
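The original sample code is not reproduced here. As a sketch of the raw WebSocket approach covered in this tutorial, the full program might look like the following — the endpoint URL, message field names, the `LLMGatewayResponse` type, the model identifier, and the `llm_gateway` schema are assumptions, not verified API details:

```python
"""Hypothetical end-to-end sketch of this tutorial (raw WebSocket approach).

Endpoint URLs, message field names, and the llm_gateway schema are
assumptions based on the steps below, not verified API details.
"""
import json
import urllib.parse

API_KEY = "<YOUR_API_KEY>"
SAMPLE_RATE = 16000
FRAMES_PER_BUFFER = 800  # 50 ms of 16 kHz, 16-bit mono audio

transcript_turns = []  # text from completed turns


def build_connect_url(sample_rate, llm_gateway_params):
    """Attach the JSON-stringified llm_gateway object as a query parameter."""
    query = urllib.parse.urlencode({
        "sample_rate": sample_rate,
        "llm_gateway": json.dumps(llm_gateway_params),
    })
    return f"wss://streaming.assemblyai.com/v3/ws?{query}"


def on_message(ws, message):
    """Collect finalized turns and print LLM Gateway responses."""
    data = json.loads(message)
    if data.get("type") == "Turn" and data.get("end_of_turn"):
        transcript_turns.append(data["transcript"])
    elif data.get("type") == "LLMGatewayResponse":  # assumed message type
        print("LLM Gateway:", data.get("response"))


def main():
    # Third-party imports are kept local so the helpers above stay stdlib-only.
    import pyaudio
    import websocket

    url = build_connect_url(SAMPLE_RATE, {
        "model": "claude-sonnet-4-20250514",  # assumed model identifier
        "messages": [{"role": "system",
                      "content": "Summarize this conversation as it happens."}],
    })

    def on_open(ws):
        # Stream raw microphone audio to the socket in a background thread.
        import threading

        def stream_audio():
            audio = pyaudio.PyAudio()
            stream = audio.open(format=pyaudio.paInt16, channels=1,
                                rate=SAMPLE_RATE, input=True,
                                frames_per_buffer=FRAMES_PER_BUFFER)
            while ws.keep_running:
                ws.send(stream.read(FRAMES_PER_BUFFER),
                        websocket.ABNF.OPCODE_BINARY)

        threading.Thread(target=stream_audio, daemon=True).start()

    ws = websocket.WebSocketApp(
        url,
        header={"Authorization": API_KEY},
        on_open=on_open,
        on_message=on_message,
    )
    ws.run_forever()


if __name__ == "__main__":
    main()
```

Each piece of this sketch is explained step by step in the sections below.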
Before you begin
To complete this tutorial, you need:
- Python or Node.js installed.
- An AssemblyAI account with a credit card set up.
- A microphone connected to your computer.
- Basic understanding of how to Transcribe streaming audio.
Step 1: Install prerequisites
Install the required packages via pip:
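The exact package list depends on which approach you follow. For the raw WebSocket approach sketched in this tutorial, the following packages are an assumption (the analysis request in Step 3 uses only the Python standard library):

```shell
# websocket-client for the streaming connection, pyaudio for microphone capture
pip install websocket-client pyaudio
```

On some systems PyAudio also requires the PortAudio system library to be installed first.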
Step 2: Connect to Universal Streaming
In this step, you’ll set up a connection to the Universal Streaming API with the llm_gateway parameter. This parameter configures LLM Gateway to process your streaming transcripts.
For more information about streaming transcription, see Transcribe streaming audio.
The llm_gateway parameter is a JSON-stringified object that follows the same interface as the LLM Gateway chat completions API, so it accepts the same fields as a chat completions request (such as model and messages).
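As a sketch, the parameter might be built and attached to the connection URL like this — the endpoint path, the model identifier, and the exact field names are assumptions modeled on a chat-completions request body:

```python
import json
import urllib.parse

# Hypothetical llm_gateway object; the field names mirror a typical chat
# completions request body and are assumptions, not a verified schema.
llm_gateway_params = {
    "model": "claude-sonnet-4-20250514",  # assumed model identifier
    "messages": [
        {"role": "system",
         "content": "Summarize this conversation as it happens."}
    ],
}

# The object must be JSON-stringified before it goes into the query string.
query = urllib.parse.urlencode({
    "sample_rate": 16000,
    "llm_gateway": json.dumps(llm_gateway_params),
})
connect_url = f"wss://streaming.assemblyai.com/v3/ws?{query}"
```

Check the API reference for the exact WebSocket endpoint and the full set of supported fields.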
Step 3: Stream audio and analyze with LLM Gateway
In this step, you’ll stream audio from your microphone, collect the transcribed text from completed turns, and then send the accumulated transcript to LLM Gateway for analysis when the session ends.
Set up the event handlers to stream audio and collect transcripts from completed turns.
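A minimal sketch of the handlers, assuming the websocket-client callback style and the Turn message shape (`type`, `end_of_turn`, `transcript`) — treat these field names as assumptions if your SDK or API version differs:

```python
import json

transcript_turns = []  # text collected from completed turns


def on_message(ws, message):
    """Handle incoming Universal Streaming messages."""
    data = json.loads(message)
    if data.get("type") == "Turn" and data.get("end_of_turn"):
        # Keep only finalized turns so partial updates aren't duplicated.
        transcript_turns.append(data["transcript"])
    elif data.get("type") == "Termination":
        # Session over: join the turns and hand the result to your
        # analysis function (defined in the next part of this step).
        full_transcript = " ".join(transcript_turns)
        print("Session ended. Transcript:", full_transcript)


def on_error(ws, error):
    print("WebSocket error:", error)
```

Audio streaming itself (reading microphone frames with PyAudio and sending them as binary WebSocket messages) happens in an `on_open` handler, as in the full sample above.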
Define a function to send the accumulated transcript to LLM Gateway for analysis. This function uses the LLM Gateway chat completions API to process the transcript with your prompt.
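A sketch of that function using only the standard library. The endpoint URL, the model identifier, and the response shape (`choices[0].message.content`) are assumptions modeled on a chat-completions style API — check the LLM Gateway API reference for the exact values:

```python
import json
import urllib.request

API_KEY = "<YOUR_API_KEY>"
# Assumed LLM Gateway endpoint; verify against the API reference.
LLM_GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"


def build_analysis_request(transcript, prompt, model="claude-sonnet-4-20250514"):
    """Build the chat-completions request body (pure, so it's easy to test)."""
    return {
        "model": model,  # assumed model identifier
        "messages": [
            {"role": "system", "content": prompt},
            {"role": "user", "content": transcript},
        ],
    }


def analyze_transcript(transcript, prompt):
    """POST the accumulated transcript to LLM Gateway and return the reply text."""
    body = json.dumps(build_analysis_request(transcript, prompt)).encode("utf-8")
    request = urllib.request.Request(
        LLM_GATEWAY_URL,
        data=body,
        headers={"authorization": API_KEY, "content-type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # Assumed OpenAI-style response shape.
    return result["choices"][0]["message"]["content"]
```

Call `analyze_transcript(" ".join(transcript_turns), "Summarize this conversation.")` from your session-end handler.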
When using the raw WebSocket approach with llm_gateway in the connection parameters, LLM Gateway responses are received as LLMGatewayResponse messages through the WebSocket, handled by the on_message callback registered in the previous step. No separate API call is needed.
Next steps
In this tutorial, you’ve learned how to analyze streaming audio transcripts using LLM Gateway. The type of output depends on your prompt, so try exploring different prompts to see how they affect the output. Here are a few more prompts to try:
- “Provide an analysis of the transcript and offer areas to improve with exact quotes.”
- “What’s the main take-away from the transcript?”
- “Generate a set of action items from this transcript.”
To learn more about LLM Gateway and streaming, see the following resources:
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.