Apply LLM Gateway to Streaming
Overview
A Large Language Model (LLM) is a machine learning model that uses natural language processing (NLP) to generate text. LLM Gateway is a unified API that provides access to 15+ models from the Claude, GPT, and Gemini families through a single interface. You can use LLM Gateway to analyze streaming audio transcripts in real time, for example, to summarize a live conversation or extract action items as they happen.
By the end of this tutorial, you’ll be able to use LLM Gateway to analyze a streaming audio transcript from your microphone.
Here’s the full sample code for what you’ll build in this tutorial:
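The original sample code is not reproduced here, so the following is a minimal Python sketch of the flow this tutorial builds. It assumes the v3 Universal Streaming interface of the AssemblyAI Python SDK (`StreamingClient`, `StreamingEvents`, `TurnEvent`) and that `StreamingParameters` accepts an `llm_gateway` value as described in Step 2; the exact parameter and event names (including `StreamingEvents.LLMGatewayResponse`) may differ in your SDK version, so treat this as an outline rather than the official sample:

```python
"""Sketch: stream microphone audio and analyze the transcript via LLM Gateway.

SDK class and parameter names are assumptions based on the v3 streaming
interface; consult the SDK reference for the authoritative names.
"""
import json

API_KEY = "<YOUR_API_KEY>"            # your AssemblyAI API key
PROMPT = "Summarize this conversation."
MODEL = "claude-3-5-sonnet-20241022"  # hypothetical model identifier


def build_llm_gateway_param(model: str, prompt: str) -> str:
    """Build the JSON-stringified llm_gateway object (field names assumed)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })


def accumulate(turns: list) -> str:
    """Join the text of completed turns into one transcript."""
    return " ".join(t.strip() for t in turns if t.strip())


def main() -> None:
    # SDK imports are local so the pure helpers above work without the SDK.
    import assemblyai as aai
    from assemblyai.streaming.v3 import (
        StreamingClient,
        StreamingClientOptions,
        StreamingEvents,
        StreamingParameters,
        TurnEvent,
    )

    turns: list = []

    def on_turn(client: StreamingClient, event: TurnEvent) -> None:
        if event.end_of_turn:  # keep only completed turns
            turns.append(event.transcript)

    def on_llm_response(client: StreamingClient, event) -> None:
        # Assumed event: the SDK's LLMGatewayResponseEvent delivers the
        # analysis automatically (see the note in Step 3).
        print("Analysis:", event)

    client = StreamingClient(StreamingClientOptions(api_key=API_KEY))
    client.on(StreamingEvents.Turn, on_turn)
    client.on(StreamingEvents.LLMGatewayResponse, on_llm_response)  # assumed name
    client.connect(StreamingParameters(
        sample_rate=16000,
        llm_gateway=build_llm_gateway_param(MODEL, PROMPT),  # assumed parameter
    ))
    try:
        client.stream(aai.extras.MicrophoneStream(sample_rate=16000))
    finally:
        client.disconnect(terminate=True)
    print(accumulate(turns))


if __name__ == "__main__":
    main()
```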
Before you begin
To complete this tutorial, you need:
- Python or Node.js installed.
- An AssemblyAI account with a credit card set up.
- A microphone connected to your computer.
- Basic understanding of how to Transcribe streaming audio.
Step 1: Install prerequisites
Install the AssemblyAI Python SDK via pip:
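The `extras` bundle pulls in the audio dependencies used for microphone streaming (for the JavaScript SDK, the equivalent is `npm install assemblyai`):

```shell
# Install the AssemblyAI Python SDK with microphone-streaming extras
pip install "assemblyai[extras]"
```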
Step 2: Connect to Universal Streaming
In this step, you’ll set up a connection to the Universal Streaming API with the llm_gateway parameter. This parameter configures LLM Gateway to process your streaming transcripts.
For more information about streaming transcription, see Transcribe streaming audio.
The llm_gateway parameter is a JSON-stringified object that follows the same interface as the LLM Gateway chat completions API. It accepts the same fields as a chat completions request body.
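As a concrete illustration, the parameter value can be built with the standard library's `json` module. The field names below (`model`, `messages`, `max_tokens`) and the model identifier are assumptions following the usual chat-completions convention; check the LLM Gateway API reference for the authoritative field list:

```python
import json

# Hypothetical fields following the chat completions convention.
llm_gateway_param = json.dumps({
    "model": "claude-3-5-sonnet-20241022",  # hypothetical model id
    "messages": [
        {"role": "user", "content": "Summarize this conversation."}
    ],
    "max_tokens": 500,
})

# The result is a plain JSON string, ready to pass as the
# llm_gateway streaming parameter.
print(llm_gateway_param)
```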
Step 3: Stream audio and analyze with LLM Gateway
In this step, you’ll stream audio from your microphone, collect the transcribed text from completed turns, and then send the accumulated transcript to LLM Gateway for analysis when the session ends.
Set up the event handlers to stream audio and collect transcripts from completed turns.
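The accumulation logic can be sketched without the SDK. The `Turn` dataclass below is a stand-in for the SDK's `TurnEvent`, which is assumed here to expose `transcript` and `end_of_turn` fields; only the handler logic is the point:

```python
from dataclasses import dataclass


@dataclass
class Turn:
    """Stand-in for the SDK's TurnEvent (fields assumed)."""
    transcript: str
    end_of_turn: bool


completed_turns = []


def on_turn(turn: Turn) -> None:
    # Keep only completed turns; partial turns are superseded by the
    # final version of the same turn.
    if turn.end_of_turn and turn.transcript:
        completed_turns.append(turn.transcript)


# Simulated stream of events: a partial turn, then two completed turns.
for t in [Turn("Hello every", False),
          Turn("Hello everyone.", True),
          Turn("Let's begin.", True)]:
    on_turn(t)

transcript = " ".join(completed_turns)
print(transcript)  # -> "Hello everyone. Let's begin."
```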
Define a function to send the accumulated transcript to LLM Gateway for analysis. This function uses the LLM Gateway chat completions API to process the transcript with your prompt.
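A direct call to a chat-completions-style endpoint can be sketched with the standard library alone. The endpoint URL, header names, and request/response shapes below are assumptions modeled on the chat completions convention, not the confirmed LLM Gateway contract, so verify them against the API reference before use:

```python
import json
import urllib.request

# Hypothetical endpoint; check the LLM Gateway docs for the real URL.
LLM_GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"


def build_payload(transcript: str, prompt: str, model: str) -> dict:
    """Assemble a chat-completions-style request body (fields assumed)."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": f"{prompt}\n\nTranscript:\n{transcript}"}
        ],
    }


def analyze_transcript(transcript: str, prompt: str,
                       model: str, api_key: str) -> str:
    """POST the accumulated transcript and return the model's reply text."""
    req = urllib.request.Request(
        LLM_GATEWAY_URL,
        data=json.dumps(build_payload(transcript, prompt, model)).encode(),
        headers={"authorization": api_key,
                 "content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Chat-completions-style responses put the text here (shape assumed).
    return body["choices"][0]["message"]["content"]
```

In production you would add error handling and a timeout; this sketch keeps only the request/response shape.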
When using the Python SDK with LLMGatewayConfig, analysis responses are received automatically through the LLMGatewayResponseEvent event handler registered in the previous step. No separate API call is needed.
Next steps
In this tutorial, you’ve learned how to analyze streaming audio transcripts using LLM Gateway. The type of output depends on your prompt, so try exploring different prompts to see how they affect the output. Here are a few more prompts to try:
- “Provide an analysis of the transcript and offer areas to improve with exact quotes.”
- “What’s the main take-away from the transcript?”
- “Generate a set of action items from this transcript.”
To learn more about LLM Gateway and streaming, see the following resources:
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.