Universal-3 Pro on LiveKit
Overview
This guide covers integrating AssemblyAI’s Universal-3 Pro streaming speech-to-text model into a LiveKit voice agent using the Agents framework.
When not explicitly provided, the default endpointing parameters for Universal-3 Pro differ on LiveKit versus using AssemblyAI’s API directly:
- LiveKit AssemblyAI plugin defaults: `min_turn_silence=100`, `max_turn_silence=100`
- AssemblyAI API defaults: `min_turn_silence=100`, `max_turn_silence=1000`
However, you can always override these by passing your own preferred values explicitly.
Misconfiguring these parameters is the most common cause of poor performance. Read the Turn detection section below for the recommended values per turn detection mode.
Support for Universal-3 Pro requires livekit-agents version 1.4.4 or later.
Turn detection
In LiveKit, how your agent detects the end of a user’s turn is controlled by the turn_detection parameter in AgentSession.
Universal-3 Pro uses a punctuation-based turn detection system, which checks for terminal punctuation (. ? !) after periods of silence rather than using a confidence score.
This means the min_turn_silence and max_turn_silence parameters you pass to AssemblyAI directly control when transcripts are emitted and when turns end. For more details on how this works, see Configuring turn detection.
Default parameter differences
Universal-3 Pro’s endpointing is controlled by two AssemblyAI API parameters — min_turn_silence and max_turn_silence — that you pass to the STT plugin. These are separate from LiveKit’s min_endpointing_delay and max_endpointing_delay.
The LiveKit plugin defaults are optimized for third-party turn detection models, where you want transcripts handed off as fast as possible. When using turn_detection="stt", you should explicitly set max_turn_silence=1000 if you’d like to mimic the behavior of streaming directly to the API without LiveKit.
Tuning endpointing parameters
The defaults above are used when no parameters are explicitly provided. You will likely need to experiment with different values depending on your use case:
- Increase `min_turn_silence` when brief pauses cause the speculative EOT check to fire too early, ending turns on terminal punctuation before the user has finished speaking.
- Increase `max_turn_silence` when the forced turn end is cutting off users mid-thought or splitting entities like phone numbers across turns. A higher value lets the model wait longer before forcing the turn to end when it is unsure.
See the Entity splitting tradeoff section for examples.
STT-based Turn Detection (recommended)
With turn_detection="stt", AssemblyAI’s built-in punctuation-based turn detection determines when the user has finished speaking. AssemblyAI’s end_of_turn signals are then used directly by LiveKit to commit the turn.
In this mode, we recommend explicitly setting min_turn_silence=100 and max_turn_silence=1000. These are AssemblyAI’s API defaults and provide a good balance of responsiveness and accuracy.
The LiveKit plugin defaults to min_turn_silence=100 and max_turn_silence=100, which might be too aggressive for STT-based turn detection.
Recommended starting parameters (set on assemblyai.STT(), not on AgentSession):
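A minimal sketch of these settings (parameter names are taken from this guide; the exact `assemblyai.STT` signature may vary across plugin versions):

```python
from livekit.plugins import assemblyai

# Universal-3 Pro with AssemblyAI's API-default endpointing values,
# set explicitly because the LiveKit plugin defaults max_turn_silence to 100.
stt = assemblyai.STT(
    model="u3-rt-pro",
    min_turn_silence=100,   # ms of silence before the speculative EOT check
    max_turn_silence=1000,  # ms of silence before the turn is forced to end
)
```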
How it works:
- User speaks → audio streams to AssemblyAI
- User pauses for `100ms` → AssemblyAI checks for terminal punctuation
- If terminal punctuation (`.` `?` `!`) → turn ends immediately
- If no terminal punctuation → partial emitted, turn continues waiting
- If silence reaches `1000ms` → turn is forced to end regardless of punctuation
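This decision flow can be illustrated with a small function (purely a sketch for intuition, not plugin code):

```python
def turn_decision(silence_ms: int, transcript: str,
                  min_turn_silence: int = 100,
                  max_turn_silence: int = 1000) -> str:
    """Sketch of the punctuation-based endpointing flow described above."""
    if silence_ms >= max_turn_silence:
        return "end_of_turn"      # forced end, regardless of punctuation
    if silence_ms >= min_turn_silence:
        if transcript.rstrip().endswith((".", "?", "!")):
            return "end_of_turn"  # terminal punctuation -> turn ends
        return "partial"          # no terminal punctuation -> keep waiting
    return "listening"            # user still speaking

print(turn_decision(150, "What's my balance?"))  # end_of_turn
print(turn_decision(150, "My number is 555"))    # partial
print(turn_decision(1200, "My number is 555"))   # end_of_turn (forced)
```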
min_endpointing_delay is additive in STT mode
LiveKit’s min_endpointing_delay (default 0.5 seconds) is applied on top of AssemblyAI’s own endpointing. In STT mode, this delay starts after the STT end-of-speech signal, meaning it adds up to 500ms of extra latency by default.
Set min_endpointing_delay=0 to avoid this. AssemblyAI’s own endpointing parameters (min_turn_silence and max_turn_silence) already control the timing, so an additional delay on the LiveKit side is unnecessary latency.
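For example (other session parameters omitted; exact signatures may differ across versions):

```python
from livekit.agents import AgentSession
from livekit.plugins import assemblyai

session = AgentSession(
    stt=assemblyai.STT(model="u3-rt-pro", min_turn_silence=100, max_turn_silence=1000),
    turn_detection="stt",
    min_endpointing_delay=0,  # avoid stacking LiveKit's delay on AssemblyAI's endpointing
    # llm=..., tts=..., vad=...
)
```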
LiveKit turn detection (with MultilingualModel())
As a third-party turn detection model, LiveKit’s turn detector runs on top of STT output to make turn decisions. AssemblyAI’s role is then just to provide transcripts as quickly as possible, while the turn detection model decides when the user is actually done speaking.
Use `MultilingualModel()` rather than `EnglishModel()`: Universal-3 Pro supports English, Spanish, German, French, Portuguese, and Italian, and `MultilingualModel()` covers all of these languages.
The LiveKit plugin defaults of min_turn_silence=100 and max_turn_silence=100 work well here, as max_turn_silence is brought down to match min_turn_silence so that transcripts are handed off to the turn detection model as fast as possible.
MultilingualModel parameters (set on AgentSession, not on the STT plugin):
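A sketch of this configuration (values match the defaults described below; exact signatures may differ across plugin versions):

```python
from livekit.agents import AgentSession
from livekit.plugins import assemblyai, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel

session = AgentSession(
    stt=assemblyai.STT(model="u3-rt-pro"),  # plugin defaults (100/100) work well here
    vad=silero.VAD.load(),                  # required in this mode
    turn_detection=MultilingualModel(),
    min_endpointing_delay=0.5,  # wait after a predicted turn boundary
    max_endpointing_delay=3.0,  # max wait when the model expects more speech
)
```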
How it works:
- User speaks → audio streams to AssemblyAI
- User pauses for `100ms` → AssemblyAI emits the transcript immediately (final and partial are the same)
- LiveKit’s `MultilingualModel()` evaluates the transcript in conversational context
- If the model predicts a likely turn boundary → waits `min_endpointing_delay` (0.5s), then commits the turn
- If the model predicts the user will continue → waits up to `max_endpointing_delay` (3.0s) for more speech
Other turn detection modes
- `vad`:
  - Detect end of turn from speech and silence data alone using Silero VAD.
  - Turn boundaries are determined purely by voice activity without semantic context.
  - AssemblyAI’s turn detection parameters still control when transcripts are emitted, but it is recommended to leave them at the plugin defaults (`min_turn_silence=100`, `max_turn_silence=100`) so transcripts arrive as quickly as possible.
- `manual`:
  - Disable automatic turn detection entirely.
  - You control turns explicitly using `session.commit_user_turn()`, `session.clear_user_turn()`, and `session.interrupt()`.
  - See the manual turn control docs for details.
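A sketch of manual control (method names as listed above; the surrounding agent logic that decides when to call them is omitted):

```python
# With turn_detection="manual", nothing is committed automatically.
session.interrupt()         # stop the agent's current speech on barge-in
session.clear_user_turn()   # discard any pending user input
session.commit_user_turn()  # commit buffered input and trigger a response
```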
Entity splitting tradeoff
Lower min_turn_silence and max_turn_silence values produce faster transcripts but can split entities or utterances across turns. The two parameters affect this differently.
min_turn_silence too low
- Speculative check fires too early, splitting entities on punctuation.
- Example: User spells out an email address with brief pauses between parts. The speculative check fires at 100ms of silence, and the model adds terminal punctuation to each segment, ending the turn prematurely.
max_turn_silence too low
- Forced turn-end cuts off the user mid-thought.
- Example: User pauses longer than 1 second to think mid-sentence. The forced end fires at 1000ms, splitting the utterance into two turns regardless of punctuation.
Universal-3 Pro’s formatting is significantly better when it has full context in a single turn — email addresses, phone numbers, credit card numbers, and physical addresses all benefit from this.
LLMs downstream can usually piece together split entities, but if your use case involves alphanumeric dictation or entity extraction, consider increasing min_turn_silence and max_turn_silence during those portions of the conversation.
You can update configuration mid-stream to raise max_turn_silence temporarily (e.g., to 2000–4000 ms) when expecting entity input, then lower it again afterward.
Even when using third-party turn detection, you may want to increase min_turn_silence or max_turn_silence if users are likely to speak slowly or dictate entities. While this adds latency, it improves accuracy by giving the model more audio context before emitting a transcript and keeping the full entity complete within the same turn.
VAD configuration
With turn_detection="stt", AssemblyAI also sends SpeechStarted events that LiveKit uses for barge-in/interruption handling.
Silero VAD is not strictly required in this mode, but it is still recommended: because Silero runs locally, it can detect speech faster than waiting for AssemblyAI’s SpeechStarted signal. LiveKit respects whichever signal arrives first, so Silero provides faster interruption while AssemblyAI’s signal serves as a reliable backup.
With MultilingualModel(), Silero VAD is required, as it is the only source of START_OF_SPEECH events for interruption in this mode. AssemblyAI’s SpeechStarted event is not used.
Threshold alignment
LiveKit’s Silero VAD defaults to an activation_threshold of 0.5. AssemblyAI’s vad_threshold defaults to 0.3. For best performance, we recommend setting both to 0.3.
Both should be adjusted together to the same value to ensure accurate transcription and consistent barge-in thresholds.
When the thresholds are mismatched, you get a dead zone: if Silero is at 0.5 and AssemblyAI is at 0.3, AssemblyAI will be actively transcribing speech that LiveKit hasn’t detected yet, delaying interruption. Keeping them aligned eliminates this.
If you’re in a noisy environment and receiving false speech triggers, raise both stt.vad_threshold and vad.activation_threshold thresholds together.
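A sketch of the aligned configuration (exact signatures may differ across plugin versions):

```python
from livekit.plugins import assemblyai, silero

# Keep both thresholds at the same value (0.3 recommended) to avoid a
# dead zone between AssemblyAI's VAD and LiveKit's interruption VAD.
stt = assemblyai.STT(model="u3-rt-pro", vad_threshold=0.3)
vad = silero.VAD.load(activation_threshold=0.3)
```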
Prompt engineering
Beta feature
Prompting is considered a beta feature for Universal-3 Pro.
While it can be a powerful tool for customizing transcription output or improving accuracy in certain use cases, we recommend starting without a prompt to first establish baseline performance.
Once the default prompt has been tested, you can experiment with custom prompts to further optimize for your use case, for example specifying the language mix to expect (e.g., English and Hindi) or the domain (e.g., medical, legal).
Universal-3 Pro supports a prompt parameter for custom transcription instructions. When no prompt is provided, a default prompt optimized for native (i.e. STT-based) turn detection is used automatically.
Tips:
- Start with no prompt: the default prompt delivers strong accuracy out of the box; only add a custom prompt if you need to alter this behavior.
- Specify the audio context: accent, domain, expected utterance length, etc.
- Define punctuation rules: can improve downstream LLM processing
- Preserve speech patterns: instruct the model to keep disfluencies and filler words for more natural agent interactions
Key terms boosting
Instead of prompt, use keyterms_prompt to boost recognition of specific names, brands, or domain terms:
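For example (the terms below are illustrative; use names specific to your domain):

```python
from livekit.plugins import assemblyai

stt = assemblyai.STT(
    model="u3-rt-pro",
    keyterms_prompt=["AssemblyAI", "LiveKit", "Universal-3 Pro"],
)
```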
Updating configuration mid-stream
You can update prompt, keyterms_prompt, min_turn_silence, and max_turn_silence during an active session using update_options.
This is useful for dynamically adjusting turn detection behavior, like increasing max_turn_silence when expecting entity dictation, then lowering it again afterward. For more information, see update configuration mid-stream.
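For example (assuming `stt` is the `assemblyai.STT` instance passed to your `AgentSession`):

```python
# Give the caller extra time while they read out a card number:
stt.update_options(max_turn_silence=3000)

# ...once the entity has been captured, restore the recommended value:
stt.update_options(max_turn_silence=1000)
```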
Build and run your agent
Installation
Install the plugin and necessary packages (silero, codecs, dotenv) from PyPI:
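For example (package names as published on PyPI; the `[codecs]` extra name is an assumption — adjust pins and extras to your project):

```shell
pip install "livekit-agents[codecs]>=1.4.4" \
  livekit-plugins-assemblyai \
  livekit-plugins-silero \
  python-dotenv
```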
Make sure to install the latest version of livekit-agents from PyPI (support for Universal-3 Pro was added in livekit-agents@1.4.4).
Older versions of the plugin will not recognize the u3-rt-pro model, resulting in a validation error.
If you plan to use LiveKit turn detection with MultilingualModel(), you also need to install the turn detector plugin:
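For example:

```shell
pip install livekit-plugins-turn-detector
```

The turn detector runs a local model, so its weights must be fetched before first use (LiveKit Agents provides a `download-files` CLI subcommand on your agent script for this).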
Noise cancellation can introduce audio artifacts that negatively impact transcription quality. In most cases, these artifacts cause more harm than the background noise itself, so we recommend not applying any audio pre-processing before the audio reaches Universal-3 Pro.
For a complete voice agent, you will also need to install LLM and TTS plugins for your chosen providers. See the LiveKit plugins documentation for available options.
Authentication
Set your API keys in a .env file:
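For example (these are the standard variable names read by LiveKit and `python-dotenv`):

```
# .env
ASSEMBLYAI_API_KEY=<your-assemblyai-api-key>
LIVEKIT_URL=<your-livekit-server-url>
LIVEKIT_API_KEY=<your-livekit-api-key>
LIVEKIT_API_SECRET=<your-livekit-api-secret>
```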
You can obtain an AssemblyAI API key by signing up here and navigating to the API Keys tab of the dashboard.
Recommended configuration
The following example uses turn_detection="stt" (recommended).
Pay close attention to the comments for using with MultilingualModel().
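A sketch of a complete entrypoint under these recommendations (LLM and TTS provider choices are placeholders, and exact plugin signatures may differ across versions):

```python
from dotenv import load_dotenv

from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import assemblyai, silero

load_dotenv()


async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()
    session = AgentSession(
        stt=assemblyai.STT(
            model="u3-rt-pro",
            # Explicit API defaults: the plugin would otherwise use 100/100.
            min_turn_silence=100,
            max_turn_silence=1000,
            vad_threshold=0.3,
        ),
        vad=silero.VAD.load(activation_threshold=0.3),  # aligned with vad_threshold
        turn_detection="stt",
        min_endpointing_delay=0,  # AssemblyAI's endpointing already handles timing
        # For LiveKit turn detection instead:
        #   turn_detection=MultilingualModel(),
        #   leave max_turn_silence at the plugin default of 100,
        #   and keep min_endpointing_delay at its 0.5s default.
        # llm=..., tts=...  # add your chosen LLM and TTS plugins here
    )
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```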
Running your agent
Start in development mode
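Assuming your entrypoint file is named `agent.py`:

```shell
python agent.py dev
```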
Test in the LiveKit Playground
- Go to agents-playground.livekit.io
- Connect to your LiveKit Cloud project (same credentials as your `.env`)
- Click Connect: a room will be created, your agent will join, and you can start talking
Parameters reference
Universal-3 Pro parameters
These are the key parameters to tune for LiveKit when using Universal-3 Pro:
Set to "u3-rt-pro" for Universal-3 Pro.
List of terms to boost recognition for. Appended to the default prompt automatically.
Custom transcription instructions for the model. When not provided, a default prompt optimized for native turn detection is automatically applied.
Prompting is a beta feature for Universal-3 Pro. Start with no prompt to establish baseline performance before experimenting with custom prompts.
Milliseconds of silence before a speculative end-of-turn check. When the check fires, the model looks for terminal punctuation to decide whether the turn has ended.
Maximum milliseconds of silence before the turn is forced to end, regardless of punctuation. The LiveKit plugin defaults to 100. Set to 1000 when using turn_detection="stt".
AssemblyAI’s internal Silero VAD threshold. Universal-3 Pro defaults to 0.3, unlike Universal-Streaming’s 0.4. Align with LiveKit’s Silero activation_threshold for consistent behavior.
Universal-3 Pro code-switches natively between supported languages. This parameter controls whether language_code and language_confidence are included in turn messages. Defaults to true in the LiveKit plugin, but false when using the API directly.
General STT parameters
These parameters apply to all AssemblyAI streaming models and can remain the same between models:
The sample rate of the audio stream.
The encoding of the audio stream. Allowed values: pcm_s16le, pcm_mulaw.
Legacy parameters
These parameters apply to the universal-streaming-english and universal-streaming-multilingual AssemblyAI streaming models, but do not affect Universal-3 Pro:
Confidence threshold for end-of-turn detection. Universal-3 Pro uses punctuation-based turn detection instead.
Whether to return formatted final transcripts. Universal-3 Pro always returns formatted transcripts, so this parameter no longer applies.
Troubleshooting
Migration from standard AssemblyAI STT
If you are migrating from the standard AssemblyAI streaming model: