Universal-3 Pro on LiveKit

Overview

This guide covers integrating AssemblyAI’s Universal-3 Pro streaming speech-to-text model into a LiveKit voice agent using the Agents framework.

When not explicitly provided, the default endpointing parameters for Universal-3 Pro differ on LiveKit versus using AssemblyAI’s API directly:

  • LiveKit AssemblyAI plugin defaults:
    • min_turn_silence=100
    • max_turn_silence=100
  • AssemblyAI API defaults:
    • min_turn_silence=100
    • max_turn_silence=1000

However, you can always override these by passing your own preferred values explicitly.
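For example, you could pass the API-default values explicitly so your agent's behavior does not depend on the plugin's choices (a minimal sketch using the parameter names from this guide):

```python
from livekit.plugins import assemblyai

stt = assemblyai.STT(
    model="u3-rt-pro",
    min_turn_silence=100,   # match the AssemblyAI API default
    max_turn_silence=1000,  # match the API default (plugin default is 100)
)
```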

Misconfiguring these parameters is the most common cause of poor performance. Read the Turn detection section below for the recommended values per turn detection mode.

Support for Universal-3 Pro requires livekit-agents version 1.4.4 or later.

Turn detection

In LiveKit, how your agent detects the end of a user’s turn is controlled by the turn_detection parameter in AgentSession.

Universal-3 Pro uses a punctuation-based turn detection system, which checks for terminal punctuation (. ? !) after periods of silence rather than using a confidence score.

This means the min_turn_silence and max_turn_silence parameters you pass to AssemblyAI directly control when transcripts are emitted and when turns end. For more details on how this works, see Configuring turn detection.

Default parameter differences

Universal-3 Pro’s endpointing is controlled by two AssemblyAI API parameters — min_turn_silence and max_turn_silence — that you pass to the STT plugin. These are separate from LiveKit’s min_endpointing_delay and max_endpointing_delay.

| Parameter | AssemblyAI API default | LiveKit plugin default | Description |
| --- | --- | --- | --- |
| min_turn_silence | 100 ms | 100 ms | Silence before a speculative end-of-turn check. If terminal punctuation (. ? !) is found, the turn ends. If not, a partial is emitted and the turn continues. |
| max_turn_silence | 1000 ms | 100 ms | Maximum silence before forcing the turn to end, regardless of punctuation. |

The LiveKit plugin defaults are optimized for third-party turn detection models, where you want transcripts handed off as fast as possible. When using turn_detection="stt", you should explicitly set max_turn_silence=1000 if you’d like to mimic the behavior of streaming directly to the API without LiveKit.

Tuning endpointing parameters

These are the default values used when no parameters are explicitly provided. You will likely need to experiment with different values depending on your use case:

  • Increase min_turn_silence — when brief pauses cause the speculative EOT check to fire too early, ending turns on terminal punctuation before the user has finished speaking.
  • Increase max_turn_silence — when the forced turn end is cutting off users mid-thought or splitting entities like phone numbers across turns; a higher value lets the model wait longer before forcing the turn to end.

See the Entity splitting tradeoff section for examples.

STT-based turn detection (turn_detection="stt")

With turn_detection="stt", AssemblyAI’s built-in punctuation-based turn detection determines when the user has finished speaking. AssemblyAI’s end_of_turn signals are then used directly by LiveKit to commit the turn.

In this mode, we recommend explicitly setting min_turn_silence=100 and max_turn_silence=1000. These are AssemblyAI’s API defaults and provide a good balance of responsiveness and accuracy.

The LiveKit plugin defaults to min_turn_silence=100 and max_turn_silence=100, which might be too aggressive for STT-based turn detection.

Recommended starting parameters (set on assemblyai.STT(), not on AgentSession):

| Parameter | Recommended value | Description |
| --- | --- | --- |
| min_turn_silence | 100 ms | Silence duration before a speculative end-of-turn (EOT) check fires. |
| max_turn_silence | 1000 ms | Maximum silence before a turn is forced to end. |

How it works:

  1. User speaks → audio streams to AssemblyAI
  2. User pauses for 100ms → AssemblyAI checks for terminal punctuation
  3. If terminal punctuation (. ? !) → turn ends immediately
  4. If no terminal punctuation → partial emitted, turn continues waiting
  5. If silence reaches 1000ms → turn is forced to end regardless of punctuation
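The decision sequence above can be sketched as a toy function. This is an illustration only, not the model's actual logic — the real decision happens server-side on audio, and the punctuation itself is produced by the model:

```python
def end_of_turn(transcript: str, silence_ms: int,
                min_turn_silence: int = 100,
                max_turn_silence: int = 1000) -> bool:
    """Toy model of punctuation-based endpointing (illustration only)."""
    if silence_ms >= max_turn_silence:
        return True  # forced end, regardless of punctuation
    if silence_ms >= min_turn_silence:
        # Speculative check: end the turn only on terminal punctuation
        return transcript.rstrip().endswith((".", "?", "!"))
    return False  # not enough silence; no check fires yet

assert end_of_turn("It's John.", 150) is True     # punctuation found
assert end_of_turn("My number is", 150) is False  # partial; turn continues
assert end_of_turn("My number is", 1000) is True  # forced end
```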

min_endpointing_delay is additive in STT mode

LiveKit’s min_endpointing_delay (default 0.5 seconds) is applied on top of AssemblyAI’s own endpointing. In STT mode, this delay starts after the STT end-of-speech signal, meaning it adds up to 500ms of extra latency by default.

Set min_endpointing_delay=0 to avoid this. AssemblyAI’s own endpointing parameters (min_turn_silence and max_turn_silence) already control the timing, so an additional delay on the LiveKit side is unnecessary latency.

```python
session = AgentSession(
    turn_detection="stt",
    stt=assemblyai.STT(
        model="u3-rt-pro",
        min_turn_silence=100,   # Silence (ms) before a speculative end-of-turn check
        max_turn_silence=1000,  # Max silence (ms) before forcing the turn to end
        vad_threshold=0.3,
    ),
    vad=silero.VAD.load(
        activation_threshold=0.3,
    ),
    min_endpointing_delay=0,
)
```

LiveKit turn detection (with MultilingualModel())

As a third-party turn detection model, LiveKit’s turn detector runs on top of STT output to make turn decisions. AssemblyAI’s role is then just to provide transcripts as quickly as possible, while the turn detection model decides when the user is actually done speaking.

Use MultilingualModel() rather than EnglishModel(), as Universal-3 Pro supports English, Spanish, German, French, Portuguese, and Italian; MultilingualModel() covers all of these languages.

The LiveKit plugin defaults of min_turn_silence=100 and max_turn_silence=100 work well here, as max_turn_silence is brought down to match min_turn_silence so that transcripts are handed off to the turn detection model as fast as possible.

MultilingualModel parameters (set on AgentSession, not on the STT plugin):

| Parameter | Default | Description |
| --- | --- | --- |
| min_endpointing_delay | 0.5 s | Time to wait before committing a turn when the model predicts a likely boundary. |
| max_endpointing_delay | 3.0 s | Maximum time to wait when the model predicts the user will continue speaking. Has no effect without a turn detector model. |

How it works:

  1. User speaks → audio streams to AssemblyAI
  2. User pauses for 100ms → AssemblyAI immediately emits a transcript (final and partial transcripts are identical)
  3. LiveKit’s MultilingualModel() evaluates the transcript in conversational context
  4. If the model predicts a likely turn boundary → waits min_endpointing_delay (0.5s) then commits the turn
  5. If the model predicts the user will continue → waits up to max_endpointing_delay (3.0s) for more speech

```python
from livekit.plugins.turn_detector.multilingual import MultilingualModel

session = AgentSession(
    turn_detection=MultilingualModel(),
    stt=assemblyai.STT(
        model="u3-rt-pro",
        vad_threshold=0.3,
    ),
    vad=silero.VAD.load(
        activation_threshold=0.3,
    ),
    min_endpointing_delay=0.5,  # Time (s) to wait before committing a turn when the model is confident
    max_endpointing_delay=3.0,  # Max time (s) to wait when the model is not confident
)
```

Other turn detection modes

  • vad:

    • Detect end of turn from speech and silence data alone using Silero VAD.
    • Turn boundaries are determined purely by voice activity without semantic context.
    • AssemblyAI’s turn detection parameters still control when transcripts are emitted, but it is recommended to leave them at the plugin defaults (min_turn_silence=100, max_turn_silence=100) so transcripts arrive as quickly as possible.
  • manual:

    • Disable automatic turn detection entirely.
    • You control turns explicitly using session.commit_user_turn(), session.clear_user_turn(), and session.interrupt().
    • See the manual turn control docs for details.
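A minimal sketch of manual mode wiring. The three session methods are the ones named above; the handler names and where you call them from (e.g., a push-to-talk UI) are hypothetical:

```python
from livekit.agents import AgentSession
from livekit.plugins import assemblyai

session = AgentSession(
    turn_detection="manual",
    stt=assemblyai.STT(model="u3-rt-pro"),
)

# Hypothetical push-to-talk handlers:
def on_button_release():
    session.commit_user_turn()  # treat everything heard so far as one turn

def on_cancel_pressed():
    session.clear_user_turn()   # discard the pending, uncommitted user input

def on_stop_agent():
    session.interrupt()         # cut off the agent's current response
```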

Entity splitting tradeoff

Lower min_turn_silence and max_turn_silence values produce faster transcripts but can split entities or utterances across turns. The two parameters affect this differently.

min_turn_silence too low

  • Speculative check fires too early, splitting entities on punctuation.

  • Example: User spells out an email address with brief pauses between parts. The speculative check fires at 100ms of silence, and the model adds terminal punctuation to each segment, ending the turn prematurely.

```text
# With (min_turn_silence=100, max_turn_silence=1000)
"It's John." → FINAL (100ms pause, check fires, period found → turn ends)
"Smith." → FINAL
"At gmail.com." → FINAL

# With (min_turn_silence=400, max_turn_silence=1000)
"It's john.smith@gmail.com." → FINAL (single turn, properly formatted)
```

max_turn_silence too low

  • Forced turn-end cuts off user mid-thought.

  • Example: User pauses longer than 1 second to think mid-sentence. The forced end fires at 1000ms, splitting the utterance into two turns regardless of punctuation.

```text
# With (min_turn_silence=100, max_turn_silence=1000)
"I wanted to check on my order from—" → FINAL (1000ms silence, forced end)
"last Tuesday, order number 4829." → FINAL (new turn)

# With (min_turn_silence=100, max_turn_silence=2000)
"I wanted to check on my order from last Tuesday, order number 4829." → FINAL (single turn)
```

Universal-3 Pro’s formatting is significantly better when it has full context in a single turn — email addresses, phone numbers, credit card numbers, and physical addresses all benefit from this.

LLMs downstream can usually piece together split entities, but if your use case involves alphanumeric dictation or entity extraction, consider increasing min_turn_silence and max_turn_silence during those portions of the conversation.

You can update configuration mid-stream to raise max_turn_silence temporarily (e.g., to 2000–4000 ms) when expecting entity input, then lower it again afterward.

Even when using third-party turn detection, you may want to increase min_turn_silence or max_turn_silence if users are likely to speak slowly or dictate entities. While this adds latency, it improves accuracy by giving the model more audio context before emitting a transcript and keeping the full entity complete within the same turn.

VAD configuration

With turn_detection="stt", AssemblyAI also sends SpeechStarted events that LiveKit uses for barge-in/interruption handling.

Silero VAD is not strictly required in this mode, but it is still recommended: Silero runs locally and can be faster than waiting for AssemblyAI’s SpeechStarted signal. LiveKit respects whichever signal arrives first, so Silero provides faster interruption while AssemblyAI’s signal serves as a reliable backup.

With MultilingualModel(), Silero VAD is required, as it is the only source of START_OF_SPEECH events for interruption in this mode. AssemblyAI’s SpeechStarted event is not used.

Threshold alignment

LiveKit’s Silero VAD defaults to an activation_threshold of 0.5. AssemblyAI’s vad_threshold defaults to 0.3. For best performance, we recommend setting both to 0.3.

Both should be adjusted together to the same value to ensure accurate transcription and consistent barge-in thresholds.

When the thresholds are mismatched, you get a dead zone: if Silero is at 0.5 and AssemblyAI is at 0.3, AssemblyAI will be actively transcribing speech that LiveKit hasn’t detected yet, delaying interruption. Keeping them aligned eliminates this.

```python
session = AgentSession(
    stt=assemblyai.STT(
        model="u3-rt-pro",
        vad_threshold=0.3,  # AssemblyAI's internal VAD onset
    ),
    vad=silero.VAD.load(
        activation_threshold=0.3,  # Match AssemblyAI's threshold
    ),
)
```

If you’re in a noisy environment and receiving false speech triggers, raise both stt.vad_threshold and vad.activation_threshold together.
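For example, raising both to 0.5 (an illustrative starting point, not a tuned recommendation — experiment with your own audio):

```python
from livekit.agents import AgentSession
from livekit.plugins import assemblyai, silero

session = AgentSession(
    stt=assemblyai.STT(
        model="u3-rt-pro",
        vad_threshold=0.5,  # raised from 0.3 to reject more background noise
    ),
    vad=silero.VAD.load(
        activation_threshold=0.5,  # keep aligned with vad_threshold
    ),
)
```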

Prompt engineering

Beta feature

Prompting is considered a beta feature for Universal-3 Pro.

While it can be a powerful tool for customizing transcription output or improving accuracy in certain use cases, we recommend starting without a prompt to first establish baseline performance.

Once the default prompt has been tested, you can experiment with custom prompts to further optimize for your use case, such as the language mix to expect (e.g., English and Hindi) or the domain (e.g., medical, legal).

Universal-3 Pro supports a prompt parameter for custom transcription instructions. When no prompt is provided, a default prompt optimized for native (i.e. STT-based) turn detection is used automatically.

```python
stt=assemblyai.STT(
    model="u3-rt-pro",
    prompt="Your custom transcription instructions.",
)
```

Tips:

  • Start with no prompt: the default prompt delivers strong accuracy out of the box; only add a custom prompt if you need to alter this behavior.
  • Specify the audio context: accent, domain, expected utterance length, etc.
  • Define punctuation rules: consistent punctuation can improve downstream LLM processing.
  • Preserve speech patterns: instruct the model to keep disfluencies and filler words for more natural agent interactions.
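Putting the tips together, a prompt might look like the following. The prompt text itself is purely illustrative — validate any prompt against your own traffic before relying on it:

```python
from livekit.plugins import assemblyai

stt = assemblyai.STT(
    model="u3-rt-pro",
    prompt=(
        "Transcribe a customer support call. "
        "Expect short conversational utterances and occasional order numbers. "
        "Use standard punctuation, and keep filler words such as 'um' and 'uh'."
    ),
)
```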

Key terms boosting

Instead of prompt, use keyterms_prompt to boost recognition of specific names, brands, or domain terms:

```python
stt=assemblyai.STT(
    model="u3-rt-pro",
    keyterms_prompt=["AssemblyAI", "LiveKit", "Universal-3 Pro"],
)
```

Updating configuration mid-stream

You can update prompt, keyterms_prompt, min_turn_silence, and max_turn_silence during an active session using update_options.

This is useful for dynamically adjusting turn detection behavior, like increasing max_turn_silence when expecting entity dictation, then lowering it again afterward. For more information, see update configuration mid-stream.
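As a sketch of that pattern — update_options is the entry point named above, while the helper functions and the 3000 ms value are illustrative assumptions:

```python
from livekit.plugins import assemblyai

stt = assemblyai.STT(
    model="u3-rt-pro",
    min_turn_silence=100,
    max_turn_silence=1000,
)

def begin_entity_capture():
    # Widen the forced-end window while the user dictates an entity
    stt.update_options(max_turn_silence=3000)

def end_entity_capture():
    # Restore normal responsiveness afterward
    stt.update_options(max_turn_silence=1000)
```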

Build and run your agent

Installation

Install the plugin and necessary packages (silero, codecs, dotenv) from PyPI:

```bash
pip install "livekit-agents[assemblyai,silero,codecs]~=1.0" \
  python-dotenv
```

Make sure to install the latest version of livekit-agents from PyPI (support for Universal-3 Pro was added in livekit-agents@1.4.4). Older versions of the plugin will not recognize the u3-rt-pro model, resulting in a validation error.

If you plan to use LiveKit turn detection with MultilingualModel(), you also need to install the turn detector plugin:

```bash
pip install "livekit-plugins-turn-detector~=1.0"
```

Noise cancellation can introduce audio artifacts that negatively impact transcription quality. In most cases, these artifacts cause more harm than the background noise itself, so we recommend not applying any audio pre-processing before the audio reaches Universal-3 Pro.

For a complete voice agent, you will also need to install LLM and TTS plugins for your chosen providers. See the LiveKit plugins documentation for available options.

Authentication

Set your API keys in a .env file:

```text
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_livekit_api_key
LIVEKIT_API_SECRET=your_livekit_api_secret
ASSEMBLYAI_API_KEY=your_assemblyai_key
# Add API keys for your chosen LLM and TTS providers
```

You can obtain an AssemblyAI API key by signing up here and navigating to the API Keys tab of the dashboard.

The following example uses turn_detection="stt" (recommended).

Pay close attention to the comments for using with MultilingualModel().

```python
from dotenv import load_dotenv
from livekit import agents
from livekit.agents import AgentSession, Agent
from livekit.plugins import (
    assemblyai,
    silero,
)
# For MultilingualModel, uncomment the following:
# from livekit.plugins.turn_detector.multilingual import MultilingualModel

load_dotenv()


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant.")


async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        stt=assemblyai.STT(
            model="u3-rt-pro",
            min_turn_silence=100,
            max_turn_silence=1000,  # When turn_detection="stt", override the plugin default of 100.
            # If using MultilingualModel(), the plugin defaults (min: 100, max: 100) work well;
            # omit min_turn_silence and max_turn_silence above if preferred.
            vad_threshold=0.3,  # Match Silero's activation_threshold
        ),
        # llm=your_llm_plugin(),  # Add your LLM provider here
        # tts=your_tts_plugin(),  # Add your TTS provider here
        vad=silero.VAD.load(
            activation_threshold=0.3,  # Match AssemblyAI's internal VAD threshold
        ),
        turn_detection="stt",
        # To use LiveKit's turn detection instead, replace the line above with:
        # turn_detection=MultilingualModel(),

        min_endpointing_delay=0,  # Avoid additive delay in STT mode
        # If using MultilingualModel(), set these instead:
        # min_endpointing_delay=0.5,
        # max_endpointing_delay=3.0,
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
    )

    await session.generate_reply(
        instructions="Greet the user and offer your assistance."
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```

Running your agent

Start in development mode

```bash
python your_agent_file.py dev
```

Test in the LiveKit Playground

  1. Go to agents-playground.livekit.io
  2. Connect to your LiveKit Cloud project (same credentials as your .env)
  3. Click Connect — a room will be created, your agent will join, and you can start talking

Parameters reference

Universal-3 Pro parameters

These are the key parameters to tune for LiveKit when using Universal-3 Pro:

speech_model
string

Set to "u3-rt-pro" for Universal-3 Pro.

keyterms_prompt
list of strings

List of terms to boost recognition for. Appended to the default prompt automatically.

prompt
string

Custom transcription instructions for the model. When not provided, a default prompt optimized for native turn detection is automatically applied.

Prompting is a beta feature for Universal-3 Pro. Start with no prompt to establish baseline performance before experimenting with custom prompts.

min_turn_silence
integer
Defaults to 100

Milliseconds of silence before a speculative end-of-turn check. When the check fires, the model looks for terminal punctuation to decide whether the turn has ended.

max_turn_silence
integer
Defaults to 100

Maximum milliseconds of silence before the turn is forced to end, regardless of punctuation. The LiveKit plugin defaults to 100. Set to 1000 when using turn_detection="stt".

vad_threshold
float
Defaults to 0.3

AssemblyAI’s internal Silero VAD threshold. Universal-3 Pro defaults to 0.3, unlike Universal-Streaming’s 0.4. Align with LiveKit’s Silero activation_threshold for consistent behavior.

language_detection
boolean
Defaults to true

Universal-3 Pro code-switches natively between supported languages. This parameter controls whether language_code and language_confidence are included in turn messages. Defaults to true in the LiveKit plugin, but false when using the API directly.

General STT parameters

These parameters apply to all AssemblyAI streaming models and can remain the same between models:

sample_rate
int
Defaults to 16000

The sample rate of the audio stream.

encoding
str
Defaults to pcm_s16le

The encoding of the audio stream. Allowed values: pcm_s16le, pcm_mulaw.

Legacy parameters

These parameters apply to the universal-streaming-english and universal-streaming-multilingual AssemblyAI streaming models, but do not affect Universal-3 Pro:

end_of_turn_confidence_threshold
float
Defaults to 0.4

Confidence threshold for end-of-turn detection. Universal-3 Pro uses punctuation-based turn detection instead.

format_turns
boolean
Defaults to false

Whether to return formatted final transcripts. Universal-3 Pro always returns formatted transcripts, so this parameter no longer applies.

Troubleshooting

| Issue | Cause | Solution |
| --- | --- | --- |
| Extra latency with turn_detection="stt" | LiveKit’s min_endpointing_delay is additive in STT mode | Set min_endpointing_delay=0 on AgentSession |
| No interruption handling | Missing VAD | Ensure vad=silero.VAD.load() is set, with activation_threshold equal to vad_threshold (default 0.3) |
| Turn over-segmentation | min_turn_silence too low | Increase from 100 to 200–500 |
| Entities split across turns | max_turn_silence too low | Increase max_turn_silence (e.g., 1500–3500) |
| Latency on non-terminal utterances | max_turn_silence too high | Lower max_turn_silence |

Migration from standard AssemblyAI STT

If you are migrating from the standard AssemblyAI streaming model:

| Change | From | To |
| --- | --- | --- |
| Model | assemblyai.STT() | assemblyai.STT(model="u3-rt-pro") |
| Turn detection | turn_detection="stt" or EnglishModel() | turn_detection="stt" or MultilingualModel() |
| VAD | Optional | Set vad=silero.VAD.load() with activation_threshold matching vad_threshold |
| min_turn_silence | 400 (old default) | 100 (new default) |
| max_turn_silence | 1280 (old default) | 1000 (API default) or 100 (with 3rd-party turn detector) |
| end_of_turn_confidence_threshold | Configurable | Not applicable; Universal-3 Pro uses punctuation-based turn detection |
| min_endpointing_delay | Default 0.5 | Set to 0 when using turn_detection="stt" |
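As a minimal before/after sketch of the migration (values taken from the table above; the commented-out lines show a legacy configuration):

```python
from livekit.plugins import assemblyai

# Before: standard streaming model with legacy parameters
# stt = assemblyai.STT(
#     end_of_turn_confidence_threshold=0.4,
#     format_turns=True,
# )

# After: Universal-3 Pro with turn_detection="stt"
stt = assemblyai.STT(
    model="u3-rt-pro",
    min_turn_silence=100,
    max_turn_silence=1000,
)
```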