Streaming Speech-to-Text
AssemblyAI’s Streaming Speech-to-Text (STT) allows you to transcribe live audio streams with high accuracy and low latency. By streaming your audio data to our secure WebSocket API, you can receive transcripts back within a few hundred milliseconds.
Supported languages
Streaming Speech-to-Text is only available for English.
Audio requirements
The audio format must conform to the following requirements:
- PCM16 or Mu-law encoding (see Specify the encoding)
- A sample rate that matches the value of the supplied sample_rate parameter
- Single-channel audio
- 100 to 2000 milliseconds of audio per message
Audio segments between 100 ms and 450 ms long produce the best transcription accuracy.
Specify the encoding
By default, transcriptions expect PCM16 encoding. To use Mu-law encoding instead, set the WithRealTimeEncoding parameter to aai.RealTimeEncodingPCMMulaw:
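A minimal sketch of creating a real-time client for Mu-law audio (the API key, the 8 kHz sample rate, and the WithRealTimeSampleRate option name are illustrative assumptions; WithRealTimeEncoding and aai.RealTimeEncodingPCMMulaw are named above):

```go
package main

import (
	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	// Create a real-time client that expects 8 kHz Mu-law audio
	// instead of the default PCM16 encoding.
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"),                 // placeholder
		aai.WithRealTimeSampleRate(8000),                       // assumed option name
		aai.WithRealTimeEncoding(aai.RealTimeEncodingPCMMulaw), // Mu-law
	)
	_ = client // connect and stream audio as usual
}
```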
Add custom vocabulary
You can add up to 2500 characters of custom vocabulary to boost the likelihood that those words and phrases are transcribed.
To do so, create a list of strings and specify the WithRealTimeWordBoost parameter:
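For example, a sketch along these lines (the client constructor and API key are placeholder assumptions; WithRealTimeWordBoost is named above):

```go
package main

import (
	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	// Words and phrases to boost during recognition.
	wordBoost := []string{"aws", "azure", "google cloud"}

	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"), // placeholder
		aai.WithRealTimeWordBoost(wordBoost),
	)
	_ = client // connect and stream audio as usual
}
```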
If you’re not using one of the SDKs, you must ensure that the word_boost parameter is a URL-encoded JSON array. See this code example.
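Without an SDK, the encoded parameter value can be built with the standard library alone. A sketch (the word_boost parameter name comes from the API; the helper function is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// buildWordBoostParam serializes a list of boosted words as the
// JSON array string that the word_boost query parameter expects,
// then URL-encodes it for use in the WebSocket URL.
func buildWordBoostParam(words []string) (string, error) {
	raw, err := json.Marshal(words)
	if err != nil {
		return "", err
	}
	return url.QueryEscape(string(raw)), nil
}

func main() {
	p, err := buildWordBoostParam([]string{"aws", "azure", "google cloud"})
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // append as ...&word_boost=<this value>
}
```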
Authenticate with a temporary token
If you need to authenticate on the client, you can avoid exposing your API key by using temporary authentication tokens. You should generate this token on your server and pass it to the client.
To generate a temporary token, call client.RealTime.CreateTemporaryToken(). Use the second parameter to specify how long the token should be valid for, in seconds.
The client should retrieve the token from the server and use the token to authenticate the transcriber.
Each token has a one-time use restriction and can only be used for a single session.
To use it, specify the WithRealTimeAuthToken parameter when creating the real-time client.
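Putting both halves together, a sketch (the constructor names, the response field, and the 480-second lifetime are assumptions; CreateTemporaryToken and WithRealTimeAuthToken are named above):

```go
package main

import (
	"context"

	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	ctx := context.Background()

	// Server side: generate a token valid for 480 seconds.
	// Keep the API key on the server; never ship it to clients.
	server := aai.NewClient("YOUR_API_KEY") // placeholder
	resp, err := server.RealTime.CreateTemporaryToken(ctx, 480)
	if err != nil {
		panic(err)
	}

	// Client side: authenticate with the token instead of the API key.
	// The token is single-use and valid for one session only.
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAuthToken(*resp.Token), // assumed response field
	)
	_ = client
}
```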
Manually end current utterance
To manually end an utterance, call ForceEndUtterance():
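A sketch, assuming a connected client and a context (the method signature is an assumption; ForceEndUtterance is named above):

```go
package main

import (
	"context"
	"log"

	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	ctx := context.Background()
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"), // placeholder
	)
	if err := client.Connect(ctx); err != nil {
		log.Fatal(err)
	}
	// ... stream audio ...

	// End the current utterance now; a final transcript
	// is produced immediately.
	if err := client.ForceEndUtterance(ctx); err != nil {
		log.Fatal(err)
	}
}
```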
Manually ending an utterance immediately produces a final transcript.
Configure the threshold for automatic utterance detection
You can configure the threshold for how long to wait before ending an utterance.
To change the threshold, call SetEndUtteranceSilenceThreshold while the client is connected.
By default, Streaming Speech-to-Text ends an utterance after 700 milliseconds of silence. You can change the threshold any number of times while the session is active. The valid range is 0 to 20000 milliseconds.
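For example, a sketch that waits a full second of silence before ending an utterance (the method signature is an assumption; SetEndUtteranceSilenceThreshold is named above):

```go
package main

import (
	"context"
	"log"

	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	ctx := context.Background()
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"), // placeholder
	)
	if err := client.Connect(ctx); err != nil {
		log.Fatal(err)
	}
	// Wait 1000 ms of silence before ending an utterance
	// (default is 700 ms; valid range is 0 to 20000 ms).
	if err := client.SetEndUtteranceSilenceThreshold(ctx, 1000); err != nil {
		log.Fatal(err)
	}
}
```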
Disable partial transcripts
If you’re only using the final transcript, you can disable partial transcripts to reduce network traffic.
Partial transcripts are disabled by default; they are only sent when you define the OnPartialTranscript callback. To keep them disabled, simply omit that callback.
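A sketch of opting in (the transcript type and field names are assumptions; RealTimeTranscriber and OnPartialTranscript are named in this guide):

```go
package main

import (
	"fmt"

	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	// Defining OnPartialTranscript opts in to partial transcripts;
	// omit it to receive only final transcripts.
	transcriber := &aai.RealTimeTranscriber{
		OnPartialTranscript: func(t aai.PartialTranscript) {
			fmt.Println("partial:", t.Text)
		},
		OnFinalTranscript: func(t aai.FinalTranscript) {
			fmt.Println("final:", t.Text)
		},
	}
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"), // placeholder
		aai.WithRealTimeTranscriber(transcriber),
	)
	_ = client // connect and stream audio as usual
}
```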
Enable extra session information
If you enable extra session information, the client receives a SessionInformation message right before the session termination message. To enable it, register a RealTimeTranscriber with an OnSessionInformation callback.
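A sketch (the SessionInformation field name is an assumption; the transcriber and callback are named above):

```go
package main

import (
	"fmt"

	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	transcriber := &aai.RealTimeTranscriber{
		// Delivered right before the session termination message.
		OnSessionInformation: func(info aai.SessionInformation) {
			fmt.Println("audio duration (s):", info.AudioDurationSeconds) // assumed field
		},
	}
	client := aai.NewRealTimeClientWithOptions(
		aai.WithRealTimeAPIKey("YOUR_API_KEY"), // placeholder
		aai.WithRealTimeTranscriber(transcriber),
	)
	_ = client // connect and stream audio as usual
}
```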
For best practices, see the Best Practices section in the Streaming guide.