Whisper streaming

Whisper streaming allows you to transcribe audio streams in 99 languages using the WhisperLiveKit model.

Configuration

To use Whisper streaming, include speech_model=whisper-rt as a query parameter in the WebSocket URL.

The whisper-rt model does not support the language parameter. The model automatically detects the language being spoken. Do not include a language parameter when using this model.
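As a sketch, here is how the connection URL could be assembled in Python. The endpoint and parameter names follow the examples later on this page; note that no language parameter is included:

```python
from urllib.parse import urlencode

# Base endpoint and query parameters; speech_model selects the Whisper streaming model.
BASE_URL = "wss://streaming.assemblyai.com/v3/ws"
params = {
    "sample_rate": 16000,
    "speech_model": "whisper-rt",
    # No "language" parameter: whisper-rt auto-detects the spoken language.
}
url = f"{BASE_URL}?{urlencode(params)}"
print(url)
```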

Supported languages

Whisper streaming supports 99 languages:

Code  Language
af    Afrikaans
am    Amharic
ar    Arabic
as    Assamese
az    Azerbaijani
ba    Bashkir
be    Belarusian
bg    Bulgarian
bn    Bengali
bo    Tibetan
br    Breton
bs    Bosnian
ca    Catalan
cs    Czech
cy    Welsh
da    Danish
de    German
el    Greek
en    English
es    Spanish
et    Estonian
eu    Basque
fa    Persian
fi    Finnish
fo    Faroese
fr    French
gl    Galician
gu    Gujarati
ha    Hausa
haw   Hawaiian
he    Hebrew
hi    Hindi
hr    Croatian
ht    Haitian Creole
hu    Hungarian
hy    Armenian
id    Indonesian
is    Icelandic
it    Italian
ja    Japanese
jw    Javanese
ka    Georgian
kk    Kazakh
km    Khmer
kn    Kannada
ko    Korean
la    Latin
lb    Luxembourgish
ln    Lingala
lo    Lao
lt    Lithuanian
lv    Latvian
mg    Malagasy
mi    Maori
mk    Macedonian
ml    Malayalam
mn    Mongolian
mr    Marathi
ms    Malay
mt    Maltese
my    Myanmar
ne    Nepali
nl    Dutch
nn    Nynorsk
no    Norwegian
oc    Occitan
pa    Punjabi
pl    Polish
ps    Pashto
pt    Portuguese
ro    Romanian
ru    Russian
sa    Sanskrit
sd    Sindhi
si    Sinhala
sk    Slovak
sl    Slovenian
sn    Shona
so    Somali
sq    Albanian
sr    Serbian
su    Sundanese
sv    Swedish
sw    Swahili
ta    Tamil
te    Telugu
tg    Tajik
th    Thai
tk    Turkmen
tl    Tagalog
tr    Turkish
tt    Tatar
uk    Ukrainian
ur    Urdu
uz    Uzbek
vi    Vietnamese
yi    Yiddish
yo    Yoruba
yue   Cantonese
zh    Chinese

Language detection

The Whisper streaming model supports automatic language detection, allowing you to identify which language is being spoken in real time. When enabled, the model returns the detected language code and a confidence score with each complete utterance and final turn.

Configuration

To enable language detection, include language_detection=true as a query parameter in the WebSocket URL:

wss://streaming.assemblyai.com/v3/ws?sample_rate=16000&speech_model=whisper-rt&language_detection=true

Output format

When language detection is enabled, each Turn message (with either a complete utterance or end_of_turn: true) will include two additional fields:

  • language_code: The language code of the detected language (e.g., "es" for Spanish, "fr" for French)
  • language_confidence: A confidence score between 0 and 1 indicating how confident the model is in the language detection

The language_code and language_confidence fields only appear when either:

  • The utterance field is non-empty and contains a complete utterance
  • The end_of_turn field is true
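The conditions above mean the two fields are optional per message, so a client should check for their presence before reading them. A minimal sketch (the helper name is ours, not part of the API):

```python
def detected_language(turn: dict):
    """Return (code, confidence) if this Turn message carries language
    detection fields, else None. Hypothetical client-side helper."""
    if "language_code" in turn and "language_confidence" in turn:
        return turn["language_code"], turn["language_confidence"]
    return None

# A final turn with detection fields present:
turn = {"type": "Turn", "end_of_turn": True,
        "language_code": "es", "language_confidence": 0.846999}
print(detected_language(turn))  # → ('es', 0.846999)
```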

Example response

Here’s an example Turn message with language detection enabled, showing Spanish being detected:

{
  "turn_order": 0,
  "turn_is_formatted": false,
  "end_of_turn": true,
  "transcript": "buenos días",
  "end_of_turn_confidence": 1.0,
  "words": [
    {
      "start": 1200,
      "end": 2596,
      "text": "buenos",
      "confidence": 0.0,
      "word_is_final": true
    },
    {
      "start": 2828,
      "end": 3760,
      "text": "días",
      "confidence": 0.0,
      "word_is_final": true
    }
  ],
  "utterance": "Buenos días.",
  "language_code": "es",
  "language_confidence": 0.846999,
  "type": "Turn"
}

In this example, the model detected Spanish ("es") with a confidence of 0.846999.

Non-speech tags

The Whisper streaming model can detect and transcribe non-speech audio events. These are returned as bracketed tags in the utterance field. Common non-speech tags include:

  • [Silence] - Periods of silence or no speech
  • [Música] / [Music] - Background music detected
  • Other audio events may appear in similar bracketed format

Example response with non-speech

Here’s an example Turn message showing silence detection:

{
  "turn_order": 1,
  "turn_is_formatted": false,
  "end_of_turn": true,
  "transcript": " silence silence",
  "end_of_turn_confidence": 1.0,
  "words": [
    {
      "start": 6300,
      "end": 6338,
      "text": "",
      "confidence": 0.0,
      "word_is_final": true
    },
    {
      "start": 6376,
      "end": 6687,
      "text": "silence",
      "confidence": 0.0,
      "word_is_final": true
    }
  ],
  "utterance": "[ Silence] [ Silence]",
  "language_code": "fr",
  "language_confidence": 0.480619,
  "type": "Turn"
}

Non-speech tags appear in the utterance field with brackets. The transcript field contains the raw text without formatting. You can filter out non-speech turns by checking if the utterance contains bracketed tags like [Silence] or [Music].
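The filtering described above can be sketched with a regular expression. Since tag spellings vary (and may be localized, e.g. [Música]), matching any bracketed token is safer than matching specific words; the helper name is ours:

```python
import re

# Matches bracketed tags such as "[Silence]", "[ Silence]", or "[Música]".
NON_SPEECH_TAG = re.compile(r"\[\s*[^\]]+\]")

def is_non_speech(utterance: str) -> bool:
    """True if the utterance consists only of bracketed non-speech tags
    and whitespace. Hypothetical helper for client-side filtering."""
    stripped = NON_SPEECH_TAG.sub("", utterance).strip()
    return bool(utterance.strip()) and not stripped

print(is_non_speech("[ Silence] [ Silence]"))  # → True
print(is_non_speech("Buenos días."))           # → False
```

Turns that mix speech with a tag (e.g. "Hola [Music]") are kept, since removing the tags still leaves real text behind.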

Understanding formatting

By default, the Whisper streaming model returns unformatted transcripts. To receive formatted transcripts with proper punctuation and capitalization, you must set format_turns=true as a query parameter.

Enabling format_turns adds additional latency to the transcription. We recommend keeping it off for voice agents where low latency is critical, and on for notetaking applications where formatted output is more important than speed.

Configuration

To enable formatted transcripts, include format_turns=true in the WebSocket URL:

wss://streaming.assemblyai.com/v3/ws?sample_rate=16000&speech_model=whisper-rt&format_turns=true

Example comparison

Here’s how the same Spanish phrase appears with and without formatting:

Unformatted (format_turns=false, default):

{
  "transcript": "buenos días",
  "turn_is_formatted": false
}

Formatted (format_turns=true):

{
  "transcript": "Buenos días.",
  "turn_is_formatted": true
}

When formatting is enabled, the transcript includes proper capitalization and punctuation.
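A client can use the turn_is_formatted flag to decide when a final turn is ready to display. The sketch below assumes that, with formatting enabled, you want to act only on the formatted version of a turn; the helper name is ours:

```python
def should_display(turn: dict, require_formatting: bool) -> bool:
    """Decide whether a final Turn message is ready to display.
    Hypothetical helper: when formatting is required, wait for the
    turn that arrives with turn_is_formatted set to true."""
    if turn.get("type") != "Turn" or not turn.get("end_of_turn"):
        return False
    return turn.get("turn_is_formatted", False) or not require_formatting

unformatted = {"type": "Turn", "end_of_turn": True, "turn_is_formatted": False}
formatted = {"type": "Turn", "end_of_turn": True, "turn_is_formatted": True}
print(should_display(unformatted, require_formatting=True))   # → False
print(should_display(formatted, require_formatting=True))     # → True
```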

Quickstart

First, install the required dependencies:

$ pip install websockets pyaudio

The Python example below uses the websockets library. If you’re using websockets version 13.0 or later, use the additional_headers parameter; for older versions (earlier than 13.0), use extra_headers instead.
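If the same script needs to run against both old and new versions of the library, one option is to pick the keyword argument name from the installed version string. A small sketch (the helper below only inspects a version string, so it works without the library installed):

```python
def auth_header_kwarg(websockets_version: str) -> str:
    """Return the keyword argument name for passing headers to
    websockets.connect() for the given library version string."""
    major = int(websockets_version.split(".")[0])
    return "additional_headers" if major >= 13 else "extra_headers"

# Usage with the real library (assumes websockets is installed):
# import websockets
# kwarg = auth_header_kwarg(websockets.__version__)
# async with websockets.connect(URL, **{kwarg: {"Authorization": "YOUR-API-KEY"}}):
#     ...
print(auth_header_kwarg("12.0"))    # → extra_headers
print(auth_header_kwarg("15.0.1"))  # → additional_headers
```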

import asyncio
import json
from urllib.parse import urlencode

import pyaudio
import websockets

FRAMES_PER_BUFFER = 3200
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 48000

p = pyaudio.PyAudio()

stream = p.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    frames_per_buffer=FRAMES_PER_BUFFER,
)

BASE_URL = "wss://streaming.assemblyai.com/v3/ws"
CONNECTION_PARAMS = {
    "sample_rate": RATE,
    "speech_model": "whisper-rt",
    "language_detection": True,
}
URL = f"{BASE_URL}?{urlencode(CONNECTION_PARAMS)}"


async def send_receive():
    print(f"Connecting websocket to url {URL}")

    async with websockets.connect(
        URL,
        additional_headers={"Authorization": "YOUR-API-KEY"},
        ping_interval=5,
        ping_timeout=20,
    ) as _ws:
        await asyncio.sleep(0.1)
        print("Receiving SessionBegins ...")

        session_begins = await _ws.recv()
        print(session_begins)
        print("Sending messages ...")

        async def send():
            while True:
                try:
                    data = stream.read(FRAMES_PER_BUFFER, exception_on_overflow=False)
                    await _ws.send(data)
                except websockets.exceptions.ConnectionClosedError as e:
                    print(e)
                    break
                except Exception as e:
                    print(e)
                await asyncio.sleep(0.01)

        async def receive():
            while True:
                try:
                    result_str = await _ws.recv()
                    data = json.loads(result_str)

                    if data.get("type") == "Turn":
                        # Non-Turn messages (e.g. the session Begin message)
                        # may not carry these fields, so use .get().
                        transcript = data.get("transcript", "")
                        utterance = data.get("utterance", "")

                        if not data.get("end_of_turn") and transcript:
                            print(f"[PARTIAL TURN TRANSCRIPT]: {transcript}")
                            if utterance:
                                print(f"[PARTIAL TURN UTTERANCE]: {utterance}")
                                # Display language detection info if available
                                if "language_code" in data:
                                    print(f"[UTTERANCE LANGUAGE DETECTION]: {data['language_code']} - {data['language_confidence']:.2%}")
                        if data.get("end_of_turn"):
                            print(f"[FULL TURN TRANSCRIPT]: {transcript}")
                            # Display language detection info if available
                            if "language_code" in data:
                                print(f"[END OF TURN LANGUAGE DETECTION]: {data['language_code']} - {data['language_confidence']:.2%}")

                except websockets.exceptions.ConnectionClosed:
                    break
                except Exception as e:
                    print(f"\nError receiving data: {e}")
                    break

        try:
            await asyncio.gather(send(), receive())
        except KeyboardInterrupt:
            # Ask the server to end the session, then wait for it to
            # close the connection after receiving the message.
            await _ws.send(json.dumps({"type": "Terminate"}))
            await _ws.wait_closed()
            print("Session terminated and connection closed.")


if __name__ == "__main__":
    try:
        asyncio.run(send_receive())
    finally:
        stream.stop_stream()
        stream.close()
        p.terminate()