Transcribe streaming audio from a microphone in TypeScript
Learn how to transcribe streaming audio in TypeScript.
Overview
By the end of this tutorial, you’ll be able to transcribe audio from your microphone in TypeScript.
Supported languages
Streaming Speech-to-Text is only available for English.
Before you begin
To complete this tutorial, you need:
- Node.js installed. You can check to see if it's installed by running node -v.
- TypeScript installed. You can check to see if it's installed by running tsc -v.
- An AssemblyAI account with a credit card set up.
Here’s the full sample code for what you’ll build in this tutorial:
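The listing below is a sketch that assumes the AssemblyAI Node SDK's realtime transcriber and the SoxRecording helper from the sox.ts script introduced in Step 5; replace <YOUR_API_KEY> with your own key, and adjust event and option names if your SDK version differs.

```typescript
import { AssemblyAI, RealtimeTranscript } from 'assemblyai'
import { SoxRecording } from './sox'

const SAMPLE_RATE = 16_000

const run = async () => {
  const client = new AssemblyAI({ apiKey: '<YOUR_API_KEY>' })

  // Create a streaming (real-time) transcriber with an explicit sample rate.
  const transcriber = client.realtime.transcriber({ sampleRate: SAMPLE_RATE })

  transcriber.on('open', ({ sessionId }) => {
    console.log(`Session opened with ID: ${sessionId}`)

    // Record from the microphone and pipe the audio into the transcriber.
    const recording = new SoxRecording({
      channels: 1,
      sampleRate: SAMPLE_RATE,
      audioType: 'wav',
    })
    recording.stream().pipeTo(transcriber.stream())
  })

  transcriber.on('error', (error: Error) => {
    console.error('Error:', error)
  })

  transcriber.on('close', (code: number, reason: string) => {
    console.log('Session closed:', code, reason)
  })

  transcriber.on('transcript', (transcript: RealtimeTranscript) => {
    // Ignore empty transcripts sent during silence.
    if (!transcript.text) return

    if (transcript.message_type === 'PartialTranscript') {
      console.log('Partial:', transcript.text)
    } else {
      console.log('Final:', transcript.text)
    }
  })

  console.log('Connecting to streaming transcript service')
  await transcriber.connect()

  // Close the connection (and exit) on Ctrl+C.
  process.on('SIGINT', async () => {
    console.log('Closing streaming transcript connection')
    await transcriber.close()
    process.exit()
  })
}

run()
```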
Step 1: Install the SDK
Run npm init to create an NPM package, and then install the AssemblyAI package via NPM:
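For example, assuming you're starting in an empty project directory:

```bash
npm init -y
npm install assemblyai
```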
Step 2: Configure the API key
In this step, you’ll create an SDK client and configure it to use your API key.
Browse to Account, and then click the text under Your API key to copy it.
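A minimal sketch, assuming the SDK's AssemblyAI client constructor; replace <YOUR_API_KEY> with the key you copied:

```typescript
import { AssemblyAI } from 'assemblyai'

// Create a client configured with your API key.
const client = new AssemblyAI({
  apiKey: '<YOUR_API_KEY>',
})
```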
Step 3: Create a streaming service
Create a new streaming service from the AssemblyAI client. If you don’t set a sample rate, it defaults to 16 kHz.
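For example, a sketch using the client's realtime transcriber with the sample rate set explicitly:

```typescript
// Create a streaming transcriber; 16 kHz is also the default.
const transcriber = client.realtime.transcriber({
  sampleRate: 16_000,
})
```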
Sample rate
The sample_rate is the number of audio samples per second, measured in hertz (Hz). Higher sample rates result in higher quality audio, which may lead to better transcripts, but also more data being sent over the network.
We recommend the following sample rates:
- Minimum quality: 8_000 (8 kHz)
- Medium quality: 16_000 (16 kHz)
- Maximum quality: 48_000 (48 kHz)
Create another function to handle transcripts. The real-time transcriber returns two types of transcripts: partial and final.
- Partial transcripts are returned as the audio is being streamed to AssemblyAI.
- Final transcripts are returned when the service detects a pause in speech.
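A sketch of a single transcript handler that distinguishes the two types by their message_type; the RealtimeTranscript type comes from the SDK, so adjust if your version differs:

```typescript
import { RealtimeTranscript } from 'assemblyai'

transcriber.on('transcript', (transcript: RealtimeTranscript) => {
  // Ignore empty transcripts sent during silence.
  if (!transcript.text) return

  if (transcript.message_type === 'PartialTranscript') {
    console.log('Partial:', transcript.text)
  } else {
    console.log('Final:', transcript.text)
  }
})
```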
End of utterance controls
You can configure the silence threshold for automatic utterance detection and programmatically force the end of an utterance to immediately get a Final transcript.
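For example, a sketch of what those controls look like on the transcriber; the method names here are assumptions, so check the Streaming reference if they differ in your SDK version:

```typescript
// Assumed helper: end an utterance after 500 ms of silence.
transcriber.configureEndUtteranceSilenceThreshold(500)

// Assumed helper: force the current utterance to end immediately.
transcriber.forceEndUtterance()
```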
You can also use the on("transcript.partial") and on("transcript.final") callbacks to handle partial and final transcripts separately.
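For example:

```typescript
transcriber.on('transcript.partial', (transcript) => {
  console.log('Partial:', transcript.text)
})

transcriber.on('transcript.final', (transcript) => {
  console.log('Final:', transcript.text)
})
```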
Step 4: Connect the streaming service
Streaming Speech-to-Text uses WebSockets to stream audio to AssemblyAI. This requires first establishing a connection to the API.
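For example, once your event handlers are registered:

```typescript
console.log('Connecting to streaming transcript service')
await transcriber.connect()
```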
Step 5: Record audio from microphone
In this step, you’ll use SoX, a cross-platform audio library, to record audio from your microphone.
Download the sox.ts (or sox.js) script to the root of your project. The SoxRecording class it provides lets you interact with SoX more easily.
In the on("open") callback, create a new microphone stream. The sampleRate needs to be the same value as the real-time service settings.
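A sketch, assuming the SoxRecording helper from sox.ts exposes the options and stream() method shown here:

```typescript
import { SoxRecording } from './sox'

transcriber.on('open', ({ sessionId }) => {
  console.log(`Session opened with ID: ${sessionId}`)

  // Record single-channel audio from the default microphone at the
  // same sample rate the transcriber was configured with.
  const recording = new SoxRecording({
    channels: 1,
    sampleRate: 16_000,
    audioType: 'wav',
  })

  // Pipe the recorded audio into the transcriber's writable stream.
  recording.stream().pipeTo(transcriber.stream())
})
```

Register this handler before calling transcriber.connect() so it fires when the session opens, as in the full sample at the top of this tutorial.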
Audio data format
The SoxRecording class formats the audio data for you. If you want to stream data from elsewhere, make sure that your audio data is in the following format:
- Single channel
- 16-bit signed integer PCM or mu-law encoding
By default, the Streaming STT service expects PCM16-encoded audio. If you want to use mu-law encoding, see Specifying the encoding.
Step 6: Disconnect the real-time service
When you are done, disconnect the transcriber to close the connection.
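For example:

```typescript
// Close the WebSocket connection when you're finished streaming audio.
await transcriber.close()
```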
To run the program, use tsc main.ts to compile the TypeScript file to JavaScript, and then run the result with node main.js.
Next steps
To learn more about Streaming Speech-to-Text, see the following resources:
Need some help?
If you get stuck, or have any other questions, we’d love to help you out. Contact our support team at support@assemblyai.com or create a support ticket.