Prompting Guide (Async)
Looking for streaming prompting?
Prompting behavior differs between async (pre-recorded) and streaming use cases. This guide covers prompting for async (pre-recorded audio). If you’re working with real-time audio, see the Prompting Guide (Streaming).
Use prompt engineering to control transcription style and improve accuracy for domain-specific terminology. This guide documents best practices for crafting effective prompts for Universal-3 Pro async speech transcription.
Start with no prompt
We strongly recommend testing with no prompt first. When you omit the prompt parameter, Universal-3 Pro automatically applies a built-in default prompt that is already optimized for accuracy across a wide range of audio types.
If you’re going to build a prompt, don’t start from scratch: begin with one of the recommended prompts below and tweak it for your use case.
Remember, prompts are primarily instructional, so adding a large amount of context may not make a significant impact on accuracy and could reduce instruction-following coherence. Feel free to layer in additional instructions from this guide.
How prompting works
Universal-3 Pro is a Speech-augmented Large Language Model (SpeechLLM). The architecture is a multi-modal LLM with an audio encoder and LLM decoder designed to understand and process speech, audio, and text inputs in the same workflow.
SpeechLLM prompting works more like selecting modes and knobs than open-ended instruction following. The model is trained primarily to transcribe, then fine-tuned to respond to common transcription instructions for style, speakers, and speech events.
Prompting is more instructional than contextual — the model responds best to explicit formatting rules and behavioral instructions (e.g., “include all filler words” or “use periods only for complete sentences”). Providing domain context like “this is a cardiology appointment” is most effective when paired with specific instructions telling the model how to transcribe. We are actively working to make the model more contextual in the future. For boosting specific domain terms today, use keyterms prompting.
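As a minimal sketch of what this looks like in a request (assuming a JSON API where prompt is a top-level field; the endpoint URL, auth header, and audio URL below are placeholder values, not documented ones):

```python
import requests

# Placeholder endpoint and credentials -- substitute your real values.
# Only the `prompt` field reflects the parameter named in this guide.
API_URL = "https://api.example.com/v2/transcript"
HEADERS = {"authorization": "YOUR_API_KEY"}

payload = {
    "audio_url": "https://example.com/meeting.mp3",
    # Instructional prompting: explicit formatting and behavior rules.
    "prompt": "Include all filler words. Use periods only for complete sentences.",
}

response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())
```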
What prompts can do
Recommended prompts
The following prompts are our top recommendations for different use cases. Start here before exploring the detailed prompt capabilities below.
Best all around (default)
This is the current default prompt, providing strong accuracy with minimal instructions:
This gives the model clear guidance to always attempt transcription while keeping instructions minimal. It’s a great starting point for most use cases.
Verbatim with multilingual support
If you need maximum verbatim capture and multilingual code-switching support, use this prompt:
This prompt maximizes speech pattern capture, preserves code-switching, and tells the model to always attempt transcription even on difficult audio. The trade-off is that the model may occasionally hallucinate disfluencies or language switches that don’t exist in the audio.
Handling unclear audio with [masked]
Recommended for reducing hallucinations
This prompt is one of the most effective strategies for avoiding hallucinations on unclear or difficult audio. Instead of forcing the model to guess, it explicitly flags uncertain segments, giving you visibility into areas of uncertainty in the transcript.
You can also use [unclear] instead of [masked]:
The [masked] tag may also be applied to profanity in the audio. If preserving profanity is important for your use case, use [unclear] instead to avoid profanity being tagged.
This prompt tells the model to never guess on unclear or difficult audio and instead label it explicitly. The result is a transcript where:
- Hallucinations are materially reduced — the model doesn’t force potentially incorrect guesses on uncertain audio segments.
- Uncertain sections are explicitly flagged as [masked] or [unclear], giving you transparency into exactly where audio quality was insufficient for confident transcription.
- Genuine but difficult speech is still preserved — the model transcribes what it can hear clearly while honestly marking what it cannot.
This is especially useful for quality-sensitive workflows where incorrect guesses are worse than gaps, and for building review pipelines where human reviewers can focus on flagged segments.
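As an illustrative sketch of this strategy (example wording, not the exact recommended prompt):

```text
Transcribe all clearly audible speech verbatim. Non-negotiable: never guess
on unclear or difficult audio. If you cannot confidently hear a word or
phrase, write [masked] in its place and continue transcribing.
```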
System prompts
Current system prompt
The current built-in system prompt used by Universal-3 Pro when no prompt parameter is provided:
This prompt provides the model with clear guidance to always attempt transcription while keeping instructions minimal.
Prior system prompt (February 20, 2026 – February 25, 2026)
The previous built-in system prompt was:
Prior system prompt (before February 20, 2026)
The previous built-in system prompt was:
Evaluating transcription accuracy
When evaluating Universal-3 Pro output against human-labeled ground truth files, be aware that the model frequently outperforms human transcribers. If your word error rate (WER) evaluation shows unexpected insertions from Universal-3 Pro, listen back to the original audio before assuming the model is wrong. In many cases, the model is correctly transcribing audio that a human transcriber missed or normalized. Similarly, some substitutions are purely semantic or formatting differences (e.g., “offsite” vs. “off site,” “alright” vs. “all right”) that inflate WER without representing meaningful errors.
Tips for accurate evaluation:
- Use the [unclear] tag in your evaluation prompt to prevent the model from guessing on audio that a human transcriber would also miss. This improves WER alignment.
- Review insertions manually by listening to the audio at the flagged timestamps. Many apparent errors are actually the model being more accurate.
- Watch for semantic substitutions — formatting-level differences inflate WER without representing meaningful errors.
- Consider Semantic WER over traditional normalized WER for a more accurate evaluation. Semantic WER won’t penalize formatting-level substitutions or insertions that are actually correct transcription, giving you a more realistic measure of true transcription quality.
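For a concrete starting point when scoring transcripts, the sketch below compares raw WER against WER after light normalization, assuming the jiwer package (an assumption; any WER tool works). Normalization removes casing and punctuation differences, approximating, though not fully implementing, a semantic comparison:

```python
import re

import jiwer  # assumption: jiwer is one common WER library; any WER tool works


def normalize(text: str) -> str:
    # Collapse formatting-level differences (casing, punctuation, spacing)
    # so they don't inflate the error rate.
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()


reference = "Alright, the off-site is at 3 PM."
hypothesis = "All right, the offsite is at 3 p.m."

print("raw WER:       ", jiwer.wer(reference, hypothesis))
print("normalized WER:", jiwer.wer(normalize(reference), normalize(hypothesis)))
```

The residual difference (“alright” vs. “all right”) is exactly the kind of semantic substitution that only Semantic WER or manual review resolves.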
We recommend testing both the Best all around (default) prompt and the Handling unclear audio with [masked] prompt to find the best fit for your evaluation and use case.
Prompt capabilities
Each capability below acts as a “knob” you can turn. Combine 3-6 capabilities maximum for best results. Each section includes an audio demo showing the before/after effect of prompting.
1. Verbatim transcription and disfluencies
What it does: Preserves natural speech patterns including filler words, false starts, repetitions, and self-corrections.
Reliability: High
Without prompt:
With prompt, the model better captures filler words like “uh” and false starts like “we, we, we’re friends”.
Example prompts:
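For illustration, a prompt in this style might read (a sketch, not the exact documented example):

```text
Mandatory: transcribe verbatim. Include every filler word (uh, um, you know),
false start, repetition, and self-correction exactly as spoken.
```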
2. Audio event tags
What it does: Marks non-speech sounds like music, laughter, applause, and background noise.
Reliability: Experimental (YMMV)
Without prompt:
With prompting, non-speech events like beeps are called out in the transcript.
Here are some examples of audio tags you can prompt for: [music], [laughter], [applause], [noise], [pause], [inaudible], [sigh], [gasp], [cheering], [sound], [screaming], [bell], [beep], [sound effect], [buzzer], and more.
Example prompts:
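As an illustrative sketch (example wording, not the exact documented prompt):

```text
Required: mark non-speech sounds inline with bracketed tags such as [music],
[laughter], [applause], [noise], and [beep].
```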
3. Labeling crosstalk
What it does: Labels overlapping speech, interruptions, and crosstalk segments in the transcript.
Reliability: Experimental (YMMV)
Without prompt:
With prompt:
Example prompts:
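As an illustrative sketch (the [crosstalk] tag here is our example wording, not a documented tag):

```text
Required: when speakers talk over each other, mark the overlapping segment
with [crosstalk] and transcribe each speaker's words where audible.
```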
4. Output style and formatting
What it does: Controls punctuation, capitalization, and readability without changing words.
Reliability: High
Without prompt:
With prompt, the model accurately captures the speaker’s emotional state through punctuation, adding exclamation marks during moments of yelling and emphasis.
Example prompts:
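For illustration, a prompt in this style might read (a sketch, not the exact documented example):

```text
Strict requirement: use periods only for complete sentences. Use exclamation
marks for yelling or emphasis. Do not add, remove, or change any words.
```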
5. Numbers and measurements
What it does: Controls how numbers, percentages, and measurements are formatted.
Reliability: Medium
Without prompt:
With prompt:
Example prompts:
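As an illustrative sketch (example wording, not the exact documented prompt):

```text
Mandatory: write numbers as numerals (42, not forty-two), percentages with
the % symbol (3.5%), and currency with symbols ($100, not one hundred dollars).
```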
6. Context aware clues
What it does: Helps with jargon, names, and domain expectations that are known from the audio file.
Reliability: Medium
Without prompt:
With prompt, adding ‘clinical history evaluation’ as a context clue corrects spelling of ‘Glicoside’ to ‘Glycoside’.
Example prompts:
Context alone does not tell the model how to transcribe. Providing domain context is most effective when paired with specific instructions.
For example, “This is a doctor-patient visit” gives the model domain context but no actionable guidance on how to improve the transcript.
A more effective prompt would be: “This is a doctor-patient visit; prioritize accurately transcribing medications and diseases wherever possible.”
The instruction tells the model what to pay attention to when transcribing, while the context tells it what domain to expect.
7. Entity accuracy and spelling
What it does: Improves accuracy for proper nouns, brands, technical terms, and domain vocabulary.
Reliability: Medium
Without prompt:
With prompt, the model corrects the misrecognition of “Anktiva,” which would otherwise be transcribed as “Entiva”.
Example prompts:
The model works best when you tell it the pattern of entities to identify and how it should treat those entities as it transcribes. Over-instructing the model to follow specific examples that occur in a file can cause hallucinations when those examples are encountered. We recommend describing the pattern rather than the specific error (e.g., “Pharmaceutical accuracy required across all medications and drug names” rather than “Pharmaceutical accuracy required (omeprazole over omeprizole, metformin over metforman)”).
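Putting that advice together, an illustrative pattern-based prompt (our wording, not a documented example) might read:

```text
This is a pharmacy benefits call. Pharmaceutical accuracy required across
all medication and drug names; transcribe drug names exactly as pronounced
rather than substituting sound-alike everyday words.
```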
8. Speaker attribution
What it does: Marks speaker turns and adds identifying labels.
Reliability: Experimental (YMMV)
Without prompt:
With prompt:
Without prompting, it may appear that one speaker said everything. But with prompting, the model correctly identifies this as 5 separate speaker turns, capturing utterances as short as a single word, like “good”.
Example prompts:
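As an illustrative sketch (example wording, not the exact documented prompt):

```text
Required: label each speaker turn on a new line using roles from the audio
(e.g. Agent:, Customer:), including turns as short as a single word.
```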
Speaker labels can be tagged with names, roles, genders, and more from the audio file. Simply add the desired category for the labels into your prompt.
Speaker attribution generated by the model is separate from the speaker diarization and speaker identification features. We recommend using one or the other.
Speaker diarization and speaker identification are stable, consistent models, whereas speaker attribution via prompting is experimental and may produce inconsistent results, especially across longer files where the model processes audio in chunks. Using the word “speaker” anywhere in your prompt will generate labels, so avoid it if you don’t want this capability activated.
For production use cases requiring consistent speaker labels, use the speaker diarization and speaker identification features. In the future, we plan to build the model’s native capabilities here into those features.
9. Native code switching
What it does: Handles audio where speakers switch between languages.
Reliability: Medium
Example prompts:
If you expect languages beyond those supported by Universal-3 Pro, we recommend setting language_detection: true on your request.
Universal-3 Pro is natively multilingual for English, Spanish, French, German, Italian, and Portuguese. If it encounters a language outside of these, the model will attempt to transcribe it and may fail or mark it as [FOREIGN LANGUAGE].
Using language_detection: true ensures files in other languages are routed to different models which can more reliably transcribe the audio.
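A sketch of such a request (placeholder endpoint and auth as before; prompt and language_detection are the parameters this guide names):

```python
import requests

payload = {
    "audio_url": "https://example.com/bilingual-call.mp3",  # placeholder
    # Illustrative code-switching prompt: preserve, don't translate.
    "prompt": (
        "Transcribe each utterance in the language it was spoken. "
        "Non-negotiable: preserve code-switching; never translate."
    ),
    # Route files in unsupported languages to models that can handle them.
    "language_detection": True,
}

response = requests.post(
    "https://api.example.com/v2/transcript",  # placeholder endpoint
    headers={"authorization": "YOUR_API_KEY"},
    json=payload,
)
print(response.json())
```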
10. Difficult audio handling
What it does: Controls how the model handles uncertain or unclear audio segments. You can choose between two opposite strategies: maximizing guesses or flagging uncertainty.
Reliability: Experimental (YMMV)
Strategy 1: Maximize guesses
Tell the model to always attempt a transcription, even when confidence is low:
This is useful when you want the most complete transcript possible and plan to verify accuracy downstream.
Strategy 2: Flag uncertainty
Tell the model to mark segments it is unsure about instead of guessing:
This is useful for quality-sensitive workflows where incorrect guesses are worse than gaps.
Combining both strategies:
You can run the same audio file with both strategies to create a powerful review workflow. The “best guess” transcript gives you the most complete output, while the “flag uncertainty” transcript highlights exactly which segments need human review.
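Illustrative sketches of the two strategies (example wording, not documented prompts):

```text
Strategy 1 (maximize guesses):
Always produce your best-guess transcription, even when the audio is unclear
or confidence is low. Never leave a segment blank.

Strategy 2 (flag uncertainty):
Never guess. If you cannot confidently hear a word or phrase, write [unclear]
in its place and continue transcribing.
```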
For a more robust approach to handling unclear audio, see the Handling unclear audio with [masked] section in Recommended prompts above. The [masked] strategy provides explicit flagging of uncertain segments and materially reduces hallucinations.
11. PII redaction
What it does: Tags personal identifiable information such as names, addresses, and contact details within the transcript.
Reliability: Experimental (YMMV)
Example prompts:
Be specific about which types of PII you want tagged. A vague prompt like “redact PII” may not give the model enough guidance. Enumerate the categories you care about.
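For illustration, an enumerated prompt in this style might read (tag names are example wording):

```text
Required: tag personally identifiable information inline. Replace full names
with [name], street addresses with [address], phone numbers with [phone],
and email addresses with [email].
```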
For production PII redaction, we recommend using our dedicated PII Redaction feature, which provides stable and consistent results. PII tagging via prompting is experimental and best suited for exploration or supplementary workflows.
Best practices
What helps
What hurts
Prompting vs. keyterms prompting
Universal-3 Pro supports two methods for improving transcription accuracy: open-ended prompting (the prompt parameter) and keyterms prompting (the keyterms_prompt parameter).
The prompt and keyterms_prompt parameters are mutually exclusive at the API level. However, you can include key terms directly within your open prompt as a workaround using the Context: prefix.
We recommend using either prompt OR keyterms_prompt individually, not both together. Combining both can result in overprompting, leading to unpredictable or degraded results. If you do combine them, keep your prompt concise and limit the number of keyterms.
To combine both in a single request, append your keyterms as context within the prompt parameter:
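A sketch in Python (placeholder endpoint and auth values; the Context: prefix and prompt parameter are as described above):

```python
import requests

# Key terms to boost; "Anktiva", "omeprazole", and "metformin" are the
# example terms used earlier in this guide.
keyterms = ["Anktiva", "omeprazole", "metformin"]

prompt = (
    "Transcribe verbatim. Prioritize accurate spelling of medications and "
    "drug names. Context: " + ", ".join(keyterms)
)

# Note: keyterms_prompt is intentionally omitted -- it is mutually
# exclusive with prompt at the API level.
response = requests.post(
    "https://api.example.com/v2/transcript",  # placeholder endpoint
    headers={"authorization": "YOUR_API_KEY"},
    json={"audio_url": "https://example.com/clinic-visit.mp3", "prompt": prompt},
)
print(response.json())
```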
Prompt generator
This prompt generator helps you create a starting prompt based on your selected transcription style. Paste a sample of your transcript and select your preferred style to get a customized prompt recommendation.
Click a button to open your preferred AI assistant with your transcript sample and instructions pre-loaded. The AI will generate an optimized prompt based on our prompt engineering best practices.
Prompt library
Browse community-submitted prompts, vote on the ones that work best, and share your own.
Domain-specific sample prompts
Legal transcription
Best for: Court proceedings, depositions, legal hearings
Why it works: Combines authoritative language (Mandatory, Required, Non-negotiable), clear disfluency instructions, speaker attribution guidance, and domain terminology.
Medical transcription
Best for: Clinical documentation, medical dictation, patient-provider conversations
Why it works: Combines authoritative language (Mandatory, Required) with clear disfluency instructions, while ensuring clinical terminology accuracy and clear speaker attribution for medical documentation.
Financial/Earnings calls
Best for: Quarterly earnings calls, investor presentations, financial meetings
Why it works: Balances financial terminology precision with verbatim capture of speech patterns, listing specific financial terms for domain accuracy.
Software/Technical meetings
Best for: Engineering standups, code reviews, technical discussions
Why it works: Preserves natural developer speech patterns while listing specific technical terms for domain accuracy.
Code-switching (Bilingual)
Best for: Multilingual conversations, Spanglish, language mixing
Why it works: Explicitly instructs preservation over translation, handles cross-language disfluencies, and uses pattern-based accuracy guidance for bilingual context.
Customer support call
Best for: Contact center calls, customer service interactions, agent-customer conversations
Why it works: Combines domain context with actionable instructions for entity accuracy, multichannel awareness for overlapping speech, and verbatim speech preservation for quality assurance and compliance review.
How to build your prompt
Step 1: Start with your base need
Choose your primary transcription goal:
Step 2: Add authoritative language
Prefix each instruction with:
- Non-negotiable:
- Mandatory:
- Required:
- Strict requirement:
Step 3: Add instructions one by one
We recommend layering in instructions one at a time so you can see the impact each has on the transcription output. Since conflicting instructions can cause outputs to degrade, adding instructions incrementally lets you test and evaluate how each one improves or degrades your transcription.
Step 4: Iterate and test
- Identify target terms - What words/phrases are being transcribed incorrectly?
- Find the error pattern - Vowel substitution? Sound-alike? Phonetic spelling?
- Choose example terms - Pick 2-3 common terms with the same error pattern
- Test and verify - Listen to the audio to confirm correctness
- Measure success rate - Test variations on sample files
Prompt Repair Wizard: If you need help iterating on your prompts, try the Prompt Repair Wizard on the dashboard. Paste your current prompt, describe the issues you’re seeing in the output, and it will suggest improvements based on prompting best practices.
Need help?
Prompt engineering is a new and evolving discipline for SpeechLLM models. If you need help generating a prompt, our engineering team is happy to support you. Feel free to open a new live chat or send an email via the widget in the bottom right-hand corner (more contact info here).