End-to-end examples
Copy-paste pipelines that combine multiple AssemblyAI products in a single script.
Overview
Each example below is a self-contained script that wires together several AssemblyAI products into a working pipeline. Run one, see the polished output, and customize from there.
Every example uses placeholder API keys (YOUR_API_KEY). Replace them with your actual key from the AssemblyAI dashboard.
Pre-recorded pipelines
These pipelines transcribe an existing audio file, then enrich the transcript with Speech Understanding features and LLM Gateway analysis.
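A minimal sketch of such a pipeline, using only the standard library: it submits a file, polls for completion, then asks an LLM for a summary. The LLM Gateway endpoint and model name below are illustrative assumptions; check the API reference for the exact URL and supported models.

```python
import json
import time
import urllib.request

API_KEY = "YOUR_API_KEY"  # from the AssemblyAI dashboard
BASE_URL = "https://api.assemblyai.com/v2"
# Hypothetical LLM Gateway endpoint -- see the docs for the real one.
GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"

def api_call(url, body=None):
    """POST `body` as JSON (or GET when body is None) with auth headers."""
    req = urllib.request.Request(
        url,
        data=None if body is None else json.dumps(body).encode(),
        headers={"authorization": API_KEY, "content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_summary_request(transcript_text, model="claude-3-5-sonnet"):
    """Chat-style payload for LLM Gateway (model name is illustrative)."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": "Summarize this call transcript:\n\n" + transcript_text,
        }],
    }

def run_pipeline(audio_url):
    # 1. Submit the file for transcription, with speaker labels enabled.
    job = api_call(f"{BASE_URL}/transcript",
                   {"audio_url": audio_url, "speaker_labels": True})
    # 2. Poll until done (use webhooks instead for production workloads).
    while job["status"] not in ("completed", "error"):
        time.sleep(3)
        job = api_call(f"{BASE_URL}/transcript/{job['id']}")
    if job["status"] == "error":
        raise RuntimeError(job["error"])
    # 3. Feed the finished transcript to an LLM for analysis.
    answer = api_call(GATEWAY_URL, build_summary_request(job["text"]))
    return answer["choices"][0]["message"]["content"]
```

Swapping the summary prompt for sentiment analysis, action-item extraction, or any other instruction is a one-line change in `build_summary_request`.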
Streaming pipelines
These pipelines use the Streaming STT API to transcribe audio in real time from a microphone, with optional LLM Gateway integration for live analysis.
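The core of a streaming pipeline is the message handler that separates partial results from finalized turns. The sketch below shows that handler in isolation (no websocket wiring, so it runs anywhere); the message field names (`type`, `end_of_turn`, `transcript`) are assumptions based on the Streaming STT message format, so verify them against the API reference.

```python
import json

def connection_params(sample_rate=16000):
    """Query parameters for opening the streaming websocket (assumed names)."""
    return {"sample_rate": sample_rate, "format_turns": True}

def handle_message(raw, on_final):
    """Dispatch one server message; call `on_final` only for end-of-turn text.

    Partial transcripts arrive continuously and are usually just displayed;
    finalized turns are the ones worth sending to LLM Gateway for analysis.
    """
    msg = json.loads(raw)
    if msg.get("type") == "Turn" and msg.get("end_of_turn"):
        on_final(msg["transcript"])
        return msg["transcript"]
    return None

# Example: accumulate finalized turns for later LLM analysis.
finals = []
handle_message(
    json.dumps({"type": "Turn", "end_of_turn": True,
                "transcript": "hello world"}),
    finals.append,
)
```

In a real script you would open a websocket with `connection_params()`, stream microphone audio in one task, and run `handle_message` on each incoming frame in another.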
Customize and extend
Each pipeline above is a starting point. Here are common ways to build on them:
- Swap LLM models — Change the model parameter in LLM Gateway requests to use any of the 20+ supported models (Claude, GPT, Gemini, and more).
- Add structured output — Use Structured Outputs to constrain LLM responses to a JSON schema for easier downstream processing.
- Add PII redaction — Enable PII Redaction to automatically mask sensitive information before it reaches the LLM.
- Use Speaker Identification — Replace generic speaker labels with real names using Speaker Identification.
- Add Translation — Translate transcripts into 20+ languages using Translation.
- Use webhooks — Replace polling with webhooks for production workloads so your server gets notified when transcription completes.
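Most of these extensions are just extra fields on the transcription request body. As one example, here is a sketch of a request builder with PII redaction toggled on; the policy names shown are illustrative, so check the PII Redaction docs for the full list.

```python
def transcript_config(audio_url, redact=False):
    """Build a transcription request body, optionally masking PII.

    With `redact` enabled, sensitive information is redacted in the
    transcript before it ever reaches a downstream LLM.
    """
    body = {"audio_url": audio_url, "speaker_labels": True}
    if redact:
        body["redact_pii"] = True
        # Example policies -- see the PII Redaction docs for all options.
        body["redact_pii_policies"] = ["person_name", "phone_number"]
    return body
```

The other extensions follow the same pattern: structured outputs and model swaps are fields on the LLM Gateway payload, and webhooks replace the polling loop with a `webhook_url` field on this same request body.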
Next steps
- Pre-recorded STT quickstart — Step-by-step guide for your first transcription
- Streaming STT quickstart — Set up real-time transcription
- LLM Gateway overview — Explore all available models and features
- Use case guides — In-depth guides for meeting notetakers, medical scribes, voice agents, and more