Estimate Input Token Costs for LLM Gateway
AssemblyAI’s LLM Gateway is a unified API providing access to 15+ models from Claude, GPT, and Gemini through a single interface. It’s a powerful way to extract insights from transcripts generated from audio and video files. Because the inputs and outputs for these use cases vary widely, LLM Gateway pricing is based on both input and output tokens.
Output tokens will vary depending on the model and the complexity of your request, but how do you determine the number of input tokens you’ll be sending to LLM Gateway? How many tokens do your audio file’s transcript and your prompt contain? This guide will show you how to roughly calculate that figure to help predict LLM Gateway’s input token cost ahead of time.
This guide calculates input token costs only. Output token costs will vary based on the model used and the length of the generated response.
To see the specific cost of each model (per 1M input and output tokens) applicable to your AssemblyAI account, refer to the Rates table on the Billing page of the dashboard.
Quickstart
Step-by-Step Guide
Install dependencies
Install the requests library if you haven’t already:
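A one-line install with pip:

```shell
pip install requests
```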
Set up your API key
Import the necessary libraries and set your AssemblyAI API key, which can be found on your account dashboard:
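A minimal sketch of this step. Reading the key from an environment variable is an assumption for illustration; you can also paste the key in directly as a string:

```python
import os

# Your AssemblyAI API key, found on your account dashboard.
# Reading it from an environment variable is one option (an assumption);
# you can also assign the key string directly.
API_KEY = os.environ.get("ASSEMBLYAI_API_KEY", "<YOUR_API_KEY>")

# AssemblyAI requests authenticate with an "authorization" header.
headers = {"authorization": API_KEY}
```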
Transcribe your audio file
Transcribe your audio file using AssemblyAI:
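A sketch of submitting a file and polling for the result with AssemblyAI’s REST API via `requests`. The audio URL below is a placeholder; replace it with a publicly accessible link to your own file:

```python
import time
import requests

BASE_URL = "https://api.assemblyai.com/v2"

def transcribe(audio_url: str, api_key: str) -> str:
    """Submit an audio file URL to AssemblyAI and return the transcript text."""
    headers = {"authorization": api_key}

    # Create the transcription job.
    response = requests.post(
        f"{BASE_URL}/transcript",
        headers=headers,
        json={"audio_url": audio_url},
    )
    transcript_id = response.json()["id"]

    # Poll until the job completes (or fails).
    while True:
        result = requests.get(
            f"{BASE_URL}/transcript/{transcript_id}", headers=headers
        ).json()
        if result["status"] == "completed":
            return result["text"]
        if result["status"] == "error":
            raise RuntimeError(result["error"])
        time.sleep(3)

# Usage (placeholder URL and key):
# transcript_text = transcribe("https://example.com/audio.mp3", API_KEY)
```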
Calculate character count
We’ll count the characters in both the transcript and your prompt:
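Counting is a simple `len()` on each string. The transcript and prompt below are illustrative placeholders, not the exact text behind the 4,880/42-character figures:

```python
# Placeholder values; in practice, transcript_text comes from the
# transcription step and prompt is whatever you plan to send.
transcript_text = "Speaker A: Thanks for joining the call today..."
prompt = "Summarize the key points of this call."  # example prompt (an assumption)

transcript_chars = len(transcript_text)
prompt_chars = len(prompt)
total_chars = transcript_chars + prompt_chars

print(f"Transcript: {transcript_chars} characters")
print(f"Prompt: {prompt_chars} characters")
print(f"Total: {total_chars} characters")
```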
For this specific file with the example prompt, the transcript contains approximately 4,880 characters and the prompt contains 42 characters, for a total of 4,922 characters.
Estimate tokens
Different LLM providers use different tokenization methods, but a rough estimate is that 4 characters equal approximately 1 token, a rule of thumb drawn from the providers’ own tokenization guidance.
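Applying that rule of thumb to our 4,922-character total:

```python
total_chars = 4922        # transcript characters + prompt characters
CHARS_PER_TOKEN = 4       # rough rule of thumb; actual tokenizers vary

estimated_tokens = total_chars / CHARS_PER_TOKEN
print(f"Estimated input tokens: {estimated_tokens:.0f}")  # roughly 1,230 tokens
```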
Language considerations
Token counts can differ significantly across languages. Non-English languages typically require more tokens per character than English. For instance, text in languages like Spanish, Chinese, or Arabic may use 2-3 characters per token instead of 4, resulting in higher token costs for the same amount of content.
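To see how the ratio affects the estimate, you can recompute with different assumed ratios. The per-language values below are illustrative assumptions, not measured figures:

```python
total_chars = 4922

# Rough characters-per-token ratios (illustrative assumptions only).
chars_per_token = {"English": 4, "Spanish": 3, "Chinese": 2}

for language, ratio in chars_per_token.items():
    print(f"{language}: ~{total_chars / ratio:.0f} tokens")
```

A lower ratio means more tokens, and therefore a higher input cost, for the same text.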
Calculate input token costs
LLM Gateway’s pricing is calculated per 1M input tokens; the per-model rates are listed in the Rates table on the Billing page of your dashboard.
For our example file with approximately 1,230 input tokens:
- GPT-5 (`gpt-5`): ~$0.0015
- Claude 4.5 Sonnet (`claude-sonnet-4-5-20250929`): ~$0.0037
- Gemini 2.5 Pro (`gemini-2.5-pro`): ~$0.0015
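The arithmetic behind these figures can be sketched as follows. The per-1M rates here are assumptions inferred from the example costs above; confirm the actual rates for your account on the Billing page:

```python
estimated_tokens = 1230

# Per-1M-input-token rates (assumptions inferred from the example figures;
# check the Rates table on your Billing page for your actual rates).
rates_per_million = {
    "gpt-5": 1.25,
    "claude-sonnet-4-5-20250929": 3.00,
    "gemini-2.5-pro": 1.25,
}

for model, rate in rates_per_million.items():
    cost = estimated_tokens / 1_000_000 * rate
    print(f"{model}: ~${cost:.4f}")
```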
These calculations estimate input token costs only. Output tokens are not included and will vary based on:
- The model you choose
- The complexity of your request
- The length of the generated response
To see the complete pricing for both input and output tokens for all available models, visit the Rates table on the Billing page of your dashboard.
Next steps
- LLM Gateway Overview - Learn about all available models and capabilities
- Apply LLM Gateway to Audio Transcripts - Complete guide to using LLM Gateway with transcripts
- Billing page - View LLM pricing for your account