Basic Chat Completions
Overview
Basic chat completions allow you to send a message and receive a response from the model. This is the simplest way to interact with the LLM Gateway.
Getting started
Send a message and receive a response:
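The original code samples did not survive extraction, so here is a minimal Python sketch using only the standard library. It assumes an OpenAI-compatible request body and that your AssemblyAI API key goes in the Authorization header; the model name is a placeholder, not confirmed by this page.

```python
import json
import urllib.request

GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    # "gpt-4o-mini" is a placeholder model identifier (assumption).
    """Build a single-turn chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, api_key: str) -> str:
    """POST the request and return the first choice's message content."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        # Auth scheme assumed; adjust to match your account's setup.
        headers={"Authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Response shape assumed to follow the OpenAI choices/message convention.
    return data["choices"][0]["message"]["content"]
```

Calling `chat("What is speech-to-text?", api_key)` sends one user message and returns the model's reply as a string.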
Streamed responses
You can stream responses by setting the stream parameter to true. The gateway then returns partial responses as server-sent events (SSE), allowing you to display output as it's generated.
Streamed responses are currently supported on OpenAI models only.
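The streaming code samples were also lost in extraction. The sketch below assumes the gateway emits OpenAI-style SSE events ("data: {...}" lines terminated by "data: [DONE]") with text deltas under choices[0].delta.content; those field names are assumptions, not confirmed by this page.

```python
import json
import urllib.request

GATEWAY_URL = "https://llm-gateway.assemblyai.com/v1/chat/completions"

def parse_sse_line(line: str):
    """Return the text delta carried by one SSE line, or None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":  # assumed end-of-stream sentinel
        return None
    event = json.loads(payload)
    # Delta shape assumed to follow the OpenAI streaming convention.
    return event["choices"][0]["delta"].get("content")

def stream_chat(prompt: str, api_key: str):
    """Yield text deltas as the gateway streams them back."""
    body = {
        "model": "gpt-4o-mini",  # placeholder OpenAI model name (assumption)
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            delta = parse_sse_line(raw.decode("utf-8"))
            if delta:
                yield delta
```

In practice you would print each yielded delta as it arrives, e.g. `for chunk in stream_chat(prompt, key): print(chunk, end="", flush=True)`.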
API reference
Request
The LLM Gateway accepts POST requests to https://llm-gateway.assemblyai.com/v1/chat/completions with the following parameters:
Request parameters
Message object
Content part object
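The parameter tables for these objects did not survive extraction. As a hedged illustration, here is how a request body, its message objects, and content-part objects typically nest in OpenAI-compatible chat APIs; every field name here is an assumption, not confirmed by this reference.

```python
# Hedged sketch of the request/message/content-part nesting in
# OpenAI-compatible chat APIs; all field names are assumptions.
request_body = {
    "model": "gpt-4o-mini",  # placeholder model identifier
    "stream": False,         # see "Streamed responses" above
    "messages": [
        # Message object with plain-string content
        {"role": "system", "content": "You are a helpful assistant."},
        # Message object whose content is a list of content-part objects
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this transcript."}
            ],
        },
    ],
}
```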
Response
The API returns a JSON response with the model’s completion:
Response fields
Error response
If an error occurs, the API returns an error response:
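The error schema itself is not shown above, so as a sketch, the helper below assumes an OpenAI-style error envelope of the form {"error": {"message": ..., "type": ...}}; that shape is an assumption, not confirmed by this reference.

```python
import json

def extract_error(response_text: str) -> str:
    """Return a readable message from an error response body,
    assuming an OpenAI-style {"error": {...}} envelope."""
    try:
        body = json.loads(response_text)
    except json.JSONDecodeError:
        return response_text  # not JSON; surface the raw body
    err = body.get("error", {})
    return f"{err.get('type', 'unknown_error')}: {err.get('message', '')}"
```

You would typically call this on the response body whenever the HTTP status indicates failure, before retrying or surfacing the message to the user.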