Implement a Sales Playbook Using LLM Gateway

This guide shows you how to use AssemblyAI’s LLM Gateway to evaluate a call between a sales representative and a client against a sales playbook.

This guide demonstrates different ways of using structured prompts with a hypothetical sales use case to produce personalized, precise responses. With LLM Gateway, you can evaluate large numbers of sales calls at once, verify that prospecting steps were followed, and include supporting quotes in each response. These results can inform future sales strategy by surfacing trends and enabling quantitative performance tracking.

In this example, we will demonstrate how to use structured prompts with context, answer formats, and answer options to create effective sales call evaluations with LLM Gateway. You can use the concepts in this guide to create custom specifications to evaluate your sales representatives.

Quickstart

```python
import requests
import time

base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}

# Step 1: Transcribe the sales call
with open("./sales-call.mp3", "rb") as f:
    response = requests.post(base_url + "/v2/upload", headers=headers, data=f)

upload_url = response.json()["upload_url"]
data = {"audio_url": upload_url}

response = requests.post(base_url + "/v2/transcript", json=data, headers=headers)
transcript_id = response.json()['id']
polling_endpoint = base_url + "/v2/transcript/" + transcript_id

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()
    if transcription_result['status'] == 'completed':
        break
    elif transcription_result['status'] == 'error':
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")
    else:
        time.sleep(3)

# Step 2: Evaluate with LLM Gateway
context = "This is a sales interaction between a salesperson selling an internet plan and a customer who is a warm lead."
answer_format = """
Answer with JSON in the following format:
{
    "Answer": "<answer_options>",
    "Reason": "<justification for the answer in one sentence including quotes>"
}
"""

questions = [
    {
        "question": "Did the salesperson start the conversation with a professional greeting?",
        "answer_options": ["Poor", "Satisfactory", "Excellent"]
    },
    {
        "question": "How well did the salesperson answer questions during the call?",
        "answer_options": ["Poor", "Good", "Excellent"]
    },
    {
        "question": "Did the salesperson discuss next steps clearly?",
        "answer_options": ["Yes", "No"]
    }
]

for q in questions:
    prompt = f"""
{q['question']}

Context: {context}

Answer Options: {', '.join(q['answer_options'])}

{answer_format}
"""

    llm_gateway_data = {
        "model": "claude-sonnet-4-5-20250929",
        "messages": [
            {"role": "user", "content": f"{prompt}\n\nTranscript: {transcription_result['text']}"}
        ],
        "max_tokens": 500
    }

    response = requests.post(
        "https://llm-gateway.assemblyai.com/v1/chat/completions",
        headers=headers,
        json=llm_gateway_data
    )

    result = response.json()["choices"][0]["message"]["content"]
    print(f"Question: {q['question']}")
    print(f"Answer: {result}")
    print()
```

Get Started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for an AssemblyAI account and get your API key from your dashboard.

Step-by-Step Instructions

In this guide, we will ask three questions evaluating the prospecting performance of the sales representative. Each question has slightly different parameters based on the use case, but all three share a fixed context that we apply to every prompt.

Install the `requests` package if needed, then import your dependencies and set up the API base URL and headers:

```python
import requests
import time

base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}
```
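Hardcoding the key is fine for a quick test, but for shared or deployed code you may prefer reading it from an environment variable. The sketch below assumes a variable named `ASSEMBLYAI_API_KEY`; adjust the name to match your own convention:

```python
import os

def build_headers() -> dict:
    """Build request headers, reading the API key from the environment.

    ASSEMBLYAI_API_KEY is an assumed variable name, not required by the API.
    """
    api_key = os.environ.get("ASSEMBLYAI_API_KEY")
    if not api_key:
        raise RuntimeError("Set the ASSEMBLYAI_API_KEY environment variable")
    return {"authorization": api_key}
```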

Transcribe the sales call audio file:

```python
with open("./sales-call.mp3", "rb") as f:
    response = requests.post(base_url + "/v2/upload", headers=headers, data=f)

upload_url = response.json()["upload_url"]
data = {"audio_url": upload_url}  # You can also use a URL to an audio or video file on the web

response = requests.post(base_url + "/v2/transcript", json=data, headers=headers)
transcript_id = response.json()['id']
polling_endpoint = base_url + "/v2/transcript/" + transcript_id

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()

    if transcription_result['status'] == 'completed':
        print(f"Transcription completed: {transcript_id}")
        break
    elif transcription_result['status'] == 'error':
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")
    else:
        time.sleep(3)
```
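The polling loop above runs until the transcript finishes. If you want a safety net against jobs that never complete, you can wrap the same logic with a timeout. This is a sketch, not part of the AssemblyAI SDK; the `fetch` parameter exists so the loop can be exercised without network access:

```python
import time

def poll_until_done(fetch, interval=3.0, timeout=300.0):
    """Call `fetch()` (which returns a transcript dict) until it completes.

    Raises RuntimeError on a transcription error, TimeoutError if the job
    does not finish within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result["status"] == "completed":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Transcription failed: {result['error']}")
        time.sleep(interval)
    raise TimeoutError("Transcription did not complete in time")

# Usage with the polling endpoint defined above:
# transcription_result = poll_until_done(
#     lambda: requests.get(polling_endpoint, headers=headers).json()
# )
```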

Define your evaluation context and answer format for structured responses:

```python
context = "This is a sales interaction between a salesperson selling an internet plan and a customer who is a warm lead."
answer_format = """
Answer with JSON in the following format:
{
    "Answer": "<answer_options>",
    "Reason": "<justification for the answer in one sentence including quotes>"
}
"""
```
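Because the prompt asks for JSON, you can parse each response into structured fields downstream. Models sometimes wrap JSON in Markdown code fences or surrounding prose, so a small tolerant parser helps. This is a sketch of one approach, not an AssemblyAI API:

```python
import json
import re

def parse_evaluation(text: str) -> dict:
    """Extract the {"Answer": ..., "Reason": ...} object from a model reply.

    Tolerates Markdown fences or surrounding prose by grabbing the first
    {...} span in the text and parsing it as JSON.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError(f"No JSON object found in: {text!r}")
    return json.loads(match.group(0))
```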

Next, define the evaluation questions for your sales playbook. Note: You can edit the questions and answer options to create custom evaluations for each aspect of the sales call.

```python
questions = [
    {
        "question": "Did the salesperson start the conversation with a professional greeting?",
        "answer_options": ["Poor", "Satisfactory", "Excellent"]
    },
    {
        "question": "How well did the salesperson answer questions during the call?",
        "answer_options": ["Poor", "Good", "Excellent"]
    },
    {
        "question": "Did the salesperson discuss next steps clearly?",
        "answer_options": ["Yes", "No"]
    }
]
```

Evaluate each question using LLM Gateway and print the results:

```python
for q in questions:
    prompt = f"""
{q['question']}

Context: {context}

Answer Options: {', '.join(q['answer_options'])}

{answer_format}
"""

    llm_gateway_data = {
        "model": "claude-sonnet-4-5-20250929",
        "messages": [
            {"role": "user", "content": f"{prompt}\n\nTranscript: {transcription_result['text']}"}
        ],
        "max_tokens": 500
    }

    response = requests.post(
        "https://llm-gateway.assemblyai.com/v1/chat/completions",
        headers=headers,
        json=llm_gateway_data
    )

    result = response.json()["choices"][0]["message"]["content"]
    print(f"Question: {q['question']}")
    print(f"Answer: {result}")
    print()
```
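To get the quantitative performance tracking mentioned earlier, you can map each answer option to a numeric score and average across questions to build a per-call scorecard. The weights below are illustrative assumptions, not part of the playbook:

```python
# Hypothetical mapping from answer options to scores in [0.0, 1.0].
SCORES = {
    "Poor": 0.0,
    "Satisfactory": 0.5,
    "Good": 0.5,
    "Excellent": 1.0,
    "No": 0.0,
    "Yes": 1.0,
}

def scorecard(answers):
    """Average the numeric scores for the "Answer" fields of one call.

    `answers` is a list of answer strings parsed from each LLM response.
    """
    if not answers:
        return 0.0
    return sum(SCORES[a] for a in answers) / len(answers)
```

Tracking this average over time, per representative, lets you compare calls and spot trends without rereading every transcript.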