Prompt a Structured Q&A Response Using LLM Gateway
This cookbook demonstrates how to use AssemblyAI's LLM Gateway to prompt an LLM for a structured question-and-answer response.
Quickstart
Python
JavaScript
```python
import requests
import time
import xml.etree.ElementTree as ET

API_KEY = "YOUR_API_KEY"
audio_url = "https://storage.googleapis.com/aai-web-samples/meeting.mp4"

# -------------------------------
# Step 1: Transcribe the audio
# -------------------------------
transcript_request = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers={"authorization": API_KEY, "content-type": "application/json"},
    json={"audio_url": audio_url, "speech_models": ["universal-3-pro"]},
)

transcript_id = transcript_request.json()["id"]

# Poll for completion
while True:
    polling_response = requests.get(
        f"https://api.assemblyai.com/v2/transcript/{transcript_id}",
        headers={"authorization": API_KEY},
    )
    status = polling_response.json()["status"]

    if status == "completed":
        transcript_text = polling_response.json()["text"]
        break
    elif status == "error":
        raise RuntimeError(f"Transcription failed: {polling_response.json()['error']}")
    else:
        print(f"Transcription status: {status}")
        time.sleep(3)

# -------------------------------
# Step 2: Build question helper functions
# -------------------------------
def construct_question(question):
    question_str = f"Question: {question['question']}"

    if question.get("context"):
        question_str += f"\nContext: {question['context']}"

    # Default answer_format
    if not question.get("answer_format"):
        question["answer_format"] = "short sentence"

    question_str += f"\nAnswer Format: {question['answer_format']}"

    if question.get("answer_options"):
        options_str = ", ".join(question["answer_options"])
        question_str += f"\nOptions: {options_str}"

    return question_str + "\n"

def escape_xml_characters(xml_string):
    # Escape bare ampersands so the model's XML output can be parsed
    return xml_string.replace("&", "&amp;")

# -------------------------------
# Step 3: Define questions
# -------------------------------
questions = [
    {
        "question": "What are the top level KPIs for engineering?",
        "context": "KPI stands for key performance indicator",
        "answer_format": "short sentence",
    },
    {
        "question": "How many days has it been since the data team has gotten updated metrics?",
        "answer_options": ["1", "2", "3", "4", "5", "6", "7", "more than 7"],
    },
    {"question": "What are the future plans for the project?"},
]

question_str = "\n".join(construct_question(q) for q in questions)

# -------------------------------
# Step 4: Build the LLM prompt
# -------------------------------
prompt = f"""You are an expert at giving accurate answers to questions about texts.
No preamble.
Given the series of questions, answer the questions.
Each question may follow up with answer format, answer options, and context for each question.
It is critical that you follow the answer format and answer options for each question.
When context is provided with a question, refer to it when answering the question.
You are useful, true and concise, and write in perfect English.
Only the question is allowed between the <question> tag. Do not include the answer format, options, or question context in your response.
Only text is allowed between the <question> and <answer> tags.
XML tags are not allowed between the <question> and <answer> tags.
End your response with a closing </responses> tag.
For each question-answer pair, format your response according to the template provided below:

Template for response:
<responses>
<response>
<question>The question</question>
<answer>Your answer</answer>
</response>
<response>
...
</response>
...
</responses>

These are the questions:
{question_str}

Transcript:
{transcript_text}
"""

# -------------------------------
# Step 5: Query LLM Gateway
# -------------------------------
headers = {"authorization": API_KEY}

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json={
        "model": "claude-sonnet-4-5-20250929",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2000,
    },
)

response_json = response.json()
llm_output = response_json["choices"][0]["message"]["content"]

# -------------------------------
# Step 6: Parse and print XML response
# -------------------------------
clean_response = escape_xml_characters(llm_output).strip()

try:
    root = ET.fromstring(clean_response)
    for resp in root.findall("response"):
        question = resp.find("question").text
        answer = resp.find("answer").text
        print(f"Question: {question}")
        print(f"Answer: {answer}\n")
except ET.ParseError as e:
    print(f"Could not parse XML response: {e}")
    print("Raw model output:\n", llm_output)
```
Getting Started
Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for an AssemblyAI account and get your API key from your dashboard.
You can find more details on current LLM Gateway pricing on the AssemblyAI pricing page.
Step-by-Step Instructions
In this guide, we will prompt LLM Gateway with a structured Q&A format and generate an XML response.
Install the required packages:
Python
JavaScript
$ pip install requests
Import the necessary libraries and set your API key.
Python
JavaScript
```python
import requests
import time
import xml.etree.ElementTree as ET

API_KEY = "YOUR_API_KEY"
```
Next, we’ll use AssemblyAI to transcribe a file and save our transcript.
Python
JavaScript
```python
audio_url = "https://storage.googleapis.com/aai-web-samples/meeting.mp4"

# -------------------------------
# Step 1: Transcribe the audio
# -------------------------------
transcript_request = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers={"authorization": API_KEY, "content-type": "application/json"},
    json={"audio_url": audio_url, "speech_models": ["universal-3-pro"]},
)

transcript_id = transcript_request.json()["id"]

# Poll for completion
while True:
    polling_response = requests.get(
        f"https://api.assemblyai.com/v2/transcript/{transcript_id}",
        headers={"authorization": API_KEY},
    )
    status = polling_response.json()["status"]

    if status == "completed":
        transcript_text = polling_response.json()["text"]
        break
    elif status == "error":
        raise RuntimeError(f"Transcription failed: {polling_response.json()['error']}")
    else:
        print(f"Transcription status: {status}")
        time.sleep(3)
```
Define helper functions to structure the questions. The construct_question helper builds a formatted string from the question text, any optional context, an answer format (defaulting to a short sentence), and any answer options. The escape_xml_characters helper escapes bare ampersands so the model's XML output can be parsed later.
Python
JavaScript
```python
# -------------------------------
# Step 2: Build question helper functions
# -------------------------------
def construct_question(question):
    question_str = f"Question: {question['question']}"

    if question.get("context"):
        question_str += f"\nContext: {question['context']}"

    # Default answer_format
    if not question.get("answer_format"):
        question["answer_format"] = "short sentence"

    question_str += f"\nAnswer Format: {question['answer_format']}"

    if question.get("answer_options"):
        options_str = ", ".join(question["answer_options"])
        question_str += f"\nOptions: {options_str}"

    return question_str + "\n"

def escape_xml_characters(xml_string):
    # Escape bare ampersands so the model's XML output can be parsed
    return xml_string.replace("&", "&amp;")
```
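To see why the ampersand escaping matters: xml.etree.ElementTree rejects a bare `&` in text content, which an LLM answer can easily contain. A quick standalone check (the "R&D headcount" string is just an illustrative example):

```python
import xml.etree.ElementTree as ET

def escape_xml_characters(xml_string):
    # Escape bare ampersands so the text parses as XML
    return xml_string.replace("&", "&amp;")

raw = "<answer>R&D headcount</answer>"

try:
    ET.fromstring(raw)  # a bare & is invalid XML
except ET.ParseError:
    print("bare & fails to parse")

parsed = ET.fromstring(escape_xml_characters(raw))
print(parsed.text)  # R&D headcount
```

Note this simple replace would double-escape output that already contains entities like `&amp;`; for typical model output following this prompt, the bare-ampersand case is the one that breaks parsing.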
Define a list of questions. For each question, you can define additional context and specify either an answer_format or a list of answer_options.
Python
JavaScript
```python
# -------------------------------
# Step 3: Define questions
# -------------------------------
questions = [
    {
        "question": "What are the top level KPIs for engineering?",
        "context": "KPI stands for key performance indicator",
        "answer_format": "short sentence",
    },
    {
        "question": "How many days has it been since the data team has gotten updated metrics?",
        "answer_options": ["1", "2", "3", "4", "5", "6", "7", "more than 7"],
    },
    {"question": "What are the future plans for the project?"},
]
```
Construct the formatted question string for all the questions and build the LLM prompt.
Python
JavaScript
```python
question_str = '\n'.join(construct_question(q) for q in questions)
```
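For instance, running construct_question on the second question shows the default answer format being applied alongside the options (the helper is repeated here so the snippet runs standalone):

```python
# construct_question repeated from Step 2 so this snippet is self-contained
def construct_question(question):
    question_str = f"Question: {question['question']}"
    if question.get("context"):
        question_str += f"\nContext: {question['context']}"
    if not question.get("answer_format"):
        question["answer_format"] = "short sentence"  # default
    question_str += f"\nAnswer Format: {question['answer_format']}"
    if question.get("answer_options"):
        question_str += f"\nOptions: {', '.join(question['answer_options'])}"
    return question_str + "\n"

formatted = construct_question({
    "question": "How many days has it been since the data team has gotten updated metrics?",
    "answer_options": ["1", "2", "3", "4", "5", "6", "7", "more than 7"],
})
print(formatted)
# Question: How many days has it been since the data team has gotten updated metrics?
# Answer Format: short sentence
# Options: 1, 2, 3, 4, 5, 6, 7, more than 7
```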
Provide detailed instructions to prompt LLM Gateway to answer a series of questions. This also defines a structured XML template for the responses.
Python
JavaScript
```python
# -------------------------------
# Step 4: Build the LLM prompt
# -------------------------------
prompt = f"""You are an expert at giving accurate answers to questions about texts.
No preamble.
Given the series of questions, answer the questions.
Each question may follow up with answer format, answer options, and context for each question.
It is critical that you follow the answer format and answer options for each question.
When context is provided with a question, refer to it when answering the question.
You are useful, true and concise, and write in perfect English.
Only the question is allowed between the <question> tag. Do not include the answer format, options, or question context in your response.
Only text is allowed between the <question> and <answer> tags.
XML tags are not allowed between the <question> and <answer> tags.
End your response with a closing </responses> tag.
For each question-answer pair, format your response according to the template provided below:

Template for response:
<responses>
<response>
<question>The question</question>
<answer>Your answer</answer>
</response>
<response>
...
</response>
...
</responses>

These are the questions:
{question_str}

Transcript:
{transcript_text}
"""

# -------------------------------
# Step 5: Query LLM Gateway
# -------------------------------
headers = {"authorization": API_KEY}

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json={
        "model": "claude-sonnet-4-5-20250929",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2000,
    },
)

response_json = response.json()
llm_output = response_json["choices"][0]["message"]["content"]

# -------------------------------
# Step 6: Parse and print XML response
# -------------------------------
clean_response = escape_xml_characters(llm_output).strip()

try:
    root = ET.fromstring(clean_response)
    for resp in root.findall("response"):
        question = resp.find("question").text
        answer = resp.find("answer").text
        print(f"Question: {question}")
        print(f"Answer: {answer}\n")
except ET.ParseError as e:
    print(f"Could not parse XML response: {e}")
    print("Raw model output:\n", llm_output)
```
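Despite the "no preamble" instruction, models occasionally wrap the XML in extra prose, which makes ET.fromstring fail. As an optional defensive step (the extract_responses_block helper below is an illustrative sketch, not part of the Gateway API), you can isolate the <responses> block before parsing:

```python
import re
import xml.etree.ElementTree as ET

def extract_responses_block(llm_output):
    # Keep only the <responses>...</responses> block, dropping surrounding prose
    match = re.search(r"<responses>.*</responses>", llm_output, re.DOTALL)
    return match.group(0) if match else llm_output

# Simulated output where the model ignored the "no preamble" instruction
raw = (
    "Here are the answers you asked for:\n"
    "<responses><response>"
    "<question>What are the future plans for the project?</question>"
    "<answer>Ship the beta next quarter.</answer>"
    "</response></responses>"
)

root = ET.fromstring(extract_responses_block(raw))
print(root.find("response/answer").text)  # Ship the beta next quarter.
```

Because the prompt asks the model to end with a closing </responses> tag, a greedy match anchored on that pair reliably strips any leading commentary while leaving well-formed output untouched.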