Change model and parameters

Learn how you can customize LeMUR parameters to alter the outcome.

Change the model type

LeMUR features the following LLMs:

  • Claude 3.7 Sonnet
  • Claude 3.5 Sonnet
  • Claude 3.5 Haiku
  • Claude 3 Opus
  • Claude 3 Haiku
  • Claude 3 Sonnet

You can switch the model by specifying the final_model parameter.

```python
result = transcript.lemur.task(
    prompt,
    final_model=aai.LemurModel.claude3_7_sonnet_20250219
)
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| Claude 3.7 Sonnet | `aai.LemurModel.claude3_7_sonnet_20250219` | The newest and most advanced model, featuring enhanced reasoning capabilities. Strong at complex reasoning tasks. |
| Claude 3.5 Sonnet | `aai.LemurModel.claude3_5_sonnet` | A mid-tier upgrade balancing power and performance. Uses Anthropic's Claude 3.5 Sonnet model version claude-3-5-sonnet-20240620. |
| Claude 3.5 Haiku | `aai.LemurModel.claude3_5_haiku_20241022` | The fastest model in the family, optimized for quick responses while maintaining good reasoning. |
| Claude 3.0 Opus | `aai.LemurModel.claude3_opus` | The most powerful legacy Claude 3 model; excels at complex writing and analysis. |
| Claude 3.0 Haiku | `aai.LemurModel.claude3_haiku` | An entry-level, fast legacy model for everyday tasks. |
| Claude 3.0 Sonnet | `aai.LemurModel.claude3_sonnet` | A legacy mid-tier model balancing power and speed. |

You can find more information on pricing for each model here.

Change the maximum output size

You can change the maximum output size in tokens by specifying the max_output_size parameter.

| Model | API Parameter | Max Tokens Allowed |
| --- | --- | --- |
| Claude 3.7 Sonnet | `anthropic/claude-3-7-sonnet-20250219` | 64000 |
| Claude 3.5 Sonnet | `anthropic/claude-3-5-sonnet` | 4000 |
| Claude 3.5 Haiku | `anthropic/claude-3-5-haiku-20241022` | 8192 |
| Claude 3.0 Opus | `anthropic/claude-3-opus` | 4000 |
| Claude 3.0 Haiku | `anthropic/claude-3-haiku` | 4000 |
| Claude 3.0 Sonnet | `anthropic/claude-3-sonnet` | 4000 |
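As a local sanity check before sending a request, you could mirror the table above in code. This is only an illustrative sketch: the `MAX_OUTPUT_TOKENS` dict and `capped_output_size` helper below are hypothetical, not part of the AssemblyAI SDK.

```python
# Per-model output-token limits, copied from the table above.
MAX_OUTPUT_TOKENS = {
    "anthropic/claude-3-7-sonnet-20250219": 64000,
    "anthropic/claude-3-5-sonnet": 4000,
    "anthropic/claude-3-5-haiku-20241022": 8192,
    "anthropic/claude-3-opus": 4000,
    "anthropic/claude-3-haiku": 4000,
    "anthropic/claude-3-sonnet": 4000,
}

def capped_output_size(model: str, requested: int) -> int:
    """Return the requested output size, capped at the model's maximum."""
    return min(requested, MAX_OUTPUT_TOKENS[model])
```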


```python
final_model = aai.LemurModel.claude3_5_sonnet

result = transcript.lemur.task(
    prompt,
    final_model=final_model,
    max_output_size=1000
)
```

Change the temperature

You can change the temperature by specifying the temperature parameter, ranging from 0.0 to 1.0.

Higher values produce more creative answers; lower values produce more conservative ones.

```python
result = transcript.lemur.task(
    prompt,
    final_model=final_model,
    temperature=0.7
)
```
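Because values outside the 0.0 to 1.0 range are invalid, you may want to validate the parameter before making a request. The helper below is a hypothetical client-side check, not part of the SDK.

```python
def validate_temperature(temperature: float) -> float:
    """Raise if the temperature falls outside LeMUR's accepted 0.0-1.0 range."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError(
            f"temperature must be between 0.0 and 1.0, got {temperature}"
        )
    return temperature
```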

Send customized input

You can submit custom text inputs to LeMUR without transcript IDs. This lets you customize the input, for example by including speaker labels for the LLM.

To submit custom text input, use the input_text parameter.

```python
text_with_speaker_labels = ""
for utt in transcript.utterances:
    text_with_speaker_labels += f"Speaker {utt.speaker}:\n{utt.text}\n"

result = aai.Lemur().task(
    prompt,
    final_model=final_model,
    input_text=text_with_speaker_labels
)
```
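To see what the resulting input looks like without running a transcription, the same formatting logic can be exercised on plain data. The `Utterance` dataclass below is a stand-in that mirrors the `speaker` and `text` attributes used above; it is illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Minimal stand-in for an SDK utterance object."""
    speaker: str
    text: str

def format_with_speaker_labels(utterances) -> str:
    """Concatenate utterances as 'Speaker X:' blocks, matching the loop above."""
    return "".join(f"Speaker {utt.speaker}:\n{utt.text}\n" for utt in utterances)

utterances = [
    Utterance("A", "Hi, thanks for calling."),
    Utterance("B", "I have a question about my order."),
]
print(format_with_speaker_labels(utterances))
```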

Submit multiple transcripts

LeMUR can easily ingest multiple transcripts in a single API call.

You can provide up to 100 files or 100 hours of audio, whichever is lower.
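A simple pre-flight check can enforce that ceiling locally before submitting a batch. The constants and helper below are an illustrative sketch, not part of the SDK.

```python
# Documented LeMUR batch limits: 100 files or 100 hours, whichever is lower.
MAX_FILES = 100
MAX_HOURS = 100.0

def within_lemur_limits(durations_hours: list[float]) -> bool:
    """True if the batch stays under both the file-count and total-duration caps."""
    return len(durations_hours) <= MAX_FILES and sum(durations_hours) <= MAX_HOURS
```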

```python
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
        "https://example.org/customer2.mp3",
        "https://example.org/customer3.mp3",
    ],
)

# Or use existing transcripts:
# transcript_group = aai.TranscriptGroup.get_by_ids([id1, id2, id3])

result = transcript_group.lemur.task(
    prompt="Provide a summary of these customer calls."
)
```

Delete data

You can delete the data for a previously submitted LeMUR request.

Response data from the LLM, as well as any context provided in the original request, will be removed.

```python
result = transcript.lemur.task(prompt)

deletion_response = aai.Lemur.purge_request_data(result.request_id)
```

API reference

You can find detailed information about all LeMUR API endpoints and parameters in the LeMUR API reference.