Change model and parameters

Learn how to customize LeMUR parameters to alter the outcome.

Change the model type

LeMUR features the following LLMs:

  • Claude 3.5 Sonnet
  • Claude 3 Opus
  • Claude 3 Haiku
  • Claude 3 Sonnet

You can switch the model by specifying the final_model parameter.

result = transcript.lemur.task(
    prompt,
    final_model=aai.LemurModel.claude3_5_sonnet
)
| Model | SDK Parameter | Description |
| --- | --- | --- |
| Claude 3.5 Sonnet | aai.LemurModel.claude3_5_sonnet | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version claude-3-5-sonnet-20240620. |
| Claude 3.0 Opus | aai.LemurModel.claude3_opus | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| Claude 3.0 Haiku | aai.LemurModel.claude3_haiku | Claude 3 Haiku is the fastest model and can execute lightweight actions. |
| Claude 3.0 Sonnet | aai.LemurModel.claude3_sonnet | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

You can find more information on pricing for each model on the AssemblyAI pricing page.

Change the maximum output size

You can change the maximum output size in tokens by specifying the max_output_size parameter. The maximum allowed value is 4000 tokens.

result = transcript.lemur.task(
    prompt,
    max_output_size=1000
)

Change the temperature

You can change the temperature by specifying the temperature parameter, ranging from 0.0 to 1.0.

Higher values produce more creative answers; lower values produce more conservative ones.

result = transcript.lemur.task(
    prompt,
    temperature=0.7
)

Send customized input

You can submit custom text inputs to LeMUR without transcript IDs. This lets you customize the input; for example, you can include speaker labels for the LLM.

To submit custom text input, use the input_text parameter on aai.Lemur().task().

config = aai.TranscriptionConfig(
    speaker_labels=True,
)
transcript = transcriber.transcribe(audio_url, config=config)

text_with_speaker_labels = ""
for utt in transcript.utterances:
    text_with_speaker_labels += f"Speaker {utt.speaker}:\n{utt.text}\n"

result = aai.Lemur().task(
    prompt,
    input_text=text_with_speaker_labels
)
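As a standalone illustration of the speaker-label formatting used above, this pure-Python sketch (no API calls; the utterance data here is made up for demonstration) shows the string that would be passed as input_text:

```python
# Hypothetical utterance data standing in for transcript.utterances
utterances = [
    {"speaker": "A", "text": "Hi, thanks for calling."},
    {"speaker": "B", "text": "I'd like to check my order."},
]

# Same formatting as the snippet above: "Speaker <label>:" then the utterance text
text_with_speaker_labels = ""
for utt in utterances:
    text_with_speaker_labels += f"Speaker {utt['speaker']}:\n{utt['text']}\n"

print(text_with_speaker_labels)
```

Including the speaker label before each utterance gives the LLM enough context to attribute statements to the correct participant when answering your prompt.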

Submit multiple transcripts

LeMUR can easily ingest multiple transcripts in a single API call.

You can submit up to 100 files or 100 hours of audio, whichever limit is reached first.

transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
        "https://example.org/customer2.mp3",
        "https://example.org/customer3.mp3",
    ],
)

# Or use existing transcripts:
# transcript_group = aai.TranscriptGroup.get_by_ids([id1, id2, id3])

result = transcript_group.lemur.task(
    prompt="Provide a summary of these customer calls."
)

Delete data

You can delete the data for a previously submitted LeMUR request.

Response data from the LLM, as well as any context provided in the original request, will be removed.

result = transcript.lemur.task(prompt)

deletion_response = aai.Lemur.purge_request_data(result.request_id)

API reference

You can find detailed information about all LeMUR API endpoints and parameters in the LeMUR API reference.
