What LeMUR unlocks
Apply LLMs to multiple audio transcripts
LeMUR enables users to get LLM responses across multiple audio files at once, as well as over transcripts up to 10 hours in duration, which effectively translates to a context window of ~150K tokens.
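As a minimal sketch of what this looks like with the AssemblyAI Python SDK (the file URLs and the prompt below are placeholders, and the `transcribe_group` helper is the SDK's way of batching transcripts; check the current SDK docs for exact signatures):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder key

transcriber = aai.Transcriber()

# Transcribe several files together; the resulting group can be
# passed to LeMUR as a single combined input.
transcript_group = transcriber.transcribe_group([
    "https://example.com/call_1.mp3",  # placeholder URLs
    "https://example.com/call_2.mp3",
])

# Run one prompt across all transcripts at once.
result = transcript_group.lemur.task(
    "Summarize the key points shared across these calls."
)
print(result.response)
```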
Reliable & safe outputs
LeMUR includes safety measures and content filters, so the LLM responses it returns are less likely to contain harmful or biased language.
Inject context specific to your use case
LeMUR enables users to provide additional context at inference time, which the LLM can use to generate more personalized and accurate outputs.
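For example, a hedged sketch of passing context to a summarization request via the Python SDK (the URL and context string are placeholders, and the `context` and `answer_format` parameters are as documented for LeMUR's summarize endpoint):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder key

transcript = aai.Transcriber().transcribe("https://example.com/standup.mp3")

# Supply domain context at inference time so the LLM can resolve
# jargon, names, and acronyms specific to this recording.
summary = transcript.lemur.summarize(
    context="A daily engineering standup for the 'Atlas' data-pipeline team.",
    answer_format="bullet points",
)
print(summary.response)
```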
Modular, fast integration
LeMUR consistently returns structured data in the form of consumable JSON. Users can further customize the format of LeMUR’s output to ensure responses match the format their next piece of business logic expects (for example, boolean answers to questions). This eliminates the need to build custom code for handling raw LLM output, making it possible to bring LLM capabilities into a product with just a few lines of code.
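A sketch of the boolean-answer case using the SDK's question-and-answer interface (the URL and question are placeholders; `answer_options` constrains the model to one of the listed values):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder key

transcript = aai.Transcriber().transcribe("https://example.com/support_call.mp3")

# Constrain each answer so downstream business logic receives a
# predictable value instead of free-form text.
result = transcript.lemur.question_answer(
    questions=[
        aai.LemurQuestion(
            question="Did the customer ask to cancel their subscription?",
            answer_options=["Yes", "No"],  # forces a boolean-style answer
        ),
    ]
)
for qa in result.response:
    print(qa.question, "->", qa.answer)
```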
Continuously state-of-the-art
New LLM technologies and models are continually being released. AssemblyAI pulls the newest breakthroughs into LeMUR and all of our ASR models, so users can always build with the latest AI technology.
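In practice, newer models slot in behind the same interface. A sketch, assuming the SDK's `final_model` parameter and a `LemurModel` enum value (the specific model name below is an assumption; the available names change over time, so consult the current SDK docs):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder key

transcript = aai.Transcriber().transcribe("https://example.com/meeting.mp3")

# Swapping the final_model value is the only change needed to
# move the same request onto a newer underlying LLM.
result = transcript.lemur.task(
    "List the action items from this meeting.",
    final_model=aai.LemurModel.claude3_5_sonnet,  # assumed enum value
)
print(result.response)
```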