LlamaIndex is a flexible data framework for connecting custom data sources to Large Language Models (LLMs).
With LlamaIndex, you can easily store and index your data, and then use it with LLMs to build applications.
However, LLMs only work with textual data, so you first need to transcribe your audio files to text.
Note
AssemblyAI also has an integration for LlamaIndex Python through LlamaHub. Learn how to use audio data in LlamaIndex with Python in this tutorial.
The AssemblyAI integration is built into the llamaindex package, so you can start using AssemblyAI's speech-to-text immediately without any extra dependencies.
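Before running the example below, you need an AssemblyAI API key. Here's a minimal sketch of how you could configure it, assuming the reader accepts the same options object as the AssemblyAI Node SDK (an apiKey field) and otherwise falls back to the ASSEMBLYAI_API_KEY environment variable. LlamaIndex.TS uses OpenAI for embeddings and completions by default, so you'll also need to set the OPENAI_API_KEY environment variable.

```typescript
import { AudioTranscriptReader } from "llamaindex";

// Assumption: the reader constructor accepts an options object with an apiKey field,
// and falls back to the ASSEMBLYAI_API_KEY environment variable when no key is passed.
const reader = new AudioTranscriptReader({
  apiKey: "<YOUR_ASSEMBLYAI_API_KEY>",
});
```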
Here's a sample LlamaIndex.TS application that can answer questions about an audio file.
The AudioTranscriptReader uses AssemblyAI to transcribe the audio file, and the queryEngine then uses OpenAI to generate a response to the question.
```typescript
import { VectorStoreIndex, AudioTranscriptReader } from "llamaindex";

async function main() {
  const reader = new AudioTranscriptReader();

  // Transcribe audio and store transcript in documents
  const docs = await reader.loadData({
    // You can also use a local path to an audio file, like ./sports_injuries.mp3
    audio: "https://storage.googleapis.com/aai-docs-samples/sports_injuries.mp3",
    language_code: "en_us",
  });

  // Split text and create embeddings. Store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments(docs);

  // Query the index
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query("What is a runner's knee?");

  // Output response
  console.log(response.toString());
}

main();
```
The output of the application looks like this:
Runner's knee is a condition characterized by pain behind or around the kneecap. It is caused by overuse, muscle imbalance, and inadequate stretching. Symptoms include pain under or around the kneecap and pain when walking.
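Because the transcript is already embedded and stored in the index, you can ask follow-up questions without transcribing the audio again. For example, you could add the following lines to the end of the main function (the question itself is just an illustration):

```typescript
// Reuse the same query engine for another question; no new transcription is needed.
const followUp = await queryEngine.query("Which sports injuries are mentioned in the audio?");
console.log(followUp.toString());
```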
Info
Next steps
If you want to learn how to build the application above, check out this step-by-step tutorial on how to build a LlamaIndex.TS Q&A application for audio files.
AssemblyAI also has its own pre-built solution called LeMUR (Leveraging Large Language Models to Understand Recognized Speech). With LeMUR, you can use an LLM to perform tasks over large amounts of long audio files. You can learn more about the LeMUR API in the docs.
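For comparison, here's a minimal sketch of answering the same question with LeMUR, using the assemblyai Node SDK directly. It assumes a recent version of the SDK where client.transcripts.transcribe and client.lemur.task are available; check the AssemblyAI docs for the exact method names in your version.

```typescript
import { AssemblyAI } from "assemblyai";

const client = new AssemblyAI({
  apiKey: "<YOUR_ASSEMBLYAI_API_KEY>",
});

async function askWithLemur() {
  // Transcribe the audio file first
  const transcript = await client.transcripts.transcribe({
    audio: "https://storage.googleapis.com/aai-docs-samples/sports_injuries.mp3",
  });

  // Ask LeMUR a question about the transcript
  const { response } = await client.lemur.task({
    transcript_ids: [transcript.id],
    prompt: "What is a runner's knee?",
  });

  console.log(response);
}

askWithLemur();
```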