LangChain is a framework for developing applications powered by Large Language Models (LLMs). LangChain provides common components for building LLM integrations. However, LLMs operate only on textual data and don't understand what is said in audio files. With our recent contribution to LangChain.js, you can now integrate AssemblyAI's transcription models using a set of document loaders, with more integrations to come.
Note
The AssemblyAI integration is also available in the Python version of LangChain. Read the AssemblyAI integration documentation for Python LangChain.
The AssemblyAI integration is built into the langchain package, so you can start using AssemblyAI's document loaders immediately, without any extra dependencies.
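Both modules used in the sample below read their API keys from environment variables by default: the AssemblyAI document loaders look for ASSEMBLYAI_API_KEY, and the OpenAI module looks for OPENAI_API_KEY. Here's a minimal sketch of configuring them in code; in a real application you'd export them in your shell or use a .env file instead of hard-coding them.

// For illustration only: replace the placeholders with your own keys,
// and prefer shell exports or a .env file over hard-coding them.
process.env.ASSEMBLYAI_API_KEY = "<your-assemblyai-api-key>";
process.env.OPENAI_API_KEY = "<your-openai-api-key>";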
Here's a sample LangChain.js application that can answer questions about an audio file. The AudioTranscriptLoader uses AssemblyAI to transcribe the audio file, and OpenAI generates the response to the question.
import { OpenAI } from "langchain/llms/openai";
import { loadQAStuffChain } from "langchain/chains";
import { AudioTranscriptLoader } from "langchain/document_loaders/web/assemblyai";

(async () => {
  const llm = new OpenAI({});
  const chain = loadQAStuffChain(llm);
  const loader = new AudioTranscriptLoader({
    // You can also use a local path to an audio file, like ./sports_injuries.mp3
    audio_url: "https://storage.googleapis.com/aai-docs-samples/sports_injuries.mp3",
    language_code: "en_us",
  });
  const docs = await loader.load();
  const response = await chain.call({
    input_documents: docs,
    question: "What is a runner's knee?",
  });
  console.log(response.text);
})();
The output of the application looks like this:
Runner's knee is a condition characterized by pain behind or around the kneecap. It is caused by overuse, muscle imbalance, and inadequate stretching. Symptoms include pain under or around the kneecap and pain when walking.
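If you want to inspect the transcript itself, the documents returned by the loader contain the transcript text in pageContent and the remaining transcript properties in metadata. A quick sketch, reusing the loader from the sample above:

// Each document holds the transcript text plus its metadata.
const docs = await loader.load();
console.log(docs[0].pageContent); // the transcribed text
console.log(docs[0].metadata); // the rest of the transcript properties from AssemblyAI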
Next steps
If you want to learn how to build the application above, check out this step-by-step tutorial on how to build a LangChain Q&A application for audio files. AssemblyAI also has its own pre-built solution called LeMUR (Leveraging Large Language Models to Understand Recognized Speech). With LeMUR, you can apply an LLM to large amounts of long-form audio. You can learn more about using the LeMUR API in the docs.
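To give you a feel for LeMUR, here is a minimal sketch using the AssemblyAI JavaScript SDK's LeMUR Task endpoint. The transcript ID and prompt below are placeholders for illustration:

import { AssemblyAI } from "assemblyai";

const client = new AssemblyAI({ apiKey: process.env.ASSEMBLYAI_API_KEY });

// Run a custom prompt against one or more existing transcripts.
// "<transcript-id>" is a placeholder for a real transcript ID.
const { response } = await client.lemur.task({
  transcript_ids: ["<transcript-id>"],
  prompt: "Summarize the key points of this audio in three bullet points.",
});
console.log(response);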