How Bluedot built with AssemblyAI to increase user conversion rate
In our Built with AssemblyAI series, we're featuring Bluedot, an AI-powered productivity app for asynchronous work, whose team says AssemblyAI helped them achieve more with fewer engineers.



In our "Built with AssemblyAI" series, we showcase developer projects, innovative startups, and impressive products created using AssemblyAI’s Speech AI models for transcribing and understanding human speech.
Dima Eremin, CEO of Bluedot, an AI-powered productivity app for asynchronous work, shares why they decided to partner with AssemblyAI and details the value of AssemblyAI's models, ease of use, and pricing.
Tell me about Bluedot and its products.
Bluedot is a productivity app for asynchronous work. We help companies automate documentation and reduce the number of meetings using AI. One of our products is a Chrome extension that assists remote teams by recording, transcribing, and summarizing their Google Meet meetings, with AI-generated notes tailored to their needs.

What led you to AssemblyAI?
We were in search of a robust speech-to-text provider to transcribe our users' meetings. Our primary criteria were accuracy in transcribing different accents, speaker recognition, the availability of timestamped transcripts, multi-language support, and cost-effectiveness.
We tested at least five different providers, and AssemblyAI emerged as the best choice across all of these criteria, particularly transcript quality and pricing. Choosing them was a no-brainer.
We had previously been using Google's speech-to-text service; however, the transcription quality was not accurate enough, leading to numerous complaints from our customers. Speaker recognition was a critical factor for us, and Google's service consistently mislabeled speakers, an error we could not afford.
What AI models do you currently use?
We are utilizing both Speech-to-Text and Audio Intelligence models. The Speech-to-Text model is essential to our product, as we produce technical reports, meeting notes, feature specifications, etc., based on the transcriptions.
What were the core challenges you needed to solve by integrating Speech AI?
Our main challenge was finding transcription precise enough to capture technical terms, acronyms, and industry-specific language. The Custom Vocabulary feature has been a game-changer for us, as it lets us tailor the model to the acronyms our customers use. After integrating AssemblyAI, our free-to-paid conversion rate increased from 2% to 3%, as many users chose us over our competitors. The accuracy of AssemblyAI's transcripts has proven to be a huge competitive advantage and has helped us secure multiple deals.
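To give a sense of what this looks like in practice, here is a minimal sketch using AssemblyAI's Python SDK, combining speaker labels with the word_boost parameter that backs the Custom Vocabulary feature. The meeting URL and acronym list are placeholders, not Bluedot's actual configuration.

```python
# Minimal sketch: transcribe a meeting recording with speaker labels and
# a custom vocabulary of acronyms. The URL and terms below are placeholders.
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

config = aai.TranscriptionConfig(
    speaker_labels=True,                 # diarization: who said what
    word_boost=["LeMUR", "SSO", "K8s"],  # acronyms/terms to bias recognition toward
    boost_param="high",                  # how strongly to boost them
)

# A publicly accessible video/audio URL is enough; no preprocessing is needed.
transcript = aai.Transcriber().transcribe(
    "https://example.com/meeting-recording.mp4", config
)

for utterance in transcript.utterances:
    print(f"Speaker {utterance.speaker}: {utterance.text}")
```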
For example, Frontierx, one of our customers, generates how-to guides and technical reports from their conversations. AssemblyAI has provided invaluable assistance in detecting all the technical acronyms necessary for documenting architectural decisions.
What results or benefits have you seen after integrating Speech-to-Text and Audio Intelligence?
Our CTO is hugely impressed with AssemblyAI. Previously, he had to make sure we extracted the audio from the video in the correct format, aligned timestamps with transcripts, and so on. With AssemblyAI, simply sending a link to the video is sufficient; everything else is automated. Moreover, if we have any questions, the support team is always ready to respond the same day. We couldn't be happier using AssemblyAI.
We are currently testing LeMUR, hoping that it will help us replace OpenAI. LeMUR is a much more affordable alternative and would be more convenient as we are already using AssemblyAI.
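As an illustration of how LeMUR can slot into the same pipeline, here is a brief sketch that asks LeMUR to turn an existing transcript into meeting notes via the Python SDK. The prompt and URL are hypothetical examples, not Bluedot's actual setup.

```python
# Sketch: generate meeting notes from a transcript with LeMUR.
# The prompt below is illustrative only.
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

transcript = aai.Transcriber().transcribe(
    "https://example.com/meeting-recording.mp4"
)

result = transcript.lemur.task(
    "Summarize this meeting as concise notes with action items and decisions."
)
print(result.response)
```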
If you want to avoid all the headaches of transcribing, analyzing, and summarizing meetings, you can outsource everything to AssemblyAI. All you need to do is send a link to your video or audio file, and you quickly receive everything you need. AssemblyAI has allowed us to achieve more with fewer engineers.
Want to read additional AssemblyAI case studies? Read them all here.