Content Moderation
The Content Moderation model lets you detect inappropriate content in audio files to ensure that your content is safe for all audiences.
The model pinpoints where sensitive topics come up in spoken data and how severely they're discussed.
Quickstart
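A minimal Python sketch of enabling Content Moderation with the assemblyai SDK; the API key and audio URL are placeholders:

```python
import assemblyai as aai

# Replace with your API key
aai.settings.api_key = "YOUR_API_KEY"

# Enable Content Moderation in the transcription config
config = aai.TranscriptionConfig(content_safety=True)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config=config)

# Each result is a flagged segment with its detected labels
for result in transcript.content_safety.results:
    print(result.text)
    print(f"Timestamp: {result.timestamp.start} - {result.timestamp.end}")
    for label in result.labels:
        print(f"{label.label} - confidence {label.confidence}, severity {label.severity}")
```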
Example output
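An illustrative sketch of the content_safety_labels portion of the response; the label, scores, and timestamps shown here are placeholder values:

```json
{
  "content_safety_labels": {
    "status": "success",
    "results": [
      {
        "text": "...a segment of the transcript that was flagged...",
        "labels": [
          {
            "label": "profanity",
            "confidence": 0.97,
            "severity": 0.21
          }
        ],
        "timestamp": {
          "start": 250,
          "end": 28840
        }
      }
    ],
    "summary": {
      "profanity": 0.99
    },
    "severity_score_summary": {
      "profanity": {
        "low": 0.62,
        "medium": 0.38,
        "high": 0.0
      }
    }
  }
}
```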
Adjust the confidence threshold
The confidence threshold determines the minimum confidence score required for a label to be flagged as inappropriate content. A threshold of 50% (the default) means any label with a confidence score of 50% or greater is flagged.
To adjust the confidence threshold for your transcription, include content_safety_confidence in the transcription config.
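A minimal sketch, assuming the assemblyai SDK as in the quickstart; 60 is an illustrative threshold:

```python
import assemblyai as aai

# Only flag labels with a confidence score of 60% or greater
# (60 is an illustrative value; the default is 50)
config = aai.TranscriptionConfig(
    content_safety=True,
    content_safety_confidence=60,
)
```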
API reference
Request
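A sketch of the request parameters relevant to Content Moderation; the audio_url value is a placeholder:

```json
{
  "audio_url": "https://example.org/audio.mp3",
  "content_safety": true,
  "content_safety_confidence": 60
}
```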
Response
The response also includes the request parameters used to generate the transcript.
Supported labels
Frequently asked questions
Why is the Content Moderation model not detecting sensitive content in my audio file?
There could be a few reasons for this. First, make sure that the audio file contains speech, and not just background noise or music. Additionally, the model may not have been trained on the specific type of sensitive content you’re looking for. If you believe the model should be able to detect the content but it’s not, you can reach out to AssemblyAI’s support team for assistance.
Why is the Content Moderation model flagging content that isn't actually sensitive?
The model may occasionally flag content as sensitive that isn’t actually problematic. This can happen if the model isn’t trained on the specific context or nuances of the language being used. In these cases, you can manually review the flagged content and determine if it’s actually sensitive or not. If you believe the model is consistently flagging content incorrectly, you can contact AssemblyAI’s support team to report the issue.
How do I know which specific parts of the audio file contain sensitive content?
The Content Moderation model provides segment-level results that pinpoint where in the audio the sensitive content was discussed, as well as the degree to which it was discussed. You can access this information in the results key of the API response. Each result in the list contains a text key that shows the sensitive content, and a labels key that shows the detected sensitive topics along with their confidence and severity scores.
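As a sketch, reusing the transcript object from the quickstart above, you could surface each flagged segment with its highest-confidence label and timestamps:

```python
# Print each flagged segment with its strongest label and where it occurs
for result in transcript.content_safety.results:
    top = max(result.labels, key=lambda l: l.confidence)
    print(f"[{result.timestamp.start}-{result.timestamp.end} ms] "
          f"{top.label} (confidence {top.confidence:.2f}, severity {top.severity:.2f})")
    print(result.text)
```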
Can the Content Moderation model be used in real-time applications?
The model is designed to process batches of segments in significantly less than 1 second, making it suitable for real-time applications. However, keep in mind that the actual processing time depends on the length of the audio file and the number of segments it’s divided into. Additionally, the model may occasionally require additional time to process particularly complex or long segments.
Why am I receiving an error message when using the Content Moderation model?
If you receive an error message, it may be due to an issue with your request format or parameters. Double-check that your request includes the correct audio_url parameter. If you continue to experience issues, you can reach out to AssemblyAI’s support team for assistance.