Identifying hate speech in audio or video files
Our Content Moderation model can help you ensure that your content is safe and appropriate for all audiences.
The model pinpoints sensitive discussions in spoken data and reports how severe each one is.
In this guide, we'll learn how to use the Content Moderation model, and look at an example response to understand its structure.
Step-by-step instructions
1. Create a new file and import the necessary libraries for making an HTTP request.
2. Set up the API endpoint and headers. The headers should include your API key.
3. Upload your local file to the AssemblyAI API.
4. Use the `upload_url` returned by the AssemblyAI API to create a JSON payload containing the `audio_url` parameter and the `content_safety` parameter set to `True`.
5. Make a `POST` request to the AssemblyAI API endpoint with the payload and headers.
6. After making the request, you'll receive an ID for the transcription. Use it to poll the API every few seconds to check the status of the transcript job. Once the status is `completed`, you can retrieve the transcript from the API response, using the `content_safety_labels` key to view the results.
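Putting the six steps above together, here's a minimal sketch in Python. It assumes the standard AssemblyAI v2 REST endpoints (`/v2/upload` and `/v2/transcript`) and uses the `requests` library; the API key placeholder and the `audio.mp3` file name are hypothetical, so check the API reference for the exact parameters before relying on it.

```python
import time

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: replace with your AssemblyAI API key
BASE_URL = "https://api.assemblyai.com/v2"  # assumed v2 REST base URL
HEADERS = {"authorization": API_KEY}


def upload_file(path):
    """Step 3: upload a local audio/video file and return its upload_url."""
    with open(path, "rb") as f:
        response = requests.post(f"{BASE_URL}/upload", headers=HEADERS, data=f)
    response.raise_for_status()
    return response.json()["upload_url"]


def request_transcript(audio_url):
    """Steps 4-5: request a transcript with the Content Moderation model enabled."""
    payload = {"audio_url": audio_url, "content_safety": True}
    response = requests.post(f"{BASE_URL}/transcript", headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()["id"]


def poll_transcript(transcript_id):
    """Step 6: poll every few seconds until the transcript job finishes."""
    while True:
        response = requests.get(f"{BASE_URL}/transcript/{transcript_id}", headers=HEADERS)
        response.raise_for_status()
        transcript = response.json()
        if transcript["status"] == "completed":
            return transcript
        if transcript["status"] == "error":
            raise RuntimeError(transcript["error"])
        time.sleep(3)


upload_url = upload_file("audio.mp3")  # hypothetical local file
transcript = poll_transcript(request_transcript(upload_url))
print(transcript["content_safety_labels"])
```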
Understanding the response
In the JSON response, there'll be an additional key called `content_safety_labels` that contains information about any sensitive content detected. The full text is contained in the `text` key, and each problematic utterance has its own `labels` and `timestamp`. The entire audio file is assigned a `summary` and a `severity_score_summary` for each category of unsafe content. Each label is returned with a confidence score and a severity score.
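As a rough illustration, the snippet below walks a completed transcript and prints the fields described above. The nested names it assumes (`results`, `label`, `confidence`, `severity`, and millisecond `start`/`end` offsets inside `timestamp`) follow the documented response shape, but treat this as a sketch and confirm the exact structure against the API reference.

```python
def print_content_safety(transcript):
    """Print each flagged utterance, then the per-category summaries."""
    moderation = transcript["content_safety_labels"]

    for result in moderation["results"]:
        ts = result["timestamp"]  # start/end offsets in milliseconds
        print(f'{ts["start"]}-{ts["end"]} ms: "{result["text"]}"')
        for label in result["labels"]:
            # Each label carries both a confidence score and a severity score.
            print(f'  {label["label"]}: confidence={label["confidence"]}, '
                  f'severity={label["severity"]}')

    # Whole-file summaries, keyed by content safety label.
    print("summary:", moderation["summary"])
    print("severity_score_summary:", moderation["severity_score_summary"])
```

Called with the transcript object returned by the polling loop in the earlier sketch, this prints one line per flagged utterance followed by the whole-file summaries.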
For more information, see the Content Moderation model documentation and the API reference.
Conclusion
The AssemblyAI API supports many different content safety labels. Identifying hate speech is just one important use case for automated content moderation, and you can learn about others on the AssemblyAI blog.