Identifying hate speech in audio or video files

Our Content Moderation model can help you ensure that your content is safe and appropriate for all audiences.

The model pinpoints sensitive discussions in spoken data and reports how severe they are.

In this guide, we’ll learn how to use the Content Moderation model, and look at an example response to understand its structure.

Get started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for a free account and get your API key from your dashboard.

The complete source code for this guide can be viewed here.

Here is an audio example for this guide:

https://assembly.ai/wildfires.mp3

Step-by-step instructions

1

Create a new file and import the necessary packages for making HTTP requests.

import requests
import time

2

Set up the API endpoint and headers. The headers should include your API key.

base_url = "https://api.assemblyai.com/v2"

headers = {
    "authorization": "<YOUR_API_KEY>"
}

3

Upload your local file to the AssemblyAI API.

with open("./my-audio.mp3", "rb") as f:
    response = requests.post(base_url + "/upload",
                             headers=headers,
                             data=f)

upload_url = response.json()["upload_url"]

4

Use the upload_url returned by the AssemblyAI API to create a JSON payload containing the audio_url parameter and the content_safety parameter set to True.

data = {
    "audio_url": upload_url,
    "content_safety": True
}

5

Make a POST request to the AssemblyAI API endpoint with the payload and headers.

url = base_url + "/transcript"
response = requests.post(url, json=data, headers=headers)

6

After making the request, you’ll receive an ID for the transcription. Use it to poll the API every few seconds to check the status of the transcript job. Once the status is completed, you can retrieve the transcript from the API response, using the content_safety_labels key to view the results.

transcript_id = response.json()['id']
polling_endpoint = f"https://api.assemblyai.com/v2/transcript/{transcript_id}"

while True:
    transcription_result = requests.get(polling_endpoint, headers=headers).json()

    if transcription_result['status'] == 'completed':
        # Uncomment the next line to print the full results
        # print(transcription_result['content_safety_labels'])

        content_safety_labels = transcription_result['content_safety_labels']['results']
        for result in content_safety_labels:
            for label in result['labels']:
                # The severity score measures how severe the flagged content is
                # on a scale of 0-1, with 1 being the most severe.
                if label['label'] == 'hate_speech' and label['severity'] >= 0.5:
                    print("Hate speech detected with severity score:", label['severity'])
                    # Do something with this information, such as flagging the transcription for review
        break

    elif transcription_result['status'] == 'error':
        raise RuntimeError(f"Transcription failed: {transcription_result['error']}")

    else:
        time.sleep(3)

Understanding the response

In the JSON response, there’ll be an additional key called content_safety_labels that contains information about any sensitive content detected. The full text is contained in the text key, and each problematic utterance has its own labels and timestamp. The entire audio is assigned a summary and a severity_score_summary for each category of unsafe content. Each label is returned with a confidence score and a severity score.
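As an illustrative sketch of that structure, a trimmed response might look like the following. All values below are made up for illustration, not real API output:

```python
# Illustrative excerpt of a response; every value here is invented.
response_excerpt = {
    "content_safety_labels": {
        "status": "success",
        "results": [
            {
                "text": "...a flagged utterance...",
                "labels": [
                    # Each label carries a confidence score and a severity score.
                    {"label": "hate_speech", "confidence": 0.97, "severity": 0.62}
                ],
                "timestamp": {"start": 8210, "end": 11260},
            }
        ],
        # One confidence value per detected category, for the whole file.
        "summary": {"hate_speech": 0.91},
        # Severity for the whole file, broken into low/medium/high buckets.
        "severity_score_summary": {
            "hate_speech": {"low": 0.12, "medium": 0.57, "high": 0.31}
        },
    }
}

# The per-file summary is often enough to decide whether a file needs review.
for category, confidence in response_excerpt["content_safety_labels"]["summary"].items():
    print(category, confidence)
```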

For more information, see Content Moderation model documentation and API reference.

Conclusion

The AssemblyAI API supports many different content safety labels. Identifying hate speech is only a single, important use case for automated content moderation, and you can learn about others on the AssemblyAI blog.
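The loop from the guide generalizes to other categories: check each label against a set of categories you care about. A minimal sketch, assuming the same `results` structure as above; the label names other than `hate_speech` are examples only, so consult the API reference for the supported list:

```python
# Categories to act on; names besides hate_speech are illustrative examples.
FLAGGED = {"hate_speech", "profanity", "weapons"}

def collect_flags(results, threshold=0.5):
    """Return (label, severity, text) tuples for flagged labels above the threshold."""
    flags = []
    for result in results:
        for label in result["labels"]:
            if label["label"] in FLAGGED and label["severity"] >= threshold:
                flags.append((label["label"], label["severity"], result["text"]))
    return flags

# Example with mocked results (not real API output):
mock_results = [
    {"text": "utterance one", "labels": [{"label": "profanity", "severity": 0.8}]},
    {"text": "utterance two", "labels": [{"label": "gambling", "severity": 0.9}]},
]
print(collect_flags(mock_results))  # only the profanity utterance passes the filter
```

Raising the threshold or shrinking the `FLAGGED` set tightens what gets surfaced for review.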