
Hate speech detection

Overview

The Hate Speech detector aims to detect and classify instances of direct hate speech conveyed in private messages, comments, social media posts, and other short texts.

More specifically, it is designed both to extract the individual instances of offensive and violent language and to categorize each instance according to different hate speech categories.

Categorization

Categorization works in a similar way to document classification and is based on a taxonomy.

Warning

Unlike document classification, this is an information detector: the category tree of its taxonomy cannot be obtained with the API self-documentation resources available for document classification taxonomies, so it is listed below.

The detector is able to identify three main categories of hate speech based on purpose:

  • Personal insult
  • Discrimination and harassment
  • Threat and violence

Discrimination and harassment can be further divided into seven specific sub-categories that give information about the kind of discrimination perpetrated in the hate speech instances.

The full category tree is:

1000 Personal Insult
2000 Discrimination and Harassment
    2100 Racism  
    2200 Sexism
    2300 Ableism
    2400 Religious Hatred
    2500 Homophobia
    2600 Classism
    2700 Body Shaming 
3000 Threat and Violence


For example, when analyzing this text:

We should hang John Doe.

the output category is 3000 (Threat and Violence).
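
For a rough idea of how this looks in practice, the Python sketch below sends a text to the detector and prints the returned category codes. The endpoint URL, the authentication header, and the response shape (a categories list with id and label keys) are assumptions made for illustration only, not part of this API's documented interface.

```python
import requests

# Hypothetical endpoint: replace with the actual detector URL of your deployment.
DETECTOR_URL = "https://api.example.com/detect/hate-speech"


def categorize(text: str, token: str) -> list[dict]:
    """Send a short text to the detector and return its hate speech categories."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"document": {"text": text}},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"categories": [{"id": "3000", "label": "Threat and Violence"}]}
    return response.json().get("categories", [])


for category in categorize("We should hang John Doe.", token="YOUR_API_TOKEN"):
    print(category["id"], category["label"])  # e.g. 3000 Threat and Violence
```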

Each category is associated with specific extractions.

Extraction

Introduction

The information extraction activity of the detector finds and returns records of extracted information. Each record contains data fields, and its structure, that is the set of possible fields, is called a template.
A template can be compared to a table and the template fields to the columns of the table.

Hate_speech_detection template

Records of the Hate_speech_detection template can have these fields:

Name | Description | Example | Normalized value
full_instance | Stereotypes, generalizations or hateful messages. | Girls can't drive! | -
target | Recipients of violent messages, sexual harassment, personal insults. | We should hang John Doe | individual or animal
target_1 | - | John Doe is a retard | people with disabilities
target_2 | - | Fatties are ugly | individuals
target_3 | - | Rednecks have very low IQ | social class
target_4 | - | All gays should be eliminated | LGBT group
target_5 | - | Nigga stink | ethnic group
target_6 | - | Believe me, Christians should be crucified. | religious group
target_7 | - | Girls can't drive! | women
sexual_harassment | Direct abusive communications characterized by sexual content, appreciations or purposes. | I'd like to grab her tits | -
violence | Threats or violent purposes. | Let's bomb the government | -
cyberbullying | Direct abusive language, typically posted online, especially if it contains personal insults or body shaming. It is usually paired with instances of the target or target_2 classes. | You are a bitch | -
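
As a purely illustrative aid, the following sketch models one record of the Hate_speech_detection template as a Python data class whose attributes mirror the field names above; every field is optional because a record only carries what the detector actually extracted. The class is not part of any SDK, just one way to hold the output in client code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HateSpeechRecord:
    """One record of the Hate_speech_detection template (illustrative model)."""
    full_instance: Optional[str] = None      # stereotype, generalization or hateful message
    target: Optional[str] = None             # recipient of violence, harassment or insults
    target_1: Optional[str] = None           # ableism target
    target_2: Optional[str] = None           # body shaming target
    target_3: Optional[str] = None           # classism target
    target_4: Optional[str] = None           # homophobia target
    target_5: Optional[str] = None           # racism target
    target_6: Optional[str] = None           # religious hatred target
    target_7: Optional[str] = None           # sexism target
    sexual_harassment: Optional[str] = None  # abusive communication with sexual content
    violence: Optional[str] = None           # threat or violent purpose
    cyberbullying: Optional[str] = None      # direct abusive language posted online


# Example record for "We should hang John Doe": the target value is the
# normalized form described in the next paragraph.
record = HateSpeechRecord(target="individual or animal")
```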

When an extracted value is recognized as a slur or as referring to a specific discriminated social group, the extracted text is replaced in the extraction output with a standard value (see the Normalized value column in the table above). This does not apply, for example, to:

Let's bomb the government.

where government is extracted without any normalization as the value of class target.
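
To make the difference concrete, here is a hypothetical pair of extraction outputs; the JSON-like shape is an assumption, but the field names and values come from the examples above.

```python
# For "Nigga stink": the slur is recognized, so target_5 carries the
# standard value "ethnic group" instead of the original text.
normalized_output = {
    "template": "Hate_speech_detection",
    "fields": [{"name": "target_5", "value": "ethnic group"}],
}

# For "Let's bomb the government": "government" is not a recognized slur or
# discriminated social group, so it is returned verbatim as the target value.
verbatim_output = {
    "template": "Hate_speech_detection",
    "fields": [
        {"name": "violence", "value": "Let's bomb the government"},
        {"name": "target", "value": "government"},
    ],
}
```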

The extraction of all fields except full_instance is related to the categorization according to these relationships:

Category | Classes
1000 Personal Insult | target, cyberbullying
2100 Racism | target_5
2200 Sexism | target_7, target
2300 Ableism | target_1
2400 Religious Hatred | target_6
2500 Homophobia | target_4
2600 Classism | target_3
2700 Body Shaming | target_2
3000 Threat and Violence | target, target_1, target_2, target_3, target_4, target_5, target_6, target_7, sexual_harassment, violence
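
If you need these relationships in client code, the table can be restated as a plain mapping. The dictionary below simply mirrors the rows above and assumes category IDs are handled as strings.

```python
# Category ID -> classes whose extraction is tied to that category.
CATEGORY_CLASSES = {
    "1000": ["target", "cyberbullying"],  # Personal Insult
    "2100": ["target_5"],                 # Racism
    "2200": ["target_7", "target"],       # Sexism
    "2300": ["target_1"],                 # Ableism
    "2400": ["target_6"],                 # Religious Hatred
    "2500": ["target_4"],                 # Homophobia
    "2600": ["target_3"],                 # Classism
    "2700": ["target_2"],                 # Body Shaming
    "3000": [                             # Threat and Violence
        "target", "target_1", "target_2", "target_3", "target_4",
        "target_5", "target_6", "target_7", "sexual_harassment", "violence",
    ],
}
```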

Useful resources