In fact, Mark Zuckerberg, founder and CEO of the social media giant, announced last month that his company would hire 3,000 additional people to review content. Facebook already has 4,500 moderators working through millions of reports each day. According to The Guardian, each moderator has just 10 seconds to decide how to filter each reported post. The only guidance moderators receive is a set of internal manuals, which give examples of what to censor for each kind of abuse, such as graphic violence, cruelty to animals, credible threats of violence, and non-sexual child abuse.
Different types of disturbing content
Of course, there are specific rules for each category of content that moderators filter. For every category, moderators receive an internal manual describing how to handle disturbing material; everything from child abuse to self-harm and even revenge porn is covered. Here is a list of examples that The Guardian provides, citing Facebook’s rules:
- Remarks such as “Someone shoot Trump” should be deleted, because he is in a protected category. It can be permissible to say “To snap a bi**ch’s neck, make sure to apply all your pressure to the middle of her throat”, as this is not regarded as a credible threat.
- Footage of graphic violence should be marked as disturbing, but not always deleted. Facebook believes such videos can help create awareness of issues like mental illness.
- Facebook allows users to post footage of child abuse, as long as it is not sexual, marking it as disturbing rather than removing it. Videos of child abuse that people share with sadism and celebration are removed.
- All “handmade” art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not.
- Facebook will allow people to live stream attempts to self-harm because it “doesn’t want to censor or punish people in distress”.
- Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.