Facebook glitch undermines hate speech moderation

A few days ago, a user identified here by the pseudonym ‘Janet’ reported hate speech on Facebook. The company’s support team sent her a message thanking her for “doing the right thing” and claiming to have “removed both the group and all its posts, including the one (she) reported”.

In reality, none of this occurred, and the content Janet flagged stayed online. Facebook is now investigating a possible glitch in the system it uses to moderate problematic content on its network. The company reportedly told the BBC that it is “investigating the issue, and will share information as soon as [it] can”.

The glitch sends users an automated message saying their flagged content has been removed when, in fact, Facebook’s moderators have ruled that it should remain on the network.

“It’s a huge breach of trust,” said Brandie Nonnecke, an affiliate of the Center for Information Technology Research in the Interest of Society.

When asked about the scale of the problem, Facebook did not provide a response.

The group Janet flagged reportedly had 50,000 members who posted anti-immigrant and anti-Muslim rhetoric. It was named “LARGEST GROUP EVER! We need 10,000,000 members to Make America Great Again”.

Facebook has been attempting to combat various forms of hate speech on its platform following blowback over alt-right activity on its networks. Janet says Facebook has been promoting these efforts in her news feed, claiming it wants to keep democracy safe “by eliminating content that is false and divisive”.

Over six months ago, prompts began appearing in users’ feeds asking whether they believed each piece of content they encountered on the site could constitute hate speech. Although the feature was live for only about an hour, it drew considerable attention. Hate speech cannot be combatted in the same way as spam: its nuance makes it significantly harder for an algorithm to detect. Users like Janet are therefore vital to the functioning of Facebook’s anti-hate-speech initiatives.

According to Facebook’s own statistics, 62% of the hate speech its staff removed between January and March of this year was flagged by users.