Abstract
Social media is rife with hate speech. Although Facebook prohibits this content on its site, little is known about how much of the hate speech reported by users is actually removed by the company. Given the enormous power Facebook has to shape the universe of discourse, this study sought to determine what proportion of reported hate speech is removed from the platform and whether patterns exist in Facebook’s decision-making process.
To understand how the company is interpreting and applying its own Community Standards regarding hate speech, the authors identified hundreds of comments, posts, and images featuring hate speech, reported them to the company (n=311), and recorded Facebook’s decision regarding whether or not to remove each piece of reported content.
A qualitative content analysis was then performed on the content that was and was not removed to identify trends in Facebook’s content moderation decisions about hate speech. Of particular interest was whether the company’s 2018 policy update resulted in any meaningful change.
Our results indicated that only about half of reported content containing hate speech was removed. The 2018 policy change also appeared to have little impact on the company’s decision-making.
The results also suggest that Facebook had substantial issues, including:
- removing misogynistic hate speech,
- removing attacks and threats consistently,
- considering context in removal decisions, and
- providing transparency about its hate speech removal processes.
Labels: hate_speech, social_media, content_moderation, freedom_of_expression