Electronic Frontier Foundation

One Database to Rule Them All: The Invisible Content Cartel that Undermines the Freedom of Expression Online

(August 27, 2020)

Over the past decade, social media platforms have come under increasing pressure from governments around the globe to respond to violent and extremist content on their platforms. Spurred by the terrorist attacks in Paris and Brussels in 2015 and 2016, respectively, and guided by the shortsighted belief that censorship is an effective tool against extremism, governments have increasingly turned to content moderation as a means of combating international terrorism.

Commercial content moderation is the process through which platforms—more specifically, human reviewers or, very often, machines—make decisions about what content can and cannot be on their sites, based on their own Terms of Service, “community standards,” or other rules.
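To make that process concrete, here is a minimal sketch of what rule-based machine moderation can look like. The rule list, policy names, and example posts are hypothetical illustrations, not any platform's actual Terms of Service or enforcement logic.

```python
# A minimal sketch of machine moderation as literal rule-matching.
# The rules and example posts below are hypothetical.
RULES = {
    "buy cheap followers": "spam",
    "join our militia": "violent extremism",
}

def moderate(post):
    """Return ("remove", policy) if a rule matches, else ("allow", None)."""
    text = post.lower()
    for phrase, policy in RULES.items():
        if phrase in text:
            return "remove", policy
    return "allow", None

for post in ["Buy cheap followers here!", "Lovely weather in Brussels today."]:
    decision, policy = moderate(post)
    print(decision, policy or "-", post, sep=" | ")
```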

During the coronavirus pandemic, social media companies have been less able to rely on human content reviewers and are instead leaning more heavily on machine learning algorithms to flag and moderate content. Those algorithms, which are at bottom just sets of instructions for carrying out a task, are fed an initial set of rules and large amounts of training data in the hope that they will learn to identify similar content.
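As a sketch of what "training on labeled examples" means in practice, the snippet below fits a toy text classifier with scikit-learn and uses it to score new posts. The tiny dataset, the labels, and the 0.5 flagging threshold are illustrative assumptions, not a description of any platform's real system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violates policy, 0 = acceptable.
posts = [
    "join our armed struggle today",
    "support the glorious attack",
    "lovely photos from my holiday",
    "recipe for my grandmother's soup",
]
labels = [1, 1, 0, 0]

# Learn word weights from the training data, then score unseen posts.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

for post in ["pictures from the attack", "holiday soup recipe"]:
    score = model.predict_proba([post])[0][1]  # probability of "violates"
    flag = "FLAG" if score > 0.5 else "allow"  # illustrative threshold
    print(f"{flag} ({score:.2f}): {post}")
```

Even in this toy example, the model keys on surface word overlap, so a news report containing the word "attack" scores much like the extremist posts it was trained on, which is exactly the kind of error such classifiers make at scale.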