Sunday, May 28, 2017 by Ethan Huff
The incoherent way that Facebook chooses to police the content published on its social media platform is getting stranger and more controversial by the day. New reports indicate that the tech giant will no longer block videos and imagery of extreme violence, including gory death and abortion, because doing so represents a form of “censorship” that supposedly violates the rights of its users (but Facebook is happy to censor content over political views it doesn’t like).
Leaked documents outlining the secret rules and guidelines that Facebook uses to determine the appropriateness of people’s online posts suggest that Zuckerberg and company really don’t have a grasp on how to effectively parse morality and ethics from free speech. More than 100 internal training manuals, spreadsheets, and flowcharts analyzed by The Guardian suggest that Facebook no longer has a handle on how to maintain a safe and civil online environment for the diversity of its users.
With more than two billion users worldwide, Facebook has a responsibility, experts say, to ensure that inappropriate content is properly moderated. But with only 4,500 employees assigned to the task – roughly one moderator for every 450,000 users – doing so is proving nearly impossible, especially when the company doesn’t even seem to know how to properly identify what’s inappropriate.
Content that Facebook has deemed appropriate for its platform, according to the leaked documents, includes live streams, videos, and photos of users attempting to inflict self-harm; handmade “art” depicting nudity; animal abuse; non-sexual physical abuse and bullying of children; abortions, so long as they don’t depict nudity; and other forms of violent death.
As far as extreme written content, Facebook is similarly positioned to do almost nothing in response. Off-the-cuff threats of violence, even against children, are mostly okay except in very rare cases where Facebook deems them to be “credible.” And instructions on how to harm or kill someone? No problem.
Facebook’s rationale for allowing all of this vile filth to be shared via its platform, according to the leaked documents, is that “people use violent language to express frustration online.” Because of this, the documents argue, users should feel “safe to do so” when posting it.
“We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design,” the same document goes on to explain.
Protecting free speech is one thing. But allowing some of the most evil depictions of what humans are capable of to spread or even be encouraged is just downright irresponsible and utterly immoral. This is especially true when one considers the fact that Facebook has taken a much different stance with regards to “fake news,” which is somehow not considered to be free speech and is now being actively censored on the social media site.
This blatant double standard is hypocrisy by definition, and it shows that Facebook isn’t really concerned with protecting the free speech rights of its users. If it was, the company would stop censoring news content that it arbitrarily deems as “fake” and let users decide for themselves what’s true. By all appearances, it almost seems as if Facebook actually wants to promote evil in the name of free speech, while simultaneously keeping a lid on anything that might expose it.
“Facebook cannot keep control of its content,” one source said about the matter. “It has grown too big, too quickly.”