A Look at Facebook’s Sad and Severe Censorship Saga

Ryan Hartwig
Jul 9, 2020
Facebook Insider Ryan Hartwig

I was a content moderator for Facebook, and over nearly two years I saw countless examples of bias and political censorship, both handed down as directives and built into the policy itself.

The experience gave me no choice but to come forward. Over the course of many months, I documented, with the help of Project Veritas, how Facebook censored conservative viewpoints and promoted leftist ideology.

When I watched Facebook CEO Mark Zuckerberg’s April 2018 Capitol Hill testimony, I noticed a stark contrast between his statements dismissing any suggestion that Facebook censored political speech and the long list of exceptions that Facebook issued to content moderators like me. It was an odd dissonance: as Zuckerberg testified before Congress, I, along with all the other content moderators, was censoring political speech on his orders.

In 2019, former Arizona senator Jon Kyl completed a so-called civic audit for Facebook in response to complaints from conservatives. Afterward, I noticed a few things did change. First, Facebook began tracking the exceptions to its Community Standards policy: the exceptions it gave us were now numbered and listed. It also stopped using the phrase “newsworthy exceptions.”

This did not mean there was real reform, however. Facebook simply adopted new terminology for dictating how we actioned content. The word we threw around most often was “align.”

All the content moderators needed to “align” on the same decision, so that our accuracy scores wouldn’t suffer and so that QA and the reps (content moderators) were on the same page.

Yes, some alignment is necessary because the policy is very nuanced. But when there were gray areas, or “edge cases” as they called them, the decision came from above. Facebook could call it whatever it wanted; it could still dictate how we actioned particular jobs under the guise of alignment.

To give you an idea of how nuanced the policy is, consider the following examples.

“All you stupid white people are the worst” = ignore.

“Stupid” is not an attack term here; it is just the poster’s way of referring to the group with a protected characteristic (PC), in this case white people.

“Ugly White men are evil” = ignore (Tier 2 attack on a subset).

“White men are ugly and evil” = delete (Tier 2 attack).

Policy exception: Don Lemon’s statement, “White men are the biggest terror threat in this country,” was allowed on the platform despite that phrase violating our hate speech policy.
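To make the shape of that rubric concrete, here is a minimal, purely illustrative sketch of how rules like these could be encoded. Everything in it, the function name, the tier label, the word lists, and the exception list, is my own assumption for illustration; it is not Facebook’s actual policy or code.

```python
# Hypothetical sketch of a tiered hate-speech rubric (illustration only;
# names, tiers, and word lists are assumptions, not Facebook's real code).

# Terms the hypothetical policy treats as Tier 2 attack terms.
TIER2_ATTACK_TERMS = {"ugly", "evil"}

# Groups defined by a protected characteristic (PC).
PROTECTED_GROUPS = {"white people", "white men"}

# Manually granted exceptions from above that override the written policy.
POLICY_EXCEPTIONS = {
    "white men are the biggest terror threat in this country",
}


def moderate(post: str) -> str:
    """Return 'delete' or 'ignore' for a post under the sketched rubric."""
    text = post.lower().rstrip(".!")

    # An explicit exception handed down from above always wins.
    if text in POLICY_EXCEPTIONS:
        return "ignore"

    targets_pc_group = any(group in text for group in PROTECTED_GROUPS)
    uses_attack_term = any(term in text for term in TIER2_ATTACK_TERMS)

    # "Ugly white men" attacks a qualified subset, not the whole PC group,
    # so the rubric ignores it even though an attack term is present.
    attacks_subset = any(
        f"{term} {group}" in text
        for term in TIER2_ATTACK_TERMS
        for group in PROTECTED_GROUPS
    )

    if targets_pc_group and uses_attack_term and not attacks_subset:
        return "delete"   # Tier 2 attack on the whole protected group
    return "ignore"       # subset attack, or no designated attack term


if __name__ == "__main__":
    for post in [
        "All you stupid white people are the worst",
        "Ugly White men are evil",
        "White men are ugly and evil",
        "White men are the biggest terror threat in this country.",
    ]:
        print(f"{moderate(post)!r:10} <- {post}")
```

Run against the four examples above, this toy version reproduces the same ignore/delete/exception outcomes, which is exactly the problem: the results turn on word order and hand-picked carve-outs that no ordinary user could anticipate.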

The result of such policy nuance is that users cannot predict what will get deleted. They think they are engaging in political speech, but they end up getting banned.

Hal 9000 from “2001: A Space Odyssey”

Even as someone who studied the Implementation Standards policy for two years, I still struggled with the legalese and intricacies of the policy language.

Whether something gets taken down is a complex decision that depends on a variety of factors. The average American would be baffled that something as simple as freedom of speech has been condensed into a rulebook any legal scholar would enjoy.

This bafflement was created by Congress in the Communications Decency Act of 1996, specifically its Section 230, which classified internet service providers as distinct from publishers. Facebook and other social media platforms have taken refuge in this safe harbor, protected from libel suits and other sanctions as a legacy of the law.

When Facebook censors user-supplied content it has published, it labels that content hateful, false, or inciting violence, which inescapably impugns the person who supplied it. Anywhere else in the media, a publisher making such a judgment would open itself up to sanctions, such as a libel suit.

Take away Section 230 and Facebook’s whole business model no longer works. The company would have to break up into its different business units or go out of business.

Jon Schweppe and Craig Parshall made the same point in their Feb. 14 piece for The Federalist, arguing that an effective remedy would be to draw a legal distinction between a pure, unedited platform and an online publisher.

Let’s start by clearly delineating between “platforms” and “publishers.” If a company wants to create an open forum or platform that adheres in good faith to a First Amendment standard of free speech and expression, they can do that. If they want to selectively edit and present a particular point of view and be a publisher, they are free to do that as well.

If Jon Kyl’s civic audit proved one thing, it was that Facebook knows it is in trouble. But the audit and its policy proposals were useless because they assumed a level political playing field already existed at Facebook.

There is not.

Facebook is biased against conservatives and Facebook makes exceptions to its policy to protect certain individuals, viewpoints, and movements.

I strongly recommend that Section 230 be rewritten to exclude Facebook and strip its protections under that law. Only then can we begin to dismantle the stranglehold Big Tech has on the United States.

Sincerely,

Ryan Hartwig

Facebook Insider

@realryanhartwig on Twitter

therealryanhartwig@gmail.com
