Facebook Rulebook Leaked

PUBLISHED: 1:00 PM 3 Jan 2019
UPDATED: 5:57 PM 3 Jan 2019

Facebook ‘Rulebook’ For Policing Speech Exposed


Facebook's speech policing rulebook leaked.

Facebook’s censorship is in full swing, with ‘bias, gaps, and outright errors,’ according to a bombshell New York Times report.

The ‘rulebook’ on how to police speech on the social media platform was leaked by a concerned employee who told the Times he brought the 1,400 pages to them because he “feared that the company was exercising too much power, with too little oversight — and making too many mistakes.”

In the blockbuster report, the publication found the mega-platform was “a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself,” and was actually instructing moderators on how to shape conversations about voting.

The report comes on the heels of another recent revelation involving emails indicating that Facebook had planned to sell users’ information without their consent or knowledge.

The Times discovered a range of problems, including instances where Facebook allowed extremist content in some countries while censoring mainstream speech in others, particularly the U.S. and Britain.

This proves what many people have been saying for years—the social media platform is unscrupulous regarding personal information and it pushes a specific leftist agenda. Everyone remembers the massive purge that occurred in October, right before the mid-term elections.

Mark Zuckerberg’s company is trying to monitor billions of posts per day by outsourcing the task to other companies that tend to hire unskilled workers, according to the newspaper’s report.

The moderators “have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to ‘jihad,’ for example, forbidden? When is a ‘crying laughter’ emoji a warning sign?”

One moderator said he worried that leaving up a post might lead to violence. “You feel like you killed someone by not acting,” he claimed, speaking anonymously because he had signed a nondisclosure agreement.

However, others argue that such policing is not the business of a private company that touts itself as a communications platform.

They argue that policing free speech is wrong, and that if posts do not break any existing laws, the company is acting like the fascist governments that regulate speech.

The Times published a wide range of slides from the rulebook and detailed a number of instances where the ‘rules’ simply failed.

For example, guidelines for the Balkans appear “dangerously out of date,” an expert on that region told reporters, and a legal scholar in India found “troubling mistakes” in the guidelines that pertain to his country.

In the U.S., Facebook has blocked an advertisement from President Trump’s political team about the caravan of Central American migrants.

“It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform,” said Sara Su, a senior Facebook engineer. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”

However, the only expression the social media platform seems to allow is that which aligns with leftist ideals.

Facebook’s most politically consequential document could be an Excel spreadsheet that lists every group and individual the company has barred as a “hate figure.”

Moderators are instructed to remove any post praising, supporting, or representing any of the groups or individuals on that list.

Moreover, during Pakistan’s July elections, Facebook handed its moderators a 40-page document describing “political parties, expected trends and guidelines,” shaping conversations on a platform that many Pakistanis rely on for news and discussion of voting.

The Times reported, “The document most likely shaped those conversations — even if Pakistanis themselves had no way of knowing it. Moderators were urged, in one instance, to apply extra scrutiny to Jamiat Ulema-e-Islam, a hard-line religious party. But another religious party, Jamaat-e-Islami, was described as ‘benign.’

“Though Facebook says its focus is protecting users, the documents suggest that other concerns come into play. Pakistan guidelines warned moderators against creating a ‘PR fire’ by taking any action that could ‘have a negative impact on Facebook’s reputation or even put the company at legal risk.’”

Anton Shekhovtsov, an expert on the European far right, told the publication he was “confused about the methodology.”

The company bans an impressive array of American and British groups, he added, but only a few in places like Ukraine or Russia.

For a tech company to make these decisions is “extremely problematic,” said Jonas Kaiser, a Harvard University expert on online extremism. “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”

Regarding how Facebook identifies hate speech, the New York Times reported, “The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages.

“Moderators must sort a post into one of three ‘tiers’ of severity. They must bear in mind lists like the six ‘designated dehumanizing comparisons,’ among them comparing Jews to rats.”

The leaked documents provide another look into the leftist, free-speech-hating people who operate Facebook.

Many people hope that, as more evidence is uncovered, more users will begin deleting their accounts and leaving the platform behind.