Facebook Proactively Removed 80% of Posts Pulled for Hate Speech in Q2, Q3

The social network released the fourth edition of its Community Standards Enforcement Report

Facebook unveiled a new page to provide more clarity to users on what types of content are allowed and prohibited. Credit: Facebook

Facebook released the fourth edition of its Community Standards Enforcement Report Wednesday, covering the second and third quarters of 2019.

CEO Mark Zuckerberg said during a press call discussing the report, “While we err on the side of free expression, we do have community standards defining what is acceptable on our platform and what isn’t. This is a tiny fraction of content on our platform, but this is some of the worst content out there.”

Vice president of integrity Guy Rosen said in a Newsroom post that the report now includes metrics across 10 policies on Facebook and four on Instagram.

The Facebook policies covered are: adult nudity and sexual activity; bullying and harassment; child nudity and sexual exploitation of children; fake accounts; hate speech; regulated goods (drugs and firearms); spam; suicide and self-injury; terrorist propaganda; and violent and graphic content.

And for Instagram, the policies covered are: child nudity and sexual exploitation of children; regulated goods (drugs and firearms); suicide and self-injury; and terrorist propaganda.

Instagram head of product Vishal Shah said during the press call that future reports will include data on additional policy areas.

Metrics detailed by Facebook include how often content that violates its policies was viewed, how much content it took action on, how much of that content was detected before anyone reported it, how many appeals were filed after the company took action and how much of that content was restored after it initially took action.

Rosen said of the first metric, which Facebook refers to as prevalence, “Think of this as an air quality test to determine the amount of pollutants in the air. We focus on how much content is seen.”
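
To make those definitions concrete, here is a minimal sketch of the arithmetic behind the metrics, assuming prevalence is the share of views that land on violating content and the proactive rate is the share of actioned content found before any user report. The numbers and function names are illustrative, not Facebook's.

```python
# Illustrative sketch of how the report's metrics relate. All figures and
# function names here are hypothetical examples, not data from Facebook's report.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that landed on violating content --
    the 'air quality' measure Rosen describes."""
    return violating_views / total_views

def proactive_rate(found_before_report: int, total_actioned: int) -> float:
    """Share of actioned content detected before any user reported it."""
    return found_before_report / total_actioned

def restore_rate(restored: int, total_actioned: int) -> float:
    """Share of actioned content later restored, with or without an appeal."""
    return restored / total_actioned

# Example with made-up numbers: 9.9 million of 10 million actioned pieces
# found proactively works out to a 99% proactive rate.
print(f"{proactive_rate(9_900_000, 10_000_000):.1%}")  # 99.0%
```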

Rosen said data on appeals and restores was not available for Instagram, as appeals were not added to that platform until the second quarter, but such data will be included in future reports.

He also pointed out that metrics may vary between Facebook and Instagram, as the latter does not offer links or reshares in feed, pages or groups.

Rosen was also asked how the figures Facebook reported Wednesday accounted for Facebook Stories and Instagram Stories, and whether the fact that content in that format disappears within 24 hours affects enforcement. He replied that the same user reporting capabilities and proactive detection systems apply to Stories, and that the approach is by and large the same.

Rosen said Facebook’s rate of removing hate speech proactively is up to 80%, compared with 68% in its last Community Standards Enforcement Report. He attributed the increase to investments and advances in detection techniques such as text and image matching, as well as machine learning classifiers that examine a post’s language and the reactions and comments it receives.

On automatic removals, he wrote, “We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination.”
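
Read loosely, that describes a two-track system keyed to confidence. The sketch below is a rough, hypothetical illustration of that routing; the threshold and labels are assumptions for the example, not Facebook's actual values.

```python
# Hypothetical sketch of the two-track routing Rosen describes: only the
# highest-confidence detections are removed automatically; everything else
# the systems flag goes to human reviewers for a final determination.

AUTO_REMOVE_THRESHOLD = 0.98  # illustrative confidence cutoff, not Facebook's


def route_flagged_post(classifier_score: float) -> str:
    """Return the enforcement path for a post flagged as potential hate speech."""
    if classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # systems trained on large volumes of known violations
    return "human_review"      # review teams make the final call


print(route_flagged_post(0.99))  # auto_remove
print(route_flagged_post(0.85))  # human_review
```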

“Defining hate speech involves a lot of linguistic nuance,” Zuckerberg said during the press call.

Facebook said its rate of detecting and removing content associated with al-Qaida, ISIS (Islamic State) and their affiliates remains above 99%, and for all terrorist organizations, that figure is 98.5% on Facebook and 92.2% on Instagram.

Facebook vp of global policy management Monika Bickert said during the press call that the company expanded its report to include enforcement against all terrorist organizations, adding, “We are always evolving our tactics because we know bad actors will continue to change theirs.”

She also revealed that there are over 350 people working at Facebook “whose primary responsibility is countering terrorist group members’ attempts to use our services.”

Rosen addressed March’s terror attack in Christchurch, New Zealand, in a separate blog post, saying that from March 15 through Sept. 30, some 4.5 million pieces of related content were removed, with over 97% identified proactively before being reported by anyone.

During the third quarter of this year, some 11.6 million pieces of content were removed from Facebook for violating the social network’s policies on child nudity and exploitation of children, up from roughly 5.8 million in the first quarter of 2019. More than 99% of that content was proactively detected.

On Instagram, 754,000 pieces of content were removed for child nudity or exploitation of children in the third quarter, 94.6% of which were detected proactively, up from 512,000 and 92.5%, respectively, in the second quarter.

For suicide and self-injury, Facebook took action on 2.5 million pieces of content in the third quarter, detecting 97.3% of it proactively, compared with 2 million and 96.1%, respectively, in the previous period.

On Instagram, some 845,000 pieces of content were removed in the third quarter, with 79.1% of that detected proactively, versus 835,000 and 77.8%, respectively, in the second quarter of 2019.

Rosen also shared data on content removed for promoting illicit sales of firearms and drugs.

On Facebook, 4.4 million pieces of drug sale content were removed in the third quarter, 97.6% of which were detected proactively, up from 841,000 and 84.4%, respectively, in the first quarter of 2019.

As for firearms, 2.3 million pieces of content were removed in the third quarter, with 93.8% being detected proactively, compared with 609,000 and 69.9%, respectively, in the year’s initial quarter.

On Instagram, 1.5 million pieces of drug sale content were removed in the third quarter, with 95.3% being detected proactively, and 58,600 pieces of firearm sales content were pulled, 91.3% of which were detected proactively.

Bickert addressed the mix between full-time Facebook employees and contractors moderating content, saying that all content reviewers, regardless of their status, receive the same training and are subject to the same quality audits, and adding, “Our goal is making sure that we have coverage around the clock and coverage around the globe, and we have many languages we need to review.”

Facebook also released its Transparency Report for the first half of 2019, and vp and deputy general counsel Chris Sonderby shared details in a blog post.

Government requests for user data were up 16% in the first six months of the year, to 128,617 from 110,634 in the second half of 2018.

Sonderby said the largest volume of requests came from the U.S. (50,741, up 23% from the previous period), followed by India, the U.K., Germany and France.

Content restrictions based on local law fell 50% in the first half of the year compared with the second half of last year, to 17,807 from 35,792.

Sonderby said there was an unusual spike in the last reporting period, caused by a Delhi High Court order that spurred the restriction of 16,600 items in India.

In the most recent reporting period, 58% of restrictions originated from Pakistan and Mexico.

The social network also shared data on internet disruptions caused by governments, identifying 67 disruptions of Facebook services in 15 countries during the first six months of the year, versus 53 disruptions in nine countries during the second half of last year.

During the first half of 2019, Facebook took down 3,234,393 pieces of content based on 568,836 copyright reports, 255,222 pieces of content based on 96,501 trademark reports and 821,727 pieces of content based on 101,582 counterfeit reports.

Finally, Facebook unveiled a new page to provide more clarity to users on what types of content are allowed on and prohibited from its services.

Zuckerberg said during the press call that the company now invests more in safety and security than its total revenue at the time of its initial public offering in May 2012, and that more people are dedicated to that area than the company employed in total at its IPO, adding, “There’s no question for me: That’s the right thing to do.”


David Cohen is editor of Adweek's Social Pro Daily. david.cohen@adweek.com
{"taxonomy":"","sortby":"","label":"","shouldShow":""}