Facebook Labeled 50 Million Pieces of Covid-19 Misinformation in April

The platform is taking new measures to root out false and harmful content

Facebook released its latest transparency report Tuesday. Facebook

It’s spring cleaning for Facebook.

On Tuesday, Facebook announced it had applied warning labels to 50 million pieces of Covid-19 misinformation in April alone, based on 75,000 articles from the independent news organizations that serve as the company’s fact-checking partners. The company also removed 2.5 million items from Facebook Marketplace for the “sale of masks, hand sanitizers, surface disinfecting wipes and Covid-19 test kits.”

The disclosures came as the social media giant released its latest transparency report, detailing its enforcement against misinformation, hate speech, hateful memes and more. This is the fifth time Facebook has released such a report, but it’s the first since the onset of the Covid-19 pandemic.

Facebook said there was an uptick in content removed, attributing the increase to improvements in its artificial intelligence, which the company said “can now detect almost 90% of the content” before anyone reports it.

“I know our systems aren’t perfect,” CEO Mark Zuckerberg said on a press call Tuesday. “They have certainly been impacted by having less human review during Covid-19, and we do unfortunately expect to make more mistakes until we’re able to ramp everything back up.”

Facebook, which has come under sustained fire from critics and regulators for its role in the spread of misinformation online, has traditionally used a combination of human workers and machine learning systems to root out content that violates its community standards. 

But amid the Covid-19 pandemic, with most employees working from home, the company has relied more heavily on artificial intelligence in recent weeks, Zuckerberg said. He cited privacy concerns with remote work and unspecified “safeguards” meant to protect the emotional well-being of those reviewing disturbing content as the reasons for the slowdown in human review.

As The Verge first reported on Tuesday, Facebook agreed to a landmark $52 million settlement with current and former content moderators who developed PTSD on the job.

“When we temporarily sent our content reviewers home due to the Covid-19 pandemic, we increased our reliance on these automated systems and prioritized high-severity content for our teams to review in order to continue to keep our apps safe during this time,” Guy Rosen, Facebook vp of integrity, wrote in a blog post.

In this report, Facebook emphasized its work removing hate speech and terrorist content from the platform, noting it has been a year since the terrorist attack on two mosques in Christchurch, New Zealand, was livestreamed on Facebook.

The company said that in the first three months of 2020, its moderators removed 4.7 million pieces of content “connected to organized hate,” more than 3 million more than the previous quarter. On Instagram, they removed 175,000 pieces of related content, up from 139,800 in the final three months of 2019. Facebook said its proactive detection rate for such content increased from 89.6% to 96.7% on Facebook and from 57.6% to 68.9% on Instagram between Q4 2019 and Q1 2020.

The company is also doubling down on efforts to detect images that violate its community standards, particularly memes. Having already orchestrated a Deepfake Detection Challenge to train its AI to recognize manipulated media, Facebook announced a Hateful Memes Challenge to help its system better detect “multimodal hate speech.” Facebook said that memes are commonly used in “organized hate” efforts. 

Amid the Covid-19 crisis, Facebook has struggled to rid its platform of misinformation but has taken unprecedented steps to promote reliable, authoritative information from public health agencies and organizations like the World Health Organization and the U.S. Centers for Disease Control and Prevention. The company affixed a permanent banner with Covid-19 information from these sources to the user interface and has given the WHO “as many free ads as they need for their coronavirus response.” However, early research is mixed on whether these efforts have actually been effective in informing users.

Last week, the conspiracy theory video Plandemic spread like wildfire on Facebook, one of the most blatant and successful pieces of Covid-19 misinformation to hit social media in recent weeks. Both Facebook and YouTube have since banned the video.

On the call, a Bloomberg reporter asked whether Facebook would prioritize hiring additional moderators, since the company has indicated its review processes are strained. Rosen said Facebook is still trying to get existing moderators back to work on secure systems, but did not directly address the question.


Scott Nover is a platforms reporter at Adweek, covering social media companies and their influence. @ScottNover | scott.nover@adweek.com
{"taxonomy":"","sortby":"","label":"","shouldShow":""}