Social Media Platforms Have a Tough Task Combating Coronavirus Misinformation

Facebook, Twitter and others are limiting employee travel—and trying to stop the flow of misinformation on their platforms.

Many of the largest social media companies are grappling with misinformation surrounding the coronavirus outbreak. Getty Images


As the coronavirus outbreak continues to spread, many of the largest companies in Silicon Valley have had to think not only about the health of their employees but also the health of their platforms.

In recent weeks, the tech sector has seen an avalanche of event cancellations, modifications and withdrawals. Facebook, Twitter and TikTok have all pulled out of the South by Southwest festival in Austin, Texas, which is still set to run from March 13-22.

But it’s not just SXSW that’s on the chopping block. Facebook and Google canceled their developer conferences—F8 and Google I/O, respectively—while other upcoming events including Google Cloud Next, Microsoft MVP Summit and Adobe Summit have removed the in-person element and shifted to virtual, or “digital-first.”

Meanwhile, Twitter has suspended nonessential business travel for employees and encouraged personnel to work from home when possible; staff in Hong Kong, Japan and South Korea are required to do so. Pinterest has “suspended all business travel to China, Iran, Italy, South Korea and Japan,” citing guidelines from the Centers for Disease Control and Prevention and the World Health Organization. Snap has not announced similar measures, but told Adweek it is monitoring the situation and recommending that employees avoid low-priority travel.

But the threat is magnified on many of these companies’ platforms—where misinformation, if not controlled, travels quickly. 

Staying safe online in the time of coronavirus

While the spread of misinformation on social media is always a concern, in times of an epidemic—and perhaps soon a pandemic—the consequences can be much more immediate and dire.

Carl Bergstrom, a professor of biology at the University of Washington who studies how disinformation flows on social platforms, said we’ve never quite seen a scenario like this play out on social media. “The last time we really looked at something like this was in 2009 with the swine flu,” he noted, “and social media just didn’t have the same prominence in people’s lives at the time.”

Disinformation campaigns around the coronavirus outbreak share similarities with those seen during the 2014 Ebola outbreak, Bergstrom noted, as well as during elections and moments of crisis around the world.

U.S. State Department officials have already claimed that thousands of Russia-linked social media accounts have coordinated to spread misinformation about the coronavirus on Facebook, Twitter and Instagram.

And Chinese social media platforms like Weibo and WeChat have been rife with rumor, disinformation and state censorship—including the silencing of virus whistleblower Li Wenliang, who died from COVID-19 in February after police told him to stop spreading what they called false rumors about the virus.

According to Bergstrom, disinformation—the deliberate spread of falsehoods—on social media can take a number of different forms: Some actors are trying to profit, some are trying to misinform, and some are just trying to create chaos.

There are “efforts to undermine trust in institutions we might usually look to for information: national governments, NGOs like the WHO, trusted fact-based media sources, et cetera,” he said.

How social media platforms are fighting back

Most of the major social platforms have outfitted their search functions to prompt users with information from the CDC or WHO. A simple search on Twitter, for instance, pops up this message: “Know the facts: To make sure you get the best information on the novel coronavirus, resources are available from the Centers for Disease Control and Prevention (CDC).” It also links to the CDC’s website and Twitter account. 

Facebook CEO Mark Zuckerberg wrote in a post Tuesday that his company is “giving the WHO as many free ads as they need for their coronavirus response, along with other in-kind support.” Facebook is also “removing false claims and conspiracy theories that have been flagged by leading global health organizations” and is “blocking people from running ads that try to exploit the situation” like claiming they have a product that can cure COVID-19.

Twitter is also giving free ads to nonprofits disseminating “reputable health information.”

In emails to Adweek over the past few days, spokespeople for the major social media companies outlined comprehensive plans to stem the tide of misinformation on their platforms, mainly through content moderation (though some, like Snap, are designed differently without a centralized feed of user-generated content).

But Bergstrom noted that in times of crisis like this, companies like Facebook, Twitter and YouTube are fighting against their own algorithms, which reward the most engaging content regardless of the accuracy of the source. And moderating that content after the fact is difficult.

“The entire structure of the system is set up in ways that unfortunately promote the spread of spectacular disinformation around crisis events,” Bergstrom said. “The algorithms that are used for choosing what content we see have not been designed so that we see the most accurate content, but so that we see the most engaging content.”


Scott Nover is a platforms reporter at Adweek, covering social media companies and their influence. @ScottNover | scott.nover@adweek.com
Publish date: March 4, 2020
{"taxonomy":"","sortby":"","label":"","shouldShow":""}