
Facebook says it labeled 50 million pieces of coronavirus misinformation in April

The company released the data alongside a more comprehensive report on how it enforced its policies banning different types of content.
Image: A Facebook logo is displayed on a smartphone. Dado Ruvic / Reuters file

Facebook put misinformation warning labels on about 50 million pieces of content related to COVID-19 in April, the company announced Tuesday.

The social networking site attaches these warnings to posts sharing articles that its independent fact-checking partners have reviewed. The company said the warnings greatly reduce the number of people who view the original content.

The company released this snapshot of data at the same time as a more comprehensive report on how it enforced its content policies banning content such as nudity, bullying, terrorist propaganda and child sexual exploitation material during the last quarter of 2019 and the first quarter of 2020.

The report, which Facebook has released twice a year since May 2018, details how many items of content the company has “taken action” on across Facebook and Instagram, meaning the content was removed or marked as graphic.

Facebook and other tech platforms have taken a variety of steps to counter the spread of coronavirus misinformation, but misleading claims and conspiracy theories have proven hard to contain, particularly when shared in public and private groups.

This edition of the report discloses some data points for the first time, including the number of content moderation decisions on Instagram that users appealed and the number of posts the company subsequently restored.

For example, of the 8.1 million items of content related to adult nudity and sexual activity that Instagram took action on in the first quarter of 2020, there were about 509,000 appeals. More than 98,000 posts were later restored.

Across the platform, about one-fifth of appeals in all categories resulted in the content being restored, the company said.

For the first time, the report also shows how the platform has been handling content from organized hate groups, including white nationalist and white separatist groups.

In the last quarter of 2019, the company took action on 1.6 million items of content it categorized as "organized hate," rising to 4.7 million in the first three months of 2020. The increase in removals is in part because automated systems became better at finding violating content, the company said.

The report comes at a time when the company’s content moderation teams are operating at reduced capacity and it is relying more heavily on automated systems, such as image matching technology, to identify violating content.

“We’ve spent the last few years building tools, teams and technologies to help protect elections from interference, prevent misinformation from spreading on our apps and keep people safe from harmful content,” the company said in a blog post.

“So when the COVID-19 crisis emerged, we had the tools and processes in place to move quickly and we were able to continue finding and removing content that violates our policies.”

The company said it plans to release subsequent reports every quarter.

CORRECTION (May 12, 2020, 1:04 p.m. ET): An earlier version of this article misstated when Facebook first reported details of its appeals process. The company has previously disclosed Facebook appeals data; Tuesday's report is not the first time it has done so.