The monitoring programme is a transparency measure to ensure the public accountability of the signatories of the Code of Practice on Disinformation. It was set up following the 10 June 2020 Joint Communication “Tackling COVID-19 disinformation - Getting the facts right”. Today we publish the November update reports, focusing on actions taken in October 2020 by Facebook, Google, Microsoft, Twitter and TikTok to limit online disinformation related to COVID-19.
Fighting COVID-19 vaccine disinformation
Along with recently reported progress towards the distribution of COVID-19 vaccines, the topic of COVID-19 vaccines is gaining traction online, and so is disinformation about it. The Commission has therefore asked platforms to include in their monthly monitoring reports a specific section highlighting actions taken to fight disinformation and misinformation around COVID-19 vaccines. Signatories have promptly responded, reporting on these actions in their fourth reports.
Platforms report that they have increased efforts, in particular working with public authorities and international organisations like the United Nations and the World Health Organization, to increase public preparedness for vaccine distribution and to build resilience against the spread of disinformation on the subject. They have also updated their awareness-raising tools, as well as their terms of service and enforcement guidelines, to include vaccine disinformation as grounds for the demotion or removal of content.
- During October, Twitter engaged with governments, civil society organisations and experts from health and research institutions across the EU to prepare to counter COVID-19 vaccine misinformation. Also, Twitter introduced a vaccine prompt in Belgium, France, and Norway; such prompts have now been rolled out in partnership with national or federal public health agencies or the WHO in a total of 37 countries and 15 different languages.
- Microsoft reports that a person entering a vaccine-related search query on Bing is presented with reliable COVID-19 vaccine-related information, news, and authoritative sources. Also, under its policies on misleading content, Microsoft prohibits advertising that exploits the COVID-19 crisis, spreads disinformation, or endangers user health or safety, including advertising that contains disinformation about COVID-19 vaccines.
- In October, TikTok announced Project Halo, an initiative of the UN and the Vaccine Confidence Project to engage millions of people around the world in the story of the international effort to find safe and effective vaccines for the coronavirus. Using the hashtag #TeamHalo, a group of scientists and clinicians tell stories about their daily work and educate the public on the development of a vaccine. This community has so far created over 100 videos, with over 2 million views and 4,000 shares globally.
- On October 14, Google expanded YouTube’s COVID-19 medical misinformation policy to include potential vaccine claims that contradict expert consensus from local health authorities or the WHO. As a result, YouTube will remove content with false claims regarding the COVID-19 vaccine, such as the claim that the COVID-19 vaccine will kill people. Such false claims are also prohibited under content policies for Google Ads and AdSense, and in Search features.
- Facebook has updated its ad policies to reject ads that discourage people from getting vaccines, and it is working to amplify the voices of public health partners by collaborating with them on campaigns to increase immunisation rates. It has also launched a new flu vaccine information campaign, including new product features that provide additional vaccine-related content. In addition, Facebook announced on 3 December 2020 that it will remove false claims about COVID-19 vaccines that have been debunked by health experts.
More generally, the reports indicate continuing efforts by the platform signatories to address disinformation around COVID-19 by:
- promoting authoritative information sources through various tools;
- working to limit the appearance or reduce the prominence of content containing false or misleading information;
- increasing efforts to limit manipulative behaviour on their services;
- enhancing collaborations with fact-checkers and researchers, and increasing the visibility of content that is fact-checked;
- providing grants and free ad space to governmental and international organisations to promote campaigns and information on the pandemic;
- funding media literacy actions and actions to sustain good journalism; and
- setting up actions to limit the flow of advertising linked to COVID-19 disinformation.
The reports provide some quantitative data illustrating these actions and their impact through October 2020. For example:
- In October, Google blocked or removed over 2.3 million coronavirus-related ads, including Shopping ads, from EU-based advertisers and buyers for policy violations such as price-gouging and misleading claims about cures, and it took action on over 1,100 URLs with COVID-19 related content for harmful health claims.
- In October 2020, Microsoft prevented more than 1.4 million advertiser submissions directly related to COVID-19, and intended for display to users in European markets, which could be perceived as deceptive, fraudulent or harmful to site visitors. During October, Bing had more than 4.2 million visitors from EU countries whose COVID-19 related search queries presented them with authoritative information from trustworthy sources.
- Facebook reported that its COVID-19 Information Center had been visited by 14 million people in the EU during October. In October, the company removed from Facebook and Instagram over 28,000 pieces of COVID-19 related content in the EU for containing misinformation that may lead to imminent physical harm, and over 19,000 pieces of COVID-19 related content in the EU for violation of the company’s medical supply sales standards.
- In October, TikTok tagged 81,385 videos across its four major European markets (Germany, France, Italy and Spain), attaching a sticker with the message ‘Learn the facts about COVID-19’ and directing users to trusted, verifiable information sources. Across these same markets, it also blocked more than 1,300 videos related to COVID-19 for violation of its policies, including videos deemed to contain medical misinformation.
The reports provide a good overview of the evolution of the measures put in place by the platform signatories of the Code of Practice on Disinformation to limit the spread of COVID-19 disinformation. As noted above, upon the request of the Commission, the platforms have also begun providing additional information specifically on actions to combat disinformation around COVID-19 vaccines. At the same time, as highlighted in connection with the platforms’ prior reports, the reports published today lack sufficient EU- or Member State-level data for some indicators identified in the Joint Communication. They would also benefit from more granular data quantifying the impact of the measures taken, for example, engagement with tools to improve user awareness. In order to improve transparency and enable consistent monitoring of platforms’ actions and their impact in the EU, the signatories are invited to step up their efforts in this regard.
- Reports August 2020 – Fighting COVID-19 Disinformation
- Reports September 2020 – Fighting COVID-19 Disinformation
- Reports October 2020 – Fighting COVID-19 Disinformation
- Reports December 2020 – Fighting COVID-19 Disinformation
- Reports January 2021 – Fighting COVID-19 Disinformation
- Reports February 2021 – Fighting COVID-19 Disinformation
- Reports April 2021 – Fighting COVID-19 Disinformation
- Reports May 2021 – Fighting COVID-19 Disinformation
- Reports June 2021 – Fighting COVID-19 Disinformation
- Reports July 2021 – Fighting COVID-19 Disinformation