They also included policies in their terms of service to remove disinformation on vaccines, notably blocking hundreds of thousands of accounts, offers and advertiser submissions related to coronavirus and vaccine-related misinformation, and stepped up their work with fact-checkers to make fact-checked content on vaccination more prominent. This batch of reports marks the end of the initial six-month reporting period. Given the relevance of this reporting in the current epidemiological context, the programme will continue for the next six months. The Commission has also asked online platforms to provide more data on the evolution of the spread of disinformation during the coronavirus crisis and on the granular impact of their actions at the level of individual EU countries.
Věra Jourová, Vice-President for Values and Transparency, said:
The pandemic has become a breeding ground for false claims and conspiracy theories, and platforms are important amplifiers of this type of message. We must continue working together to improve our fight against disinformation, but we need more transparency and better efforts from the online platforms. The extension of the monitoring programme is a valuable lesson in our work to overhaul the Code of Practice on Disinformation.
Thierry Breton, Commissioner for Internal Market, said:
The COVID-19 pandemic has highlighted the societal role platforms play, which comes with corresponding responsibilities. Substantial steps need to be taken to prevent disinformation from hampering the common efforts of all EU countries regarding vaccination. Platforms have to become more transparent, including on the effectiveness of the measures taken.
This monthly reporting programme was created under the Joint Communication of 10 June 2020 to ensure public accountability for the efforts made by platforms and relevant industry associations to limit online disinformation related to the coronavirus. Today's reports focus on actions taken in December 2020 by Facebook, Google, Microsoft, Twitter and TikTok.