30 Signatories of the Code of Practice on Disinformation, including all major online platform signatories (Google, Meta, Microsoft, TikTok, Twitter), have submitted their first baseline reports on the implementation of the commitments they made under the Code.
Signatories had six months after signing the Code to put in place actions to fulfil the commitments they subscribed to. These baseline reports provide a first state of play of the steps taken to implement the Code's commitments and measures, together with a first set of qualitative and quantitative reporting elements covering the first month of implementation.
The Transparency Centre
As of today, citizens can download the full reports or consult them online. The Transparency Centre is one of the Code's concrete deliverables, and it aims to ensure that the public receives accurate and timely information about the implementation of the Code. This is a step change in transparency.
The Transparency Centre website has been set up and is maintained by the Signatories. The current version is a very first edition. Signatories should further improve and develop it in the coming weeks to make sure that it is user-friendly, easily searchable and kept up to date, in line with the Signatories' relevant commitments under the Code. The Commission also encourages Signatories to consider additional, new state-of-the-art features, as foreseen in the Code, to give the best user experience.
The first baseline reports
The baseline reports follow a common harmonised template consisting of 152 reporting elements (111 qualitative reporting elements, and 42 service level indicators/quantitative indicators) across the Code's chapters. The harmonised reporting templates – developed with the support of ERGA (European Regulators Group for Audiovisual Media Services) – are a major step forward for the alignment, reviewability, and accuracy of the signatories' reporting. Notably, all individual actions and metrics are matched with the commitments and measures of the Code that they implement.
Overall, this first exercise attests to the signatories' efforts to ensure timely delivery of complex reports with detailed data. It is the first time that platforms have provided such insight into their actions to fight disinformation.
Most major online platforms (Google, Meta, TikTok and Microsoft) demonstrated strong commitment to the reporting, providing an unprecedented level of detail about the implementation of their commitments under the Code, and – for the first time – data at Member State level. Twitter, however, provided little specific information and no targeted data in relation to its commitments.
Smaller signatories delivered reports in line with their obligations (proportionate to the size and type of their organisations), with useful information and data, showing their positive engagement and active contribution to the Code.
These baseline reports mark an important first step in establishing monitoring and reporting under the new 2022 Code of Practice, though the methodology and granularity of the data provided need to be further developed. Signatories provided data for different periods of time, indicating difficulties in pulling data for only one month of implementation. This shall be aligned for the next reporting period. As the reports contain certain gaps in data and some shortcomings in granularity or Member State level data, we expect this to be improved by the next reporting period, in line with the Code's relevant provisions.
The next set of reports from major online platform Signatories is due in July and is expected to provide further insight into the Code's implementation and more stable data, covering six months.
The Commission also expects Signatories to deliver the first set of Structural Indicators for the next reporting exercise in July. Such indicators can provide important insights feeding into the assessment of the implementation of the Code and its impact on reducing the spread of disinformation online.
The Permanent Task-force
The Code’s Permanent Task-force and its subgroups will continue their intense work in 2023. From June 2022, subgroups have been working on implementing the Code and deepening the work in different areas, for instance providing a common list of Tactics, Techniques and Procedures (TTPs), developing the indicator measuring demonetisation efforts, adopting the harmonised templates and establishing the Transparency Centre.
The Subgroups have also fostered exchanges between Signatories, in particular regarding fighting the spread of disinformation in the context of crises, notably Russia's war of aggression against Ukraine and COVID-19. In this context, major online platforms also delivered specific dedicated sections in their baseline reports with detailed information on the measures they have taken under the Code to reduce disinformation around these crises. Such targeted exchanges will continue within the Task-force, with a strong focus on fighting disinformation on Ukraine.
New Signatories and Expressions of Interest to join the Code
The Code remains open for new Signatories to join, and expressions of interest can now be submitted through the Transparency Centre. Since the signature of the Code in June 2022, four additional Signatories have joined the 2022 Code, attesting to the Code's appeal for the sector: Alliance4Europe, Ebiquity plc, the Global Disinformation Index and Science Feedback.
Some examples of data from the reports
- Google indicates that in Q3 2022 it prevented more than EUR 13 million of advertising revenues from flowing to disinformation actors in the EU.
- MediaMath, a demand-side platform that allows ad buyers better management of programmatic ads, provides an EU overall estimate of EUR 18 million of advertising revenues prevented from funding sites identified as purveyors of disinformation and misinformation.
- TikTok reported that in Q3 2022 it removed more than 800,000 fake accounts, which had been followed by more than 18 million users. It also indicates that the fake accounts removed represent 0.6% of its EU monthly active users.
- Meta reported that in December 2022, about 28 million fact-checking labels were applied on Facebook and 1.7 million on Instagram. As to the effectiveness of these labels, Meta indicates that, on average, 25% of Facebook users do not forward content after receiving a warning that it has been flagged as false by fact-checkers. This percentage increases to 38% on Instagram. Meta also provides data at Member State level regarding fact-checking efforts.
- Microsoft reported that the news reliability ratings provided under its partnership with Newsguard (itself also a signatory of the Code) were displayed 84,211 times in the Edge browser discover pane to EU users in December 2022.
- Twitch reported that, to preserve the integrity of their services, between October and December 2022, it blocked 270,921 inauthentic accounts and botnets created on its platform and took action against 32 hijacking and impersonation attempts. Additionally, thanks to collaboration with the Global Disinformation Index (GDI), Twitch successfully removed 6 accounts actively dedicated to promoting QAnon on the platform.
- Faktograf, a factchecker from Croatia, under the contractual agreements it has in place with major online platforms, published 41 articles in Croatian in January 2023, and 248 articles over the last six months.
- Maldita.es, a factchecker from Spain, inspired the creation of UkraineFacts, a debunking database open to fact-checkers from around the world, which has resulted in the sharing of over 5,000 articles related to the war in Ukraine from almost 100 organisations.
Additional data examples from the Crisis section of the reports
On the war of aggression in Ukraine:
- Google’s YouTube blocked more than 800 channels and more than 4 million videos related to the Russia/Ukraine conflict since 24 February 2022.
- Microsoft Advertising prevented about 25,000 advertiser submissions relating to the Ukrainian crisis globally between February and December 2022, and removed 2,328 domains.
- TikTok reported that, from October to December 2022, 90 videos related to the war were fact-checked, and 29 videos were removed as a consequence of fact-checking activity.
On COVID-19:
- Meta reported that, since the beginning of the pandemic, it removed more than 24 million pieces of content globally for violating its COVID-19 misinformation policies across Facebook and Instagram.
- Google AdSense, between 1 January 2020 and 30 April 2022, took action under its Misrepresentative Content Policy against harmful health claims on over 31,900 URLs with COVID-19-related content.
- TikTok, between October and December 2022, removed 1,802 videos in the EU that violated its COVID-19 misinformation policy after they were reported, and a further 1,557 proactively.