
Digital Services Act study: Risk management framework for online disinformation campaigns

The Directorate-General for Communications Networks, Content and Technology has published an independent study that introduces an approach for assessing the effectiveness of online platforms’ measures against Russian disinformation, based on the Digital Services Act’s (DSA) risk management principles and risk mitigation requirements.

Under the DSA, very large online platforms and search engines are obliged to assess and mitigate systemic risks to civic discourse, such as foreign disinformation efforts; the first batch of services designated by the European Commission must comply as of the end of August 2023. They are also obliged to make a public version of a report setting out the results of this risk assessment available at the latest 15 months after the DSA’s date of application, but are encouraged to do so earlier.

In this context, the independent experts who authored this report analysed the systemic risks caused by pro-Kremlin disinformation in 2022 on six online platforms: Facebook, Instagram, Twitter, YouTube, TikTok, and Telegram. With the exception of Telegram, all of these services have since been designated as Very Large Online Platforms that must comply with the DSA.

The authors propose methodological approaches through which civil society and the broader expert community can contribute to assessing the different types of risks caused by online platforms. The study also seeks to encourage the development of a stakeholder community that can strengthen public scrutiny of digital services.

According to the study, the evidence suggests that the examined tech companies’ efforts to limit the Kremlin’s malign activities on their platforms were insufficient during the period under analysis, although limited data access imposes some caveats on this assessment. While most platforms introduced restrictions on Russian state-controlled media outlets, no company introduced policies covering all accounts operated by the Russian Federation. Moreover, experimental investigations in Central and Eastern European languages suggest that the platforms moderated only a small share of violent content related to the war, even when it was reported to them through their own notice and action channels. Lastly, efforts by companies such as Meta and Twitter to limit the algorithmic amplification of Kremlin-sponsored disinformation were only partially effective: because they were limited to manually curated sets of accounts, they did not significantly curtail AI-based amplification at a systemic level.

This report is one of the many sources that the Commission services can take into account when analysing the risk assessments submitted by Very Large Online Platforms under the DSA. The framework set out in the report can also contribute to the ongoing discussion on appropriate risk assessments, which the European Commission, the Member State network of Digital Services Coordinators, online platforms, and the research community of civil society and academic actors may be able to develop over time. To this effect, further studies will be procured and published by the Directorate-General for Communications Networks, Content and Technology.

Full report: Application of the risk management framework to Russian disinformation campaigns