Democracy Interference Observatory
Home
Observations: Hungary
Articles
Interference explained
About DIO

Understanding Election Interference

The EU’s Censorship Operating System explained

How Brussels built a system to manage Europe’s political narrative 


by Norman Lewis

2026/03/10


Over the past decade, the European Union has quietly constructed what can best be described as a Censorship Operating System: an integrated system of legislation, NGOs, ‘independent’ fact-checkers, technology platforms and judicial enforcement designed to shape the political information environment across Europe.

As our report Manufacturing Misinformation: The EU-funded propaganda war against free speech uncovered, the Commission has funded hundreds of unaccountable non-governmental organisations (NGOs) and universities to carry out 349 projects related to countering ‘hate speech’ and ‘disinformation’, to the tune of almost €650 million.

The system does not resemble traditional censorship. There are no jackboots, no banned newspapers, no shuttered broadcasters. Instead, the EU has built its own Ministry of Truth as a distributed architecture of narrative control in which responsibility for censorship is outsourced and fragmented. Legislation creates the framework for an unholy alliance of unaccountable actors – EU-funded NGOs, fact-checkers, ‘trusted flaggers’ and Big Tech platforms – who define what counts as ‘hate speech’ and disinformation, and thus enforce a systematic mechanism for manipulating political discourse while preserving the appearance of neutrality.


Legislation: Building the legal architecture


The foundation of the system is EU legislation governing online speech. The most important of these laws is the Digital Services Act (DSA), which gives the European Commission unprecedented authority to regulate the information environment.

The DSA requires large online platforms to assess and mitigate ‘systemic risks’, including disinformation, threats to civic discourse and risks to democratic processes. While presented as a tool to remove illegal content, the law goes far beyond illegality. Platforms are expected to identify and suppress speech that regulators believe may influence elections or public debate in undesirable ways.

Failure to comply can lead to fines of up to 6% of global annual turnover, effectively forcing platforms to err on the side of over-censorship.

Alongside the DSA sits the Code of Practice on Disinformation, which brings together major technology companies with a network of NGOs and fact-checking organisations tasked with identifying and countering disinformation narratives.

The legislative framework is now being expanded through initiatives such as the European Democracy Shield, which aims to strengthen EU oversight of political information flows under the justification of combating foreign interference.

The pattern is clear: legislation establishes the obligation for platforms to manage and correct political speech, turning private companies into enforcement arms of EU regulatory policy.


NGOs: the outsourced narrative police 


Once the legal framework is in place, the system relies on a vast network of EU-funded ‘independent’ NGOs, research institutes and civil society organisations to identify which narratives require correction.

Through programmes such as Horizon Europe, the Citizens, Equality, Rights and Values programme (CERV), and various democracy initiatives, the European Commission funds hundreds of projects focused on combating disinformation and countering populist narratives.

These organisations are frequently described as independent watchdogs, yet many depend directly on EU funding for their activities.

Projects funded under these programmes routinely monitor political discourse, map ‘disinformation ecosystems’ and develop strategies for countering narratives that challenge EU institutions or policies.

For example, projects such as ACT4RULE mobilise networks of civil society groups across multiple countries to shape debates around rule-of-law issues. While presented as civic engagement, such initiatives effectively organise transnational advocacy campaigns aligned with EU institutional priorities.

In other words, Brussels funds the organisations that produce the research and analysis identifying threats to democracy — threats that conveniently justify further EU intervention in the information space.


Fact-checkers: certifying political truth


The next layer of the censorship operating system is the fact-checking industry.

Under the Code of Practice on Disinformation and platform policies aligned with the DSA, online platforms rely heavily on third-party fact-checking organisations to evaluate disputed claims.

Content labelled ‘false,’ ‘misleading,’ or ‘missing context’ is then algorithmically suppressed, demonetised or restricted in distribution.

Many of these fact-checking organisations operate within the same EU-funded ecosystem as the NGOs monitoring disinformation.

This creates a structural conflict of interest: organisations receiving EU funding are tasked with certifying the truthfulness of claims that often involve EU policies, institutions or political priorities.

Rather than functioning as neutral arbiters of truth, fact-checkers frequently act as narrative gatekeepers, determining which interpretations of political events are permitted to circulate widely online.

Because these decisions affect distribution rather than legality, they often occur without transparency or accountability.


Trusted flaggers: institutionalising censorship


The creation of ‘trusted flaggers’ under the Digital Services Act is a critical step in narrative enforcement.

These organisations are granted privileged status to report illegal or harmful content to platforms. Their reports must be prioritised and processed quickly, giving them substantial influence over what content is removed or restricted.

The terminology itself is strikingly Orwellian. The label ‘trusted’ implies neutrality and expertise, yet many trusted flaggers are advocacy organisations deeply embedded in the EU’s disinformation-countering ecosystem.

Yet, as our report A shield against democracy: How the Democracy Shield protects the EU from the electorate revealed, 13 of the 37 officially designated organisations – over one-third – have received over €8.7 million in EU funding. Many of these ‘independent’, ‘expert’ organisations are engaged in ideological youth programming. Their dual role as educators and enforcers represents a profound conflict of interest: those teaching young Europeans what to think are also authorised to censor dissenting thought.

This is not an accidental flaw in the system. It is the mechanism through which narrative control is exercised.


Big Tech: the enforcement layer


While NGOs and fact-checkers identify problematic narratives, enforcement is carried out by large technology platforms.

Companies such as Meta, Google, TikTok and X must demonstrate compliance with the Digital Services Act by actively mitigating risks related to disinformation and election interference.

To do so, platforms deploy a combination of content removal, algorithmic demotion, shadow-banning and account suspensions.

Because the financial penalties for failing to act are severe, platforms are incentivised to remove or suppress content pre-emptively.

This arrangement allows EU institutions to shape online discourse without directly censoring speech themselves. Moderation decisions appear to be private corporate actions, even though they are taken under regulatory pressure.

The result is a form of outsourced censorship, implemented by platforms but driven by policy frameworks created in Brussels.


Courts: legal backing for the system


Finally, the system is reinforced through judicial mechanisms.

Under the Digital Services Act, the European Commission has the authority to investigate platforms and impose massive fines if they fail to manage systemic risks adequately.

National authorities also implement complementary legislation targeting online harms, disinformation and hate speech.

Together these mechanisms ensure that the censorship operating system is ultimately backed by the coercive power of law.

A direct threat to democratic debate

The EU presents this architecture as a defence of democracy against disinformation and foreign manipulation.

In reality it represents something far more troubling: the institutionalisation of political narrative management at a continental scale.

Democracy depends on open contestation of ideas. Political narratives must compete freely, and citizens must be able to challenge institutions, policies and elites without fear of algorithmic suppression.


By constructing a system that identifies, labels and suppresses certain political narratives, the EU is transforming democratic debate into a managed information environment.

The deeper irony is that the system is justified in the name of protecting democracy.

Yet by delegating control of political discourse to networks of unelected and unaccountable EU-funded NGOs, fact-checkers and regulatory authorities, Brussels is undermining the very democratic pluralism it claims to defend.


This is why the EU’s censorship operating system is not simply a regulatory framework. It is a systematic attack on free speech — and therefore on democracy itself.




Copyright © 2026 MCC Brussels AISBL - All Rights Reserved.


https://brussels.mcc.hu/documents/gdpr/general_privacy_notice.pdf

Powered by

This website uses cookies.

We use cookies to analyze website traffic and optimize your website experience. By accepting our use of cookies, your data will be aggregated with all other user data.

DeclineAccept