As the European Union moves aggressively to shape online discourse through the Digital Services Act (DSA), EU Commissioner for Technology Henna Virkkunen has been deflecting scrutiny abroad, pointing fingers at the United States for what she describes as a more extensive censorship regime.
Relying on transparency data, she argues that platforms like Meta and X primarily remove content under their own terms and conditions rather than in response to DSA directives. But this framing misrepresents how enforcement works in practice and downplays the EU’s systemic role in pushing platforms toward suppression through legal design rather than open decrees.
Virkkunen highlighted that between September 2023 and April 2024, 99 percent of content takedowns occurred under platform terms of service, with only 1 percent resulting from “trusted flaggers” authorized under the DSA. A mere 0.001 percent were direct orders from state authorities.
On paper, this paints a picture of platform autonomy. But in reality, the architecture of the DSA ensures that removals appear “voluntary” precisely because they are incentivized by looming regulatory consequences.
Under the DSA, platforms are held legally accountable for failing to remove certain types of content.
This liability creates a strong incentive to err on the side of over-removal, fostering a culture in which companies preemptively censor to minimize risk. Virkkunen frames these decisions as internal, but in truth many of them reflect anticipatory compliance with European legal expectations.
The fact that content is flagged and removed “under T&Cs” does not indicate independence; it reflects a strategy of risk avoidance in response to EU enforcement pressure.
This dynamic is by design. The DSA doesn’t rely on high numbers of direct takedown orders from governments. Instead, it outsources content control to the platforms themselves, embedding speech restrictions in the guise of corporate policy.
The regulatory burden falls on private actors, but the agenda is shaped by Brussels. Delegating enforcement doesn’t dilute state influence; it conceals it. The veneer of decentralization does not change the fact that the state created the framework and exerts ongoing leverage over what platforms consider acceptable.
Virkkunen focuses on the relatively small share of removals attributed to official “trusted flaggers,” but that figure masks the broader apparatus of influence. Government pressure doesn’t always arrive in the form of formal requests. Political threats, regulatory investigations, and public rebukes can have as much, if not more, impact on how companies set and enforce their rules. These unlogged pressures don’t show up in transparency reports, yet they are instrumental in shaping content moderation.
The idea that moderation practices can be cleanly divided along national lines is misleading.
Most platforms operate with global content policies that are then tweaked to satisfy local laws. So when content is taken down under a platform’s own policies, it’s often driven by a global posture shaped by European legal requirements.
The assumption that removals reflect American norms ignores how deeply EU regulations have penetrated platform governance across all jurisdictions.
Virkkunen’s comparison rests on a selective timeframe. The data she references comes from a period still under the Biden administration, which coordinated with platforms to suppress speech on a broad range of topics, from pandemic-related information to election narratives. That moment in US policy was exceptional and has already faced legal challenges in American courts. To use it as a benchmark without acknowledging its departure from traditional US free speech norms is a distortion.
Even the metrics Virkkunen relies on tell us little about actual speech protections. High takedown numbers under platform rules don’t mean governments have the legal authority to compel removals.
In the United States, constitutional limits sharply restrict state involvement in content control. In the EU, the DSA hands formal power to government-approved entities, embedding state-aligned censorship directly into law. That legal distinction is far more consequential than any spreadsheet of removal percentages.
When platforms remove content under their own policies, it is often a form of strategic alignment with EU regulators.
Given the economic weight of the European market, companies know that noncompliance could lead to steep fines, legal complications, or operational restrictions. The “choice” to remove content frequently reflects a decision to avoid regulatory hostility rather than any principled editorial stance.
When companies operate under the specter of EU sanctions, disclosures about removals tell us little about whether the speech would have been protected in a less pressured environment.