

The UK’s increasingly controversial Office of Communications, Ofcom, is charting a path that could reshape the internet as we know it, and not for the better.

Under the banner of the Online Safety Act, the regulator is proposing a sweeping expansion of its authority that, if enacted, would hand it unprecedented influence over what we see, share and say online.

A central goal of Ofcom’s plan is to prevent illegal content from gaining traction.

Platforms would be required to block material that even appears to be unlawful from being recommended by algorithms until it’s reviewed by a human moderator.

The idea, on paper, is to stop harmful content from “going viral.”

In practice, it risks creating a system where lawful speech is caught in digital limbo, held back by automated systems that err on the side of caution.

Ofcom frames these proposals as a necessary response to modern online threats.

It talks about “highly effective age assurance,” a term that sounds innocuous enough but points toward invasive digital ID checks.

The aim is to ensure that children aren’t exposed to harmful material, but the solution would come at the cost of privacy and anonymity for everyone, two pillars of an open internet.

This new regime would compel tech firms to act as frontline enforcers of ill-defined standards of legality, long before a court has had a chance to weigh in.

In times of crisis, such as riots, terror attacks, or other major incidents, platforms would be under pressure to throttle spikes in content rapidly.

That effectively puts Ofcom in the position of deciding, in real time, what the public is allowed to see.

One of the more troubling proposals targets livestreaming, a tool that has become vital for journalists, activists, and artists.

All of it would be wrapped in tighter age verification systems that threaten to chill participation and expression.

The regulator also wants to see wider deployment of technologies like perceptual hash matching and automated tools, not just for known illegal content, but for material that might be illegal or harmful.

That includes everything from suicide-related posts to fraudulent schemes. While the intent is understandable, the risk of overreach is significant.
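To make the mechanism concrete, here is a deliberately simplified Python sketch of how perceptual hash matching works in principle. The hash function, blocklist, and distance threshold below are illustrative assumptions, not Ofcom’s specification or any platform’s actual system; production tools such as PhotoDNA or PDQ are far more sophisticated, but the matching idea is similar.

```python
# Illustrative sketch only. Assumes the Pillow imaging library is installed.
from PIL import Image


def average_hash(path, size=8):
    """Reduce an image to a 64-bit fingerprint based on its brightness pattern."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def hamming_distance(h1, h2):
    """Count how many bits differ between two fingerprints."""
    return bin(h1 ^ h2).count("1")


# Hypothetical blocklist of fingerprints of known prohibited images.
BLOCKLIST = set()


def is_flagged(upload_path, threshold=10):
    """Flag an upload if its fingerprint is 'close enough' to a blocklisted one."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, banned) <= threshold for banned in BLOCKLIST)
```

The threshold is the crux: set it tight and near-duplicates slip through; set it loose and lawful images that merely resemble blocked ones get swept up. That trade-off, applied at scale and under regulatory pressure, is where the overreach risk lies.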

Without proper safeguards, lawful speech could be swept into censorship systems, and surveillance could become embedded in the core of our digital infrastructure.

Oliver Griffiths, who leads Ofcom’s Online Safety Group, summed up the regulator’s stance: “We’re holding platforms to account and launching swift enforcement action where we have concerns.”

It’s a statement that highlights how determined Ofcom is to push these changes through, no matter the consequences.

The public has until 20 October 2025 to respond to Ofcom’s consultation.

Given the political climate, the proposals seem likely to pass with little resistance.

But if they do, the UK’s online environment may come to be defined not by the free exchange of ideas, but by cautious, preemptive censorship and intrusive oversight, all in the name of safety.

If you’re tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.
