Fining the Internet: A big week in UK Child Safety Online, but are the overseas internet giants willing to play by the rules?
Very rarely is there a quiet week in the world of data protection, and the past couple of weeks (albeit a little stranger than most) have proved no exception.
Yesterday, Ofcom fined 4chan £520,000 for failing to protect children from accessing pornography on its platform. Then, in truly bizarre and arguably quite belittling fashion, 4chan’s lawyer responded by posting an AI-generated cartoon of a giant hamster on social media and informing Ofcom that the United Kingdom lost the American Revolutionary War. The response, however strange, leaves little reason to believe the fine will ever be paid or indeed regarded seriously.
This comes not long after the ICO fined Reddit £14.47 million for unlawfully processing the personal data of children under 13. Reddit, to its credit, didn’t retort with AI slop and has already begun implementing changes, albeit ones that the ICO is actively reviewing.
Together, these two high-profile cases form the basis of what looks to be an accelerating shift in how the UK holds the online world to account when children are involved. They also, worryingly in some cases, illustrate the uphill battle the UK faces in implementing its laws and protecting its citizens when overseas platforms (mainly in the United States) are concerned.
Two regulators with one goal
The UK’s approach runs through two channels. The ICO enforces UK GDPR and the Age Appropriate Design Code (the Children’s Code), which requires platforms likely to be accessed by children to establish the age of their users whilst applying high privacy defaults and treating children’s interests as a primary consideration. Ofcom enforces the Online Safety Act 2023, which places duties on platforms to protect users from harmful content, with particular emphasis on children. The two remits overlap considerably, and both regulators have made no secret of the fact that they are working together.
The talking point now, though, is not the changes in law over the past few years; it is the regulators’ evident willingness to use them.
Regulation with teeth?
The events of the last couple of weeks are hardly the first regulatory action on children’s safeguarding. Looking back to April 2023, the ICO fined TikTok £12.7 million after finding that up to 1.4 million children under 13 were using the platform in 2020, without TikTok having any lawful basis to process their data. TikTok has appealed, and a hearing is listed for May 2026, adding to the recent flurry of activity on this topic. Even back then, the message was unambiguous: this was no longer asking, this was telling.
Jump forward to February 2026, and Imgur was fined £247,590 for the same structural failure: relying on terms-of-service prohibitions rather than any actual age verification mechanism. Rather than implement the required changes, Imgur blocked all UK users and withdrew from the market. A stark response: exiting a major economy rather than complying with its national child safety laws.
Then came the very recent Reddit case, which at £14.47 million stands as the ICO’s largest children’s privacy fine to date. Reddit failed to introduce meaningful age assurance until July 2025 and did not conduct a mandatory Data Protection Impact Assessment until January of that year. The statement that followed from John Edwards, the UK Information Commissioner, was blunt, calling it “concerning that a company the size of Reddit failed in its legal duty to protect the personal information of UK children.”
As for Ofcom, in November 2025 it issued a £55,000 fine against Itai Tech, operator of an AI-powered site generating nude images from uploaded photographs, for failing to implement adequate age verification. Itai Tech paid and blocked UK users. Then came the start of the 4chan episode: a £20,000 fine in August 2025 simply for ignoring Ofcom’s information requests, followed by Thursday’s £520,000 fine and the circus that ensued.
So it’s obvious that the big UK regulators aren’t afraid to bare their teeth anymore, but the mixed response they have faced so far raises the question:
Are the big overseas companies actually listening?
The responses across these cases vary greatly in the weight given to the regulators’ rulings. Itai Tech paid and complied. Imgur withdrew. A plethora of pornography sites quietly added age verification. Reddit engaged, partially, and faces continuing scrutiny.
4chan’s engagement with Ofcom, meanwhile, can hardly be taken seriously at all. Its lawyer has described Ofcom as an “industry-funded global censorship bureau,” filed a lawsuit in a US federal court challenging the Online Safety Act as an unlawful extraterritorial power grab, and made it abundantly clear that his client will comply with UK censorship laws, in his words, “when pigs fly.” The US political backdrop adds texture: last year in Paris, Vice President JD Vance warned European leaders against strangling US tech innovation with overseas regulation, a sentiment that now seems to sit at the heart of 4chan’s latest ‘response’.
What happens next?
Ofcom’s real power play, other than seeking criminal prosecution, is the ability to require UK internet service providers to block non-compliant platforms entirely. We don’t seem to be near that point yet, but the framework’s logic points there if fines continue to be brazenly ignored. Whether blocking sites like 4chan would materially protect children is debatable, though, given how easy it is for even the least tech-savvy of today’s youth to circumvent such measures.
But even with such threats, it all feeds into a feeling that the bigger players in the States see themselves, to a degree, as untouchable. The UK’s child safety framework looks in practice to be more effective at changing behaviour in the broad middle of the market (platforms with UK commercial presence, reputational stakes, and legal teams invested in regulatory relationships) than on the wider global stage (insert quip here about dwindling UK prestige if you wish), where platforms like 4chan feel they have nothing to lose and say so openly, mockingly even.
That isn’t to say that all the regulators’ work and action will be in vain. The uptake of age assurance among top pornography sites represents genuine progress. The Reddit fine signals that obligations under the Children’s Code carry real financial consequences, calibrated to global turnover. And the UK, crucially, is not the only country trying to make a difference in this field of data protection: the EU’s Digital Services Act, Australia’s under-sixteen social media ban, and Canada’s revised privacy framework all point in largely the same direction.
That direction points to a very real possibility: that in the not-too-distant future, the big American platforms will face regulatory pressure on a number of fronts, hit by fines and enforcement action across multiple jurisdictions simultaneously. Perhaps then we’ll see fewer AI hamsters and more real change.
For the moment, though, the UK regulators have a singular goal in mind. As Ofcom’s Suzanne Cater summarised: “Companies, wherever they’re based, are not allowed to sell unsafe toys to children in the UK. The digital world should be no different.” It’s a catchy statement, but we will have to wait and see whether the tools available are truly sufficient to make the sentiment stick when the company receiving the fine responds with AI nonsense and a lawsuit.