LONDON — A quiet warning landed this week on the desks of some of the biggest technology companies in the world. It didn’t come with fireworks or spectacle. Just a deadline — and a clear message.
The United Kingdom’s digital regulator, Ofcom, told major technology firms Thursday that they should begin putting real age-verification systems in place or face potential penalties under the country’s Online Safety Act.
The move arrives as governments around the world wrestle with the same uneasy question: how do you keep children safe online without reshaping the internet itself? The debate has spread well beyond Britain, with similar age-verification efforts underway across Western Europe, Australia and parts of the United States.
According to the regulator, letters were sent to government relations and compliance teams at the parent companies behind platforms including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.
Those companies have until April 30 to report back on what progress they’ve made toward deploying stronger age-verification tools.
Regulators say they will review those responses and later publish an assessment outlining how well the companies are complying.
Ofcom Chief Executive Melanie Dawes said the platforms’ public commitments to child safety have not always translated into meaningful protections.
“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” Dawes said. “There is a gap between what tech companies promise publicly and what they’re doing in private to keep children safe on their platforms.”
Dawes added, “Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”
Regulators outlined four specific expectations for the companies.
The first calls for “effective minimum-age policies.” The second requires “failsafe grooming protections.” The third focuses on creating “safer feeds for children.” And the fourth demands “an end to product testing on children.”
Together, the measures are intended to help meet the Online Safety Act’s broader requirement that platforms adopt “age-appropriate design” and prevent minors from accessing services that are not meant for them.
Chris Sherwood, chief executive of the child-protection charity the National Society for the Prevention of Cruelty to Children, said stronger oversight is long overdue.
“For too long, social media giants have looked the other way while harmful and addictive content floods children’s feeds, undermining their safety and wellbeing,” Sherwood said.
“That’s why Ofcom’s demand for far greater transparency about the risks children face online, and how tech companies plan to protect them, is absolutely essential,” he added. “We’ve long called for minimum age limits to be properly enforced on social media, so it’s encouraging to see Ofcom confront this head-on.”
The regulator’s push also coincides with a separate warning from the U.K.’s data-privacy authority, the Information Commissioner’s Office, which sent a letter to “social media and video sharing platforms operating in the U.K.”
The letter stated, “We understand that most services are relying on self-declaration to identify whether children are 13 or over, with a limited number also utilising some form of profiling to enforce minimum age requirements.”
“As currently deployed, we don’t think that these tools are effective and therefore they should not continue to be relied upon to prevent access to under-13s.”
The letter was signed by Paul Arnold, whose agency oversees information rights, transparency in public bodies and personal data protections across the United Kingdom.
The regulator’s latest demands arrive just days after lawmakers in the U.K. Parliament declined to adopt an Australia-style proposal that would have barred all social media use for anyone under the age of 16.