Major social media platforms accused of ‘failing to put children’s safety at the heart of their products’

Major social media platforms such as Facebook are being accused of “failing to put children’s safety at the heart of their products”, as regulators push technology companies to introduce stronger age checks to prevent under-13s from signing up.

Melanie Dawes, chief executive of Ofcom, made the comments as the UK communications regulator and the Information Commissioner’s Office wrote to several major platforms urging them to strengthen protections for younger users.

The companies contacted include Facebook, Instagram, Snapchat, TikTok, YouTube, Roblox and X.

Regulators said the services should introduce more robust age-verification measures similar to systems already required for online platforms that host adult content.

At present, many social media services rely on users declaring their own age when creating accounts.

Ofcom research indicates 86 percent of children aged 10 to 12 have their own social media profile despite the common minimum age requirement of 13.

But Dawes insisted the current approach was inadequate.

Regulators said stronger age checks were needed because self-declared ages could easily be bypassed by younger users.

The Information Commissioner’s Office said in an open letter to social media and video platforms: “As self-declaration is easily circumvented, this means underage children can easily access services that have not been designed for them.”

Paul Arnold, chief executive of the Information Commissioner’s Office, also outlined concerns about how platforms process the data of younger users.

He said in the letter: “Where services have set a minimum age – such as 13 – they generally have no lawful basis for processing the personal data of children under that age on their service.”

The push for stronger safeguards has also been backed by the UK government.

Liz Kendall, the technology secretary, said platforms should take greater responsibility for protecting younger users online.

She said: “No company should need a court order to act responsibly to protect children.”

Kendall added that no platform would receive a free pass when it came to child safety online.

Technology companies contacted by the regulators said they already had measures in place designed to protect children using their services.

Google, which owns YouTube, questioned the approach taken by Ofcom and suggested regulators should prioritise platforms posing greater risk.

The company added it was surprised by Ofcom’s “move away from a risk-based approach, particularly given that we routinely update them and other regulators on our industry-leading work on youth safety”.

Meta, the parent company of Facebook and Instagram, said it had already introduced several measures recommended by Ofcom.

The company added it was using “AI to detect users’ age based on their activity, and facial age estimation technology”.

Meta also said introducing age verification through app stores would mean “parents and teens will only need to provide their personal information once”.

Snapchat said it was testing additional age verification tools.

TikTok said it used “enhanced technologies” to detect and remove accounts belonging to under-13 users.

The company also said it had removed more than 90 million suspected under-13 accounts between October 2024 and September 2025.

Roblox said it had introduced more than 140 safety features over the past year, including “new mandatory age checks that all players must complete in order to access chat features”.

A spokesperson for Roblox added the company looked forward to demonstrating its efforts during ongoing discussions with Ofcom.

The BBC reported X was contacted for comment but did not respond.
