Social media giants urged to tighten child safety after UK rejects blanket ban for teens


U.K. regulators are calling on social media giants to enforce stricter protections for children on their platforms after lawmakers rejected a blanket ban for under-16s.

Ofcom and the Information Commissioner’s Office (ICO), the regulators responsible for online safety and data protection, said they had written to YouTube, TikTok, Facebook, Instagram, and Snapchat on Thursday, urging them to address a broad range of child safety issues, from implementing stringent age verification measures to tackling child grooming on their platforms.

It comes after U.K. lawmakers voted earlier this month against a proposal to include a social media ban for under-16s in a piece of child welfare legislation.

The U.K. government has launched a consultation on children’s social media use to gather the views of parents and young people on whether a ban would be effective.

Governments across Europe are weighing stricter rules to limit teens’ use of social media after Australia in December became the first country to enforce a sweeping ban for under-16s. Spain, France, and Denmark are among the countries considering similar measures.


Better age verification technologies

Ofcom said it had written to social media platforms calling on them to report on what they are doing to keep underage children off their platforms, giving them until April 30 to respond.

Its demands included better enforcement of minimum age requirements, preventing strangers from contacting children, safer content for teens, and an end to testing products, such as AI features, on children.

Tech giants are “failing to put children’s safety at the heart of their products” and falling short on promises to keep children safe online, said Ofcom CEO Melanie Dawes.

“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid,” Dawes said.

The ICO published an open letter on Thursday saying social media platforms should adopt methods such as facial age estimation, digital ID, or one-time photo matching to improve their age verification.

Many platforms rely on “self-declaration” as the main way to check a user’s age, but this is “easily circumvented” and ineffective, according to the regulator.

“This puts under-13s at risk by allowing their information to be collected and used unlawfully, without the protections they are entitled to,” the ICO’s CEO, Paul Arnold, said in the letter.

“With ever-growing public concern, the status quo is not working, and industry must do more to protect children. You should act now to identify and implement current viable technologies to prevent children under your minimum age from accessing your service,” Arnold added.

Meta complied with Australia’s social media ban, blocking over 500,000 accounts believed to belong to under-16s from Instagram, Facebook, and Threads in the initial days. But it called on the Australian government to reconsider, saying a blanket ban would drive teens to circumvent the law and access social media sites without the necessary safeguards.

Instagram said it would alert parents when their teens repeatedly search for terms like suicide and self-harm over a short period of time.

A landmark trial brought against Meta and Alphabet kicked off in January, focusing on a young woman and her mother who allege that Instagram and YouTube have design features that contribute to addiction.

Meta CEO Mark Zuckerberg and Instagram CEO Adam Mosseri have already testified, with an outcome expected in mid-March. The case could set a precedent on what responsibility social media companies have over their youngest users.

The European Commission opened an investigation into Elon Musk’s X in January over the spread of sexually explicit material depicting children by its AI chatbot Grok. In February, the ICO fined Reddit £14 million ($18 million) for unlawfully processing children’s personal data.

What tech firms say

In a statement, a Meta spokesperson told CNBC that it already implements certain measures that the regulators outlined, including using “AI to detect users’ age based on their activity, and facial age estimation technology.”

It also offers separate teen accounts with built-in protections, the spokesperson said. “With teens using on average 40 apps per week, we believe the most effective way to complement our own age assurance approach is to verify age centrally at the app store level,” they added.

TikTok said it has rolled out enhanced technologies across Europe since January to detect and remove accounts belonging to anyone under its minimum age requirement of 13, with the help of specialist moderators.

It also utilizes facial age estimation, credit card authorization, or government-approved identification to confirm users’ ages, the company said.

Snapchat and YouTube did not immediately respond to requests for comment from CNBC.


Source – CNBC