Social media curfew suggested for under-16s as Ofcom publishes Children's Code

The government could bring in social media curfews for children.

Technology Secretary Peter Kyle revealed he was “watching very carefully” the introduction of TikTok’s 10pm curfew for users under 16 and examining tools for parents to switch off access at set times.

“These are things I am looking at,” he told the Daily Telegraph. “I’m not going to act on something that will have a profound impact on every single child in the country without making sure that the evidence supports it.”


The proposal came amid concerns about how the “addictive” nature of social media was interrupting sleep schedules and disrupting schooling and family life.

Mr Kyle said he was considering enforcement options under the Online Safety Act following regulator Ofcom’s publication of the Children’s Code. He described the new rules as a “sea change” under which parents can expect their child’s social media experience to “look and feel different”.

Mr Kyle said he would not be “short of encouraging Ofcom to use its powers to the full” to fine social media companies and imprison offenders. The Online Safety Act began coming into effect last month and requires platforms to follow new codes of practice set by the regulator Ofcom, in order to keep users safe online.

Rt Hon Peter Kyle MP

It comes after the Internet Watch Foundation (IWF), which finds and helps remove abuse imagery online, said 291,273 reports of child sexual abuse imagery were made in 2024.


In its annual report, the organisation said it was seeing rising numbers of cases being driven by threats, including AI-generated sexual abuse content, sextortion and the malicious sharing of nude or sexual imagery.

It said under-18s were now facing a crisis of sexual exploitation and risk online. In response, the IWF announced it was making a new safety tool available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms.

The tool, known as Image Intercept, can spot and block images that match the IWF’s database of more than 2.8 million pictures which have been digitally marked as criminal imagery.

The IWF said the tool would give wide swathes of the internet new, 24-hour protection and help smaller firms comply with the Online Safety Act.


Derek Ray-Hill, interim chief executive at the IWF, said: “Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire. New threats like AI and sexually coerced extortion are only making things more dangerous.

“Many well-intentioned and responsible platforms do not have the resources to protect their sites against people who deliberately upload child sexual abuse material. That is why we have taken the initiative to help these operators create safer online spaces by providing a free-of-charge hash checking service that will identify known criminal content.”
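The IWF has not published implementation details here, but a "hash checking service" of this kind generally works by computing a digital fingerprint of each uploaded image and comparing it against a list of fingerprints of known criminal material. The sketch below is illustrative only and is not Image Intercept itself: it uses a plain cryptographic hash, whereas production systems typically also use perceptual hashing to catch resized or re-encoded copies, and the function names and placeholder hash values are hypothetical.

```python
import hashlib

# Hypothetical example: fingerprints of known illegal images, as a
# service like Image Intercept might distribute to a platform.
# Real lists come from the IWF; this value is a placeholder.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_fingerprint(data: bytes) -> str:
    """Compute an exact-match fingerprint (SHA-256) of the image bytes.

    A plain cryptographic hash only catches byte-identical copies;
    real systems also use perceptual hashes to match altered images.
    """
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Return True if the upload matches a known-bad fingerprint."""
    return image_fingerprint(data) in KNOWN_BAD_HASHES

# Example: check an upload before it is published.
sample_upload = b"...raw image bytes from the user..."
if should_block_upload(sample_upload):
    print("Upload blocked and flagged for reporting.")
else:
    print("No match against the known-imagery list.")
```

The design point is that the platform never needs to hold the abusive material itself: it only stores opaque fingerprints and rejects anything that matches, which is what makes such a service practical for smaller sites.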

Measures proposed by Ofcom in the Children’s Code

  • Safer feeds. Personalised recommendations are children’s main pathway to encountering harmful content online. Any provider that operates a recommender system and poses a medium or high risk of harmful content must configure its algorithms to filter out harmful content from children’s feeds.
  • Effective age checks. The riskiest services must use highly effective age assurance to identify which users are children, so that they can protect children from harmful material while preserving adults’ rights to access legal content. That may involve preventing children from accessing the entire site or app, or only some parts or kinds of content. If services have minimum age requirements but are not using strong age checks, they must assume younger children are on their service and ensure they have an age-appropriate experience.
  • Fast action. All sites and apps must have processes in place to review, assess and quickly tackle harmful content when they become aware of it.
  • More choice and support for children. Sites and apps are required to give children more control over their online experience. This includes allowing them to indicate what content they don’t like, to accept or decline group chat invitations, to block and mute accounts and to disable comments on their own posts. There must be supportive information for children who may have encountered, or have searched for, harmful content.
  • Easier reporting and complaints. Children will find it straightforward to report content or complain, and providers should respond with appropriate action. Terms of service must be clear so children can understand them.
  • Strong governance. All services must have a named person accountable for children’s safety, and a senior body should annually review the management of risk to children.