It is crunch time for social media platforms and websites used by young people: Ofcom has released a wave of online safety regulations that build on the Online Safety Act. The Netflix drama Adolescence recently highlighted that children today must navigate a complex online environment in which sophisticated AI algorithms serve them harmful and influential content.
The ‘transformational new protections’ Ofcom announced yesterday order platforms to make critical changes – installing age verification systems and adapting their algorithms to filter out harmful content – by 25 July 2025.
Sites harmful to children include not only adult sites but also unrestricted search engines, the dark web and social media platforms that feed them normalised, explicit content or content encouraging self-harm, suicide or eating disorders. The expanded Online Safety regulations will bring tougher action to restrict access by under-18s.
If big tech companies fail to take accountability after repeated warnings that change is coming, they could face substantial fines or, in extreme cases, court orders blocking access to their services in the UK.
Whilst preventative measures are the best option for everyone, removing harmful content that already festers on the web is vital. The answer is not as simple as removing AI, which is now embedded in our lives; rather, its strengths in data tools and biometrics can be leveraged for this purpose.
The NSPCC welcomed the move as “a pivotal moment for children’s safety online”, while urging further action on encrypted private messaging, which remains difficult to monitor.
Dame Melanie Dawes, Chief Executive of Ofcom, described the new Codes as a “gamechanger” that is “legally binding”. The Codes mandate over 40 practical measures aimed at putting “safety first” for children online.
The new Protection of Children Codes and Guidance will now progress through Parliament. Some 27,000 children and 13,000 parents took part in the public consultation, providing feedback on the proposals.
The leading British biometric verification firm iProov has been watching this development closely and strongly supports it, with Campbell Cowie, Head of Policy at iProov, commenting that yesterday’s guidance “provides a vital step towards creating safer online spaces for our most vulnerable users”.
“Ofcom’s commitment to ‘highly effective’ measures aligns with our conviction that only certainty, powered by verification and secured by liveness, can truly shield young people in the online realm and direct them to experiences designed for their age”.
Robust age verification that meets accreditation standards will be enforced – meaning the “definitive matching of government-issued IDs with biometric data” – ruling out authentication that lets anyone simply self-assert their age.
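For illustration only, the minimal sketch below contrasts a self-asserted age check with a verification flow that requires a document-derived date of birth, a biometric match against that document and a passed liveness test. The function names, data structures and thresholds are hypothetical assumptions for the example, not iProov’s or any vendor’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical evidence returned by an identity-document check and a liveness check.
# These fields and thresholds are illustrative assumptions, not a real vendor API.
@dataclass
class VerificationEvidence:
    document_dob_year: Optional[int]  # year of birth read from a government-issued ID, if any
    face_match_score: float           # similarity between the ID photo and a live selfie (0.0-1.0)
    liveness_passed: bool             # whether the liveness check confirmed a real, present person

def self_asserted_check(claimed_age: int) -> bool:
    # The weak model the Codes move away from: the user simply types in an age.
    return claimed_age >= 18

def verified_age_check(evidence: VerificationEvidence,
                       current_year: int = 2025,
                       match_threshold: float = 0.9) -> bool:
    # A stricter check in the spirit of the guidance: require a document-derived
    # date of birth, a strong face match against that document and a passed liveness test.
    if evidence.document_dob_year is None:
        return False
    if not evidence.liveness_passed:
        return False
    if evidence.face_match_score < match_threshold:
        return False
    return (current_year - evidence.document_dob_year) >= 18
```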
Ofcom has said the big tech giants will have to adjust seriously to these expectations, which in practice will mean reconfiguring their advanced AI algorithms.
Platforms must also simplify their terms of service for younger users, offer opt-outs from potentially unsafe group chats, and ensure accessible support for children who encounter distressing content. Additionally, they are required to appoint a named individual responsible for children’s safety and to conduct annual reviews of child safety risks.