More than 100,000 tech companies within the scope of the Online Safety Act face imminent demands to remove harmful content relating to suicide, pornography and child sexual abuse from their platforms.
Social media companies, which dominate the internet use of children and young people, have chosen to treat safety measures as an afterthought, the technology secretary, Peter Kyle, said. However, the media regulator, Ofcom, and the government are engaged in constructive regulatory work with companies to enforce age assurance.
The crackdown is “just the beginning”, Kyle said. Companies that do not comply with the safeguarding rules face heavy fines and potential litigation: up to £18m or 10% of their global revenue, whichever is greater, which for big companies such as Meta could amount to billions of pounds. The act lists 130 “priority offences” that platforms must stamp out, in part by strengthening age verification systems.
Ofcom’s code of conduct, intended to stop companies breaching the act, outlines account management features, for example hiding children’s profiles and locations from users they do not know, and implementing blocking tools.
Ofcom’s enforcement director, Suzanne Cater, said: “Platforms must now act quickly to come into compliance with their legal duties, and our codes are designed to help them do that,” including through measures such as “hash matching” technology and dedicated reporting channels for organisations.
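For readers unfamiliar with the term, “hash matching” means comparing a digital fingerprint of an uploaded file against a database of fingerprints of material already identified as illegal. The sketch below is a simplified, illustrative Python example only, assuming a set of known hashes supplied by a trusted body; production systems typically rely on curated industry hash databases and perceptual hashing (which tolerates small changes to an image) rather than a plain list of exact cryptographic digests.

```python
import hashlib

# Illustrative placeholder: in practice these digests would come from a
# trusted source such as an industry hash database, not be hard-coded.
KNOWN_HASHES: set[str] = set()


def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_material(path: str) -> bool:
    """Flag an upload whose digest matches an entry on the known-hash list."""
    return file_digest(path) in KNOWN_HASHES
```

The design point the technique relies on is that the platform never needs to hold the illegal images themselves, only their fingerprints, so matching can happen at upload time without redistributing the material.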
This comes after Ofcom announced on Monday that storage and file-sharing services would be asked how they prevent paedophiles from distributing child sexual abuse material.