The Online Safety Act of the United Kingdom has introduced sweeping regulatory powers aimed at protecting children and curbing harmful content on digital platforms. The legislation, enforced by the Office of Communications (Ofcom), covers a wide range of services, including social media, online forums, adult-content sites, and even certain cloud-sharing tools. It has sparked significant debate over its implications for privacy, surveillance, and free expression online.
How the Online Safety Act Aims to Keep Children Safe on Social Platforms
Background
The Online Safety Act was originally introduced as the Online Safety Bill by the United Kingdom government under Prime Minister Boris Johnson. It was spearheaded by the then Department for Digital, Culture, Media and Sport (DCMS) and championed by then Digital Secretary Nadine Dorries. Later stages of the bill were overseen by her successors, including Michelle Donelan of the Department for Science, Innovation and Technology.
Growing concern about online harms, especially those involving children, drove its proposal. High-profile cases, such as the 2017 death of teenager Molly Russell, which was linked to exposure to self-harm content on social media, intensified demand for stricter regulation. The government aimed to hold tech companies accountable for harmful content and to require stronger safety measures such as age verification and content moderation.
The Online Safety Bill underwent several revisions and rounds of parliamentary debate starting in 2021. It received Royal Assent on 26 October 2023 and became law as the Online Safety Act 2023. The legislation names the Office of Communications as the regulator responsible for implementing and overseeing its provisions, and enforcement has been phased in since passage, with specific compliance deadlines targeted through 2024 and 2025.
Key Provisions
The law requires platforms to block or filter content that is harmful to minors. It defines two categories of such content: primary priority harms and priority harms. Material such as adult content and self-harm instructions must be blocked entirely, while material involving harmful challenges, bullying, or substance abuse must be filtered. These rules apply to mainstream platforms as well as smaller online communities. Below are the key provisions of the law:
Role of Office of Communications: The Online Safety Act 2023 empowers the Office of Communications to audit platforms, request internal data, and impose penalties of up to GBP 18 million or 10 percent of global turnover, whichever is greater. It also holds the authority to block access to non-compliant services in the United Kingdom.
Duty of Care for Online Platforms: Platforms hosting user-generated content must take active measures to prevent the spread of illegal material and shield children from harmful content. This includes performing regular safety audits, implementing protective features, and reporting compliance to the Office of Communications.
Special Category 1 Platform Provision: Large platforms with significant user bases, including mainstream social networks like Facebook and X, carry additional duties. These include offering adult users tools to filter legal but harmful content and providing algorithmic transparency.
Age Verification for Adult Content: Services hosting adult content and other explicit material must adopt effective age verification systems, such as checks against government-issued identification or facial age estimation. The purpose is to ensure minors cannot gain access.
Content Categorization and Filtering: Harmful content is classified as either a primary priority harm, such as pornography and self-harm instructions, or a priority harm, such as bullying and dangerous stunts. Platforms must block or filter material depending on the category it falls under, as illustrated in the sketch after this list.
Mandatory Risk Assessment: Companies are required to perform routine assessments of how their services expose users, particularly minors, to potential harm. These assessments must inform their specific safety policies and be submitted to the Office of Communications upon request for compliance evaluation.
Reporting and Complaint Mechanisms: Platforms must also provide accessible channels for users, including children and parents, to report harmful or illegal content. The Online Safety Act obligates them to respond promptly and keep records of the actions they take.
Annual Transparency Reports: Companies are also required to publish annual transparency reports. Each report should outline moderation practices, harm mitigation efforts, and child-safety compliance measures taken during the covered year, ensuring oversight and public accountability.
Criminal Liability for Senior Managers: Senior executives of companies, like chief executive and chief operating officers, can be held personally liable and face potential criminal penalties, including imprisonment, if they neglect statutory obligations. This is particularly true for severe cases of non-compliance.
Encrypted Services and Cloud Platforms: The law extends to encrypted messaging services and cloud file-sharing platforms if they host user-to-user content. Such services are required to comply with content-scanning and safety rules under defined conditions.
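To make the block-or-filter distinction above more concrete, the following is a minimal sketch, in Python, of how a platform's moderation pipeline might route material once it has been assigned one of the Act's two harm categories. The names used here (HarmCategory, Action, action_for_child_user) are illustrative assumptions, not terminology or logic prescribed by the Act or by the Office of Communications.

```python
from enum import Enum, auto
from dataclasses import dataclass

class HarmCategory(Enum):
    """Assumed labels for the Act's two harm tiers."""
    PRIMARY_PRIORITY = auto()   # e.g. pornography, self-harm instructions
    PRIORITY = auto()           # e.g. bullying, dangerous stunts, substance abuse
    NONE = auto()               # no child-safety concern identified

class Action(Enum):
    BLOCK = auto()    # must not be shown to child users at all
    FILTER = auto()   # hidden or downranked for child users
    ALLOW = auto()

@dataclass
class ContentItem:
    item_id: str
    harm_category: HarmCategory

def action_for_child_user(item: ContentItem) -> Action:
    """Decide how to treat an item for a user identified as under 18.

    Primary priority harms are blocked outright, priority harms are
    filtered, and everything else is allowed.
    """
    if item.harm_category is HarmCategory.PRIMARY_PRIORITY:
        return Action.BLOCK
    if item.harm_category is HarmCategory.PRIORITY:
        return Action.FILTER
    return Action.ALLOW

# Example usage
post = ContentItem(item_id="abc123", harm_category=HarmCategory.PRIORITY)
print(action_for_child_user(post))  # Action.FILTER
```

In practice, a platform would also record each decision so that it can feed the risk assessments and transparency reports described in the provisions above.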
Merits and Criticisms
Proponents argue the Online Safety Act responds to urgent safety concerns, citing cases in which exposure to harmful material online contributed to self-harm or suicide among minors. Organizations such as the National Crime Agency have endorsed the law, arguing that it fills longstanding gaps in platform accountability and ensures companies prioritize the welfare of younger users over commercial interests.
Others view the legislation as disproportionately heavy-handed. Privacy groups stress that storing identification data or scanning private messages poses significant risks of breaches and extortion. Campaigners also warn that determined individuals may circumvent restrictions by using virtual private networks or migrating to unregulated platforms. This can undermine the effectiveness of these protective measures in safeguarding vulnerable users.
The implementation phase is expected to challenge smaller websites and forums that lack resources for extensive compliance frameworks. Volunteer-based communities, particularly those hosting sensitive topics like mental health or activism, may face closure or scaled-back services in the United Kingdom. This possibility raises wider concerns about the narrowing of digital spaces available for grassroots discussion and knowledge sharing.
A parliamentary petition to repeal the Online Safety Act has gathered hundreds of thousands of signatures, triggering debate over whether the measures strike the right balance between safety and individual freedoms. Legislators face the challenge of refining enforcement while addressing mounting public pressure, especially from parents and technologists concerned about unintended consequences for online privacy and open communication.
The approach of the United Kingdom is notably stricter than that of other countries. American policy, shaped by the First Amendment, avoids broad content regulation and focuses on targeted laws such as the Children's Online Privacy Protection Act. The United Kingdom, by contrast, mandates proactive content filtering and age verification, raising distinct questions about how to balance child protection with free-speech principles.
Note that the Digital Services Act of the European Union provides another point of comparison. It also requires platforms to mitigate risks and remove illegal content, but it stops short of mandating widespread age verification, emphasizing transparency and algorithmic accountability rather than identity checks. This contrast highlights divergent regulatory philosophies among jurisdictions grappling with online harm and privacy challenges.