The Online Safety Act 2023 received Royal Assent on 26 October 2023.
Ofcom, the online safety regulator, is implementing the act in three phases, as summarised in its roadmap (October 2024) and a webpage on important dates for online safety compliance (December 2024). The act is expected to be fully implemented in 2026.
The Department for Science, Innovation and Technology (DSIT) published a draft statement of strategic priorities for Ofcom in November 2024. This sets out the following priorities that Ofcom must consider when exercising its online safety functions:
- safety by design
- transparency and accountability
- agile regulation
- inclusivity and resilience
- technology and innovation.
DSIT has published an “explainer” on the act (May 2024). This summarises, among other things, the types of content and activity that the act seeks to tackle, the companies that are in scope, the duties they will have to comply with, and how the legislation will be enforced.
Detailed information on the act and its implementation is available from the online safety section of Ofcom’s website.
Illegal content
Under the act, all platforms must implement measures to reduce the risk of their services being used for illegal activity. They must also put in place systems for removing illegal content when it does appear. Search services must also take steps to reduce the risk of users encountering illegal content via their services.
The act sets out a list of priority offences. These reflect the most serious and prevalent illegal content and activity, against which companies must take proactive measures.
Platforms must also remove any other illegal content where there is an individual victim, once it is flagged to them by users or they become aware of it through any other means.
The kinds of illegal content and activity that platforms must protect users from include:
- child sexual abuse
- controlling or coercive behaviour
- extreme sexual violence
- extreme pornography
- fraud
- racially or religiously aggravated public order offences
- inciting violence
- illegal immigration and people smuggling
- promoting or facilitating suicide
- intimate image abuse
- selling illegal drugs or weapons
- sexual exploitation
- terrorism
Protecting children
To protect children, platforms must:
- remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm
- prevent children from accessing harmful and age-inappropriate content, including pornographic content; content that promotes, encourages or provides instructions for suicide, self-harm or eating disorders; content depicting or encouraging serious violence; and bullying content
- enforce age limits and use age-checking measures on platforms where content harmful to children is published
- be more transparent about the risks and dangers posed to children on their sites, including by publishing risk assessments
- provide parents and children with clear and accessible ways to report problems online when they do arise
Protecting adults
The act offers adults a “triple shield” of protection which is designed to:
- make sure illegal content is removed
- enforce the promises social media platforms make to users when they sign up, through terms and conditions
- offer users the option to filter out content, such as online abuse, that they do not want to see.
Enforcing the act
Companies can be fined up to £18 million or 10% of their qualifying worldwide revenue (whichever is greater) if they fail to comply with their duties under the act.
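As a purely illustrative sketch of how this cap works (the `maximum_fine` function and the revenue figures below are hypothetical examples, not taken from the act or any Ofcom decision), the maximum penalty is simply the greater of the two amounts:

```python
def maximum_fine(qualifying_worldwide_revenue: float) -> float:
    """Illustrative only: the cap is the greater of GBP 18 million
    or 10% of qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

# Hypothetical examples: a provider with GBP 1 billion of qualifying worldwide
# revenue would face a cap of GBP 100 million, while one with GBP 50 million of
# revenue would still face the GBP 18 million figure.
print(maximum_fine(1_000_000_000))  # 100000000.0
print(maximum_fine(50_000_000))     # 18000000
```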
Implementation of the act so far
Illegal content
On 16 December 2024, Ofcom’s draft illegal content codes of practice were laid before Parliament. A written ministerial statement of the same date explained:
The illegal content duties apply to all regulated user-to-user and search services under the act, no matter their size or reach. These include new duties to have systems and processes in place to tackle illegal content and activity. Ofcom, as the independent regulator for this regime, is required to set out measures in codes of practice that providers can take to fulfil these statutory duties. Ofcom has now submitted to me the drafts of its first codes of practice for the illegal content duties to lay these in Parliament for scrutiny. If neither House objects to the draft codes, Ofcom must issue the codes and the illegal content duties will come into force 21 calendar days later. Once the codes have come into force, the statutory safety duties will begin to apply to service providers, and Ofcom will be able to enforce against non-compliance.
Ofcom has also published its guidance on how providers should carry out risk assessments for illegal content and activity. Providers now have three months to complete their illegal content risk assessment.
The completion of the risk assessments should coincide with the codes of practice coming into force if they pass the statutory laying period. Ofcom’s codes will set out steps service providers can take to address identified risks. The draft codes will drive significant improvements in online safety in several areas. They will ensure service providers put in place effective systems and processes to take down illegal content, including for content that amounts to terrorism, child sexual abuse material (CSAM), public order offences, assisting suicide, intimate image abuse content and other offences. They will make it materially harder for strangers to contact children online, to protect children from grooming. They will significantly expand the number of services that use automated tools to detect CSAM. They will make it significantly easier for the police and the Financial Conduct Authority (FCA) to report fraud and scams to online service providers. And they will make it easier for users to report potentially illegal content.
The draft codes are a vital step in implementing the new regime. Ofcom fully intends to build on these foundations and has announced plans to launch a consultation in spring 2025 on additional measures for the codes. This includes consulting on how automated tools can be used to proactively detect illegal content, including the content most harmful to children, going beyond the automated detection measures that Ofcom have already included. Bringing in the codes will be a key milestone in creating a safer online environment for UK citizens as the duties begin to apply and become enforceable.
Ofcom also published a statement on protecting people from illegal harms online (16 December 2024).
Threshold conditions for categorisation of services
The 2023 act introduces a system for categorising some regulated services based on characteristics such as user numbers and functionality. Providers of categorised services will be required to comply with additional duties, depending on which category they fall within (category 1, 2A or 2B), if their services meet threshold conditions set out in secondary legislation. Draft regulations on the threshold conditions for categorising services as category 1, 2A or 2B were laid before Parliament on 16 December 2024; the conditions are summarised in the lists below and sketched in the illustrative example that follows them. Under the draft regulations, the category 1 threshold conditions would be met by a regulated user-to-user service where it:
- has an average number of monthly active United Kingdom users that exceeds 34 million and uses a content recommender system, or
- has an average number of monthly active United Kingdom users that exceeds 7 million, uses a content recommender system and provides a functionality for users to forward or share regulated user-generated content on the service with other users of that service.
The category 2A threshold conditions would be met by a search engine of a regulated search service or a combined service where it:
- has an average number of monthly active United Kingdom users that exceeds 7 million, and
- is not a vertical search engine (a search engine which only enables a user to search selected websites or databases in relation to a specific topic, theme or genre of search content).
The category 2B threshold conditions would be met by a regulated user-to-user service where it:
- has an average number of monthly active United Kingdom users that exceeds 3 million and provides a functionality for users to send direct messages to other users of the same service.
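As a purely illustrative sketch of how the draft threshold conditions combine user numbers and functionality, the conditions described above could be expressed as follows. The `Service` fields and the `met_threshold_conditions` function are hypothetical names used for illustration; only the user-number figures and functionality criteria are taken from the draft regulations.

```python
from dataclasses import dataclass

@dataclass
class Service:
    """Hypothetical description of a regulated service, for illustration only."""
    monthly_active_uk_users: int        # average number of monthly active UK users
    is_user_to_user: bool               # regulated user-to-user service
    is_search: bool                     # search engine of a regulated search or combined service
    uses_content_recommender: bool
    allows_forwarding_or_sharing: bool  # users can forward or share user-generated content
    allows_direct_messages: bool
    is_vertical_search_engine: bool     # only searches selected sites/databases on a specific topic

def met_threshold_conditions(s: Service) -> list[str]:
    """Return which sets of draft threshold conditions a service would meet.
    This sketch only restates the draft conditions; it does not address how
    the act treats a service that meets more than one set of conditions."""
    met = []
    if s.is_user_to_user:
        # Category 1: >34m users with a recommender system, or >7m users with
        # a recommender system plus a forwarding/sharing functionality.
        if (s.monthly_active_uk_users > 34_000_000 and s.uses_content_recommender) or (
            s.monthly_active_uk_users > 7_000_000
            and s.uses_content_recommender
            and s.allows_forwarding_or_sharing
        ):
            met.append("category 1")
        # Category 2B: >3m users with a direct messaging functionality.
        if s.monthly_active_uk_users > 3_000_000 and s.allows_direct_messages:
            met.append("category 2B")
    # Category 2A: search engine with >7m users that is not a vertical search engine.
    if s.is_search and s.monthly_active_uk_users > 7_000_000 and not s.is_vertical_search_engine:
        met.append("category 2A")
    return met
```

For example, a user-to-user service with eight million average monthly active UK users, a content recommender system and a sharing functionality would meet the category 1 conditions, and would also meet the category 2B conditions if it offers direct messaging.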
In a written ministerial statement of 16 December 2024, Peter Kyle, the Secretary of State for Science, Innovation and Technology, explained why the threshold conditions had not been set to capture “small but risky” services:
…The act required Ofcom to carry out research within six months of Royal Assent, and to then provide the Secretary of State with advice on the threshold conditions for each of the three categories. This research included a call for evidence so that stakeholder feedback could be considered in Ofcom’s advice.
After considering Ofcom’s advice and subsequent clarificatory information in public letters, I have decided to set threshold conditions for categorisation in accordance with Ofcom’s recommendations. I am satisfied that Ofcom’s advice, which was published in March, is the culmination of an objective, evidence-based process. I have taken this decision in line with the factors set out in schedule 11 of the act. I have been very clear to date, and want to reiterate, that my priority is the swift implementation of the act’s duties to create a safer online environment for everyone. I am open to further research in the future and to update thresholds in force if necessary.
I appreciate that there may be some concerns that, at this time, threshold conditions have not been set to capture so-called ‘small but risky’ services by reference to certain functionalities and characteristics or factors. My decision to proceed with the thresholds recommended by Ofcom, rather than to take the approach of discounting user number thresholds, reflects the fact that any threshold condition created by the government should take into account the factors as set out in the act, be evidence-based and avoid the risk of unintended consequences.
I also welcome Ofcom’s statement that it is keenly aware that the smallest online services can represent a significant risk to UK citizens, that it has established a dedicated ‘small but risky’ supervision taskforce and that it will use the tools available under the act to identify, manage and enforce against such services where there is a failure to comply with the duties that all regulated services will be subject to…
Age assurance and children’s access
The act requires in-scope services to assess any risks to children from using their platforms and then set appropriate age restrictions to protect them from harmful content. Websites with age restrictions need to specify in their terms of service what measures they are using to prevent underage access and apply these terms consistently. Different technologies – known as age assurance technologies – can be used to check people’s ages online. The act requires certain services to use “highly effective age assurance” technologies.
On 16 January 2025, Ofcom published industry guidance on how it expects age assurance to be implemented. Ofcom’s position on “highly effective age assurance”:
- confirms that any age-checking methods deployed by services must be technically accurate, robust, reliable and fair in order to be considered highly effective;
- sets out a non-exhaustive list of methods that we consider are capable of being highly effective. They include: open banking, photo ID matching, facial age estimation, mobile network operator age checks, credit card checks, digital identity services and email-based age estimation;
- confirms that methods including self-declaration of age and online payments which don’t require a person to be 18 are not highly effective;
- stipulates that pornographic content must not be visible to users before, or during, the process of completing an age check. Nor should services host or permit content that directs or encourages users to attempt to circumvent an age assurance process; and
- sets expectations that sites and apps consider the interests of all users when implementing age assurance – affording strong protection to children, while taking care that privacy rights are respected and adults can still access legal pornography.
Ofcom also set out what services are required to do, and by which dates:
- Requirement to carry out a children’s access assessment. All user-to-user and search services – defined as ‘Part 3’ services – in scope of the act, must carry out a children’s access assessment to establish if their service – or part of their service – is likely to be accessed by children. From today, these services have three months to complete their children’s access assessments, in line with our guidance, with a final deadline of 16 April. Unless they are already using highly effective age assurance and can evidence this, we anticipate that most of these services will need to conclude that they are likely to be accessed by children within the meaning of the act. Services that fall into this category must comply with the children’s risk assessment duties and the children’s safety duties.
- Measures to protect children on social media and other user-to-user services. We will publish our Protection of Children Codes and children’s risk assessment guidance in April 2025. This means that services that are likely to be accessed by children will need to conduct a children’s risk assessment by July 2025 – that is, within three months. Following this, they will need to implement measures to protect children on their services, in line with our Protection of Children Codes to address the risks of harm identified. These measures may include introducing age checks to determine which of their users are under-18 and protect them from harmful content.
- Services that allow pornography must introduce processes to check the age of users: all services which allow pornography must have highly effective age assurance processes in place by July 2025 at the latest to protect children from encountering it. The act imposes different deadlines on different types of providers. Services that publish their own pornographic content (defined as ‘Part 5’ services), including certain Generative AI tools, must begin taking steps immediately to introduce robust age checks, in line with our published guidance. Services that allow user-generated pornographic content – which fall under ‘Part 3’ services – must have fully implemented age checks by July.
Criticism of the act and its implementation
Online Safety Act Network
The Online Safety Act Network is an organisation that “aims to keep all those with an interest in the successful implementation of the Online Safety Act engaged and connected”. It has criticised the implementation of the act in a number of areas (see the analysis section of the Network’s website).
In a commentary published on 7 February 2025, the Network argued that one of the “most prominent” gaps in Ofcom’s implementation was its failure to “come up with requirements that deliver on the act’s ‘safe by design’ objective”:
…Linked to this is the omission of specific measures to mitigate the multiple, evidenced risks to children from livestreaming functionality in the first set of illegal harms codes – risks that Ofcom itself detailed comprehensively in its own risk register back in November 2023. We are promised proposals on this – and a number of other measures – in a further consultation due in April; it’s not clear whether some of the other omissions which we flagged from the first set of codes, including measures to reduce the risk to children of location information, large group messaging or ephemeral messaging, will be included in the new proposals. Either way, those measures won’t appear in a new version of the illegal harms code nor be enforceable until well into 2026.
The Network also claimed that the above proposals would not “address the concerns that have fuelled the clamour for more extensive legislative action to address the impact of social media and smartphone use on children and young people”:
…The rise of the Smartphone Free Childhood campaign and grassroots initiatives by parents to delay or ban children’s access to phones and social media is in part a reaction to the very slow pace of change in online protections for children – despite the repeated government assurances. It also reacts to an increasing awareness of the addictive, attention-grabbing functionality of the platforms and apps that children access via their smartphones, something which the previous Government refused to address in the legislation…Nor do Ofcom’s measures – or the Act itself, given its overly dominant focus on content – really get at the underlying effect of the social media business model and how it manifests in particular risks to children, for example the financial incentives for influencers propagating harmful content or views.
The Network has criticised the secretary of state’s decision on threshold conditions for categorising services, claiming that this “will allow small, risky platforms to continue to operate without the most stringent regulatory restrictions available and leave significant numbers of vulnerable users, women and minoritised groups at risk of serious harm from the targeted activities on these platforms”.
An August 2024 article by the Network discussed the act’s failings in relation to misinformation and disinformation following the riots in the UK in summer 2024.
Internet Watch Foundation
The Internet Watch Foundation (IWF) works to remove online child sexual abuse images and videos.
In January 2025, Catherine Brown, IWF Chair, wrote to the Prime Minister urging him to strengthen online safety regulation. The letter was prompted by “a tidal wave of online abuse”. In 2024, the IWF acted to remove 291,270 webpages containing images or videos of children suffering sexual abuse, or links to that content. This was the highest number of child sexual abuse webpages the IWF had discovered and represented an increase of more than 5% on the 275,650 webpages identified in 2023. Catherine Brown said the 2023 act had the “potential to be transformational in protecting children from online exploitation”. Referring to Ofcom’s December 2024 codes of practice and guidance on tackling illegal harms, she said that, for the first time, platforms would be legally required to scan for known child sexual abuse imagery. However, the IWF was “deeply concerned” that the codes allow services to remove illegal content only where it is “technically feasible”, which it said “will incentivise platforms to avoid finding ways to remove illegal content in order to evade compliance”:
…This undermines the Act’s effectiveness in combatting online child sexual abuse. We urge you to instruct Ofcom to urgently review and mitigate this blatant get-out clause.
The publication of the Codes also highlights the weaknesses within the legislation itself. For example, the Act does not mandate companies to moderate content uploaded in private communications. As a result, illegal content that is blocked elsewhere on the internet can still be freely shared in private online spaces.
Furthermore, the rules-based nature of the regulations means that platforms will be in compliance with their duties if they follow the measures in the Codes, rather than needing to effectively and proactively address the harms identified in their risk assessments.
We call on your Government to remove the safe harbour inadvertently offered to platforms by the Act, especially those that facilitate the sharing of child sexual abuse material. Additional legislation should be introduced to ensure there are no safe havens for criminals in private communications.
Molly Rose Foundation
In January 2025, Ian Russell, Chair of the Molly Rose Foundation, wrote to the Prime Minister about protecting children and young people online. Mr Russell’s daughter, Molly, took her own life in 2017 after viewing images promoting suicide and self-harm. In his letter, Ian Russell said that Ofcom’s implementation of the 2023 act had been a “disaster”:
…Ofcom’s choices when implementing the Act have starkly highlighted intrinsic structural weaknesses with the legislative framework. We not only have a regulator which has fundamentally failed to grasp the urgency and scale of its mission, but a regulatory model that inherently and significantly constrains its ability to reduce preventable harms.
While I understand that this is legislation you inherited and did not design, the reality is that unless you now commit to act decisively to fix the Online Safety Act, the streams of life-sucking content seen by children will soon become torrents: a digital disaster driven by the actions of tech firms, and being left unchallenged by a failing regulatory model. This preventable harm would be happening on your watch.
At the same time as the UK’s overdue regulation is falling badly short, the very industry being regulated is ominously changing. The Online Safety Act was developed around a premise that large tech firms would increasingly accept their responsibilities, and we would see safer social networks develop over time.
However, this is a time when platforms readily say they will now ‘catch less of the bad stuff’; allow underdeveloped AI to generate and spread disinformation; and promote targeted and polarising content leading to isolation and despair. We have now entered a different era, and we will need a different regulatory approach that can address it.
(…)
It’s on that basis that I now encourage you to act swiftly and decisively to address this coming flood of preventable harm. In the immediate term, that means strengthening the existing framework. Beyond that, it means committing to a substantially strengthened regulatory model – with primary legislation being introduced as soon as possible to enact its provisions.
Further reading
Parliamentary material