What’s the problem?

There is increasing concern about harmful content and activity online. This includes cyberbullying, the intimidation of public figures, material promoting violence and self-harm, and age-inappropriate content. During the Covid-19 pandemic, groups have used social media platforms to spread anti-vaccine misinformation.

Critics, including parliamentary committees, academics, and children’s charities, have argued that self-regulation by internet companies is not enough to keep users safe and that statutory regulation should be introduced.

The Online Harms White Paper (April 2019) – a new regulatory framework?

The Online Harms White Paper argued that existing regulatory and voluntary initiatives had “not gone far or fast enough” to keep users safe. The Paper proposed a single regulatory framework to tackle a range of harms. At its core would be a duty of care for internet companies, including social media platforms. An independent regulator would oversee and enforce compliance with the duty. A consultation on the proposals closed in July 2019.

The White Paper received a mixed reaction. Children’s charities were positive. However, some commentators raised concerns that harms were insufficiently defined. The Open Rights Group and Index on Censorship warned that the proposals could threaten freedom of expression.

Government response to the White Paper consultation (December 2020)

An initial response to the consultation was published in February 2020. This stated, among other things, that the Government was minded to make Ofcom the regulator for online harms.

A full response was published in December 2020. This confirmed that a duty of care would be introduced through an Online Safety Bill and that Ofcom would be the regulator. The framework would apply to companies whose services:

  • host user-generated content which can be accessed by users in the UK; and/or
  • facilitate public or private online interaction between service users, one or more of whom is in the UK.

It would also apply to search engines.

The legislation would define harmful content and activity as that which “gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”.

Priority categories of harmful content would be set out in secondary legislation:

  • criminal offences (e.g. child sexual exploitation and abuse, terrorism, hate crime and the sale of illegal drugs and weapons);
  • harmful content and activity affecting children (e.g. pornography); and
  • harmful content and activity that is legal when accessed by adults, but which may be harmful to them (e.g. content about eating disorders, self-harm or suicide).

The duty of care would require companies to take proportionate steps to address relevant illegal content and activity and to protect children. The largest tech companies would additionally be required to act in respect of content or activity on their services which is legal but harmful to adults.

The Government has published a factsheet summarising its plans.

Reaction to the Government’s proposals has again been mixed. Some commentators continue to argue that the framework would threaten freedom of expression and privacy. Others have raised concerns about the definition of harm.
