There is increasing concern about harmful content and activity on social media. This includes cyberbullying and abuse, the intimidation of public figures, disinformation, age-inappropriate content, and material promoting violence and self-harm.

In a December 2017 report, the Committee on Standards in Public Life said that social media was the “most significant factor accelerating and enabling intimidatory behaviour in recent years.” The Digital, Culture, Media and Sport Committee’s inquiry into disinformation and ‘fake news’ found that social media could sometimes have “devastating” consequences.

A January 2019 report by the Commons Science and Technology Committee noted the negative impacts of cyberbullying, grooming and ‘sexting’ on young people’s health.

The National Society for the Prevention of Cruelty to Children (NSPCC) has been campaigning for an end to what it calls the “Wild West Web.”

What’s the current position?

Criminal law applies to online activity in the same way as to offline activity. As the May Government noted in its 2017 Internet Safety Strategy Green Paper, legislation passed before the digital age “has shown itself to be flexible and capable of catching and punishing offenders whether their crimes are committed by digital means or otherwise.” The Crown Prosecution Service has published guidance on offences on social media.

Various regulators play a role for certain types of online activity, for example Ofcom, the Competition and Markets Authority, the Advertising Standards Authority, the Information Commissioner’s Office, and the Financial Conduct Authority. As the Lords Communications and Digital Committee noted in its January 2019 report, the internet is not quite an unregulated ‘Wild West’.

However, there is no overall regulator, nor a regulator specifically for online content. Under the e-Commerce Directive, social media companies are exempt from liability for illegal content they host if they “play a neutral, merely technical and passive role” towards it. Once they become aware of illegal material, companies must remove or disable access to it.

For content that is harmful or inappropriate, but not illegal, social media platforms self-regulate through “community standards” and “terms of use.”

A duty of care for social media companies?

The e-Commerce Directive does not prevent EU Member States from requiring online service providers to apply duties of care.

Lorna Woods, Professor of Internet Law at the University of Essex, and William Perrin, a Trustee of the Carnegie UK Trust, have proposed a regulatory regime, centred on a statutory duty of care, to reduce online harm. They published a series of blog posts in 2018, with a “refined” proposal set out in January 2019. According to Woods and Perrin, social media service providers should be responsible for the public space they have created, much as property owners or operators are in the physical world.

A February 2019 NSPCC report drew heavily on the work of Woods and Perrin and argued for a regulator to enforce a legal duty of care to protect children on social media.

The Commons Science and Technology, Lords Communications and Digital, and Commons Digital, Culture, Media and Sport Committees have all called for a duty of care to be imposed on social media companies.

The Online Harms White Paper: A new regulatory framework?

The May Government published an Online Harms White Paper in April 2019. This set out the then Government’s approach for tackling “content or activity that harms individual users, particularly children, or threatens our way of life in the UK.” According to the Government, existing regulatory and voluntary initiatives had “not gone far or fast enough, or been consistent enough between different companies” to keep users safe.

The White Paper proposed a single regulatory framework to tackle a range of harms. At the core would be a statutory duty of care for internet companies, including social media platforms. An independent regulator would oversee and enforce compliance with the duty.

A consultation on the proposals closed on 1 July 2019. The background briefing to the October 2019 Queen’s Speech said that the responses were still being analysed, but draft legislation on online harms would be forthcoming.

Reaction to the White Paper

The White Paper received a mixed response. In a June 2019 Carnegie UK Trust blog, Lorna Woods, William Perrin and Maeve Walsh said it was a “significant step in attempts to improve the online environment.” However, among other things, they raised concerns about the scope of harms covered by the paper and about its failure to name the body (for example, Ofcom) that would act as regulator.

The NSPCC said that the White Paper was a “hugely significant commitment” that could make the UK a “world pioneer in protecting children online.” The Children’s Charities’ Coalition on Internet Safety and the Children’s Commissioner also welcomed the proposals.

Others have not been so positive. In an October 2018 blog post, Graham Smith, an internet lawyer, challenged the idea that social media platforms should be treated as responsible for a public space. In a later post (April 2019), Smith argued that the White Paper’s “impermissibly vague” concept of harm could cause problems for legislation.

Smith, and others such as Index on Censorship, have also claimed that a duty of care could damage freedom of expression.

Insights for the new Parliament

This article is part of our series of Insights for the new Parliament. This series covers a range of topics that will take centre stage in UK and international politics in the new Parliament.