There will be a Westminster Hall debate on preventing misinformation and disinformation in online filter bubbles on 16 January 2024 at 2.30pm. The debate will be led by John Penrose MP (Conservative).
The term “filter bubble” was coined by Eli Pariser in his 2011 book The Filter Bubble: What the Internet is Hiding from You. It refers to the ways in which information provided through digital platforms can be personalised based on an individual’s “web history”. This creates what Pariser describes as a “filter bubble”: the “unique, personal universe of information created just for you by this array of personalizing filters”. Stakeholders have raised concerns that filter bubbles might, to quote the Reuters Institute, erode:
…the possibility of a relatively shared common ground – as we might be shown more and more of things we like, while things we are not prone to like are hidden from us.
This could then, some analysts have suggested, “fuel polarisation, diminish mutual understanding, and ultimately lead to a situation where people are so far apart that they have no common ground”.
Several studies have disputed the connection between algorithmic selection by digital platforms (the personalised display of particular information by a search engine or user-to-user service) and less diverse news use. The Reuters Institute’s 2022 literature review of studies on echo chambers, filter bubbles and polarisation argued that empirical studies suggest “secondary gatekeepers such as search engines and social media” are in most cases “associated with more diverse news use”. However, it also found that “self-selection” (the personalisation that an individual voluntarily does themselves), primarily among a “small minority of highly partisan individuals, can lead people to opt in to echo chambers”.
Similarly, Richard Fletcher, a Senior Fellow at the Reuters Institute, gave the following summary of the evidence on filter bubbles in a talk in 2020:
Most of the best available independent empirical evidence seems to suggest that online news use on search and social media is more diverse. But there’s a possibility that this diversity is causing some kind of polarisation, in both attitudes and usage.
The UK Government defines disinformation as the:
…deliberate creation and spreading of false and/or manipulated information that is intended to deceive and mislead people, either for the purposes of causing harm, or for political, personal or financial gain.
Misinformation is defined as the inadvertent spread of false information.
According to Ofcom’s 2023 report on news consumption in the UK (PDF), the most used platform for news amongst adults is broadcast television, used by 70% of UK adults (rising to 75% when on-demand content is included). Online sources are the second most used, at 68% of UK adults, and 47% of UK adults use social media for news. Amongst 16-24 year olds, by contrast, 83% said that they consume news online, while 47% use broadcast TV for news.
Stakeholders have raised concerns that, due to the wider range of sources available online, it may be more difficult for users to distinguish between true and false information online. For example, in 2021 Ipsos MORI and Google undertook research into global users’ experiences of media literacy, finding that 55% of users were interested in learning more about how to use tools to distinguish between true and false information online.
In 2021, Ofcom published a discussion paper which aimed to understand the prevalence of online false information in the UK (PDF). It used NewsGuard’s classification of websites that repeatedly publish false information, and web analytics to measure the scale of “user engagement with false information websites” in the UK:
For the period between September 2018 and August 2020, we have assessed the total monthly traffic from UK audiences on mobile and desktop devices to the 2,093 trustworthy information websites and the 177 false information websites in our sample. Trustworthy information websites attract around two billion visits every month, while false information websites attract around 14 million visits every month (approximately a 140:1 ratio).
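As a quick sanity check, using only the two figures in the quote above (no additional data), the ratio Ofcom cites follows from simple division:

$$\frac{2 \times 10^{9} \text{ visits per month}}{14 \times 10^{6} \text{ visits per month}} \approx 143 \approx 140:1$$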
Content on user-to-user services (for instance, X or Facebook) is governed primarily by the individual platform’s terms of service, such as the misinformation policy within Facebook’s Community Standards. That policy suggests that it is difficult to publish a “comprehensive list of what is prohibited”, as the “world is changing constantly, and what is true one minute may not be true the next minute”. Both X (formerly Twitter) and Meta partner with external independent experts to assess the truth of a given piece of content.
The Online Safety Act 2023 received Royal Assent on 26 October 2023. It applies to England, Wales, Scotland and Northern Ireland. The 2023 Act aims to increase user safety and to improve users’ ability to keep themselves safe online. All regulated services must protect users from illegal content that reaches the criminal threshold.
In the context of misinformation, section 71 of the 2023 Act requires category 1 services (the largest platforms, to be designated through subsequent secondary legislation) to adhere to their own terms of service, for example by removing misinformation or disinformation content that meets the thresholds set out in their own policies.
Ofcom has enforcement powers under the Act, including issuing fines of up to £18 million or 10% of a company’s qualifying worldwide revenue (whichever is greater), as well as business disruption measures. The 2023 Act also empowers Ofcom to require the largest service providers to publish annual transparency reports; Ofcom can specify the information to be included in these.
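To illustrate how the “whichever is greater” rule works in practice (a sketch only; the £500 million revenue figure below is hypothetical, not drawn from the Act or any real case):

$$\text{maximum fine} = \max\left(£18\text{m},\; 0.1 \times \text{qualifying worldwide revenue}\right)$$

$$\max\left(£18\text{m},\; 0.1 \times £500\text{m}\right) = £50\text{m}$$

For any company whose qualifying worldwide revenue exceeds £180 million, the 10% figure is the binding cap.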
Section 179 of the 2023 Act sets out a new false communications offence. A person commits an offence if they send a message conveying information they know to be false, intending it to “cause non-trivial psychological or physical harm to a likely audience” (section 179(1)(c)), and they have no reasonable excuse for sending the message.
Under section 180 of the 2023 Act, recognised news publishers and broadcasters (including on demand services) are exempt from this provision.
Under section 152 of the 2023 Act, Ofcom is required to establish and maintain an advisory committee on disinformation and misinformation. The function of the committee is to provide advice to Ofcom about:
(a) how providers of regulated services should deal with disinformation and misinformation on such services,
(b) OFCOM’s exercise of the power conferred by section 77 to require information about a matter listed in Part 1 or 2 of Schedule 8, so far as relating to disinformation and misinformation, and
(c) OFCOM’s exercise of their functions under section 11 of the Communications Act (duties to promote media literacy) in relation to countering disinformation and misinformation on regulated services.
On 24 October 2023, Lord Clement-Jones (Liberal Democrat) asked the Government whether it would commit to “setting up the advisory committee on disinformation and misinformation as soon as possible”. In response, Viscount Camrose, Parliamentary Under Secretary of State (Department for Science, Innovation and Technology), said that it was the “role of Ofcom to create it. I will undertake to liaise with it to make sure that that is speeded up.”
The 2023 Act also updates Ofcom’s media literacy duties, as set out in section 11 of the Communications Act 2003, to include objectives related to regulated user-to-user services and search services. One component of this, under section 165(3) of the 2023 Act, is for Ofcom to take steps to heighten the public’s awareness and understanding of ways in which “they can protect themselves and others when using regulated services”, for instance by understanding “the nature and impact of disinformation and misinformation”.
The Government published an Online Media Literacy Strategy (PDF) and action plan in 2021. One of the strategy’s commitments was to build “audience resilience to misinformation and disinformation: using media literacy as a tool to reduce the harm of misinformation and disinformation”.
The Counter-Disinformation Unit was set up by the Government in 2019. It uses publicly available data to “develop an understanding of disinformation narratives and trends”, focusing on content targeted at the UK audience which may pose a risk to public health, public safety, or national security.
Broadly, there has been much debate amongst stakeholders about both the adequacy and the necessity of the Government’s policies on misinformation and disinformation.
For example, Full Fact, a charity that provides fact checking tools, said that the Online Safety Act does not contain a “credible plan to tackle the harms from online misinformation”, as it does not provide “regulatory oversight” of what is included in internet companies’ “terms of service and how they will address it [misinformation]”.
Others, however, have argued that the regulation of misinformation might infringe upon an individual’s right to freedom of expression. For instance, the Open Rights Group, an organisation that campaigns on digital rights issues in the UK, said that the 2023 Act’s emphasis on platforms’ terms of service could mean that “tech companies” in effect “decide what is and isn’t legal”.
Policy implications of artificial intelligence (AI), 9 January 2024
Department for Science, Innovation and Technology, A guide to the Online Safety Bill, 30 August 2023
Cabinet Office and Department for Science, Innovation and Technology, Fact Sheet on the CDU and RRU, 9 June 2023
Department for Science, Innovation and Technology, Online Media Literacy Strategy, August 2021
Internet Matters, What is fake news and misinformation? (accessed 15 January 2024)
World Health Organization, Combatting misinformation online (accessed 15 January 2024)
UN, Special Rapporteur on freedom of opinion and expression (accessed 15 January 2024)
Ofcom, News consumption in the UK: 2023 (PDF), 20 July 2023
The Reuters Institute, Digital News Report 2023, June 2023
Nathalie Van Raemdonck, Instagram Stories and the spread of harmful content; where’s the friction?, London School of Economics, 19 December 2022
Eva Carrillo Roas, The fine line between fake news and freedom of speech, King’s College London, 18 May 2022
The Royal Society, The online information environment, 19 January 2022
The Reuters Institute, Echo chambers, filter bubbles, and polarisation: a literature review, 19 January 2022
TED Talks, Eli Pariser: Beware online “filter bubbles”, March 2011