Artificial intelligence has the potential to add trillions of dollars a year to the global economy. Its use could increase labour productivity, bringing social and economic benefits. But the risks need careful analysis, as experts warn that bias and misinformation may proliferate.

Artificial intelligence (AI) is developing rapidly. Although there is no universally agreed definition, the 2023 UK Government policy paper A pro-innovation approach to AI regulation defined AI technologies or systems as products and services that are ‘adaptable’ and ‘autonomous’.

Adaptability refers to AI systems developing new ways of finding patterns in data that their human programmers did not anticipate. Autonomy refers to AI systems that can make decisions without the express intent or ongoing control of human beings.

A 2023 report by the McKinsey consultancy estimated that AI technologies have the potential to add between $2.6 trillion and $4.4 trillion annually to the global economy. For comparison, in 2021 the UK’s gross domestic product (GDP) was $3.1 trillion.

Impacts on the economy

In 2021, a UK Government-commissioned report by consultants PwC estimated that 7% of existing UK jobs could face a high probability (over 70%) of automation in the next five years. This would rise to 18% in 10 years, and to around 30% after 20 years. The Office for National Statistics said in 2019 that women and young people are most likely to work in jobs at higher risk of automation, such as clerical work.

However, the PwC research argued that the overall impact of AI on employment levels would be broadly neutral, because of AI-related productivity growth.

The Competition and Markets Authority, the UK’s market regulator, published an AI Foundation Model update in 2024. It found that large technology companies typically have greater access to the necessary tools (such as data) for developing AI models. This might reduce fair, open and effective competition.

Use of AI in public services

One of the proposed benefits of AI is its potential to reduce administrative work.

In education, AI could lessen administrative work, help with marking, and identify learners’ needs. However, differing access to AI technologies may exacerbate existing inequalities.

In healthcare, AI could lead to better outcomes by helping clinicians make decisions, diagnose disease and develop new drugs. However, expanded use of digital and AI technologies may present barriers to digitally excluded people, such as some older people, according to research produced by the Ada Lovelace Institute in 2023.

In 2022, a House of Lords report on AI and the justice system noted that AI could be used to help predict crimes, assist visa-issuing authorities, and support facial recognition. However, some organisations, such as the Alan Turing Institute, are concerned that AI trained on historical crime data may produce discriminatory results.

Algorithmic bias and discrimination

There is a risk that AI technologies may make biased decisions. For example, the data used to train the AI may contain biases, or decisions made by humans when designing the system may introduce bias.

Use of AI systems with algorithmic bias could lead to discriminatory outcomes and exacerbate inequalities. For example, according to academic research published in 2019, an algorithm used to allocate healthcare in US hospitals was less likely to refer Black people to healthcare programmes than White people who were equally sick.

Misinformation, disinformation, and scams

‘Generative AI’ uses prompts to produce original content, including realistic images and video (‘deepfakes’) that can be used for disinformation and misinformation. Experts have expressed concerns that malicious actors may use generative AI to produce disinformation at scale more easily, which could erode public trust, including in elections.

Image ownership and copyright

AI can be used to recreate voices and images imitating living or deceased individuals. This technology can have creative uses, such as ‘de-aging’ actors in films.

However, creative sector trade unions have raised concerns about fair remuneration when companies recreate the likeness of living or dead performers. In the US and some EU states there is a legal right to own one’s image; no such right exists in the UK, although some protection may be provided by other legislation (for instance, contract law). In 2023 the UK Government said it would ratify the Beijing Treaty on Audiovisual Performances, which gives performers intellectual property rights.

Generative AI tools are trained using datasets. Outputs can mimic the style of specific human creators if their works are present in the datasets. In ongoing court cases, some rightsholders such as Getty Images have sued model developers in the UK and US alleging copyright infringement.

In February 2024, the UK Government announced that a working group consisting of the Intellectual Property Office, rightsholders and AI developers had been unable to agree on a voluntary code of practice for AI and copyright.

Implications for privacy and security

AI training data can be personal or non-personal. In the UK, the Data Protection Act 2018 and UK General Data Protection Regulation (GDPR) regulate the collection and use of personal data. An article published by Wired in 2023 suggested that most AI training data has come from publicly available information, such as Wikipedia.

Privacy and common law may limit how employers can use AI in decision-making and place restrictions on the use of monitoring tools. Some stakeholders have said that live facial recognition, predictive policing and profiling could restrict civil liberties and affect privacy. For instance, 32 human rights organisations wrote to the government in 2021 calling for a ban on facial recognition technology.

UK and international regulatory environment

No UK law specifically regulates AI. However, numerous laws restrict how AI can be used in practice, such as data protection law, equalities law, privacy and common law, and intellectual property law. In February 2024 the UK Government confirmed a context-based approach to regulating AI technologies. This sets out five principles (such as fairness) for existing regulators to interpret and apply to the use of AI in their particular sectors.

The EU has an AI Act designed to work with existing EU legislation, such as the GDPR. It bans applications deemed to pose an ‘unacceptable risk’, such as some uses of live facial recognition. In the US, a Blueprint for an AI Bill of Rights outlines non-binding guidelines addressing discrimination, data privacy, and transparency. In October 2023, US President Joe Biden signed an Executive Order mandating standards for AI companies and measures to protect workers and disadvantaged groups.

Author: Simon Brawley, POST

The Parliamentary Office of Science and Technology (POST) is an impartial research and knowledge exchange service. It works to ensure that the best available research evidence is brought to bear on the legislative process. Find POST’s series of AI briefings at post.parliament.uk.

Photo Credit: (© By Leonid – stock.adobe.com).