What is artificial intelligence?

Artificial intelligence (AI) can take many different forms and there is no single, universally agreed definition. The term is frequently used as shorthand for technologies that perform the types of cognitive functions typically associated with humans, such as reasoning, learning and solving problems.

To perform these types of functions, AI systems generally rely on vast amounts of data. This data may be ‘structured’ or ‘unstructured’. Structured data is stored in a fixed format, such as financial transactions with a date, time and amount, and can be analysed and processed relatively easily. Unstructured data, such as images, videos and text files, is not organised according to a predefined structure; it is generally unformatted and much harder to analyse.
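
The difference can be sketched in code. The short Python example below, which uses invented values, shows how a structured transaction record can be queried directly through its fixed fields, whereas pulling the same information out of free text requires interpretation:

```python
import re

# Illustrative sketch only: the transaction records and text are invented.

# Structured: each transaction follows a fixed schema, so fields can be
# read and aggregated directly.
transactions = [
    {"date": "2024-03-01", "time": "09:15", "amount": 42.50},
    {"date": "2024-03-01", "time": "11:02", "amount": 17.99},
]
total = sum(t["amount"] for t in transactions)
print(f"Total spend: {total:.2f}")

# Unstructured: free text has no predefined fields, so extracting the same
# information requires interpretation (here, a crude pattern match).
note = "Paid 42.50 for groceries on the morning of 1 March."
match = re.search(r"\d+\.\d{2}", note)
print("Amount found in text:", match.group() if match else "unknown")
```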

Both types of data can be used to ‘train’ AI so that it can recognise patterns and correlations. The system learns rules (algorithms) from the training dataset and then applies them to interpret new data and perform a specific task. In some instances, the AI is supervised and trained with datasets labelled by humans, as explained in this example from IBM:

A data scientist training an image recognition model to recognize dogs and cats must label sample images as “dog” or “cat”, as well as key features—like size, shape or fur—that inform those primary labels. The model can then, during training, use these labels to infer the visual characteristics typical of “dog” and “cat.”

This is useful for AI systems designed to look for specific things, such as spam emails.
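
As a rough illustration of supervised learning, the Python sketch below (which assumes the scikit-learn library is installed and uses invented messages) trains a simple spam filter on human-labelled examples and then applies the learned patterns to a new message:

```python
# Illustrative sketch of supervised learning; messages and labels invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now",
    "Claim your reward today",
    "Meeting moved to 3pm",
    "Minutes from today's call",
]
labels = ["spam", "spam", "not spam", "not spam"]  # supplied by a human

vectoriser = CountVectorizer()           # represent each message as word counts
features = vectoriser.fit_transform(messages)

model = MultinomialNB()                  # learn which words signal each label
model.fit(features, labels)

# Apply the learned rules to a new, unseen message.
print(model.predict(vectoriser.transform(["Claim your free prize"])))
```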

In other instances, the system is unsupervised and the data is left unlabelled. Under these conditions, the system autonomously identifies patterns in the data. This is useful where the AI is designed to find something that is not known in advance, such as online shopping recommendations.
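
The Python sketch below (again assuming scikit-learn, with invented figures) illustrates this: a clustering algorithm groups shoppers by their spending patterns without being given any labels, and a new shopper can then be matched to the most similar group:

```python
# Illustrative sketch of unsupervised learning; spending figures invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one shopper: [spend on electronics, spend on groceries].
shoppers = np.array([
    [200.0, 10.0],
    [180.0, 25.0],
    [5.0, 150.0],
    [12.0, 170.0],
])

# No labels are supplied: the algorithm groups similar shoppers on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shoppers)
print(model.labels_)  # e.g. [1 1 0 0]: two groups found in the data

# A new shopper is assigned to the nearest group; recommendations could then
# be drawn from what similar shoppers in that group bought.
print(model.predict(np.array([[190.0, 15.0]])))
```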

UK Government policy and regulation

The UK does not have any AI-specific regulation or legislation. Instead, AI is regulated in the context in which it is used, through existing legal frameworks such as financial services legislation.

Some regulators, however, have oversight of the development, implementation, and use of AI more broadly. For example, the Information Commissioner’s Office (the UK’s independent body established to uphold information rights) has published guidance on AI and data protection, and on explaining decisions made with AI.

The Johnson and Sunak governments started developing a more comprehensive regulatory framework for AI. This included publishing strategy documents and a white paper on AI.

As part of its aim to “ensure the UK gets the national and international governance of AI technologies right”, the government ran a public consultation on regulating AI in 2022. A white paper – A pro-innovation approach to AI regulation – followed in March 2023.

In the paper, the government proposed that AI would continue to be overseen by existing regulators covering specific sectors, such as Ofcom (the UK’s communications regulator), Ofgem (the energy regulator in Great Britain), and the Financial Conduct Authority (the UK’s conduct regulator for financial services). The government favoured this context-based approach over creating a single regulatory function with uniform rules to govern AI.

The government proposed that AI regulation would be informed by five cross-sector principles, which regulators would “interpret and apply within their remits in order to drive safe, responsible AI innovation”. The principles are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

While the Conservative government decided against creating a single regulatory function to govern AI, it proposed that existing regulators would be aided by government-established “central support functions”, such as horizon scanning for emerging risks and trends and monitoring the overall regulatory framework. The government also proposed that the five principles would not, at least initially, be put on a statutory footing; instead, they would be implemented by existing regulators.

A further consultation accompanied the publication of the white paper. The government’s response was published in February 2024 and confirmed that it remained committed to its cross-sector principles and “context-specific” approach to regulation. It said that it would seek to build on this in the future, only legislating when it was “confident that it is the right thing to do”.

Both the Labour Party’s 2024 election manifesto and the King’s Speech indicate that the Labour government intends to take a different approach to regulating AI from that of its predecessor. In its 2024 manifesto (PDF), the Labour Party said it would “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”. Similarly, in the King’s Speech, the Labour government said that it would “harness the power of artificial intelligence as we look to strengthen safety frameworks” and signalled that it would “place requirements on those working to develop the most powerful artificial intelligence models”.

Use of AI in different sectors

AI is currently being used across many different industries, from finance to healthcare. In 2022, the UK Government reported that “around 15% of all businesses (432,000 companies) had adopted at least one AI technology”. AI is also used in the public sector. The NHS AI Lab, for example, is focused on developing and deploying AI systems in health and care. In 2023, the government provided funding of £21 million to roll out AI imaging and decision support tools to help analyse chest X-rays, support stroke diagnosis and manage conditions at home.

The Food Standards Agency uses an AI tool to help local authorities prioritise inspections by predicting which establishments “might be at a higher risk of non-compliance with food hygiene regulations”. The government also announced in January 2023 that it would be using AI to help “find and prevent more fraud across the public sector”.

Safety and ethics

The UK Government interprets AI safety to mean the prevention and mitigation of harms (whether accidental or deliberate) from AI. Harms may arise from the ethical challenges that complex AI can present. These include the ‘black box problem’, where the inputs to and outputs from an AI system are known but humans cannot decipher – and the AI cannot explain – the process it went through to reach a particular conclusion, decision or output. The AI’s decision-making process, in other words, is neither transparent nor accountable to humans.
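
The Python sketch below (assuming scikit-learn, and using a deliberately simple, invented task) illustrates the problem: a small neural network’s prediction is visible, but its internal weights offer no human-readable explanation for it:

```python
# Illustrative sketch of the 'black box problem'; task chosen for demonstration.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # a simple non-linear task (XOR)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
model.fit(X, y)

# The input and the output are both visible...
print(model.predict(np.array([[1, 0]])))

# ...but the model's internal parameters offer no human-readable account of
# why it produced that output.
print(model.coefs_[0])
```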

Such decisions may, in addition, be susceptible to biases, such as ‘embedded bias’. Embedded biases arise from relying on training data that reflects social and historical inequalities; these inequalities are then perpetuated in the outputs of the system. As IBM explains, “using flawed training data can result in algorithms that repeatedly produce errors, unfair outcomes, or even amplify the bias inherent in the flawed data”. In addition, there are concerns about the privacy and security implications of AI, including the use of sensitive data to train AI systems, as well as the ability of those systems to infer personal information.
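
To illustrate how embedded bias can arise, the Python sketch below (assuming scikit-learn) trains a model on invented, deliberately skewed hiring records and shows it reproducing the historical inequality:

```python
# Illustrative sketch of embedded bias; the records are invented and skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past candidate: [years of experience, group membership].
# In this invented history, group 1 candidates were rejected regardless of
# experience.
X = np.array([[5, 0], [7, 0], [2, 0], [5, 1], [7, 1], [2, 1]])
y = np.array([1, 1, 0, 0, 0, 0])  # past hiring decisions

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates who differ only in group membership:
# the model is likely to reproduce the historical pattern and reject the
# group 1 candidate.
print(model.predict(np.array([[6, 0], [6, 1]])))
```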

The UK Government and others have also considered “loss of control risks” in which human oversight and control over a highly advanced, autonomous AI system is lost, leaving it free to take harmful actions. There is an ongoing, contentious debate about such ‘existential risks’ and their likelihood. The Ada Lovelace Institute has emphasised that we should not lose sight of the harms that can arise from existing (rather than futuristic) AI systems.

