Artificial intelligence (AI) is developing at a rapid pace. From generative language models like ChatGPT to advances in medical screening technology, policymakers and the developers of the technology alike believe that it could deliver fundamental change across almost every area of our lives. But such change is not without risk. Debate is ongoing about how best to regulate these innovative technologies, and differences of approach have already emerged as countries across the world examine how to adapt.
Table of contents
- 1. What is artificial intelligence?
- 2. Ongoing development of AI: Potential benefits and risks
- 2.1 Current contribution of AI to the UK economy
- 2.2 Potential benefits and risks of AI
- 2.3 Potential impact on the UK employment market
- 2.4 Case study: Potential impact on the knowledge and creative industries (House of Lords Communications and Digital Committee report, January 2023)
- 3. Calls for rapid regulatory adaptation
- 4. Proposed regulatory approaches: UK
- 5. Other regulatory approaches: Examples from around the world
On 24 July 2023, the House of Lords is due to debate the following motion:
Lord Ravensdale (Crossbench) to move that this House takes note of the ongoing development of advanced artificial intelligence, associated risks and potential approaches to regulation within the UK and internationally.
1. What is artificial intelligence?
Artificial intelligence (AI) can take many forms. As such, there is no agreed single definition of what it encompasses. In broad terms, it can be regarded as the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. According to IBM, the current real-world applications of AI include:
- extracting information from pictures (computer vision)
- transcribing or understanding spoken words (speech to text and natural language processing)
- pulling insights and patterns out of written text (natural language understanding)
- speaking what has been written (text to speech, natural language processing)
- autonomously moving through spaces based on its senses (robotics)
- generally looking for patterns in large amounts of data (machine learning)
In banking, for example, AI is currently used to detect and flag suspicious activity, such as unusual debit card usage and large account deposits, to a bank’s fraud department. The NHS also reports that AI is benefiting people in health and care: by analysing X-ray images to support radiologists in making assessments and helping clinicians read brain scans more quickly; by supporting people in ‘virtual wards’ who would otherwise need to be in hospital to receive care and treatment; and through remote monitoring technology, such as apps and medical devices, which can assess patients’ health while they are being cared for at home.
To achieve this, AI systems rely upon large datasets from which they can decipher patterns and correlations, thereby enabling the system to ‘learn’ how to anticipate future events. It does this by relying upon and/or creating algorithms based on the dataset, which it can then use to interpret new data. This data can be structured, such as bank transactions, or unstructured, such as the sensor data a driverless car uses to respond to the environment around it.
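The idea of deciphering a pattern from a dataset and applying it to new data can be illustrated with a deliberately simple sketch. The example below is not drawn from any system mentioned in this briefing; the transaction amounts and the three-standard-deviation threshold are invented for illustration. It echoes the banking example above: the ‘pattern’ learned is just the typical size and spread of past transactions, and a new transaction is flagged if it falls far outside that pattern.

```python
import statistics

# Illustrative historical (structured) data: past debit card
# transaction amounts in pounds. Invented for this sketch.
history = [12.50, 8.99, 45.00, 23.10, 9.75, 31.40, 15.00, 27.80, 11.25, 19.99]

# 'Learn' the pattern: summarise typical spending as a mean and a spread.
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount, threshold=3.0):
    """Flag a new transaction that deviates strongly from the learned pattern."""
    return abs(amount - mean) > threshold * stdev

print(is_suspicious(25.00))   # a typical amount: False
print(is_suspicious(950.00))  # far outside the learned pattern: True
```

Real fraud-detection systems learn far richer patterns (time, location, merchant type, and so on) from vastly larger datasets, but the principle is the same: a model of ‘normal’ is derived from past data and used to interpret new data.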
The different forms that AI can take range from so-called ‘narrow’ AI designed to perform specific tasks to what is known as ‘strong’ or ‘general’ AI with the capacity to learn and reason. The House of Commons Library recently drew upon research from Stanford University and other sources to offer the following definitions:
- Narrow AI is designed to perform a specific task (such as speech recognition), using information from specific datasets, and cannot adapt to perform another task. These are often tools that aim to assist, rather than replace, the work of humans.
- Artificial general intelligence (AGI, also referred to as ‘strong’ AI) is an AI system that can undertake any intellectual task or problem that a human can. AGI is a system that can reason, analyse and achieve a level of understanding on a par with humans; something that has yet to be achieved by AI. The US computer scientist Nils John Nilsson, for example, proposed that one way to test whether a system had achieved AGI was if it could successfully learn the skills to perform the different jobs “ordinarily performed by humans”, from “knowledge work” (such as a library assistant) to “manual labour” (such as a roofer).
- Machine learning is a method that can be used to achieve narrow AI; it allows a system to learn and improve from examples, without all its instructions being explicitly programmed. It does this by finding patterns in large amounts of data, which it can then use to make predictions (for example what film or TV programme you might like to watch next on a streaming platform). The AI can then independently amend its algorithm based on the accuracy of its predictions.
- Deep learning is a type of machine learning whose design has been informed by the structure and function of the human brain and the way it transmits information. The application of deep learning can be seen in ‘foundation models’, of which ‘large language models (LLMs)’ such as ChatGPT, are one example. The term refers to those models that are trained on very large, unlabelled datasets and which can be adapted to do a wide range of tasks, despite not having been trained explicitly to do those tasks. In other words, the model can take information it has learnt about in one situation and apply it to another, different situation. Sometimes LLMs are refined or ‘fine-tuned’ (trained using additional data) to achieve a specific goal. ChatGPT, for example, has been fine-tuned to allow users to ask it a question, or make a request, and for it to generate “human-like text” in response.
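The machine learning definition above — a system that learns from examples, without its instructions being explicitly programmed, and amends itself based on the accuracy of its predictions — can be made concrete with a minimal sketch. The training data and learning rate below are invented for illustration; real systems use millions of parameters rather than one, but the learning loop has the same shape.

```python
# The program is never told the rule relating input to output.
# It adjusts its own parameter from examples, guided by how wrong
# each prediction turns out to be.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # hidden rule: y = 2x

weight = 0.0           # the model's single adjustable parameter
learning_rate = 0.01

for _ in range(1000):                       # repeated passes over the examples
    for x, y in examples:
        prediction = weight * x
        error = prediction - y              # how wrong was the prediction?
        weight -= learning_rate * error * x # amend the model accordingly

print(round(weight, 2))   # the learned rule: approximately 2.0
print(weight * 10)        # a prediction for an unseen input
```

After training, the model predicts well for inputs it never saw, because it has captured the underlying pattern rather than memorising the examples. Deep learning stacks many such adjustable parameters into layered networks, which is what lets foundation models and LLMs learn far more complex patterns from very large datasets.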
Read more at https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/