
The EU’s AI Act – what does it mean and what is happening in the UK?

With the release of a white paper and an ongoing consultation, the UK has recently taken further steps towards regulating Artificial Intelligence (AI), and the European Union is likewise moving forward in this area. It is widely accepted globally that AI can bring a wide array of economic and societal benefits, both for individuals and for the public interest. At the same time, the use of AI gives rise to certain risks, including bias, threats to safety and potential infringements of personal freedoms. The use of AI can also create uncertainty around monitoring compliance with the law (including data protection law), liability and intellectual property rights.

In the healthcare sector in particular, AI is integrated into patient care in many different ways, from healthcare apps to diagnostic imaging and AI-assisted surgery. We explored the use of AI in healthcare in more detail in our article dated 5 August 2022, ‘Artificial Intelligence – The new world order’.

The position in the EU

In April 2021, the European Commission announced a proposal for the first ever legal framework on AI, addressing the risks posed by specific uses of AI. This announcement followed a 2020 white paper, entitled ‘On Artificial Intelligence – A European approach to excellence and trust’, which set out the potential risks that the use of AI entails, such as opaque decision-making, discrimination, intrusion into the private lives of citizens and the use of AI for criminal purposes.

The European Commission’s proposal, entitled ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’, is subject to the EU’s ordinary legislative procedure and may be amended during this process. In December 2022, the Council of the European Union adopted its common position (general approach) on the Artificial Intelligence Act (the ‘Act’), and in April 2023 Members of the European Parliament reached a political agreement on the text in committee. The Act is expected to be approved this year, with a two-year transition period before it takes effect.

As currently drafted, the proposed legislation sets out a forward-looking regulatory framework for AI, adopting a risk-based approach that differentiates between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk. This classification of systems has proved one of the most contentious areas of debate, as systems categorised as higher risk will be subject to more stringent rules.

The proposed regulation sets out harmonised rules for the placing on the market, putting into service and use of AI systems; prohibits certain AI practices; sets out specific requirements for high-risk AI systems and obligations for operators of such systems; sets out harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems, biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content; and establishes rules on market monitoring and surveillance. A more topical issue that has developed in recent months is ensuring that stricter obligations apply to general-purpose models such as ChatGPT, which were not covered in the original proposal for the Act.

The proposed regulation has extraterritorial scope, applying to both private and public sector providers and users of AI systems. Providers in third countries (such as those based in the UK) will need to comply with the proposed regulation if they place AI systems on the market or into service in the EU, or if the outputs produced by their AI systems are used within the EU’s single market. EU member states will enforce the regulation at a national level and will be required to set out local rules on enforcement. Article 71 of the proposed regulation sets out the penalties for non-compliance, with potential fines of up to EUR 30,000,000 or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher. The proposal also provides for the establishment of a European Artificial Intelligence Board to oversee implementation and governance within the EU.

The position in the UK

As it stands, the regulation of AI in the UK is spread across various regulatory bodies, including the Medicines and Healthcare products Regulatory Agency (MHRA) and the Information Commissioner’s Office (ICO). The upcoming Online Safety Bill also contains provisions specifically concerning the design and use of algorithms.

In September 2021, the UK government announced a 10-year plan, described as the ‘National AI Strategy’, which aims to invest in and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy and ensure that the UK gets the national and international governance of AI technologies right. The Office for AI, a joint unit of the Department for Digital, Culture, Media & Sport (DCMS) and the Department for Business, Energy & Industrial Strategy (BEIS), is responsible for overseeing the implementation of the National AI Strategy.

Separately, in its Technology Strategy for 2018-2021, the ICO classified AI as one of its top three priorities, noting that ‘The ability for AI to intrude into private life and affect human behaviour by manipulating personal data makes highlighting the importance of this topic a priority for the ICO’. The ICO has since published guidance on AI and data protection to help organisations mitigate the data protection risks associated with AI, and in February 2021 it launched a data analytics toolkit for organisations as part of its AI priority work.

Most recently, in March 2023, the UK government published its white paper ‘A pro-innovation approach to AI regulation’, which sets out plans for a ‘flexible’, outcome-oriented approach to AI regulation. The white paper proposes five principles intended to underpin the regulatory framework; however, in the first instance at least, these principles will not be enforced through legislation. You can read more about the UK proposals in our article here. The consultation accompanying the white paper is open until 21 June 2023.

Hill Dickinson will continue to report on developments in these areas. Please keep an eye on our website and social media feeds for further information.

Our life sciences team provides practical, commercial legal advice to companies at all stages of development, from start-up to established multinational.

We support clients from an idea in a lab, to helping incorporate the company, raising capital, protecting and licensing intellectual property, signing strategic partnerships and, ultimately, commercialising life-changing treatments and technologies.

We also help clients navigate a legal and regulatory landscape that is continuously evolving in response to innovation as well as societal and ethical challenges.

Our interdisciplinary team blends insight and pragmatism to provide high-quality, trusted advice to some of the world’s leading life sciences companies.

Areas of expertise in which we work include biotech, pharma, cell and gene therapies, medtech and medical devices, medical cannabis and psychedelic medicine, agritech and alternative proteins, IVF and reproductive technologies and embryo research.