Artificial intelligence: to regulate or not to regulate?

The increasing application of artificial intelligence is set to significantly impact almost all industrial sectors over the coming decade, including maritime. Given maritime’s global reach, work is already well underway on specific regulatory regimes for the maritime industry. 

Among other things, work is progressing internationally on the development of ships that operate with varying levels of autonomy. Autonomous ships are vessels that can navigate and operate with minimal or no crew on board. The International Maritime Organization (IMO) has produced a draft MASS (Maritime Autonomous Surface Ships) Code that will likely be introduced on a non-mandatory basis in 2025, with a mandatory Code expected to enter into force on 1 January 2028. The IMO is also considering how existing international shipping conventions can accommodate new technologies in shipping.

In addition, the increasing use of smart contracts in shipping and trade, particularly smart (electronic) bills of lading, has led to the enactment of the UK’s Electronic Trade Documents Act, which came into force on 20 September 2023.

More generally, on 8 December 2023, the European Parliament and the Council of the EU reached a provisional agreement on an EU Artificial Intelligence Act (AI Act). The regulation aims to establish obligations for artificial intelligence (AI) systems based on their potential risk and level of impact. Once the agreed text is formally adopted, it will become EU law. 

It is currently expected that the AI Act will come into force in the second half of 2024, although certain provisions may not apply until some time thereafter (likely between six months and two years after entry into force). This ‘window’ is intended to give those within the scope of the AI Act sufficient time to take the steps necessary to comply with its requirements.

By contrast, the UK Government’s current stance is not to enact a specific statutory instrument to regulate AI. Instead, in a White Paper dated March 2023, the Government identified key principles for regulating the use of AI that will be non-statutory and will be applied by regulators within their remits. The UK Government describes this as a pro-innovation approach. 

In this article, we consider these contrasting approaches.

EU AI Act

It took a long time and extended discussions before the EU negotiators reached a provisional agreement on the text of the AI Act. At one stage, it was thought that no agreement would be reached and that the proposed legislation would be abandoned. Nonetheless, a political deal was finally reached and the AI Act has been described as a global first.

The fact that it took so long for the deal to be done reflects the difficulty of balancing minimising risk and protecting users of AI, on the one hand, with enabling innovation and encouraging investment in AI on the other. 

A ‘softly softly’ approach may open the door to unregulated use of high-risk AI systems, while a heavy-handed approach may stifle innovation and investment altogether. Although the EU reportedly came under pressure from the technology industry to water down the AI Act’s provisions, it has, perhaps unsurprisingly, taken a pro-regulatory stance.

At the time the provisional text was agreed, it was clear the EU had taken a risk-based approach, with AI systems being classified according to four levels of risk: 

  1. Unacceptable risk 
      
    These would be banned in the EU as a threat to people, with limited exceptions. Examples include: 
      
    1. social scoring (classifying people according to behaviour, socio-economic status or personal characteristics); and 
        
    2. cognitive behavioural manipulation of people or vulnerable groups. 
          
      Some exceptions would, however, be allowed for law enforcement purposes e.g. biometric identification systems (such as for facial recognition).
       
  2. High risk
       
    These were systems that affected safety and fundamental rights. They would be subject to comprehensive mandatory compliance obligations focusing on, e.g., risk mitigation, human oversight, transparency, accuracy, cybersecurity and data governance. 

    High-risk AI systems falling into certain specific areas, e.g. law enforcement; education and vocational training; migration, asylum and border control management; and employment, worker management and access to self-employment, would have to be registered in an EU database. 

    All high risk systems would be assessed before being put on the market and also throughout their lifecycle.
     

  3. Limited risk 
        
    An example was chatbots. These would be subject to reduced transparency obligations, e.g. designing the system to prevent it from generating illegal content, so that users could make an informed decision.
      
  4. No or minimal risk
      
    These systems could be used freely, although voluntary codes of conduct would be recommended. An example was a spam filter.
     

Non-compliance could lead to significant fines, depending on the infringement and the size of the company. A serious violation could lead to a fine of up to €35 million or 7% of a company’s annual turnover, whichever was the higher. Enforcement would be by the designated competent national authorities in the member states. 
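By way of illustration (the turnover figures are assumed purely for the purposes of the example), a company with an annual turnover of €1 billion could face a fine of up to €70 million, since 7% of its turnover exceeds the €35 million figure; for a company with turnover below €500 million, the €35 million cap would instead be the higher of the two and would apply.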

As to the definition of an AI system, in November 2023, the Organisation for Economic Co-operation and Development (OECD) updated its definition of AI and an AI system, with the aim that the new definition would inform any definition incorporated into the AI Act. The OECD definition is as follows:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

It is also intended that there will be regulatory sandboxes for innovation that will be supervised by the relevant authorities. SMEs will receive priority access and testing outside the sandbox may be permitted with regulatory oversight.

AI Act – final text

On 22 January 2024, the final text of the AI Act was leaked, indicating that it had been agreed and shared with member states. The leaked text reflects the expectations summarised above. 

Additional points to note, however, are that the AI Act will apply to entities involved in developing, deploying and using AI systems within the EU. Its scope therefore extends to providers, deployers, importers, distributors, manufacturers and affected persons. 

Furthermore, EU regulations dealing with data protection, communications, IP and privacy will also need to be complied with.

The European Commission has also proposed to launch the AI Pact, which is intended to help companies prepare to comply with the AI Act. The AI Pact will be formally launched once the AI Act comes into force. Through it, the Commission is seeking to work with interested organisations and stakeholders to build a common understanding of the legislation, share knowledge, increase the visibility and credibility of the safeguards put in place, and build trust in AI technologies.

The UK position

As set out in the Government White Paper in March 2023, the framework to govern the responsible development and use of AI is underpinned by five core principles:

  • Safety, security and robustness. AI systems should be technically secure. 
  • Appropriate transparency and explainability. Parties should have access to the decision-making processes of an AI system.
  • Fairness. AI systems should not undermine the rights of individuals and organisations, discriminate unfairly or create unfair outcomes. 
  • Accountability and governance. AI systems should have governance measures that produce effective oversight with clear lines of accountability across their life cycle.
  • Contestability and redress. Where appropriate, affected third parties should be able to challenge an AI decision or outcome that is harmful or creates a material risk.

The principles would be issued on a non-statutory basis and implemented by existing regulators. In the Government’s view, imposing new rigid and burdensome legislation on businesses could hold back AI innovation and reduce the UK’s ability to respond quickly and in a proportionate way to future technological advances. 

The regulators would have specific domain expertise to enable them to apply the principles as appropriate to the specific context. Furthermore, a statutory duty might in due course be imposed on regulators requiring them to have regard to the principles. This would give the regulators flexibility as to how they applied the principles whilst enhancing their mandate to do so.

The Government would also provide a number of central support functions to help monitor, evaluate and assess the effectiveness and implementation of the principles, and to monitor and assess risk to the economy arising from AI.

The White Paper Consultation period ended on 21 June 2023. As at the time of writing, the Government’s response is still awaited.

In November 2023, the UK Government’s Frontier AI Taskforce, launched in April 2023 to conduct safety research and evaluation, evolved into an AI Safety Institute that has three core functions:

  • To develop and conduct evaluations on advanced AI systems, aiming to understand the safety and security of systems, and assess their societal impacts;
  • To drive AI safety research, including through launching a range of exploratory research projects and convening external researchers; and 
  • To facilitate information exchange, including by establishing voluntary and clear information-sharing channels with stakeholders, both public and private, national and international.

At the same time, the UK Government also hosted a global AI Safety Summit, at which 28 countries signed the Bletchley Declaration, which aims to coordinate global cooperation on AI safety. 

At the Summit, a number of leading AI companies, including Google, Microsoft and Meta, signed voluntary commitments regarding the safety of their AI products and agreed to evaluation by the AI Safety Institute of powerful models that underpin products such as ChatGPT before they are made publicly available. 

Another development in November 2023 was the publication of the first global guidelines for the development of secure AI systems. These were developed by UK and US cybersecurity agencies in collaboration with industry experts and agencies from a number of other countries. The guidelines are the first of their kind to be agreed globally.

The UK Government’s perspective appears to be that if the technology industry voluntarily cooperates, then there is less of a case for regulation. 

However, the Government has indicated that a failure by AI companies to comply with their voluntary agreements to recognise and publish the potential risks their technology poses will trigger tighter legislation. The Government is apparently planning to publish specific tests in March 2024 to determine the criteria that might trigger regulation.

It is also worth noting that, in November 2023, a private member’s bill, the Artificial Intelligence (Regulation) Bill, was introduced into the UK Parliament but is unlikely to be given much parliamentary time as it does not have government backing. Given the UK is facing a general election in 2024, however, it may be that a change of government leads to a change of policy. 

Either way, it is expected that the Government will eventually need to regulate AI. However, when this will happen, and what the scope of any regulatory regime will be, remain unclear.

Comment

The UK’s initiatives have encouraged other countries to follow suit. In November 2023, the US National Institute of Standards and Technology announced it was establishing a US AI Safety Institute.

However, notwithstanding the significant strides already made towards creating specific regulatory regimes for shipping and trade, more general AI regulation will remain relevant to the maritime sector, particularly for shipping stakeholders looking to rely increasingly on technology to enhance their processes and expedite their operations.
 

Artificial Intelligence (AI) is the use of technology to create systems capable of performing tasks which commonly require human intelligence.

In recent years, the implementation of AI projects and services has increased to the extent that AI now affects the majority of businesses and organisations in their day-to-day operations. 

Technological advancements in AI have brought about new developments in automation, robotics, healthcare and much more, but with this progression comes legal and compliance risks.

How we can advise you

As a market-leading provider of legal services to organisations leveraging both established and emerging technologies, we have extensive experience advising within the unique regulatory environment of technology and innovation. 

We provide a full legal service offering to our clients, acting as a long-term partner to support the development and commercialisation of technologies and innovations, while also offering strategic guidance on the compliance and legal issues surrounding technology. 

We have experience working with large companies, SMEs, tech start-ups, public and private healthcare providers and academic institutions across all sectors.

We have a particular specialism in health and life sciences, sectors in which AI has the potential to make a significant difference through its ability to analyse large quantities of complex information. Our team are able to offer insight into potential liability issues arising from the use of artificial intelligence, including this article analysing three hypothetical real-world claims from a practitioner’s perspective. We run an AI in Healthcare Forum which connects NHS businesses and tech suppliers – please contact Jamie Foster if you would be interested in joining this.

We can assist with:

  • Data governance and compliance solutions for data-driven products and services, including AI
  • Classification and CE marking of AI-driven software as medical devices
  • Contracting for AI solutions
  • Liability in claims arising from the use of AI, including medical negligence claims