Regulating artificial intelligence: The EU AI Act

The rise of generative artificial intelligence tools such as ChatGPT and DALL·E has brought AI into the spotlight, but it has also raised significant concerns. As AI continues to advance, governments have recognised the need to close the regulatory gap, as the development of AI systems continues to outpace legislation. As Thierry Breton, the European commissioner for the internal market, put it: “AI raises a lot of questions socially, ethically, and economically. But now is not the time to hit any ‘pause button.’ On the contrary, it is about acting fast and taking responsibility.”

In April 2021, the European Commission proposed the EU AI Act, the first-ever regulatory framework for AI. This year, the European Parliament adopted its negotiating position on the Act. The European Commission, the Council of the European Union, and the Parliament will now begin negotiations, known as the trilogue, to agree on the final text. The target is to complete this by the end of the year, and issues such as definitions, the proposed categories of application, and the outright bans included in the proposed text will be discussed at length before a final form is agreed.

As it currently stands, the Act provides as follows.

Who will the Act apply to?

  • Providers: actors who develop AI systems and place them on the market.
  • Deployers: natural or legal persons using AI in the context of professional activity.
  • Importers: actors who bring AI systems into the EU from a third country.
  • Distributors: actors who distribute AI systems within the EU.
  • Authorised representatives: actors who act on behalf of a provider of AI systems in the EU.

The AI Act will also apply to AI systems that are used outside the EU, provided that the output produced is to be used in the EU. However, personal, non-professional AI activities will fall outside the Act’s scope.

What does the Act regulate?

The European Parliament’s priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. This is to be achieved by establishing a technology-neutral, uniform definition for AI that can be applied to future AI systems.

Harmonising a unified definition of “artificial intelligence system” was a key focus of the discussions. The current definition of AI, a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments,” is now more closely aligned with the definition used by the Organisation for Economic Co-operation and Development (OECD), a move that favours global coordination on AI standards.

The Act adopts a risk-based approach, subjecting AI systems to evaluation based on their potential risks before deployment. The different levels of risk, from minimal to unacceptable, will determine the severity of the applicable restrictions.

AI systems posing an “unacceptable risk”, meaning those considered a threat to individuals, will face an outright ban. This category includes AI systems that:

  • manipulate persons through subliminal techniques or exploit the vulnerabilities of specific individuals, potentially harming the manipulated individual or a third person.
  • use social scoring, such as the systems in China that assign people a social credit score based on their behaviour, including bad driving, posting on social media, or buying too many video games.
  • use real-time and remote biometric identification including facial recognition.

However, certain exceptions may be allowed, such as the search for potential victims of crime (including missing children), the prevention of a specific, substantial, and imminent threat to the life or physical safety of persons or of a terrorist attack, or the detection, localisation, identification, or prosecution of a perpetrator or individual suspected of a criminal offence referred to in the European Arrest Warrant Framework Decision.

Additionally, the EU Parliament has expanded the list of AI systems and applications that should be classified as high-risk. These high-risk AI systems are permitted on the European market only if they comply with certain mandatory requirements and undergo an ex-ante conformity assessment. All remote biometric identification systems are considered high risk, and the category also covers AI systems used in transport, which could put the life and health of citizens at risk, or in migration, asylum, and border control management, such as verifying the authenticity of travel documents.

Under the Parliament’s proposed rules, providers of certain AI systems may challenge the decision that their system is high-risk. To do so, they must submit a notification to a supervisory authority or the AI Office (if the AI system is intended to be used in more than one Member State). The authority or office will then review the notification and respond within three months, stating whether they believe the AI system is high-risk.

The Act requires transparency from AI systems, obliging providers of generative AI tools such as ChatGPT to inform users when content is AI-generated, to safeguard against generating content that violates EU law, and to make publicly available documentation summarising the use of training data protected under copyright law. Limited-risk AI systems must comply with minimal transparency requirements that allow users to make informed decisions.

What are the penalties?

The proposals include fines which go much further than those under the General Data Protection Regulation (GDPR). While the GDPR sets fines of up to €20 million or 4% of a company’s global annual turnover, the Act suggests much more substantial penalties. Specifically, the proposed fines can reach up to €40 million or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever amount is higher. These penalties apply in cases of non-compliance with the rules on prohibited AI practices.

When will the Act apply?

As mentioned previously, the next step involves trilogue negotiations between the European Commission, Parliament, and Council to finalise the text of the AI Act. The goal is to reach an agreement by the end of the year. Once an agreement is reached, the AI Act, being a regulation, will be directly applicable. However, there will be a transitional period of 24 months, allowing all affected parties to prepare adequately for its implementation.

While the Act is still being drafted, its impact is highly anticipated: it is the first piece of legislation of its kind worldwide and is already influencing AI regulation around the world.
