EU AI Act: deep dive

AI risk category

Every business touching AI needs to understand how that AI is classified under the EU AI Act: prohibited, high risk, limited risk or minimal risk.

This classification determines compliance obligations, particularly for high-risk AI systems and general purpose AI models.

AI systems

How are AI systems classified under the EU AI Act?

The EU AI Act is designed to be risk-based rather than targeting specific types of AI technology.

AI systems are risk-categorised depending on what they can be used for:

Unacceptable risk = prohibited

High risk = stringent compliance obligations, including technical and transparency obligations

Limited risk = less strict compliance obligations, mainly about transparency

Minimal risk = (broadly) free use

What AI systems are prohibited because they are an unacceptable risk?

Anything considered particularly harmful and threatening to fundamental rights is banned entirely.

Reflecting the level of risk, these systems must be phased out within 6 months of the AI Act coming into force, and fines for non-compliance could be up to the higher of 7% of global turnover or €35 million.

The EU AI Act lists these ‘AI practices’, which are banned if an organisation places them on the market, puts them into service, or uses them:

  • AI systems that use subliminal techniques beyond a person’s consciousness, or purposefully manipulative/deceptive techniques, which (by objective or effect):

    • materially distort a person’s behaviour by appreciably impairing their ability to make an informed decision

    • thereby causing the person to make a decision they would not otherwise have taken

    • that decision causes or is likely to cause that person or any other person significant harm

      • For example, an AI system that identifies people experiencing financial stress and causes them to take out a very high-interest loan without all the required statutory safeguards

  • AI systems that exploit vulnerabilities (age, disability or a specific social or economic situation), which (by objective or effect):

    • materially distort a person’s behaviour

    • thereby causing, or being likely to cause, that person or any other person significant harm

      • For example, an AI system that identifies people addicted to online gambling and bypasses thresholds designed to control excessive gambling

  • AI systems used to evaluate or classify people based on their social behaviour or their ‘known, inferred or predicted personal or personality characteristics’ (known as social scoring), where this social scoring leads to:

    • detrimental/unfavourable treatment of the person in social contexts that are unrelated to the contexts in which the data was generated/collected and/or

    • detrimental/unfavourable treatment of the person that is unjustified/disproportionate to their social behaviour or its gravity

      • A well-known example is China’s Social Credit System, which collects data on behaviour ranging from financial history to social media activity and assigns each Chinese citizen a social credit score. The score is used as a gateway to determine access to services and opportunities, such as employment and travel.

  • AI systems which risk-assess whether a person is likely to commit a criminal offence (known as predictive policing):

    • based on profiling them or

      • For example, an AI system which risk-assesses using race-based profiles

    • based on assessing their personality traits or characteristics

      • For example, an AI system which risk-assesses based on mental health conditions

(Note it is the predictive element that matters: the EU AI Act does not prohibit AI systems being used to support human assessment of whether or not a person has been involved in a crime that has already been committed, based on objective and verifiable facts directly linked to a criminal activity, for example DNA evidence analysis.)

  • AI systems that create or expand facial recognition databases through:

    • untargeted scraping of facial images from the internet or

      • This specifically targets activities like those carried out by Clearview AI - it has created an intelligence platform for government agencies powered by 40+ billion facial images from the internet, including ‘news media, mugshot websites, public social media, and other open sources’

    • untargeted scraping of facial images from CCTV footage

  • AI systems that use emotion recognition:

    • if the use is at work or in education

      • An example of use outside the EU is at NEC in Japan

    • but not prohibited if used for medical or safety reasons

  • Biometric categorisation systems that categorise people based on their biometric data to deduce/infer:

    • race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation

      • For example, a system that uses image and sound analysis to detect a person’s potential voting intention as a precursor to seeking to manipulate their vote

    • but not if used for labelling/filtering lawfully held biometric datasets in law enforcement

      • For example, fingerprint recognition systems

  • Real-time biometric identification systems in publicly accessible spaces for law enforcement, unless for one of these objectives:

    • targeted searches for victims/missing persons

    • prevention of a specific threat to life/physical safety or of a genuine threat of a terrorist attack

    • finding location or identifying suspects connected to a serious criminal offence (the EU AI Act contains a list of these offences)

What AI systems are considered to be high risk?

AI systems that pose a significant risk to health, safety or fundamental rights will be considered high risk.

High risk systems are arguably the main focus of the EU AI Act - other than entirely banned systems, the EU AI Act places the most stringent obligations on these systems.

There are two ways by which a system may be classified as high risk:

  1. High risk because it is a safety component of a product – high risk because the AI system is a safety component of a product (or a product itself) covered by certain EU harmonisation legislation and required to undergo a conformity assessment under those laws, such as machinery, safety of toys, lifts and safety components for lifts, and radio equipment.

  2. High risk classification because of use case – high risk because it comes under the specific list of use cases in the AI Act (see ‘High risk based on 2. above’ below)

(There are exceptions to high risk classification - see High risk exceptions further below.)

High risk based on 2. above

This high risk list is:

  • AI systems used for remote biometric identification (unless solely for ID confirmation), as well as for:

    • biometric categorisation based on sensitive/protected characteristics of people or

      • For example, facial recognition systems which have skin tone as part of the model

    • biometric recognition of emotions of people

      • For example, MorphCast can recognise a variety of facial expressions and emotional states

  • AI systems used as safety components of critical infrastructure:

    • critical digital infrastructure

    • road traffic

    • supply of water, gas, heating or electricity

  • AI systems used in educational and vocational training to:

    • determine if or where a person can access education

      • For example, a system doing a first pass on university applications

    • evaluate learning outcomes (including where that outcome is used to determine a person’s learning pathway)

      • For example, a system marking test papers

    • assess what level of education a person can receive

      • For example, a system making an assessment about whether a person stays on after the minimum school leaving age

    • monitor and detect prohibited behaviour in training and exams

      • For example, plagiarism tools and remote proctoring systems

  • AI systems used in employment, workers management, and access to self-employment to:

    • recruit or select people, including by targeting job ads, analysing and filtering applications, and evaluating candidates

      • For example, a jobs board or work-finding platform

    • make decisions about people affecting their work contract terms, promotion or termination

      • For example, analytics relating to the productivity of a person in their role

    • allocate tasks based on individual behaviour/personal traits/characteristics

      • For example, an automated work allocation system based on a person’s previous productivity

    • monitor and evaluate performance and behaviour of a person

      • For example, applications which monitor remote workers

  • AI systems used in access to and enjoyment of essential private services and public services and benefits to:

    • evaluate eligibility (including for healthcare and benefits)

    • evaluate creditworthiness/credit score (unless to prevent fraud)

    • risk assess and price life and health insurance

    • evaluate and classify emergency calls and response

  • AI systems used by law enforcement authorities (or on their behalf) in permitted law enforcement activities to:

    • assess a person’s risk of becoming a victim of a criminal offence

    • be used as polygraphs or similar tools

    • evaluate evidence as part of investigation/prosecution

    • assess the likelihood of a person offending or reoffending (not solely based on profiling)

    • assess personality traits and characteristics or past criminal behaviour

    • profile in the course of detection, investigation or prosecution of offences

  • AI systems used in permitted migration, asylum, and border control activities to:

    • assess migration-related risk (irregular migration or health risk)

    • assist the examination of applications for asylum, visa or residence permits (and any associated eligibility complaints and assessments of evidence)

    • detect, recognise or identify persons (except verification of travel documents)

  • AI systems used in administration of justice and democratic processes to:

    • assist a judicial authority in researching and interpreting facts and the law, and in applying the law to facts, or be used in a similar way in alternative dispute resolution

    • influence outcome of election or referendum or voting behaviour of a person in them (does not include AI system output to which the person is not exposed, such as tools to organise, optimise or structure political campaign administration/logistics)

High risk exceptions

An AI system is not high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of a person, including by not materially influencing the outcome of decision making.

The EU AI Act gives a list of situations in which this will be the case, because the AI system is intended to:

  • perform a narrow procedural task

  • improve the result of a previously completed human activity

  • detect decision-making patterns or deviations from prior decision-making patterns (and is not meant to replace or influence the previous assessment by a person without proper review by a person)

  • do a preparatory task related to an assessment relevant to the list of high-risk use cases (see list at High risk based on 2. above)

But (!) systems within those listed use cases which carry out profiling of people are always high risk.
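To make the interaction between the use-case list, the exceptions and the profiling override concrete, here is a minimal sketch in Python of that decision flow. It is an illustration only, not a legal assessment tool: the field names, exception labels and the UseCaseAssessment structure are our own simplification and do not come from the text of the EU AI Act.

```python
from dataclasses import dataclass

# Illustrative only: a simplified model of the "use case" route to high-risk
# classification (route 2 above), its exceptions, and the profiling override.
# Labels and field names are hypothetical, not terminology from the Act.

EXCEPTIONS = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_decision_patterns_without_replacing_human_review",
    "preparatory_task_only",
}

@dataclass
class UseCaseAssessment:
    on_use_case_list: bool              # falls within the listed high-risk use cases?
    performs_profiling: bool            # profiles natural persons?
    applicable_exception: str | None = None  # one of EXCEPTIONS, if any

def is_high_risk(a: UseCaseAssessment) -> bool:
    """Simplified decision flow for the use-case route to high risk."""
    if not a.on_use_case_list:
        return False  # not on the list -> not high risk via this route
    if a.performs_profiling:
        return True   # profiling within a listed use case is always high risk
    if a.applicable_exception in EXCEPTIONS:
        return False  # falls within a recognised exception
    return True       # on the list, no exception -> high risk

# Example: a CV-filtering tool that profiles candidates stays high risk,
# even if it only performs a preparatory task.
print(is_high_risk(UseCaseAssessment(True, True, "preparatory_task_only")))   # True
print(is_high_risk(UseCaseAssessment(True, False, "narrow_procedural_task"))) # False
```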

Limited risk

Systems which are benign enough to avoid being high risk can still pose risks because of a lack of transparency - so they are subject to some rules to ensure people understand they are dealing with AI.

The limited risk list is:

  • AI systems that interact with people

    • For example, chatbots

  • AI systems that generate synthetic audio, image, video or text content

    • Most obviously, ChatGPT

  • AI systems that involve emotion recognition or biometric categorisation

  • AI systems that generate synthetic audio, image, video or text content which amounts to a deep fake

    • For example, tools like DeepSwap

Minimal risk

If the AI system does not come under one of the above, the EU AI Act is not concerned with policing it.

General purpose AI

How is general purpose AI treated differently under the EU AI Act?

General purpose AI (GPAI) is considered to be especially risky, so it is treated differently, with special rules where the GPAI is so powerful that it may have a ‘systemic’ effect on society.

A GPAI model is defined under the AI Act as one with the following characteristics:

  • trained with a large amount of data using self-supervision at scale

  • displays significant generality

  • can competently perform a wide range of distinct tasks

  • capable of a variety of downstream integrations with systems or applications

When does GPAI get categorised as ‘systemic’?

This refers to the potential of GPAI models to cause significant negative impacts on public health, safety, security, fundamental rights, or society at large.

It relates to the high-impact capabilities of GPAIs, which can affect the EU market widely and propagate risks across the value chain.

To determine if a GPAI poses a systemic risk, the EU AI Act considers whether the GPAI model has high-impact capabilities.

A threshold metric is used: the cumulative number of floating-point operations (FLOPs) used during training, with the threshold set at 10^25 FLOPs.

GPAI models meeting or exceeding this threshold are presumed to pose systemic risk.

For example, OpenAI’s GPT-4 and Google DeepMind’s Gemini Ultra
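As a rough sanity check against the 10^25 FLOPs presumption, training compute for a dense model is often approximated as 6 × parameters × training tokens. That heuristic is not prescribed by the EU AI Act, and the parameter and token counts below are purely hypothetical, so the sketch is only an order-of-magnitude illustration:

```python
# Order-of-magnitude check against the EU AI Act's 10^25 FLOPs presumption.
# Uses the common heuristic: training FLOPs ~= 6 * parameters * training tokens.
# Both the heuristic and the example figures are assumptions, not values from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens
print(f"{estimated_training_flops(100e9, 10e12):.2e} FLOPs")  # 6.00e+24
print(presumed_systemic_risk(100e9, 10e12))                   # False - below the threshold

# Hypothetical larger run: 500 billion parameters trained on 15 trillion tokens
print(presumed_systemic_risk(500e9, 15e12))                   # True (4.5e+25 >= 1e+25)
```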

Need help determining which category your AI or GPAI falls into? Or which obligations apply or don’t apply to you as a result?

Let’s talk.