EU AI Act: checklist

High-risk AI & limited-risk AI

This checklist summarises the core practical tasks for providers and deployers of high-risk & limited-risk AI

An organisation becomes a provider by doing any of the following:

develops an AI system or general-purpose AI model (GPAI), whether internally or by outsourcing development, or

places an AI system or GPAI on the market (for example, for others to license and use), or

puts an AI system or GPAI into service under its own name or trademark (for example, as part of its own product)

An organisation becomes a deployer by using an AI system under its authority (for example, by incorporating a licensed-in AI system into its own product)

HIGH-RISK AI

Providers of high-risk AI MUST

See here for explanation of what qualifies as high-risk AI

  1. Have a documented quality management system & risk management system

  2. Use high-quality training, validation & testing data

  3. Keep technical documentation and automatically generated logs (see the logging sketch after this list)

  4. Provide deployers with the information they need to use the AI system transparently

  5. Ensure real human oversight

  6. Make sure the AI is technically robust, operates accurately and is secure (cybersecurity)

  7. Include provider details on the AI system, or on its packaging or documentation

  8. Make sure the AI system undergoes the applicable conformity assessment

  9. Register the AI system in the EU database

  10. Take corrective actions where the AI system is non-compliant and provide necessary information to the relevant parties

  11. Comply with accessibility requirements
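
Item 3 above has a technical dimension: high-risk AI systems are expected to record events automatically over their lifetime. The sketch below is illustrative only; the Act prescribes no log schema, so the field names and the ai_audit.log path are our own assumptions.

    import json
    import logging
    from datetime import datetime, timezone

    # Append-only, machine-readable event log for an AI system.
    # Illustrative only: the field names and file path are assumptions,
    # not anything the AI Act prescribes.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                        format="%(message)s")

    def log_event(event_type, input_ref, output_ref, model_version):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,    # e.g. "inference", "human_override"
            "input_ref": input_ref,      # a reference to the input, not the data itself
            "output_ref": output_ref,
            "model_version": model_version,
        }
        logging.info(json.dumps(record))

    log_event("inference", "input-0001", "output-0001", "v1.3.0")

One JSON line per event keeps the log machine-readable and easy to hand over to an auditor or supervisory authority.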

Deployers of high-risk AI MUST

See here for explanation of what qualifies as high-risk AI

  1. Make sure the AI system is used in accordance with the provider's instructions for use

  2. Monitor operation of AI system

  3. Ensure real human oversight

  4. Ensure input data is relevant and representative (in the context of the AI system’s intended use)

  5. Before using the AI system in the workplace, consult with workers' representatives (in countries where this applies)

  6. Where the system makes decisions or assists in decision-making related to people (e.g. hiring or education), provide those people with the required information (e.g. that they are subject to the use of the system)

  7. Keep automatically generated logs for at least six months, unless other applicable laws require a longer period (see the retention sketch after this list)

  8. If there is a serious incident or malfunction, or if you believe the AI system might harm someone, inform the provider or distributor and the national supervisory authorities and stop using it
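
On item 7, the six-month floor can be enforced mechanically. A minimal retention sketch, assuming one dated log file per day in a logs/ directory (our assumed layout, not something the Act prescribes):

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    # Delete log files older than the retention window. 183 days is a
    # conservative reading of "at least six months"; keep logs longer
    # where other laws require it.
    RETENTION = timedelta(days=183)
    LOG_DIR = Path("logs")  # assumed layout: logs/ai_audit-YYYY-MM-DD.log

    def prune_expired_logs():
        now = datetime.now(timezone.utc)
        for path in LOG_DIR.glob("ai_audit-*.log"):
            date_part = path.stem.removeprefix("ai_audit-")
            file_date = datetime.strptime(date_part, "%Y-%m-%d").replace(tzinfo=timezone.utc)
            if now - file_date > RETENTION:
                path.unlink()

    LOG_DIR.mkdir(exist_ok=True)  # keep the sketch runnable on a fresh machine
    prune_expired_logs()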

LIMITED-RISK AI

Providers of limited-risk AI (generally) MUST

See here for explanation of what qualifies as limited-risk AI

  1. If the AI system interacts with people, design & develop it so that people are informed that they are interacting with an AI system (unless this is obvious)

  2. If the AI generates synthetic audio, image, video or text content: the AI system must mark the outputs in a machine-readable format so that they are detectable as artificially generated or manipulated (see the sketch below)
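
The marking duty in item 2 is a technical one: outputs must carry a machine-readable indication that they are AI-generated. One illustrative approach for images, embedding a metadata field with Pillow, is sketched below; the ai_generated key is our own assumption, and because metadata can be stripped, real compliance may call for robust watermarking or provenance standards such as C2PA.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Embed a machine-readable "AI-generated" marker in a PNG's metadata.
    # Illustrative only: the key names are assumptions, not an AI Act standard.
    def save_with_ai_marker(image, path, generator):
        meta = PngInfo()
        meta.add_text("ai_generated", "true")
        meta.add_text("generator", generator)
        image.save(path, "PNG", pnginfo=meta)

    def is_marked_ai_generated(path):
        return Image.open(path).text.get("ai_generated") == "true"

    img = Image.new("RGB", (64, 64), "gray")  # stand-in for a model output
    save_with_ai_marker(img, "output.png", generator="example-model-v1")
    print(is_marked_ai_generated("output.png"))  # True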

Providers of limited-risk AI (because the system has been downgraded from high-risk) MUST

See here for explanation of exceptions to high-risk AI

  1. Register AI system

  2. Document basis of ‘no significant risk’ decision

Deployers of limited-risk AI (generally) MUST

See here for explanation of what qualifies as limited-risk AI

  1. If the AI involves emotion recognition or biometric categorisation: inform the people concerned and process their personal data in accordance with data protection laws

  2. If the AI generates or manipulates image, audio or video content constituting a deep fake: disclose that the content has been artificially generated or manipulated (and, where relevant, the person that generated it) and that the content is not authentic, subject to the following:

    • if the deepfake is part of an "evidently artistic, creative, satirical, fictional or analogous work or programme", the transparency obligations are limited to disclosing this in an appropriate way that does not "hamper the display or enjoyment" of the work

    • if text has been generated or manipulated to inform the public on a matter of public interest, the transparency obligation does not apply where the content has undergone human review or editorial control and a person holds editorial responsibility

Need help working through your checklists?

Let’s talk!