EU AI Act: checklist

Provider & deployer

This checklist summarises the core practical obligations of providers and deployers of high-risk and limited-risk AI systems.

PROVIDER

Provider of high-risk AI MUST

See here for an explanation of what qualifies as high-risk AI

  1. Have a documented quality management system and a risk management system

  2. Use high-quality training, validation, and testing data

  3. Keep technical documentation and automatically generated logs

  4. Provide deployers with the information they need to use the AI system transparently

  5. Ensure the system can be effectively overseen by humans

  6. Make sure the AI system is technically robust, accurate, and secure (cybersecurity)

  7. Make sure the above requirements are in fact complied with

  8. Include provider details on the AI system, its packaging, or its accompanying documentation

  9. Make sure the AI system undergoes the applicable conformity assessment

  10. Register the AI system in the EU database

  11. Take corrective action where needed and provide the necessary information to the authorities

  12. Comply with accessibility requirements

Provider of limited-risk AI (because it has been downgraded from high-risk) MUST

See here for an explanation of the exceptions to high-risk AI

  1. Register the AI system

  2. Document the basis of the ‘no significant risk’ decision

Provider of limited-risk AI (generally) MUST

See here for an explanation of what qualifies as limited-risk AI

  1. If the AI system interacts with people, design and develop it so that people are informed they are interacting with an AI system (unless this is obvious)

  2. If the AI generates synthetic audio, image, video, or text content: the AI system must mark the outputs in a machine-readable format so that they are detectable as artificially generated or manipulated (see the sketch below)
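
The Act does not prescribe a particular marking technique; provenance standards such as C2PA and watermarking are the usual candidates. Purely as an illustration, here is a minimal sketch of embedding a machine-readable marker in PNG metadata with Pillow. The metadata key names are our own assumptions, not terms from the Act.

```python
# Minimal sketch: embed a machine-readable "AI-generated" marker in PNG
# metadata using Pillow. The keys are illustrative only, not mandated by
# the AI Act; production systems typically use standards such as C2PA.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save a PNG carrying an illustrative 'AI-generated' text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")      # hypothetical key
    meta.add_text("ai_generator", generator)   # e.g. the generating model/system
    image.save(path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """Check a PNG for the marker; text chunks surface in Image.info."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "grey")   # stand-in for a model output
    save_with_ai_marker(img, "output.png", generator="example-model-v1")
    print(is_marked_ai_generated("output.png"))  # True
```

Plain metadata is easily stripped when a file is re-encoded or shared, so a robust approach would likely pair it with a more durable technique such as watermarking.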

DEPLOYER

Deployer of high-risk AI MUST

See here for an explanation of what qualifies as high-risk AI

  1. Make sure the AI system is used in accordance with the provider’s instructions for use

  2. Monitor the operation of the AI system

  3. Ensure effective human oversight

  4. Ensure input data is relevant and sufficiently representative in view of the AI system’s intended use

  5. Before using the AI system in the workplace, consult workers’ representatives (in countries where this applies)

  6. Where the system makes or assists in decisions about people (e.g. hiring or education), give those people the required information (e.g. that they are subject to the use of the system)

  7. Keep automatically generated logs for at least six months (see the retention sketch after this list)

  8. If there is a serious incident or malfunction, or if you believe the AI system might harm someone, inform the provider or distributor and the national supervisory authority, and stop using the system
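
As a toy illustration of the retention duty in item 7 (the directory layout and the 183-day window are our assumptions; this is a sketch, not a compliance recipe), a pruning job that respects the six-month minimum might look like:

```python
# Minimal sketch: prune AI-system log files only once they are older than
# the retention window. Paths and the 183-day figure are assumptions for
# illustration; the AI Act requires keeping logs for at least six months.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/ai-system")   # hypothetical log location
RETENTION_DAYS = 183                   # a little over the six-month minimum

def prune_old_logs(log_dir: Path = LOG_DIR, retention_days: int = RETENTION_DAYS) -> int:
    """Delete log files last modified before the retention cutoff; return the count."""
    cutoff = time.time() - retention_days * 24 * 60 * 60
    removed = 0
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"Pruned {prune_old_logs()} log file(s) past the retention window")
```

In practice retention is usually handled by the logging platform’s own lifecycle policies; the point is simply that logs must survive for at least the statutory window before any pruning runs.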

Deployer of limited-risk AI (generally) MUST

See here for an explanation of what qualifies as limited-risk AI

  1. If the AI involves emotion recognition or biometric categorisation: inform the people exposed to it and process their personal data in accordance with data protection laws

  2. If the AI generates or manipulates image, audio, or video content constituting a deepfake: disclose that the content has been artificially generated or manipulated (plus who generated it) and that the content is not authentic, subject to the following:

    • if the deepfake is part of an “evidently artistic, creative, satirical, fictional or analogous work or programme”, the transparency obligations “are limited to” disclosing this in an appropriate way that does not “hamper the display or enjoyment” of the work

    • if text has been generated or manipulated to inform the public on matters of public interest, the transparency obligation does not apply where the content has undergone human review or editorial control

Need help working through your checklists?

Let’s talk!