EU AI Act: checklist
High-risk AI & limited-risk AI
This checklist summarises the core practical tasks for providers and deployers of high-risk & limited-risk AI.
An organisation becomes a provider if it does any of the following:
develops an AI system or general-purpose AI model (GPAI), whether internally or by outsourcing development, or
places an AI system or GPAI on the market (for example, for others to license and use), or
puts an AI system or GPAI into service under its own name or trademark (for example, as part of its own product)
An organisation becomes a deployer by using an AI system under its authority (for example, by incorporating a licensed-in AI system into its own product)
HIGH-RISK AI
Providers of high-risk AI MUST
See here for an explanation of what qualifies as high-risk AI
Have a documented quality management system & a risk management system
Use high-quality training, validation & testing data (for one kind of automated check, see the sketch after this list)
Keep documentation and logs
Provide the information needed to be transparent about your AI system
Ensure real human oversight
Make sure the AI is technically robust, operates accurately and is secure (cybersecurity)
Include provider details on the AI system, its packaging or its accompanying documentation
Make sure the AI system undergoes the required conformity assessment
Register the AI system (in the EU database for high-risk AI)
Take corrective action where necessary and provide the required information to the authorities
Comply with accessibility requirements
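The data-quality task in this list is a legal obligation rather than a technical recipe, but simple automated checks can help evidence it. Below is a minimal sketch, assuming tabular data in pandas; the column names, thresholds and report fields are illustrative assumptions, not anything the AI Act prescribes.

```python
# Hypothetical sketch: basic data-quality checks over training,
# validation and test splits. Thresholds and field names are
# illustrative assumptions, not AI Act requirements.
import pandas as pd

def quality_report(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Report missing-value rates and label balance for one data split."""
    missing = df.isna().mean()                             # fraction missing per column
    balance = df[label_col].value_counts(normalize=True)   # class proportions
    return {
        "rows": len(df),
        "worst_missing_rate": float(missing.max()),
        "min_class_share": float(balance.min()),
    }

def check_splits(splits: dict[str, pd.DataFrame]) -> None:
    """Flag splits with heavy missingness or severe class imbalance."""
    for name, df in splits.items():
        report = quality_report(df)
        flagged = report["worst_missing_rate"] > 0.05 or report["min_class_share"] < 0.10
        print(f"{name}: {'review needed' if flagged else 'ok'} -> {report}")
```

Checks like these do not by themselves satisfy the data-quality obligation; they simply make a documented data-governance process easier to operate and evidence.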
Deployers of high-risk AI MUST
See here for an explanation of what qualifies as high-risk AI
Make sure the AI system is used in accordance with the provider's instructions for use
Monitor the operation of the AI system
Ensure real human oversight
Ensure input data is relevant and representative (in the context of the AI system’s intended use)
Before using an AI system in the workplace, inform workers' representatives and the affected workers (and consult them where national rules require)
Where the system makes decisions, or assists in decision-making, about people (e.g. in hiring or education), give those people the required information (e.g. that they are subject to the use of the system)
Keep the automatically generated logs for at least 6 months, unless other applicable law requires a different period (see the retention sketch after this list)
If there is a serious incident or malfunction, or you believe the AI system might harm someone, inform the provider (or distributor) and the national supervisory authorities, and stop using the system
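As a rough illustration of the log-retention point above: a minimal sketch of append-only JSON-lines logging with time-based pruning, assuming a local file and a cut-off comfortably past six months. The file name, record fields and retention window are assumptions for illustration; in practice this would usually be handled by the retention settings of existing logging infrastructure.

```python
# Hypothetical sketch: JSON-lines event log with time-based retention.
# File path, record fields and the retention window are illustrative
# assumptions. The AI Act requires keeping logs for AT LEAST six
# months, so the window errs on the long side.
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")
RETENTION = timedelta(days=200)  # comfortably past the 6-month minimum

def log_event(event_type: str, detail: dict) -> None:
    """Append one timestamped event record to the log file."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "detail": detail,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def prune_old_events() -> None:
    """Drop records older than the retention window; keep the rest."""
    if not LOG_FILE.exists():
        return
    cutoff = datetime.now(timezone.utc) - RETENTION
    kept = [
        line
        for line in LOG_FILE.read_text(encoding="utf-8").splitlines()
        if datetime.fromisoformat(json.loads(line)["ts"]) >= cutoff
    ]
    LOG_FILE.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")
```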
LIMITED-RISK AI
Providers of limited-risk AI (generally) MUST
See here for an explanation of what qualifies as limited-risk AI
If the AI system interacts with people: design & develop it so that people are informed they are interacting with an AI system (unless this is obvious)
If the AI generates synthetic audio, image, video or text content: the AI system must mark the outputs in a machine-readable format so that they are detectable as artificially generated or manipulated (see the sketch below)
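As a hedged illustration of that marking duty, the sketch below embeds a provenance note in a PNG's metadata using Pillow. This is only a minimal example of "machine-readable": plain metadata is easily stripped, so in practice providers tend to look at more robust techniques such as watermarking or content-provenance standards (e.g. C2PA). The key names and wording are assumptions, not anything the Act specifies.

```python
# Hypothetical sketch: embed an "artificially generated" marker in PNG
# metadata with Pillow. Key names and values are illustrative
# assumptions; metadata alone is easy to strip, so real systems would
# pair it with sturdier techniques (watermarks, provenance standards).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str) -> None:
    """Save an image with a machine-readable AI-generation tag."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_disclosure", "This image was artificially generated.")
    img.save(path, pnginfo=meta)

def read_ai_marker(path: str) -> dict:
    """Read the tags back, e.g. as a detection check."""
    with Image.open(path) as img:
        return dict(img.text)  # PNG text chunks as a plain dict

# Usage: mark a freshly generated image, then verify the tag is there.
img = Image.new("RGB", (64, 64), "white")
save_with_ai_marker(img, "output.png")
print(read_ai_marker("output.png"))
```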
Providers of limited-risk AI (where the system has been downgraded from high-risk) MUST
See here for an explanation of the exceptions to high-risk AI
Register the AI system
Document the basis of the 'no significant risk' decision
Deployers of limited-risk AI (generally) MUST
See here for explanation of what qualifies as limited-risk AI
If the AI involves emotion recognition or biometric categorisation: inform the people exposed to it, and process their personal data in accordance with data protection laws
If the AI generates or manipulates image, audio or video content constituting a deep fake: disclose the fact that the content has been artificially generated or manipulated (i.e. that it is not authentic) and, where relevant, the person who generated it, subject to the following:
if the deepfake forms part of an "evidently artistic, creative, satirical, fictional or analogous work or programme", the transparency obligations are limited to disclosing this in an appropriate way that does not "hamper the display or enjoyment" of the work
if text has been generated or manipulated to inform the public on a matter of public interest, the transparency obligation does not apply where the content has undergone human review or editorial control and a person holds editorial responsibility for its publication
Need help working through your checklists?
Let’s talk!