EU AI Act: checklist
Provider & deployer
This checklist summarises the core practical obligations of providers and deployers of high-risk and limited-risk AI systems
PROVIDER
Provider of high-risk AI MUST
See here for explanation of what qualifies as high-risk AI
Have a documented quality management system & risk management system
Use high-quality training, validation & testing data
Keep documentation and logs
Provide the information needed for transparency about your AI system (including instructions for use for deployers)
Ensure effective human oversight
Make sure the AI is technically robust, operates accurately and is secure (cybersecurity)
Verify that the above requirements are in fact complied with
Include the provider's details on the AI system, its packaging or its accompanying documentation
Make sure the AI system undergoes the relevant conformity assessment
Register the AI system in the EU database
Take corrective actions where needed and provide information requested by the authorities
Comply with accessibility requirements
Provider of limited-risk AI (because it has been downgraded from high-risk AI) MUST
See here for explanation of exceptions to high-risk AI
Register AI system
Document basis of ‘no significant risk’ decision
Provider of limited-risk AI (generally) MUST
See here for explanation of what qualifies as limited-risk AI
If the AI system interacts with people: design & develop it so that those people are informed that they are interacting with an AI system (unless this is obvious)
If the AI generates synthetic audio, image, video or text content: mark the outputs in a machine-readable format so that they are detectable as artificially generated or manipulated (a sketch of one approach follows below)
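By way of illustration, here is a minimal sketch of machine-readable marking for an image output pipeline. The Act does not prescribe a format: in practice, providers use standards such as C2PA content credentials, IPTC metadata or watermarking. The metadata keys below are hypothetical, chosen for readability.

```python
# Minimal sketch (Python/Pillow): embed a machine-readable marker in a
# generated PNG so the output is detectable as artificially generated.
# The "ai_generated"/"generator" keys are hypothetical, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save a generated image together with a provenance marker."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable flag
    meta.add_text("generator", generator)   # which system produced it
    image.save(path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """Check for the marker -- outputs must be detectable as synthetic."""
    with Image.open(path) as img:
        return img.text.get("ai_generated") == "true"
```

In production, a standardised scheme (such as C2PA) is preferable: metadata-only markers are stripped by many re-encoding pipelines, which is one reason robust watermarking is often layered on top.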
DEPLOYER
Deployer of high-risk AI MUST
See here for explanation of what qualifies as high-risk AI
Use the AI system in accordance with the provider's instructions for use
Monitor operation of AI system
Ensure effective human oversight
Ensure input data is relevant and representative (in the context of the AI system’s intended use)
Before using the AI system in the workplace, inform workers and workers' representatives (in countries where this applies)
Where the system makes decisions about people or assists in such decision-making (e.g. hiring or education), provide those people with the required information (e.g. that they are subject to the use of the system)
Keep the logs automatically generated by the AI system for at least 6 months, unless other applicable law provides otherwise (see the retention sketch after this list)
If there is a serious incident or malfunction, or if you believe the AI system might harm someone: inform the provider (or distributor) and the national supervisory authorities, and suspend use of the system
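As a rough illustration of the six-month log point above, a retention job might look like the following sketch, assuming one log file per day in a single directory. The directory path and the 183-day cutoff are illustrative assumptions; six months is a floor, and other applicable law may require keeping logs for longer.

```python
# Minimal sketch: prune automatically generated logs once they are past
# the retention window. 183 days approximates the Act's six-month minimum;
# LOG_DIR and the per-day file layout are hypothetical.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/ai-system")      # hypothetical log location
RETENTION_SECONDS = 183 * 24 * 60 * 60    # ~6 months (a floor, not a target)

def prune_expired_logs(now: float | None = None) -> list[Path]:
    """Delete log files older than the retention window; return what was removed."""
    now = time.time() if now is None else now
    removed: list[Path] = []
    for log_file in LOG_DIR.glob("*.log"):
        if now - log_file.stat().st_mtime > RETENTION_SECONDS:
            log_file.unlink()
            removed.append(log_file)
    return removed
```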
Deployer of limited-risk AI (generally) MUST
See here for explanation of what qualifies as limited-risk AI
If the AI involves emotion recognition or biometric categorisation: inform the people exposed to it and process their personal data in accordance with data protection laws
If the AI generates or manipulates image, audio or video content constituting a deep fake: disclose that the content has been artificially generated or manipulated (and, where relevant, by whom), i.e. that the content is not authentic, subject to the following:
if the deep fake is part of an "evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations … are limited to" disclosing this in an appropriate way that doesn't "hamper the display or enjoyment" of the work
if text has been generated or manipulated to inform the public on a matter of public interest, the transparency obligation does not apply where the content has undergone some form of editorial oversight
Need help working through your checklists?