HEADS UP: ICO AUDIT OF AI USE IN RECRUITMENT TOOLS
Guess what: the tools are doing more than you think, and your compliance probably won’t cut it right now
See here for a recent note on how artificial intelligence (AI) is being used in recruitment. So it will come as no surprise that the UK’s Information Commissioner’s Office (ICO) has taken a closer look, auditing developers and providers of AI-powered sourcing, screening and selection tools.
The audit covered many efficiency-based use cases enabled by machine learning, but did not look at generative AI - probably a good decision, because the ‘under the hood’ matching and selection AI has far more impact in this sector than the more hyped generative AI ‘stuff’.
The key takeaways from the audit are:
things are being inferred about people, and not always in a good way
far more personal information is being collected than needed
incorrect application of controller/processor roles under UK GDPR
but there is also some good stuff: providers giving recruiters models which they can tailor to their own needs
The ICO also says “we made almost 300 recommendations to improve compliance, all of which were accepted”. So it’s clear that much work needs to be done to ensure compliant use of (very useful) tools.
These recommendations included:
processing personal information fairly in the AI
monitoring for potential/actual fairness, accuracy or bias issues in the AI and outputs
pay special attention to processing of special category data (and inferring is processing!)
explaining the process clearly
this means “detailed privacy information” about what personal information is processed by AI and how, the logic involved in making predictions or producing outputs, and how personal information is used for training, testing or otherwise developing the AI
This means tool providers need to supply this information, and the contracts need to say they are responsible for it
keeping personal information collected to a minimum and not repurposing or processing personal information unlawfully
providers need to avoid overreach with the personal information used to develop, train, test and operate tools (which includes deleting it!)
recruiters then need to use the personal information in the tooling only for the intended purposes
conducting risk assessments to understand the impact on people’s privacy
data protection impact assessments are critical, and the ICO recognises that developers (even if only a processor) must play a part in supporting the controller
controller and processor relationships must be considered and documented - and it is not right to assume all tool providers are only a processor
“AI provider is the controller if it exercises overall control of the means and purposes of processing in practice. For example, if it uses the personal information it processes on the recruiter’s behalf to develop a central AI model that they deploy to all recruiters.”
recruiters must give comprehensive processing instructions to the AI provider, e.g.
specific data fields
means and purposes of processing
output required
minimum safeguards to protect personal information
Here is where recruiters will be relying heavily on the AI provider: in effect we get to a situation where the AI provider says ‘hey, my tool can do this’ and the recruiter says ‘hey, my instructions are for you to do the things your tool does’ <- SO THE CONTRACT WILL MATTER
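If you want to keep track of those instructions systematically (say, for a contract schedule), here is a minimal sketch. All field names and the completeness check are my own illustration, not ICO terminology:

```python
from dataclasses import dataclass

@dataclass
class ProcessingInstructions:
    """Hypothetical record of a recruiter's instructions to an AI provider.

    Field names are illustrative only; map them to your own contract schedule.
    """
    data_fields: list[str]   # specific data fields the tool may process
    purposes: list[str]      # means and purposes of processing
    outputs: list[str]       # output required from the tool
    safeguards: list[str]    # minimum safeguards for personal information

    def is_complete(self) -> bool:
        # The ICO expects instructions to be comprehensive: flag any empty section.
        return all([self.data_fields, self.purposes, self.outputs, self.safeguards])

# Example: an instruction set with no agreed safeguards fails the check.
instructions = ProcessingInstructions(
    data_fields=["name", "work history"],
    purposes=["rank candidates against the job description"],
    outputs=["shortlist with match scores"],
    safeguards=[],  # nothing agreed yet - contract not ready
)
print(instructions.is_complete())  # False
```

Nothing magic here: the point is simply that each of the four instruction categories above should be written down, agreed and non-empty before the tool processes anything.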
providers and recruiters must pay attention to lawful basis & additional condition
identify a lawful basis for each instance of personal information processing where acting as a controller, before processing begins
identify the additional condition when processing special category data
“document, describe in privacy information, and record in contracts”
actually do the legitimate interests assessment (one of the most forgotten items of compliance, in my view)
ensure consent is “specific, granular, has a clear opt-in, appropriately logged and refreshed at periodic intervals, and as easy to withdraw as to give”
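To make that consent wording concrete, here is a hedged sketch of a consent-validity check. The function name and the 12-month refresh interval are my own assumptions - the ICO quote requires periodic refreshing but does not fix an interval:

```python
from datetime import datetime, timedelta

# Assumed refresh interval; pick whatever your own policy justifies.
REFRESH_INTERVAL = timedelta(days=365)

def consent_is_current(opted_in: bool, logged_at: datetime,
                       withdrawn: bool, now: datetime) -> bool:
    """Consent counts only if there was a clear opt-in, it was logged
    recently enough, and it has not been withdrawn."""
    if withdrawn or not opted_in:
        return False
    # "Refreshed at periodic intervals": stale consent needs renewing.
    return now - logged_at <= REFRESH_INTERVAL

# Example: consent logged 400 days ago needs refreshing.
now = datetime(2025, 1, 1)
print(consent_is_current(True, now - timedelta(days=400), False, now))  # False
```

The withdrawal flag is checked first, which mirrors the ICO's point that consent must be as easy to withdraw as to give: a withdrawal always wins, however recent the opt-in.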
The ICO recognises the very delicate balance between the “opportunities … for society, such as efficiency, scalability, consistency and process simplification” and the fact that “opaque systems [have] inherent risks to people and their privacy”.
Remember, this was a consensual audit by the ICO, with a lot of recommendations - IF YOU DON’T GET YOUR HOUSE IN ORDER NOW, THE ICO WILL NOT BE HAPPY
SO LIGHTEN THE LOAD AND SPEAK TO ME ABOUT MY AI COMPLIANCE WORKBOOK :)
THANKS FOR READING