DATA SUBJECT RIGHTS AND ARTIFICIAL INTELLIGENCE
Let’s face it: artificial intelligence (AI) does make dealing with data subject rights more difficult – but it’s not insurmountable.
This is mainly the case with more complex AI, such as machine learning models, because these systems don’t ‘keep’ copies of personal data but can ‘recreate’ it based on statistical probability.
But first things first: what are the data subject rights under GDPR?
Under the UK GDPR, people have these data subject rights (which of course exist in other parts of the world, and independently of any AI-specific rules):
Right to be informed – people have the right to know what is being collected about them and what happens to it (hint: this is what privacy notices are for!)
Right to access – people have the right to access their personal data and receive a copy of their personal data
Right to rectification – people can request corrections to their personal data if it is inaccurate or incomplete
Right to erasure (right to be forgotten) – people can request the deletion of their personal data under certain conditions
Right to restrict processing – people can limit how their personal data is processed
Right to data portability – people can receive their personal data in a structured, commonly used format and transfer it to another data controller
Right to object – people can object to the processing of their personal data for certain purposes, such as direct marketing
Rights related to automated decision-making and profiling – people have the right not to be subject to decisions based solely on automated processing which have legal or similarly significant effects on them
So what makes it more difficult when you throw AI into the mix?
Bias and fairness
AI systems can inadvertently perpetuate biases present in their training data, leading to unfair or discriminatory outcomes. Ensuring fairness and addressing bias in AI is critical to upholding data subject rights. Organisations must implement measures to identify and mitigate biases, ensuring that AI systems do not discriminate against people based on race, gender, or other protected characteristics. Hats off to IBM for this excellent summary on bias and fairness in an easily digestible form.
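One way to make "identify and mitigate biases" concrete is to measure outcome rates across groups. The sketch below computes a simple "disparate impact" ratio between two groups; the group data, the outcome encoding and the 0.8 ("four-fifths") review threshold are illustrative assumptions, not a complete fairness audit or legal advice.

```python
# Minimal sketch of a bias check: compare favourable-outcome rates
# between two demographic groups. All data here is hypothetical.

def selection_rate(outcomes):
    """Share of favourable outcomes (1 = approved, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(disadvantaged, advantaged):
    """Ratio of selection rates; values below ~0.8 often warrant review."""
    return selection_rate(disadvantaged) / selection_rate(advantaged)

# Hypothetical model outcomes for two groups of applicants
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed review threshold, not a legal test
    print("Potential bias: investigate features and training data")
```

A check like this is only a starting point: a low ratio doesn't prove discrimination, and a high one doesn't prove its absence, but it tells you where to look.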
Transparency and explainability
AI algorithms can be incredibly complex, and their decision-making processes are not always easily understood. This lack of transparency can make it difficult for people to exercise their individual right to access and understand how their data is used. There's a growing demand for AI systems to be more explainable, ensuring that users can understand the logic behind automated decisions. The ICO has emphasised the need for transparency and the importance of making AI decision-making processes understandable to data subjects.
Data minimisation and purpose limitation
AI systems often require large datasets to function effectively. However, data subject rights emphasise the importance of data minimisation – collecting only the data necessary for a specific purpose. Ensuring AI systems adhere to this principle can be challenging but is crucial for protecting the privacy of individuals. The ICO stresses the importance of ensuring only the data necessary for the specific purpose is collected and processed. Let’s put this plainly: you don’t get to collect and keep personal data just in case it might be useful one day.
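In engineering terms, "collecting only the data necessary for a specific purpose" can be enforced by tying each declared purpose to an allow-list of fields. The sketch below does exactly that; the purposes and field names are hypothetical, and a real system would also need this reflected in its privacy notice and records of processing.

```python
# Illustrative purpose-based data minimisation: a record is stripped
# down to only the fields permitted for a declared purpose.
# Purposes and field names are hypothetical assumptions.

ALLOWED_FIELDS = {
    "order_fulfilment": {"name", "delivery_address"},
    "marketing": {"email"},
}

def minimise(record, purpose):
    """Return only the fields permitted for this purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())  # unknown purpose -> nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Ada",
    "delivery_address": "1 Example St",
    "email": "ada@example.com",
    "date_of_birth": "1990-01-01",  # collected "just in case" - never allowed
}
print(minimise(record, "marketing"))  # only the email survives
```

Note the default: a purpose that isn't on the list gets no data at all, which is the minimisation principle expressed as code.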
Automated decision-making and profiling
Under the UK GDPR, people have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or significantly affects them. AI systems making decisions about credit scores, job applications, or insurance claims must consider this right, providing human intervention where necessary. The ICO has highlighted the need for human oversight and the implementation of safeguards to ensure fairness and compliance.
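The "human intervention where necessary" point can be built directly into a decision pipeline: decisions with significant effects are never finalised automatically. The sketch below shows one way to route them; the decision categories, score threshold and status labels are assumptions for illustration.

```python
# Illustrative sketch: solely automated decisions with legal or
# similarly significant effects are routed to a human reviewer
# rather than finalised by the model. Categories are hypothetical.

SIGNIFICANT_EFFECT = {"credit_score", "job_application", "insurance_claim"}

def decide(decision_type, model_score, threshold=0.5):
    """Propose a decision; significant ones await human review."""
    proposed = "approve" if model_score >= threshold else "reject"
    if decision_type in SIGNIFICANT_EFFECT:
        # Safeguard: a person must be able to review and override
        return {"proposed": proposed, "status": "pending_human_review"}
    return {"proposed": proposed, "status": "final"}

print(decide("credit_score", 0.92))      # queued for human review
print(decide("newsletter_signup", 0.92)) # low-stakes, finalised
```

The design choice that matters is that the human review step is structural, not optional: no code path finalises a significant decision without it.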
Data portability and interoperability
The right to data portability allows people to move their data across different platforms. For AI, this means developing systems that can seamlessly transfer data without compromising privacy or security. Interoperability between AI systems is key to ensuring that users can exercise this right effectively. (But this right is one which really only works in limited circumstances, if we’re honest about it.)
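"Structured, commonly used and machine-readable" is usually satisfied in practice by formats like JSON or CSV. A minimal portability export might look like the sketch below; the record fields are hypothetical, and a real export would need to cover all the data the person provided.

```python
# Illustrative sketch of a data portability export: personal data
# serialised to JSON, a structured, commonly used, machine-readable
# format. Field names are hypothetical assumptions.
import json

def export_subject_data(record):
    """Serialise a data subject's record for transfer to another controller."""
    return json.dumps(record, indent=2, sort_keys=True)

record = {
    "name": "Ada",
    "email": "ada@example.com",
    "preferences": {"marketing_opt_in": False},
}
print(export_subject_data(record))
```

The harder portability problems are not serialisation but scope (which data counts as "provided by" the individual) and whether the receiving controller can actually ingest it – which is why interoperability matters.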
How can you support making it easier for people to exercise their data subject rights?
Data Protection Impact Assessments (DPIAs)
Businesses should conduct DPIAs to assess and mitigate data protection risks associated with AI systems. This proactive approach helps identify potential issues early in the development process and ensures that data subject rights are considered. DPIAs are often hard, complicated and time-consuming – so don’t leave them until just before the product is used or switched on.
Clear communication and user-friendly tools
To facilitate transparency, businesses should provide clear and accessible information about AI systems and their use of personal data. The ICO has developed resources like the AI and Data Protection Risk Toolkit to support organisations in assessing and addressing risks. But this toolkit is not for the faint-hearted: it will need the support not just of legal expertise (like me!), but also of stakeholders in product, engineering and elsewhere in the business. Please, please don’t go it alone!
Human oversight
Ensuring that there is a human element in decision-making processes can help safeguard each individual’s rights. The ICO advises that automated decisions should include mechanisms for human intervention, especially when they have significant effects on people. But human oversight is complex: the reviewer needs the training, information and authority to genuinely override the system, otherwise the ‘human in the loop’ is just a rubber stamp.
Regular updates and continuous improvement
The pace of AI development means that you will have to keep track of regular updates to guidance and practice. The ICO has restructured its guidance to reflect the evolving nature of AI and data protection, but without some external support, most businesses are unlikely to be able to keep up.
THANKS FOR READING