BIAS & DISCRIMINATION IN ARTIFICIAL INTELLIGENCE - IS IT ALL DOOM & GLOOM?
While artificial intelligence (AI) promises efficiency and impartiality, it's not free from the age-old human flaws of bias and discrimination. But, if harnessed correctly, AI could also be part of the solution.
Bias in AI refers to systematic errors in data processing or algorithmic decision-making that lead to unfair outcomes or preferences for certain groups over others. Discrimination, on the other hand, happens when these biased outputs systematically disadvantage individuals based on characteristics like race, gender, age, or sexual orientation. Unlike human bias, which can be conscious or unconscious, AI bias is usually an unintended consequence of the underlying data or algorithmic design.
What causes bias and discrimination in AI?
The root causes of AI bias and discrimination can be traced back to a few key sources.
Unrepresentative data
If the data used to train an AI system is not representative of the general population, or of specific groups within it, the AI's decisions will likely skew toward the majority or overrepresented groups in the dataset. For example, a model trained on UK voting behaviour from before women could vote would be unable to represent how women would actually vote once enfranchised.
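To make that concrete, here is a minimal sketch (in Python, using entirely hypothetical names and figures) of how under-representation in a training sample might be flagged before any model is trained:

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares, threshold=0.8):
    """Flag groups whose share of the training sample falls well below
    their share of the population (ratio under `threshold`)."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        ratio = sample_share / pop_share if pop_share else float("inf")
        if ratio < threshold:
            gaps[group] = (sample_share, pop_share)
    return gaps

# Illustrative data only: women are ~51% of the population
# but just 12% of this hypothetical training sample.
sample = ["male"] * 88 + ["female"] * 12
population = {"male": 0.49, "female": 0.51}

print(representation_gaps(sample, population))
# {'female': (0.12, 0.51)} -> the model would see far too few examples of this group.
```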
Historical data
AI systems often learn from historical data, which may contain prejudices that were present at the time the data was collected. This can lead AI systems to perpetuate or even amplify those biases. (And of course models can be corrected, and even over-corrected, for this; see Google’s recent attempts at correcting for historical data.)
Choices (by humans)
The way an AI model is designed, including the selection of variables and the structuring of algorithms, can also introduce bias. Certain algorithmic models might oversimplify complex human behaviours and lead to biased outcomes.
When might bias & discrimination in AI have effects in practice?
Bias and discrimination in AI use can have significant, real-world effects on people.
Recruitment tools
Some companies have used AI-driven tools to screen CVs and job applications. However, these tools can unintentionally favour candidates based on characteristics like attending certain schools or even having a certain type of name, reflecting historical hiring biases.
Facial recognition
AI-based facial recognition technologies have shown lower accuracy for women and people of colour. This can lead to higher rates of misidentification in law enforcement uses, or biased performance in applications like automated interviewing.
Healthcare algorithms
AI models trained predominantly on data from majority groups can produce less accurate diagnoses or treatment recommendations for minority groups.
These examples all highlight how AI can perpetuate historical inequalities when not carefully managed.
Can AI ever reduce bias & discrimination?
It would be wrong of us to shy away from using AI due to the risk of bias and discrimination. If developed, managed and used correctly, AI also holds substantial promise to reduce human bias in decision-making:
Consistency
AI systems, when designed with fairness in mind, can apply the same rules to everyone without the interpersonal variations that might occur with human judgments.
Identifying & correcting bias
With advanced analytics, AI can be used to detect patterns of bias in human decisions. Once identified, these patterns can be corrected for in the AI system, potentially making it more equitable than traditional methods.
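As an illustration, one simple and widely used check compares selection rates across groups: the ratio of the lowest to the highest rate (often assessed against the "four-fifths rule" used in US employment guidance) can flag decisions worth investigating. The sketch below uses hypothetical data and function names:

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs.
    Returns the selection rate for each group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest.
    Values below ~0.8 are a common red flag (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions: group A is selected twice as often as group B.
decisions = ([("A", True)] * 60 + [("A", False)] * 40 +
             [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(decisions)
print(rates)                          # {'A': 0.6, 'B': 0.3}
print(disparate_impact_ratio(rates))  # 0.5 -> well below 0.8, worth investigating
```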
Transparency & accountability
By building AI systems that are transparent about how decisions are made, stakeholders can identify potential biases in decision-making processes more easily than with human decision-makers, who may not always be able to explain their rationale fully.
But of course, humans!
Ultimately, AI is created by humans and trained on data drawn from past human events, content and opinions. And there is a fact we cannot ignore: humans are flawed. Regrettably, we have a history of inherent bias and discrimination that was not caused by AI, and there is a risk that AI accentuates it. We can, however, reduce this risk, and even use AI to reduce systematic bias and discrimination in processes more generally. It is critical for providers of systems to collect diverse data, so that training data reflects all demographics, and deployers must check this in their due diligence of systems. Testing, and re-testing, across a wide range of people and contexts is also critical, as is implementing regular audits to detect biases and adjust the algorithms accordingly, and ensuring transparency and continuous feedback. If these things are done successfully, AI may be able to reduce bias in processes by applying the same rules to everyone (where, unfortunately, unconscious human bias may previously have hindered this). Time will tell …
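By way of illustration, a regular audit could be as simple as recomputing a model's accuracy per group on fresh data and watching the gap between groups over time. The sketch below is a hypothetical example with made-up records, not a full audit framework:

```python
def audit_accuracy_by_group(records):
    """records: list of (group, predicted, actual) triples from a recent
    batch of decisions. Returns per-group accuracy for a periodic audit."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if pred == actual else 0)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit batch.
records = [
    ("group_1", 1, 1), ("group_1", 0, 0), ("group_1", 1, 1), ("group_1", 0, 1),
    ("group_2", 1, 0), ("group_2", 0, 1), ("group_2", 1, 1), ("group_2", 0, 0),
]

print(audit_accuracy_by_group(records))
# {'group_1': 0.75, 'group_2': 0.5}
# A persistent or widening gap between groups across audit runs is a signal
# to retrain the model or rebalance the training data.
```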
THANKS FOR READING