Inclusive Digital Futures

Disentangling Fairness Perceptions in Algorithmic Decision-Making (ESR 2)

The effects of explanations, human oversight, and contestability

Algorithmic systems are increasingly used as part of decision-making processes in contexts such as healthcare, education, and finance. Despite promises of efficiency and improved accuracy, some of these systems have been found to lead to discriminatory outcomes. Motivated by concerns about bias and unfairness, a growing body of literature has focused on designing fair algorithmic systems. However, even when conditions for distributive fairness (i.e., equitable allocation of resources) are met and these systems are proven fair according to a certain mathematical definition, they might not be perceived as fair by those who are subject to algorithmic decisions. 

In this work, we go beyond distributive fairness and instead focus on how the algorithmic decision-making process, and the information provided about that process, affect decision subjects’ perceptions of fairness. To this end, we draw on literature from legal and organizational psychology, and we control for different algorithmic configurations with varying levels of explanations, human oversight, and contestability as defined by Article 22(3) of the GDPR. We then evaluate the effects of these elements on decision subjects’ perceptions of informational and procedural fairness. 

We found that explanations lead to higher degrees of informational fairness perceptions. However, we identified a tension between participants asking for greater transparency and those concerned with information overload. We did not find an effect of human oversight on procedural fairness perceptions, but observed a tension between participants who asked for higher degrees of human involvement and those who appreciated the timely nature of the algorithmic decision-making process. As far as contestability is concerned, we found that it leads to enhanced procedural fairness perceptions, though here too there was a tension: some participants highlighted the standardized, fact-based nature of algorithmic decision-making processes, while others asked for their personal circumstances to be taken into account. Finally, we found that both informational and procedural fairness perceptions contribute to overall fairness perceptions, with procedural fairness perceptions making the more prominent contribution. 

In sum, through this study, we highlight the nuanced nature of fairness perceptions as well as the need to evaluate different algorithmic configurations in a space of trade-offs. We argue that the HCI community should (1) further explore alternative ways of explaining algorithmic decision-making processes that are rich in information yet easily digestible for all, (2) design alternative human-AI configurations, and (3) further research contestation processes that effectively give voice to decision subjects. Through this work, we also underline the multi-faceted nature of fairness and the need to develop methodologies that more appropriately capture fairness perceptions for the specific case of algorithmic decision-making processes.
