This technical paper is the product of a collaboration between the Australian Human Rights Commission, Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61. We explore how algorithmic bias can arise in decision making that uses artificial intelligence (AI).
This problem can produce unfair, and potentially unlawful, decisions. We demonstrate how the risk of algorithmic bias can be identified, and outline steps that can be taken to address or mitigate it.
AI is increasingly used by governments and businesses to make decisions that affect people’s rights, including in the provision of goods and services, as well as in other important areas such as recruitment, social security and policing. Where algorithmic bias arises in these decision-making processes, it can lead to error. Especially in high-stakes decision making, such errors can cause real harm.
The harm can be particularly serious if a person is unfairly disadvantaged on the basis of their race, age, sex or other characteristics. In some circumstances, this can amount to unlawful discrimination and other forms of human rights violation.
This paper describes the outcomes of a simulation. We simulated a typical decision-making process and identified five scenarios in which algorithmic bias can arise from problems attributable to the data set, the use of AI itself, societal inequality, or a combination of these sources.
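To make the idea concrete, the sketch below shows one way such bias can be surfaced in a simulated decision-making process. The scenario, variable names and numbers are illustrative assumptions for this summary, not the paper’s actual simulation: a scoring model is trained on historical records that under-count good outcomes for one group, and the resulting decision rates are then compared across groups.

```python
# Hypothetical sketch: measuring how data bias can flow through a trained
# model into unequal decision rates. All names and numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Simulated population: a protected attribute and an income variable that is
# correlated with it (a stand-in for underlying societal inequality).
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
income = rng.normal(50 + 10 * group, 15, n)

# True ability to pay depends only on income...
pays_on_time = income + rng.normal(0, 10, n) > 55

# ...but the historical records under-count on-time payment for group A,
# one way label bias can enter a training set.
recorded = pays_on_time & ((group == 1) | (rng.random(n) > 0.2))

# The model never sees the protected attribute, yet it can still inherit
# the bias through the labels it is trained on.
model = LogisticRegression().fit(income.reshape(-1, 1), recorded)
offer = model.predict_proba(income.reshape(-1, 1))[:, 1] > 0.5

# Comparing decision rates across groups is one simple way to surface
# the risk of algorithmic bias.
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: offer rate = {offer[group == g].mean():.2%}")
```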
We investigate whether algorithmic bias is likely to arise in each scenario, examine the nature of any such bias, and consider how it might be addressed. The scenarios are framed around a consumer’s interactions with an essential service provider that most people will deal with at some point: an energy company.
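As a hedged illustration of how bias might be addressed (not necessarily the approach taken in the paper), the standalone sketch below applies one simple post-processing technique: choosing decision thresholds per group so that selection rates match, a basic form of demographic parity. Whether such an adjustment is appropriate, effective or lawful depends heavily on context.

```python
# Standalone, hypothetical sketch of a post-processing mitigation:
# per-group thresholds that equalise selection rates (demographic parity).
# All names and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
# Hypothetical model scores, systematically lower for group 0.
scores = np.clip(rng.normal(0.45 + 0.1 * group, 0.15, n), 0, 1)

target_rate = 0.5  # the selection rate we want each group to receive

adjusted = np.zeros(n, dtype=bool)
for g in (0, 1):
    mask = group == g
    # Selecting above each group's (1 - target_rate) quantile yields
    # approximately the target selection rate within that group.
    threshold = np.quantile(scores[mask], 1 - target_rate)
    adjusted[mask] = scores[mask] > threshold
    print(f"group {g}: threshold = {threshold:.2f}, "
          f"rate = {adjusted[mask].mean():.2f}")
```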
Read the paper here: Using artificial intelligence to make decisions