Fairness in Machine Learning with Tractable Models

Michael Varley, Vaishak Belle

Research output: Contribution to journal › Article › peer-review

Abstract

Machine learning techniques have become pervasive across a range of applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis and insurance pricing. The prevalence of machine learning has raised concerns about the potential for learned algorithms to become biased against certain groups. Many definitions of fairness have been proposed in the literature, but the fundamental task of reasoning about the relevant probabilistic events is a challenging one, owing to the intractability of inference.

The focus of this paper is taking steps towards the application of tractable probabilistic models to fairness in machine learning. Tractable probabilistic models have recently emerged that guarantee that conditional marginals can be computed in time linear in the size of the model. In particular, we show that sum-product networks (SPNs) enable an effective technique for determining the statistical relationships between protected attributes and other training variables. We also motivate the concept of "fairness through percentile equivalence", a new definition predicated on the notion that individuals at the same percentile of their respective distributions should be treated equivalently; this prevents the unfair penalisation of individuals who lie at the extremities of their respective distributions.
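The percentile-equivalence idea can be illustrated with a small sketch: an individual's score is mapped to the score at the same empirical percentile of a reference group's distribution. The group names, distributions, and function below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic score distributions for two hypothetical groups.
# (Means, spreads, and sample sizes are assumptions, not the paper's data.)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=60, scale=10, size=10_000)  # e.g. raw exam scores
group_b = rng.normal(loc=70, scale=8, size=10_000)   # reference distribution

def percentile_equivalent(score, source, target):
    """Map `score` from its source distribution to the score at the
    same empirical percentile of the target distribution."""
    pct = (source < score).mean() * 100.0   # empirical percentile in source
    return np.percentile(target, pct)       # same percentile in target

# A median-level score in group A maps to a median-level score in group B.
adjusted = percentile_equivalent(60.0, group_a, group_b)
```

Because the mapping is defined percentile-by-percentile, individuals in the tails of their own distribution are not compared against an absolute threshold drawn from a differently-shaped distribution.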

We compare the efficacy of this pre-processing technique with an alternative approach that assumes an additive contribution. When the two approaches were compared on a data set containing the results of law school applicants, the percentile equivalence method reduced the average underestimation of the exam scores of black applicants at the bottom end of their conditional distribution by about a fifth. We conclude by outlining potential improvements to our existing methodology and suggest opportunities for further work in this field.
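The additive baseline can be sketched as a simple location shift: every score in one group is offset by the difference of group means, so only the centre of the distribution is corrected, not its shape. The data below are synthetic and the exact form of the paper's additive model is an assumption.

```python
import numpy as np

# Synthetic groups with different means AND different spreads
# (all numbers here are invented for illustration).
rng = np.random.default_rng(1)
group_a = rng.normal(loc=60, scale=10, size=10_000)
group_b = rng.normal(loc=70, scale=8, size=10_000)

# Additive correction: shift group A by the difference of group means.
shift = group_b.mean() - group_a.mean()
additive_adjusted = group_a + shift
```

Since the shift leaves the spread of group A unchanged, individuals at the extremes of group A remain over- or under-corrected relative to group B whenever the two distributions differ in shape, which is the situation percentile equivalence is designed to handle.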
Original language: English
Article number: 106715
Number of pages: 31
Journal: Knowledge-Based Systems
Volume: 215
Early online date: 9 Jan 2021
DOIs
Publication status: Published - 5 Mar 2021

