The use of automated risk assessment tools to predict a defendant’s risk of recidivism is necessarily unfair. There is a tradeoff between equal treatment and equal outcomes, a tradeoff that constitutes the “impossibility of fairness” problem in machine learning. This article provides an account of algorithmic fairness that centers on equal treatment and requires the use of equally confirmatory algorithmic evidence. The analysis relies on a Bayesian account of evidence to assess AI predictions of recidivism risk as evidence for or against hypotheses about a black defendant’s and a white defendant’s probability of future rearrest. Such predictions are shown to provide weaker confirmatory evidence of a black defendant’s future recidivism risk than of a white defendant’s. Thus, the use of such evidence is necessarily unfair to black defendants: it violates equal treatment and therefore cannot meet a necessary condition of algorithmic fairness. The proposed account of algorithmic fairness provides the theoretical resources to avoid the “impossibility of fairness” problem. On this view, algorithmic fairness is neither inevitable nor impossible. By requiring equally confirmatory scores, rather than simply the same scores, decision makers can both satisfy equal treatment and reduce racial disparities in criminal sentencing.
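
A minimal sketch of the evidential claim, assuming the likelihood-ratio measure of Bayesian confirmation (whether the article adopts this particular measure, rather than another confirmation measure, is an assumption here): let H be the hypothesis that a given defendant will be rearrested, and let E be the event that the tool assigns that defendant a high-risk score. The confirmatory strength of the score can then be expressed as

\[
  \mathrm{LR}(E, H) \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)},
\]

and a score is equally confirmatory across groups only if \(\mathrm{LR}(E, H \mid \text{black}) = \mathrm{LR}(E, H \mid \text{white})\). If, for example, the false positive rate \(P(E \mid \lnot H)\) is higher for one group while the true positive rate is comparable, the likelihood ratio is lower for that group, so the same high-risk score carries weaker confirmatory weight for a member of that group even though both defendants receive identical scores.
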

