Inherent Trade-Offs in the Fair Determination of Risk Scores

Jon Kleinberg, Sendhil Mullainathan & Manish Raghavan

This is a commentary on an arXiv preprint (17 Nov 2016). Please note that I have not reviewed the maths and statistics.

Abstract (edited)

Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that, except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously.

The conditions are

  1. calibration within groups,
  2. balance for the negative class, and
  3. balance for the positive class.

Moreover, a version of this fact holds in an approximate sense as well.
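For reference, the three conditions can be stated roughly as follows. This is my own paraphrase and notation, not a quotation from the paper: $\sigma(i)$ is the risk score assigned to person $i$, and $G_1, G_2$ are the two groups.

```latex
% Calibration within groups: among people in group G_t who receive score v,
% a fraction v should actually belong to the positive class.
\Pr\bigl[\, i \text{ is positive} \mid \sigma(i) = v,\ i \in G_t \,\bigr] = v
  \qquad \text{for all scores } v \text{ and groups } G_t

% Balance for the negative class: negatives receive the same average score in both groups.
\mathbb{E}\bigl[\sigma(i) \mid i \text{ is negative},\ i \in G_1\bigr]
  = \mathbb{E}\bigl[\sigma(i) \mid i \text{ is negative},\ i \in G_2\bigr]

% Balance for the positive class: positives receive the same average score in both groups.
\mathbb{E}\bigl[\sigma(i) \mid i \text{ is positive},\ i \in G_1\bigr]
  = \mathbb{E}\bigl[\sigma(i) \mid i \text{ is positive},\ i \in G_2\bigr]
```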

Kleinberg earned his spurs with his algorithmic work on the small-world ("six degrees of separation") phenomenon, building on the Watts–Strogatz model.

To take one simple example, suppose one wants to determine the risk that a person is a carrier for a disease X, and suppose that a higher fraction of women than men are carriers. Then this result implies that in any test designed to estimate the probability that someone is a carrier of X, at least one of the following undesirable properties must hold:

(a) the test’s probability estimates are systematically skewed upward or downward for at least one gender; or

(b) the test assigns a higher average risk estimate to healthy people (non-carriers) in one gender than the other; or

(c) the test assigns a higher average risk estimate to carriers of the disease in one gender than the other.

The point is that this trade-off among (a), (b), and (c) is not a fact about medicine; it is simply a fact about risk estimates when the base rates differ between two groups.
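To make that concrete, here is a tiny simulation (entirely hypothetical: the group labels, base rates, and population size are mine, not from the paper). It uses the trivially calibrated score that assigns every person their group's base rate, and checks which of (a), (b), (c) occur when the base rates differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base rates: fraction of carriers of disease X in each group.
base_rates = {"women": 0.20, "men": 0.10}
n = 100_000  # people simulated per group

for group, p in base_rates.items():
    carrier = rng.random(n) < p     # true carrier status, drawn at the base rate
    score = np.full(n, p)           # trivially calibrated score: everyone gets the base rate

    # (a) Calibration within the group: among people scored p, a fraction ~p are carriers.
    frac_carriers_at_score_p = carrier[score == p].mean()

    # (b) / (c) Balance: average score among non-carriers and among carriers.
    avg_score_noncarriers = score[~carrier].mean()
    avg_score_carriers = score[carrier].mean()

    print(f"{group:>5}: carriers at score {p:.2f} = {frac_carriers_at_score_p:.3f}, "
          f"avg score non-carriers = {avg_score_noncarriers:.3f}, "
          f"avg score carriers = {avg_score_carriers:.3f}")
```

Running this, each group's score is well calibrated, so (a) does not occur; but non-carriers average a score of about 0.20 in one group and 0.10 in the other, and likewise for carriers, so both (b) and (c) do. Adjusting the scores to equalize those averages would break calibration for at least one group, which is the trade-off the theorem formalizes.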

Thus it appears that being fair is not all that easy if you are a computer: even if fairness is your intention, you cannot satisfy every reasonable notion of it at once.
