# About Algorithmic Fairness

Below are the mathematical definitions of the fairness metrics in the library.

## Average Odds

Average Odds denotes the average of the difference in false positive rate (FPR) and the difference in true positive rate (TPR) between group 1 and group 2: \(\tfrac{1}{2}\left[(FPR_1 - FPR_2) + (TPR_1 - TPR_2)\right]\).
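As a minimal sketch of this formula (assuming binary labels and predictions and a group indicator coded 1 or 2; all names here are illustrative, not the library's API):

```python
# Sketch of the average odds difference for two groups.
# Assumes binary y_true/y_pred and a group indicator coded 1 or 2;
# names are illustrative, not the library's API.

def rates(y_true, y_pred):
    """Return (FPR, TPR) for one group's labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return fp / neg, tp / pos

def average_odds_difference(y_true, y_pred, group):
    fpr1, tpr1 = rates(*zip(*[(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]))
    fpr2, tpr2 = rates(*zip(*[(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 2]))
    return 0.5 * ((fpr1 - fpr2) + (tpr1 - tpr2))

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 2, 2, 2, 2]
print(average_odds_difference(y_true, y_pred, group))  # 0.0
```

Note that the FPR and TPR gaps can cancel (as in the toy data above, where they are +0.5 and −0.5), so a value near zero does not by itself imply both rates are equal.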

## Disparate Impact

Disparate Impact is the ratio of the rates of “positive” predictions in a binary classification task for group 1 and group 2: \(DI = P(\hat{y} = 1 \mid g = 1) \,/\, P(\hat{y} = 1 \mid g = 2)\).
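A sketch under the same assumptions (binary predictions, groups coded 1 and 2; names illustrative):

```python
# Sketch of disparate impact: ratio of positive-prediction rates
# between group 1 and group 2 (names illustrative, not the library's API).

def disparate_impact(y_pred, group):
    rate1 = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    rate2 = sum(p for p, g in zip(y_pred, group) if g == 2) / group.count(2)
    return rate1 / rate2

y_pred = [1, 0, 0, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 2, 2, 2, 2]
print(disparate_impact(y_pred, group))  # 0.5
```

A value of 1 indicates parity between the groups; the widely cited “80% rule” flags ratios below 0.8.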

## Equal Opportunity

Equal Opportunity calculates the ratio of true positives to positive examples in the dataset, \(TPR = TP/P\), conditioned on a protected attribute, and compares it across groups.
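A sketch of the per-group TPR this metric compares (names illustrative; assumes each group has at least one positive example):

```python
# Sketch of the per-group true positive rate TPR = TP / P
# (names illustrative, not the library's API).

def tpr(y_true, y_pred, group, g):
    tp = sum(1 for t, p, gg in zip(y_true, y_pred, group)
             if gg == g and t == 1 and p == 1)
    pos = sum(1 for t, gg in zip(y_true, group) if gg == g and t == 1)
    return tp / pos

y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 2, 2, 2, 2]
print(tpr(y_true, y_pred, group, 1))  # 0.5
print(tpr(y_true, y_pred, group, 2))  # 1.0
```

The metric then compares the two per-group values, e.g. as a ratio or a difference.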

## FNR Difference

FNR Difference measures the equality (or lack thereof) of the false negative rate, \(FNR = FN/P\), across groups. In practice, this metric is implemented as the difference between the metric value for group 1 and group 2.
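A sketch of that difference (names illustrative; assumes each group has positive examples):

```python
# Sketch of the FNR difference, with FNR = FN / P computed per group
# (names illustrative, not the library's API).

def fnr(y_true, y_pred):
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fn / sum(y_true)

def fnr_difference(y_true, y_pred, group):
    def per_group(g):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
        return fnr([t for t, _ in pairs], [p for _, p in pairs])
    return per_group(1) - per_group(2)

y_true = [1, 1, 0, 1, 1, 0]
y_pred = [0, 1, 0, 1, 1, 0]
group  = [1, 1, 1, 2, 2, 2]
print(fnr_difference(y_true, y_pred, group))  # 0.5
```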

## FOR Difference

FOR Difference measures the equality (or lack thereof) across groups of the rate of inaccurate “negative” predictions by the model. It is calculated using the ratio of false negatives to predicted negative examples, \(FOR = FN/(FN + TN)\), conditioned on a protected attribute.
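A sketch of the difference, with the false omission rate taken over the model's negative predictions (names illustrative; assumes each group has predicted negatives):

```python
# Sketch of the FOR difference, with FOR = FN / (FN + TN) per group
# (names illustrative, not the library's API).

def false_omission_rate(y_true, y_pred):
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fn / (fn + tn)

def for_difference(y_true, y_pred, group):
    def per_group(g):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
        return false_omission_rate([t for t, _ in pairs], [p for _, p in pairs])
    return per_group(1) - per_group(2)

y_true = [1, 0, 0, 1, 1, 0, 0, 1]
y_pred = [0, 0, 1, 1, 1, 0, 0, 1]
group  = [1, 1, 1, 1, 2, 2, 2, 2]
print(for_difference(y_true, y_pred, group))  # 0.5
```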

## Generalized Entropy Index

Generalized Entropy Index is proposed as a unified individual and group fairness measure in [1]. With benefit scores \(b_i = \hat{y}_i - y_i + 1\) and mean benefit \(\mu\), the index for parameter \(\alpha \neq 0, 1\) is

\[\mathcal{E}(\alpha) = \frac{1}{n\,\alpha(\alpha - 1)} \sum_{i=1}^{n} \left[\left(\frac{b_i}{\mu}\right)^{\alpha} - 1\right]\]
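A sketch of the index for \(\alpha \neq 0, 1\), using the benefit scores \(b_i = \hat{y}_i - y_i + 1\) from [1] (names illustrative):

```python
# Sketch of the generalized entropy index over benefit scores
# b_i = y_pred_i - y_true_i + 1 (names illustrative, not the library's API).

def generalized_entropy_index(y_true, y_pred, alpha=2):
    b = [p - t + 1 for t, p in zip(y_true, y_pred)]
    mu = sum(b) / len(b)
    return sum((bi / mu) ** alpha - 1 for bi in b) / (len(b) * alpha * (alpha - 1))

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 0]
print(generalized_entropy_index(y_true, y_pred, alpha=2))  # 0.25
```

The expression above is undefined at \(\alpha = 1\); that limit case is the Theil index, covered below.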

- References:
  - [1] T. Speicher, H. Heidari, N. Grgić-Hlača, K. P. Gummadi, A. Singla, A. Weller, and M. B. Zafar, “A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices,” KDD 2018.

## Predictive Equality

Predictive Equality is defined as the situation in which the accuracy of decisions is equal across two groups, as measured by the false positive rate (FPR).
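A sketch measuring this as the FPR gap between the two groups (names illustrative; assumes each group has negative examples):

```python
# Sketch of predictive equality as the FPR gap between two groups
# (names illustrative, not the library's API).

def fpr(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    neg = sum(1 for t in y_true if t == 0)
    return fp / neg

def predictive_equality_difference(y_true, y_pred, group):
    def per_group(g):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
        return fpr([t for t, _ in pairs], [p for _, p in pairs])
    return per_group(1) - per_group(2)

y_true = [0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
group  = [1, 1, 1, 2, 2, 2]
print(predictive_equality_difference(y_true, y_pred, group))  # 0.5
```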

## Statistical Parity

Statistical Parity measures the difference in the probability of a positive outcome between two groups: \(P(\hat{y} = 1 \mid g = 1) - P(\hat{y} = 1 \mid g = 2)\).
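A sketch of that difference over binary predictions (names illustrative):

```python
# Sketch of statistical parity difference: gap in positive-prediction
# rates between two groups (names illustrative, not the library's API).

def statistical_parity_difference(y_pred, group):
    def rate(g):
        return sum(p for p, gg in zip(y_pred, group) if gg == g) / group.count(g)
    return rate(1) - rate(2)

y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 2, 2, 2, 2]
print(statistical_parity_difference(y_pred, group))  # 0.25
```

Unlike Disparate Impact, which reports the same gap as a ratio, a value of 0 here indicates parity.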

## Theil Index

Theil Index is the generalized entropy index with \(\alpha = 1\). See Generalized Entropy Index.
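A sketch of the \(\alpha = 1\) limit, \(T = \frac{1}{n}\sum_i \frac{b_i}{\mu}\ln\frac{b_i}{\mu}\), with the usual convention \(0 \ln 0 = 0\) (names illustrative):

```python
# Sketch of the Theil index: the alpha -> 1 limit of the generalized
# entropy index over benefit scores b_i = y_pred_i - y_true_i + 1,
# skipping b_i = 0 terms per the 0 * ln(0) = 0 convention
# (names illustrative, not the library's API).
import math

def theil_index(y_true, y_pred):
    b = [p - t + 1 for t, p in zip(y_true, y_pred)]
    mu = sum(b) / len(b)
    return sum(bi / mu * math.log(bi / mu) for bi in b if bi > 0) / len(b)

y_true = [0, 1]
y_pred = [1, 0]
print(theil_index(y_true, y_pred))  # ln(2) ≈ 0.693
```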

## Equalized Odds

Equalized Odds is a bias mitigation technique in which a subset of the decisions of a binary classifier is flipped uniformly at random in each of two groups to achieve equality of TPR and FPR across the two groups, as proposed in [2]. The flip rate for each group is learned via constrained optimization.
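The effect of random flipping can be seen analytically: if a group's decisions are each flipped with probability \(p\), its expected TPR becomes \((1-p)\,TPR + p\,(1-TPR)\), and likewise for FPR. The flip rates below are assumed for illustration only; in [2] they come from the constrained optimization mentioned above.

```python
# Expected rate after flipping each decision with probability p.
# The flip probability used below is hypothetical, not learned.

def flipped_rate(rate, p):
    return (1 - p) * rate + p * (1 - rate)

# Suppose group 1 has TPR 0.9, FPR 0.3 and group 2 has TPR 0.7, FPR 0.1.
# Flipping group 1's decisions with p = 0.25 equalizes TPR but not FPR,
# which is why the flip rates must be optimized over both rates jointly.
print(flipped_rate(0.9, 0.25))  # ≈ 0.7 (matches group 2's TPR)
print(flipped_rate(0.3, 0.25))  # ≈ 0.4 (group 2's FPR is 0.1)
```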

- References:
  - [2] M. Hardt, E. Price, and N. Srebro, “Equality of Opportunity in Supervised Learning,” NeurIPS 2016.