Datasets and Software for Detecting Algorithmic Discrimination
What does it actually mean for an algorithm to be fair? Different researchers have used different notions of algorithmic fairness. We provide here three different ways of classifying fairness.
Group fairness, also referred to as statistical parity, is the requirement that protected groups be treated similarly to the advantaged group or to the population as a whole.
Individual fairness is the requirement that individuals be treated consistently.
Group fairness does not consider individual merit and may result in choosing less qualified members of a group, whereas individual fairness assumes a similarity metric of individuals for the classification task at hand, which is generally hard to find.
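As a concrete illustration, the sketch below measures group fairness as statistical parity: the difference in positive-decision rates between the protected group and everyone else. The function name and the toy data are our own hypothetical choices, not taken from the literature cited below.

```python
# A minimal sketch of checking group fairness as statistical parity.
# The decision and group vectors are invented toy data.

def statistical_parity_difference(decisions, group):
    """Difference in positive-decision rates between the protected
    group (group == 1) and the rest of the population. 0 means perfect
    statistical parity; negative values mean the protected group
    receives fewer positive decisions."""
    protected = [d for d, g in zip(decisions, group) if g == 1]
    others = [d for d, g in zip(decisions, group) if g == 0]
    return sum(protected) / len(protected) - sum(others) / len(others)

# 1 = positive decision (e.g., a loan granted); group 1 = protected.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group     = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(decisions, group))  # 0.75 - 0.25 = 0.5
```

An individual-fairness check, by contrast, would need the task-specific similarity metric mentioned above, which is why it is harder to operationalize.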
User fairness is violated when different users receive different content based on user attributes that should be protected, such as gender, race, ethnicity, or religion.
Content fairness refers to biases in the information received by any user, for example when some aspect is disproportionately represented in a query result or a news feed.
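One simple, hedged way to quantify such disproportionate representation is to compare a group's share of a query result with its share of the underlying collection; the function and variable names below are illustrative, not taken from the cited papers.

```python
# A sketch of measuring disproportionate representation in a result list.

def representation_ratio(result_groups, corpus_groups, target):
    """Share of `target` among the results divided by its share in the
    corpus. 1.0 means proportional representation; values below 1.0
    mean the group is under-represented in what users see."""
    result_share = result_groups.count(target) / len(result_groups)
    corpus_share = corpus_groups.count(target) / len(corpus_groups)
    return result_share / corpus_share

corpus = ["a"] * 50 + ["b"] * 50   # a balanced collection
top_10 = ["a"] * 8 + ["b"] * 2     # "b" appears in only 2 of 10 results
print(representation_ratio(top_10, corpus, "b"))  # 0.2 / 0.5 = 0.4
```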
Direct discrimination consists of rules or procedures that explicitly mention minority or disadvantaged groups based on sensitive discriminatory attributes related to group membership.
Indirect discrimination consists of rules or procedures that, while not explicitly mentioning discriminatory attributes, may intentionally or unintentionally generate discriminatory decisions. It exists because non-discriminatory attributes are correlated with discriminatory ones.
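The sketch below illustrates that mechanism: a facially neutral attribute (a hypothetical zip-code flag, invented for this example) correlates with group membership, so rules keyed on it can discriminate without ever naming the protected attribute.

```python
from statistics import correlation  # Python 3.10+

# Invented toy data: 1 = member of the protected group;
# 1 = lives in a particular area (the facially neutral attribute).
protected = [1, 1, 1, 1, 0, 0, 0, 0]
zip_area  = [1, 1, 1, 0, 0, 0, 0, 1]

# The higher the correlation, the better zip_area works as a proxy for
# the protected attribute (1.0 would be a perfect proxy).
print(correlation(protected, zip_area))  # 0.5 for this toy data
```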
Fair ranking algorithms are typically used to find the most suitable ordering of items. They are useful when a query returns many results, since most people will not scan through the entire list.
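To make the idea concrete, here is a deliberately simplified check inspired by the ranked group fairness condition of the FA*IR paper cited below: every top-k prefix of the ranking must contain a minimum share of protected candidates. The actual algorithm derives that per-prefix minimum from a binomial significance test; this sketch substitutes a plain floor rule.

```python
import math

def prefix_fair(ranking, min_share):
    """ranking: booleans in ranked order, True = protected candidate.
    Returns True if every prefix of length k contains at least
    floor(min_share * k) protected candidates (a simplification of
    FA*IR's binomial-test threshold)."""
    protected_seen = 0
    for k, is_protected in enumerate(ranking, start=1):
        protected_seen += is_protected
        if protected_seen < math.floor(min_share * k):
            return False
    return True

print(prefix_fair([True, False, True, False, False, True], 0.5))   # True
print(prefix_fair([False, False, False, True, True, True], 0.5))   # False
```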
Fair classification algorithms tackle the problem of classification subject to fairness constraints with respect to pre-defined sensitive attributes such as race or gender.
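As one hedged example of what such a constraint can look like at audit time, the sketch below compares true positive rates across a binary sensitive attribute, an "equal opportunity" style criterion from the broader fairness literature rather than from the specific papers cited here; all data are invented.

```python
def true_positive_rate(y_true, y_pred):
    # Fraction of actual positives that the classifier predicts positive.
    preds_on_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_on_positives) / len(preds_on_positives)

def tpr_gap(y_true, y_pred, sensitive):
    """Absolute gap in true positive rate between the two values of a
    binary sensitive attribute; 0 means the criterion is satisfied."""
    rates = {}
    for value in (0, 1):
        yt = [t for t, s in zip(y_true, sensitive) if s == value]
        yp = [p for p, s in zip(y_pred, sensitive) if s == value]
        rates[value] = true_positive_rate(yt, yp)
    return abs(rates[1] - rates[0])

y_true    = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred    = [1, 1, 0, 1, 0, 0, 1, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
print(tpr_gap(y_true, y_pred, sensitive))  # |1.0 - 0.33| ≈ 0.67
```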
These definitions of fairness can be found in the following literature:
Zehlike, Meike, et al. "FA*IR: A Fair Top-k Ranking Algorithm." Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17). doi:10.1145/3132847.3132938.
Pitoura, Evaggelia, et al. "On Measuring Bias in Online Information." doi:10.1145/3186549.3186553.
Hajian, Sara, et al. "Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). doi:10.1145/2939672.2945386.