

Poster in Workshop: Next Generation of AI Safety

AnonFair: A Flexible Toolkit for Algorithmic Fairness

Eoin Delaney · Zihao Fu · Sandra Wachter · Brent Mittelstadt · Chris Russell

Keywords: [ algorithmic fairness ] [ Trustworthy AI ] [ Fairness Toolkit ]


Abstract:

We present AnonFair, a new open-source toolkit for enforcing algorithmic fairness. Compared to existing toolkits: (i) we support NLP and computer vision classification as well as standard tabular problems; (ii) we support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges; (iii) our approach can optimize any measure that is a function of True Positives, False Positives, False Negatives, and True Negatives. This makes it easily extensible and far more expressive than existing toolkits: it supports 9/9 and 10/10 of the group metrics from two popular review papers. AnonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and pytorch, and is available online.
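To illustrate the expressiveness claim, here is a minimal sketch (independent of the AnonFair API, which is not shown in this abstract) of how any group metric written as a function of True Positives, False Positives, False Negatives, and True Negatives can be evaluated per protected group; the function names and toy data below are illustrative assumptions, not part of the toolkit.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    # Confusion-matrix counts for binary labels and predictions.
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    return tp, fp, fn, tn

def recall(tp, fp, fn, tn):
    # Recall (true positive rate) expressed purely in terms of
    # (TP, FP, FN, TN); any metric of this form can be swapped in.
    return tp / (tp + fn) if (tp + fn) else 0.0

def group_metric_gap(y_true, y_pred, groups, metric=recall):
    # Largest difference in the metric across protected groups
    # (with recall, this is the equal-opportunity gap).
    scores = [
        metric(*confusion_counts(y_true[groups == g], y_pred[groups == g]))
        for g in np.unique(groups)
    ]
    return max(scores) - min(scores)

# Toy data: group 0 has TPR 0.5, group 1 has TPR 1.0, so the gap is 0.5.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_metric_gap(y_true, y_pred, groups))  # → 0.5
```

Metrics such as precision, accuracy, or false-positive rate follow the same pattern: write them as a function of the four counts and pass them as `metric`.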
