Abstract
Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes.
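To give a flavour of the core idea, here is a minimal Python sketch of one piece of the setting: auditing a model for demographic parity while each user's sensitive attribute stays hidden via two-party additive secret sharing. This is a toy illustration under assumed conditions (two non-colluding parties, public binary predictions), not the paper's actual protocol, which uses full secure multi-party computation to also learn and verify models; all names in the snippet are hypothetical.

```python
# Toy sketch: estimating a demographic-parity gap over additively
# secret-shared sensitive attributes. Illustrative only; not the
# protocol from the paper.
import secrets

P = 2**61 - 1  # large prime modulus for additive secret sharing

def share(x):
    """Split integer x into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reveal(a, b):
    """Reconstruct a secret from its two shares."""
    return (a + b) % P

# Each user i holds a binary sensitive attribute z[i]; the model's
# binary predictions yhat[i] are known to the auditing parties.
z = [1, 0, 1, 1, 0, 0, 1, 0]
yhat = [1, 0, 0, 1, 1, 0, 1, 1]

# Users secret-share their attributes between two non-colluding
# parties, so neither party ever sees any individual z[i].
shares_A, shares_B = zip(*(share(zi) for zi in z))

# The group-wise counts are linear in z, so each party computes its
# share of the aggregates locally:
#   n1   = sum_i z[i]              (size of group z=1)
#   pos1 = sum_i yhat[i] * z[i]    (positive predictions in group z=1)
n1_A = sum(shares_A) % P
n1_B = sum(shares_B) % P
pos1_A = sum(y * a for y, a in zip(yhat, shares_A)) % P
pos1_B = sum(y * b for y, b in zip(yhat, shares_B)) % P

# Only the aggregates are reconstructed; individual attributes stay hidden.
n1 = reveal(n1_A, n1_B)
n0 = len(z) - n1
pos1 = reveal(pos1_A, pos1_B)
pos0 = sum(yhat) - pos1

# Demographic-parity gap: | P(yhat=1 | z=1) - P(yhat=1 | z=0) |
gap = abs(pos1 / n1 - pos0 / n0)
print(f"demographic parity gap: {gap:.3f}")
```

The point of the sketch is that the per-group counts are linear functions of the attribute vector, so each party can compute its share of the aggregate locally and only the aggregates are ever revealed; nonlinear steps such as fair model training require the heavier multi-party computation machinery the paper develops.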
Author Information
Niki Kilbertus (MPI Tübingen & Cambridge)
Adria Gascon (Alan Turing Institute / University of Warwick)
Matt Kusner (Alan Turing Institute)
Michael Veale (UCL)
Krishna Gummadi (MPI-SWS)
Adrian Weller (University of Cambridge, Alan Turing Institute)
Adrian Weller is a Senior Research Fellow in the Machine Learning Group at the University of Cambridge, a Faculty Fellow at the Alan Turing Institute, the UK's national institute for data science, and an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). He is interested in all aspects of artificial intelligence, its commercial applications and how it may be used to benefit society. At the CFI, he leads its project on Trust and Transparency. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University, and an undergraduate degree in mathematics from Trinity College, Cambridge.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Blind Justice: Fairness with Encrypted Sensitive Attributes »
  Fri Jul 13th 08:20 -- 08:30 AM, Room A6
More from the Same Authors
- 2020 Workshop: 5th ICML Workshop on Human Interpretability in Machine Learning (WHI) »
  Adrian Weller · Alice Xiang · Amit Dhurandhar · Been Kim · Dennis Wei · Kush Varshney · Umang Bhatt
- 2020 Poster: Stochastic Flows and Geometric Optimization on the Orthogonal Group »
  Krzysztof Choromanski · David Cheikhi · Jared Quincy Davis · Valerii Likhosherstov · Achille Nazaret · Achraf Bahamou · Xingyou Song · Mrugank Akarte · Jack Parker-Holder · Jacob Bergquist · Yuan Gao · Aldo Pacchiano · Tamas Sarlos · Adrian Weller · Vikas Sindhwani
- 2019 Workshop: Human In the Loop Learning (HILL) »
  Xin Wang · Xin Wang · Fisher Yu · Shanghang Zhang · Joseph Gonzalez · Yangqing Jia · Sarah Bird · Kush Varshney · Been Kim · Adrian Weller
- 2019 Poster: Unifying Orthogonal Monte Carlo Methods »
  Krzysztof Choromanski · Mark Rowland · Wenyu Chen · Adrian Weller
- 2019 Poster: On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning »
  Hoda Heidari · Vedant Nanda · Krishna Gummadi
- 2019 Oral: On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning »
  Hoda Heidari · Vedant Nanda · Krishna Gummadi
- 2019 Oral: Unifying Orthogonal Monte Carlo Methods »
  Krzysztof Choromanski · Mark Rowland · Wenyu Chen · Adrian Weller
- 2019 Poster: TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning »
  Tameem Adel · Adrian Weller
- 2019 Oral: TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning »
  Tameem Adel · Adrian Weller
- 2018 Poster: TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service »
  Amartya Sanyal · Matt Kusner · Adria Gascon · Varun Kanade
- 2018 Poster: Bucket Renormalization for Approximate Inference »
  Sungsoo Ahn · Michael Chertkov · Adrian Weller · Jinwoo Shin
- 2018 Oral: Bucket Renormalization for Approximate Inference »
  Sungsoo Ahn · Michael Chertkov · Adrian Weller · Jinwoo Shin
- 2018 Oral: TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service »
  Amartya Sanyal · Matt Kusner · Adria Gascon · Varun Kanade
- 2018 Poster: Structured Evolution with Compact Architectures for Scalable Policy Optimization »
  Krzysztof Choromanski · Mark Rowland · Vikas Sindhwani · Richard E Turner · Adrian Weller
- 2018 Poster: Discovering Interpretable Representations for Both Deep Generative and Discriminative Models »
  Tameem Adel · Zoubin Ghahramani · Adrian Weller
- 2018 Poster: Learning Independent Causal Mechanisms »
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2018 Oral: Learning Independent Causal Mechanisms »
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2018 Oral: Discovering Interpretable Representations for Both Deep Generative and Discriminative Models »
  Tameem Adel · Zoubin Ghahramani · Adrian Weller
- 2018 Oral: Structured Evolution with Compact Architectures for Scalable Policy Optimization »
  Krzysztof Choromanski · Mark Rowland · Vikas Sindhwani · Richard E Turner · Adrian Weller
- 2017 Workshop: Reliable Machine Learning in the Wild »
  Dylan Hadfield-Menell · Jacob Steinhardt · Adrian Weller · Smitha Milli
- 2017 Workshop: Workshop on Human Interpretability in Machine Learning (WHI) »
  Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov
- 2017 Poster: Lost Relatives of the Gumbel Trick »
  Matej Balog · Nilesh Tripuraneni · Zoubin Ghahramani · Adrian Weller
- 2017 Talk: Lost Relatives of the Gumbel Trick »
  Matej Balog · Nilesh Tripuraneni · Zoubin Ghahramani · Adrian Weller