

Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

Reconciling Predictive Multiplicity in Practice

Tina Behzad · Sílvia Casacuberta · Emily Diana · Alexander Tolbert


Abstract:

Many machine learning applications focus on predicting "individual probabilities"; for example, the probability that an individual develops a certain illness. Since these probabilities are inherently unknowable, a fundamental question arises: how should we resolve the common scenario in which different models trained on the same dataset produce different predictions for certain individuals? A well-known instance of this problem is the model multiplicity (MM) phenomenon, in which a collection of comparable models yields inconsistent predictions. Recently, Roth, Tolbert, and Weinstein proposed a reconciliation procedure (the "Reconcile algorithm") as a solution to this problem: given two disagreeing models, they show how the disagreement itself can be leveraged to falsify and improve at least one of the two models. In this paper, we perform an empirical analysis of the Reconcile algorithm on three well-known fairness datasets: COMPAS, Communities and Crime, and Adult. We clarify how Reconcile fits within the model multiplicity literature, and compare it to the main solutions proposed in the MM setting, demonstrating the efficacy of the Reconcile algorithm. Lastly, we propose ways of improving the Reconcile algorithm in theory and in practice.
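The abstract sketches the core mechanism of Reconcile: find the region where two models disagree substantially, and patch whichever model the observed labels falsify there. Below is a minimal, illustrative Python sketch of that loop. The function name `reconcile`, the additive patching step, and the parameters `alpha` (tolerated mass of the disagreement region) and `eps` (disagreement threshold) are our assumptions for exposition, not the authors' reference implementation.

```python
import numpy as np

def reconcile(f1, f2, X, y, alpha=0.05, eps=0.1, max_rounds=1000):
    """Illustrative sketch of a Reconcile-style loop (assumptions labeled
    in the lead-in): patch the prediction vectors of two disagreeing
    models until their disagreement region has small mass."""
    # Evaluate both models once; patching these vectors in place stands
    # in for patching the models themselves.
    p1 = np.asarray(f1(X), dtype=float)
    p2 = np.asarray(f2(X), dtype=float)
    for _ in range(max_rounds):
        # Disagreement region: points where predictions differ by >= eps.
        disagree = np.abs(p1 - p2) >= eps
        if disagree.mean() < alpha:
            break  # models approximately agree almost everywhere: done
        # Split the region by the sign of the disagreement; at least one
        # half carries at least half of the region's mass.
        gt = disagree & (p1 > p2)
        lt = disagree & (p1 < p2)
        region = gt if gt.mean() >= lt.mean() else lt
        # The empirical label mean on the chosen region is the evidence
        # that falsifies at least one of the two models there.
        v = y[region].mean()
        m1, m2 = p1[region].mean(), p2[region].mean()
        # Patch the model whose average prediction is farther from v by
        # shifting its predictions on the region toward v (clamped to [0, 1]).
        if abs(m1 - v) >= abs(m2 - v):
            p1[region] = np.clip(p1[region] + (v - m1), 0.0, 1.0)
        else:
            p2[region] = np.clip(p2[region] + (v - m2), 0.0, 1.0)
    return p1, p2
```

As we understand the paper's analysis, each such patch moves the updated model's average prediction on the disagreement region toward the empirical label mean, improving its squared-error (Brier) score, which is why the procedure terminates after a bounded number of rounds with models that largely agree.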
