

Contributed talk
in
Workshop: ICML Workshop on Algorithmic Recourse

CounterNet: End-to-End Training of Counterfactual Aware Predictions


Abstract:

This work presents CounterNet, a novel end-to-end learning framework which integrates predictive model training and counterfactual (CF) explanation generation into a single pipeline. Prior CF explanation techniques rely on solving separate, time-intensive optimization problems to find a CF example for every single input instance, and they suffer from a misalignment of objectives between model predictions and explanations, which leads to significant shortcomings in the quality of CF explanations. CounterNet, by contrast, integrates both prediction and explanation in the same framework, so CF example generation is optimized only once, jointly with the predictive model. We propose a novel variant of back-propagation to train CounterNet's network effectively. Finally, we conduct extensive experiments on multiple real-world datasets. Our results show that, for any new input instance, CounterNet generates high-quality predictions and corresponding CF examples (with high validity) significantly faster than existing state-of-the-art baselines.
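The joint training idea described above can be illustrated with a minimal toy sketch. This is not the authors' architecture or code: it uses a linear logistic predictor and a linear CF "generator" (x' = x + Gx), trained by alternating gradient steps, one plausible reading of a two-objective "variant of back-propagation". The loss terms (a validity loss pushing the CF example across the decision boundary, and a proximity penalty keeping it close to the input) and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: label is the sign of x0 + x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Predictor: logistic model sigmoid(x . w).
# Hypothetical CF "generator": a linear map producing x' = x + G x.
w = rng.normal(scale=0.1, size=2)
G = np.zeros((2, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam, n = 0.1, 0.01, len(y)
for _ in range(500):
    # Stage 1: gradient step on the predictor (cross-entropy loss).
    p = sigmoid(X @ w)
    w -= lr * X.T @ (p - y) / n

    # Stage 2: gradient step on the CF generator, predictor held fixed.
    # Validity loss: the CF prediction should flip to the opposite label.
    # Proximity penalty: lam * ||G x||^2 keeps CF examples close to x.
    Xcf = X + X @ G.T                          # row i is x_i + G x_i
    err = (sigmoid(Xcf @ w) - (1.0 - y)) / n   # d(validity loss)/d(logit)
    grad_G = np.outer(w, err @ X) + 2.0 * lam * G @ (X.T @ X) / n
    G -= lr * grad_G

# Evaluate: prediction accuracy, and validity = fraction of CF examples
# whose (frozen-predictor) class differs from the original prediction.
pred = sigmoid(X @ w) > 0.5
pred_cf = sigmoid((X + X @ G.T) @ w) > 0.5
accuracy = float((pred == (y > 0.5)).mean())
validity = float((pred_cf != pred).mean())
print(f"accuracy={accuracy:.2f} validity={validity:.2f}")
```

Because both objectives share the optimization loop, the generator only has to be trained once and afterwards produces a CF example with a single forward pass, rather than solving a fresh optimization problem per input, which is the speed advantage the abstract claims.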