Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact
Smooth Ambiguity-Averse Preferences and Bayesian Nonparametrics for Data-Driven Distributionally Robust Optimization
Nicola Bariletto · Nhat Ho
Training machine learning and statistical models often involves optimizing a data-driven risk criterion. The risk is usually computed with respect to the empirical data distribution, but this may result in poor and unstable out-of-sample performance due to distributional uncertainty. In the spirit of distributionally robust optimization, we propose a novel robust criterion by combining insights from a recent decision-theoretic model of smooth ambiguity-averse preferences and from Bayesian nonparametric statistics. The optimization procedure provably enjoys finite-sample and asymptotic statistical performance guarantees. Moreover, the smoothness of the criterion and the properties of the employed Dirichlet process prior allow for easy-to-optimize approximations. The method also achieves promising empirical results, improving and stabilizing the out-of-sample performance of popular statistical learning algorithms.
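To make the criterion concrete, here is a minimal sketch, not the authors' implementation. It assumes the Dirichlet process posterior over data distributions is approximated by the Bayesian bootstrap (Dirichlet(1, ..., 1) weights on the observed data), an exponential ambiguity function phi(t) = exp(gamma * t) used in its certainty-equivalent (log) form, and a squared-error regression loss; `gamma`, `M`, the learning rate, and the toy data are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0     # ambiguity-aversion parameter (illustrative choice)
n, d, M = 100, 3, 200

# Toy linear-regression data.
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=n)

# M Bayesian-bootstrap draws approximating the DP posterior:
# each row is a Dirichlet(1, ..., 1) weight vector on the n data points.
W = rng.dirichlet(np.ones(n), size=M)            # shape (M, n)

def criterion(theta):
    """Certainty-equivalent smooth-ambiguity criterion:
    (1/gamma) * log mean_m exp(gamma * R_m(theta)),
    where R_m is the weighted empirical risk under posterior draw m."""
    R = W @ ((X @ theta - y) ** 2)               # (M,) risk per draw
    a = gamma * R
    return (a.max() + np.log(np.mean(np.exp(a - a.max())))) / gamma

# Minimize the robust criterion by gradient descent: the gradient is a
# softmax-weighted average of the per-draw risk gradients, so draws with
# higher risk receive more weight (ambiguity aversion).
theta = np.zeros(d)
lr = 0.1
for _ in range(300):
    resid = X @ theta - y
    R = W @ (resid ** 2)                          # risk under each draw
    s = np.exp(gamma * (R - R.max()))
    s /= s.sum()                                  # softmax over posterior draws
    grad_per_draw = 2.0 * (W * resid) @ X         # (M, d): gradient of each R_m
    theta -= lr * (s @ grad_per_draw)

print("robust estimate:", theta)
print("criterion value:", criterion(theta))
```

The log (certainty-equivalent) form is a monotone transform of the exponential smooth-ambiguity objective, so it shares the same minimizers while staying numerically stable via the log-sum-exp trick; setting gamma close to zero recovers plain posterior-averaged risk, while larger gamma increases the penalty on posterior draws with high risk. This is one plausible reading of the abstract's "easy-to-optimize approximations", not the specific scheme developed in the paper.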