Automated hyperparameter optimization (HPO) can help practitioners obtain peak performance in machine learning models. However, it often provides little insight into how different hyperparameters affect the final model performance. This lack of comprehensibility and transparency makes it difficult to trust and understand the automated HPO process and its results. We suggest using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO, and we discuss in particular the popular case of Bayesian optimization (BO). BO tends to focus on promising regions with potentially high-performance configurations and thus induces a sampling bias. Hence, many IML techniques, such as the Partial Dependence Plot (PDP), carry the risk of generating biased interpretations. By leveraging the posterior uncertainty of the BO surrogate model, we introduce a variant of the PDP with estimated confidence bands. In addition, we propose to partition the hyperparameter space to obtain more confident and reliable PDPs in relevant sub-regions. In an experimental study, we provide quantitative evidence for the increased quality of the PDPs within sub-regions.
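The core idea of a PDP with confidence bands from a surrogate model can be sketched as follows. This is a minimal, hypothetical illustration with a hand-rolled Gaussian-process surrogate on toy data, not the paper's exact estimator: the band here is the naive average of pointwise posterior standard deviations, and all names (`rbf`, `gp_predict`, the toy objective) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel between two sets of configurations
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Toy HPO archive: 40 evaluated configurations of two hyperparameters
X = rng.uniform(0, 1, (40, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]

# Fit a GP surrogate (the kind of model BO maintains internally)
K = rbf(X, X) + 1e-4 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

def gp_predict(Xs):
    # Posterior mean and standard deviation at new configurations
    Ks = rbf(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v**2).sum(0), 0.0, None)
    return mu, np.sqrt(var)

# PDP for hyperparameter 1: fix it at each grid value, average the
# surrogate's posterior over the observed values of hyperparameter 2
grid = np.linspace(0, 1, 20)
pdp = []
for g in grid:
    Xg = X.copy()
    Xg[:, 0] = g
    mu, sd = gp_predict(Xg)
    # Naive band: mean posterior std (a placeholder for the paper's estimator)
    pdp.append((mu.mean(), sd.mean()))
```

The resulting `pdp` pairs (mean, band width) could then be plotted as a curve with a shaded confidence region; the sampling bias of BO would show up as wide bands in rarely explored parts of the grid.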
Author Information
Julia Moosbauer (Department of Statistics)
Julia Herbinger (Ludwig-Maximilians-Universität)
Giuseppe Casalicchio
Marius Lindauer (Leibniz Universität Hannover)
Bernd Bischl (LMU)