Poster
Active Learning for Distributionally Robust Level-Set Estimation
Yu Inatsu · Shogo Iwazaki · Ichiro Takeuchi
Many cases exist in which a black-box function $f$ with high evaluation cost depends on two types of variables $\bm x$ and $\bm w$, where $\bm x$ is a controllable \emph{design} variable and $\bm w$ are uncontrollable \emph{environmental} variables that have random variation following a certain distribution $P$. In such cases, an important task is to find the range of design variables $\bm x$ such that the function $f(\bm x, \bm w)$ has the desired properties by incorporating the random variation of the environmental variables $\bm w$. A natural measure of robustness is the probability that $f(\bm x, \bm w)$ exceeds a given threshold $h$, which is known as the \emph{probability threshold robustness} (PTR) measure in the literature on robust optimization. However, this robustness measure cannot be correctly evaluated when the distribution $P$ is unknown. In this study, we address this problem by considering the \textit{distributionally robust PTR} (DRPTR) measure, which considers the worst-case PTR within given candidate distributions. Specifically, we study the problem of efficiently identifying a reliable set $H$, defined as the region in which the DRPTR measure exceeds a certain desired probability $\alpha$; this can be interpreted as a level-set estimation (LSE) problem for DRPTR. We propose a theoretically grounded and computationally efficient active learning method for this problem. We show that the proposed method has theoretical guarantees on convergence and accuracy, and we confirm through numerical experiments that it outperforms existing methods.
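The PTR and DRPTR measures from the abstract can be illustrated with a small Monte Carlo sketch. Everything below is a hypothetical stand-in: the objective `f`, the candidate distributions, and the threshold values are illustrative choices, not the paper's method, model, or benchmarks (in particular, this ignores the Gaussian-process surrogate and active-learning strategy the paper actually proposes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box objective f(x, w); a stand-in for the expensive function.
def f(x, w):
    return np.sin(3 * x) + 0.5 * w

def ptr(x, w_samples, h):
    """PTR measure: P_w[f(x, w) >= h], estimated by Monte Carlo over w samples."""
    return np.mean(f(x, w_samples) >= h)

def drptr(x, candidate_dists, h, n_mc=10_000):
    """DRPTR measure: worst-case PTR over the given candidate distributions."""
    return min(ptr(x, dist(n_mc), h) for dist in candidate_dists)

# Illustrative candidate distributions for the environmental variable w.
candidates = [
    lambda n: rng.normal(0.0, 0.3, n),
    lambda n: rng.normal(0.1, 0.5, n),
    lambda n: rng.uniform(-0.5, 0.5, n),
]

h, alpha = 0.5, 0.9
xs = np.linspace(0, 1, 101)
# Estimated reliable set H: designs whose worst-case PTR exceeds alpha.
H = [x for x in xs if drptr(x, candidates, h) > alpha]
```

Here the reliable set $H$ is found by brute-force evaluation over a grid; the point of the paper's active learning method is to identify $H$ while querying the expensive $f$ as few times as possible, rather than exhaustively as above.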
Author Information
Yu Inatsu (Nagoya Institute of Technology)
Shogo Iwazaki (Nagoya Institute of Technology)
Ichiro Takeuchi (Nagoya Institute of Technology / RIKEN)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Active Learning for Distributionally Robust Level-Set Estimation »
  Fri. Jul 23rd 02:40 -- 02:45 AM
More from the Same Authors
- 2023 Poster: Randomized Gaussian Process Upper Confidence Bound with Tighter Bayesian Regret Bounds »
  Shion Takeno · Yu Inatsu · Masayuki Karasuyama
- 2022 Poster: Bayesian Optimization for Distributionally Robust Chance-constrained Problem »
  Yu Inatsu · Shion Takeno · Masayuki Karasuyama · Ichiro Takeuchi
- 2022 Spotlight: Bayesian Optimization for Distributionally Robust Chance-constrained Problem »
  Yu Inatsu · Shion Takeno · Masayuki Karasuyama · Ichiro Takeuchi
- 2021 Poster: More Powerful and General Selective Inference for Stepwise Feature Selection using Homotopy Method »
  Kazuya Sugiyama · Vo Nguyen Le Duy · Ichiro Takeuchi
- 2021 Spotlight: More Powerful and General Selective Inference for Stepwise Feature Selection using Homotopy Method »
  Kazuya Sugiyama · Vo Nguyen Le Duy · Ichiro Takeuchi
- 2020 Poster: Multi-fidelity Bayesian Optimization with Max-value Entropy Search and its Parallelization »
  Shion Takeno · Hitoshi Fukuoka · Yuhki Tsukada · Toshiyuki Koyama · Motoki Shiga · Ichiro Takeuchi · Masayuki Karasuyama
- 2019 Poster: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2019 Oral: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2017 Poster: Selective Inference for Sparse High-Order Interaction Models »
  Shinya Suzumura · Kazuya Nakagawa · Yuta Umezu · Koji Tsuda · Ichiro Takeuchi
- 2017 Talk: Selective Inference for Sparse High-Order Interaction Models »
  Shinya Suzumura · Kazuya Nakagawa · Yuta Umezu · Koji Tsuda · Ichiro Takeuchi