We consider the problem of configuring general-purpose solvers to run efficiently on problem instances drawn from an unknown distribution. The goal of the configurator is to find a configuration that runs fast on average on most instances, and to do so with the least amount of total work. It can run a chosen solver on a random instance until the solver finishes or a timeout is reached. We propose LeapsAndBounds, an algorithm that tests configurations on randomly selected problem instances for longer and longer times. We prove that the capped expected runtime of the configuration returned by LeapsAndBounds is close to the optimal expected runtime, while our algorithm's running time is near-optimal. Our results show that LeapsAndBounds is more efficient than the recent algorithm of Kleinberg et al. (2017), which, to our knowledge, is the only other algorithm configuration method with non-trivial theoretical guarantees. Experimental results on configuring a public SAT solver on a new benchmark dataset also attest to the superiority of our method.
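To make the abstract's "longer and longer times" idea concrete, the following is a minimal, hypothetical Python sketch of testing configurations on random instances under geometrically growing runtime caps and returning the one with the smallest capped mean runtime. It illustrates the general strategy only, not the authors' LeapsAndBounds pseudocode or its guarantees; `run_solver`, `sample_instances`, and the toy runtime model are assumptions made for the example.

```python
# Illustrative sketch only -- NOT the LeapsAndBounds algorithm itself.
# Each configuration is evaluated on fresh random instances with a capped
# runtime, and the cap (timeout) doubles between phases.

import random


def run_solver(config, instance, timeout):
    """Hypothetical solver call: returns the runtime, capped at `timeout`."""
    true_runtime = random.expovariate(1.0 / config)  # toy runtime model
    return min(true_runtime, timeout)


def capped_mean_runtime(config, sample_instances, timeout, n_samples):
    """Average runtime of `config` over random instances, capped at `timeout`."""
    total = 0.0
    for _ in range(n_samples):
        total += run_solver(config, sample_instances(), timeout)
    return total / n_samples


def doubling_configuration_search(configs, sample_instances,
                                  initial_timeout=1.0, phases=6):
    """Test every configuration with geometrically growing timeouts and
    return the one with the smallest capped mean runtime in the last phase."""
    timeout = initial_timeout
    estimates = {}
    for phase in range(phases):
        n_samples = 2 ** phase  # sharpen the estimates as the cap grows
        estimates = {
            config: capped_mean_runtime(config, sample_instances,
                                        timeout, n_samples)
            for config in configs
        }
        timeout *= 2  # longer and longer caps, as in the abstract
    best_config = min(estimates, key=estimates.get)
    return best_config, estimates[best_config]


if __name__ == "__main__":
    # Toy usage: a "configuration" here is just its mean-runtime parameter.
    configs = [0.5, 1.0, 2.0, 4.0]
    cfg, est = doubling_configuration_search(configs, lambda: None)
    print(f"selected configuration {cfg} with capped mean runtime {est:.2f}")
```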
Author Information
Gellért Weisz (DeepMind)
Andras Gyorgy (DeepMind)
Csaba Szepesvari (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration »
  Wed. Jul 11th 09:30 -- 09:40 AM Room A6
More from the Same Authors
- 2022 : Non-stationary Bandits and Meta-Learning with a Small Set of Optimal Arms »
  MohammadJavad Azizi · Thang Duong · Yasin Abbasi-Yadkori · Claire Vernade · Andras Gyorgy · Mohammad Ghavamzadeh
- 2022 : Improved Generalization Bounds for Transfer Learning via Neural Collapse »
  Tomer Galanti · Andras Gyorgy · Marcus Hutter
- 2023 Poster: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost »
  Sanae Amani · Tor Lattimore · Andras Gyorgy · Lin Yang
- 2023 Poster: Understanding Self-Predictive Learning for Reinforcement Learning »
  Yunhao Tang · Zhaohan Guo · Pierre Richemond · Bernardo Avila Pires · Yash Chandak · Remi Munos · Mark Rowland · Mohammad Gheshlaghi Azar · Charline Le Lan · Clare Lyle · Andras Gyorgy · Shantanu Thakoor · Will Dabney · Bilal Piot · Daniele Calandriello · Michal Valko
- 2021 Poster: Adapting to Delays and Data in Adversarial Multi-Armed Bandits »
  Andras Gyorgy · Pooria Joulani
- 2021 Spotlight: Adapting to Delays and Data in Adversarial Multi-Armed Bandits »
  Andras Gyorgy · Pooria Joulani
- 2020 Poster: Non-Stationary Delayed Bandits with Intermediate Observations »
  Claire Vernade · Andras Gyorgy · Timothy Mann
- 2020 Poster: Learning with Good Feature Representations in Bandits and in RL with a Generative Model »
  Tor Lattimore · Csaba Szepesvari · Gellért Weisz
- 2020 Poster: A simpler approach to accelerated optimization: iterative averaging meets optimism »
  Pooria Joulani · Anant Raj · Andras Gyorgy · Csaba Szepesvari
- 2019 Poster: POLITEX: Regret Bounds for Policy Iteration using Expert Prediction »
  Yasin Abbasi-Yadkori · Peter Bartlett · Kush Bhatia · Nevena Lazic · Csaba Szepesvari · Gellért Weisz
- 2019 Poster: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Oral: POLITEX: Regret Bounds for Policy Iteration using Expert Prediction »
  Yasin Abbasi-Yadkori · Peter Bartlett · Kush Bhatia · Nevena Lazic · Csaba Szepesvari · Gellért Weisz
- 2019 Oral: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Poster: CapsAndRuns: An Improved Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari
- 2019 Oral: CapsAndRuns: An Improved Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari