Tree search algorithms, such as branch-and-bound, are the most widely used tools for solving combinatorial problems. These algorithms recursively partition the search space to find an optimal solution. To keep the tree small, it is crucial to carefully decide, when expanding a tree node, which variable to branch on at that node to partition the remaining space. Many partitioning techniques have been proposed, but no theory describes which is optimal. We show how to use machine learning to determine an optimal weighting of any set of partitioning procedures for the instance distribution at hand using samples. Via theory and experiments, we show that learning to branch is both practical and hugely beneficial.
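To make the abstract's core idea concrete (learning a weighting of branching procedures from sampled instances so as to keep branch-and-bound trees small), here is a minimal, self-contained sketch. It is not the paper's algorithm or experimental setup: it assumes toy 0/1 knapsack instances, two illustrative branching scores (item profit and profit-to-weight ratio), and a simple grid search over the mixture weight mu; all names (make_instance, branch_and_bound, learn_mu) are hypothetical.

```python
"""Illustrative sketch only: learn a mixture weight mu over two
variable-selection scores for branch-and-bound, choosing the mu that
minimizes average tree size on sampled training instances."""
import random


def make_instance(n=18, seed=None):
    """A random 0/1 knapsack instance: profits, weights, capacity."""
    rng = random.Random(seed)
    profits = [rng.randint(10, 100) for _ in range(n)]
    weights = [rng.randint(5, 60) for _ in range(n)]
    capacity = int(0.5 * sum(weights))
    return profits, weights, capacity


def fractional_bound(profits, weights, capacity, fixed):
    """Greedy fractional (LP-relaxation) upper bound under a partial assignment."""
    value, cap = 0.0, capacity
    free = []
    for i, x in enumerate(fixed):
        if x == 1:
            value += profits[i]
            cap -= weights[i]
        elif x is None:
            free.append(i)
    if cap < 0:
        return float("-inf")  # partial assignment already infeasible
    free.sort(key=lambda i: profits[i] / weights[i], reverse=True)
    for i in free:
        if weights[i] <= cap:
            value += profits[i]
            cap -= weights[i]
        else:
            value += profits[i] * cap / weights[i]
            break
    return value


def branch_and_bound(instance, mu):
    """Depth-first B&B; returns the number of explored nodes (tree size).
    The branching variable maximizes a convex combination of two scores:
    normalized profit and normalized profit/weight ratio."""
    profits, weights, capacity = instance
    n = len(profits)
    max_p = max(profits)
    max_r = max(p / w for p, w in zip(profits, weights))
    best, nodes = [0.0], [0]

    def score(i):
        # Weighted combination of the two branching scores; mu is the learned weight.
        return mu * (profits[i] / max_p) + (1 - mu) * ((profits[i] / weights[i]) / max_r)

    def recurse(fixed):
        nodes[0] += 1
        bound = fractional_bound(profits, weights, capacity, fixed)
        if bound <= best[0]:
            return  # pruned: infeasible or dominated by the incumbent
        free = [i for i in range(n) if fixed[i] is None]
        if not free:
            best[0] = max(best[0], bound)  # leaf: bound equals the exact value
            return
        j = max(free, key=score)
        for x in (1, 0):  # include the item first, then exclude it
            fixed[j] = x
            recurse(fixed)
            fixed[j] = None

    recurse([None] * n)
    return nodes[0]


def learn_mu(train_instances, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the weight with the smallest average tree size on the samples."""
    def avg_size(mu):
        return sum(branch_and_bound(inst, mu) for inst in train_instances) / len(train_instances)
    return min(grid, key=avg_size)


if __name__ == "__main__":
    train = [make_instance(seed=s) for s in range(20)]
    test = [make_instance(seed=1000 + s) for s in range(20)]
    mu_hat = learn_mu(train)
    for mu in (0.0, mu_hat, 1.0):
        size = sum(branch_and_bound(inst, mu) for inst in test) / len(test)
        print(f"mu = {mu:.2f}: average tree size on test = {size:.1f}")
```

The grid search over mu stands in for the paper's empirical-risk-minimization perspective: in the paper, the candidate scores would be standard MIP branching rules and the choice of weight comes with sample-complexity guarantees, neither of which this toy sketch attempts to reproduce.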
Author Information
Nina Balcan (Carnegie Mellon University)

Maria-Florina Balcan is an Associate Professor in the School of Computer Science at Carnegie Mellon University. Her main research interests are machine learning and theoretical computer science. Her honors include the CMU SCS Distinguished Dissertation Award, an NSF CAREER Award, a Microsoft Faculty Research Fellowship, a Sloan Research Fellowship, and several paper awards. She has served as Program Committee Co-chair for COLT 2014 and for ICML 2016, and as a board member of the International Machine Learning Society.
Travis Dick (CMU)
Tuomas Sandholm (Carnegie Mellon University)
Tuomas Sandholm is Angel Jordan Professor of Computer Science at Carnegie Mellon University. He is Founder and Director of the Electronic Marketplaces Laboratory and has published over 450 papers. With his student Vince Conitzer, he initiated the study of automated mechanism design in 2001. In parallel with his academic career, he was Founder, Chairman, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 of the world's largest-scale generalized combinatorial multi-attribute auctions, with over $60 billion in total spend and over $6 billion in generated savings. He is Founder and CEO of Optimized Markets, Strategic Machine, and Strategy Robot. His algorithms also run the UNOS kidney exchange, which includes 69% of the transplant centers in the US. He has developed the leading algorithms for several general classes of games. The team that he leads is the two-time world champion in computer Heads-Up No-Limit Texas Hold'em poker, and its Libratus program became the first and only AI to beat top humans at that game. Among his many honors are the NSF CAREER Award, the inaugural ACM Autonomous Agents Research Award, a Sloan Fellowship, the Carnegie Science Center Award for Excellence, the Edelman Laureateship, the Newell Award for Research Excellence, and the Computers and Thought Award. He is a Fellow of the ACM, AAAI, and INFORMS, and holds an honorary doctorate from the University of Zurich.
Ellen Vitercik (Carnegie Mellon University)
Ellen Vitercik is a PhD student in computer science at Carnegie Mellon University. Her primary research interests are artificial intelligence, machine learning, theoretical computer science, and computational economics. Her honors include a National Science Foundation Graduate Research Fellowship and a Microsoft Research Women's Fellowship.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Learning to Branch »
  Fri. Jul 13th 04:15 -- 07:00 PM, Room: Hall B #144
More from the Same Authors
- 2022 : Meta-Learning Adversarial Bandits »
  Nina Balcan · Keegan Harris · Mikhail Khodak · Steven Wu
- 2023 : Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations »
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Xiangyu Liu · Tuomas Sandholm · Furong Huang · Stephen Mcaleer
- 2023 : Learning with Explanation Constraints »
  Rattana Pukdee · Dylan Sam · Nina Balcan · Pradeep Ravikumar
- 2023 Poster: Near-Optimal $\Phi$-Regret Learning in Extensive-Form Games »
  Ioannis Anagnostides · Gabriele Farina · Tuomas Sandholm
- 2023 Poster: Team Belief DAG: Generalizing the Sequence Form to Team Games for Fast Computation of Correlated Team Max-Min Equilibria via Regret Minimization »
  Brian Zhang · Gabriele Farina · Tuomas Sandholm
- 2022 Poster: On Last-Iterate Convergence Beyond Zero-Sum Games »
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2022 Spotlight: On Last-Iterate Convergence Beyond Zero-Sum Games »
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2021 : Q&A Ellen Vitercik »
  Ellen Vitercik
- 2021 : Invited Talk by Ellen Vitercik: Automated Parameter Optimization for Integer Programming »
  Ellen Vitercik
- 2021 Poster: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results »
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2021 Spotlight: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results »
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2020 Poster: Refined bounds for algorithm configuration: The knife-edge of dual class approximability »
  Nina Balcan · Tuomas Sandholm · Ellen Vitercik
- 2020 Poster: Sparsified Linear Programming for Zero-Sum Equilibrium Finding »
  Brian Zhang · Tuomas Sandholm
- 2020 Poster: Stochastic Regret Minimization in Extensive-Form Games »
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2019 Poster: Provable Guarantees for Gradient-Based Meta-Learning »
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar
- 2019 Oral: Provable Guarantees for Gradient-Based Meta-Learning »
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar
- 2019 Poster: Deep Counterfactual Regret Minimization »
  Noam Brown · Adam Lerer · Sam Gross · Tuomas Sandholm
- 2019 Poster: Stable-Predictive Optimistic Counterfactual Regret Minimization »
  Gabriele Farina · Christian Kroer · Noam Brown · Tuomas Sandholm
- 2019 Poster: Regret Circuits: Composability of Regret Minimizers »
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2019 Oral: Deep Counterfactual Regret Minimization »
  Noam Brown · Adam Lerer · Sam Gross · Tuomas Sandholm
- 2019 Oral: Stable-Predictive Optimistic Counterfactual Regret Minimization »
  Gabriele Farina · Christian Kroer · Noam Brown · Tuomas Sandholm
- 2019 Oral: Regret Circuits: Composability of Regret Minimizers »
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2018 Tutorial: Machine Learning in Automated Mechanism Design for Pricing and Auctions »
  Nina Balcan · Tuomas Sandholm · Ellen Vitercik
- 2017 : Label Efficient Learning by Exploiting Multi-class Output Codes »
  Travis Dick
- 2017 Poster: Differentially Private Clustering in High-Dimensional Euclidean Spaces »
  Nina Balcan · Travis Dick · Yingyu Liang · Wenlong Mou · Hongyang Zhang
- 2017 Talk: Differentially Private Clustering in High-Dimensional Euclidean Spaces »
  Nina Balcan · Travis Dick · Yingyu Liang · Wenlong Mou · Hongyang Zhang
- 2017 Poster: Regret Minimization in Behaviorally-Constrained Zero-Sum Games »
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2017 Poster: Reduced Space and Faster Convergence in Imperfect-Information Games via Pruning »
  Noam Brown · Tuomas Sandholm
- 2017 Talk: Reduced Space and Faster Convergence in Imperfect-Information Games via Pruning »
  Noam Brown · Tuomas Sandholm
- 2017 Talk: Regret Minimization in Behaviorally-Constrained Zero-Sum Games »
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2017 Poster: Risk Bounds for Transferring Representations With and Without Fine-Tuning »
  Daniel McNamara · Nina Balcan
- 2017 Talk: Risk Bounds for Transferring Representations With and Without Fine-Tuning »
  Daniel McNamara · Nina Balcan