Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
Arman Zharmagambetov · Brandon Amos · Aaron Ferber · Taoan Huang · Bistra Dilkina · Yuandong Tian
Event URL: https://openreview.net/forum?id=6hGYFuG9F7
Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function $f\circ \mathbf{g}$. The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to optimizer $\mathbf{g}$ during both training and testing. The training is further challenged by sparse gradients of $\mathbf{g}$, especially for combinatorial solvers. To address these challenges, we propose using a smooth and learnable **Landscape Surrogate** $\mathcal{M}$ as a replacement for $f\circ \mathbf{g}$. This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization. We test our approach on both synthetic problems, including shortest path and multidimensional knapsack, and real-world problems such as portfolio optimization, achieving comparable or superior objective values compared to state-of-the-art baselines while reducing the number of calls to $\mathbf{g}$. Notably, our approach outperforms existing methods for computationally expensive high-dimensional problems.
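As a rough illustration of the alternating scheme described in the abstract, the sketch below fits a surrogate $\mathcal{M}$ to the decision loss $f(\mathbf{g}(\hat{c}))$ produced by a non-differentiable solver, then updates a predictor through $\mathcal{M}$ instead of through $f\circ \mathbf{g}$. This is a minimal toy example under invented assumptions (the candidate feasible set, the black-box solver, the synthetic data, and all network sizes are placeholders), not the authors' released implementation.

```python
# Hedged sketch of a landscape-surrogate training loop (toy setting, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10                                     # decision dimension
candidates = torch.randn(64, d).sign()     # toy feasible set used by the "solver" g

def blackbox_solver(c_hat):
    # Non-differentiable solver g: pick the candidate minimizing the predicted
    # linear cost. No gradients flow through this step.
    scores = candidates @ c_hat
    return candidates[scores.argmin()]

def decision_loss(c_true, y):
    # True objective f evaluated at the solver's decision y.
    return c_true @ y

predictor = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, d))   # maps features to costs
surrogate = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))   # landscape surrogate M
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_surr = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Synthetic data: problem features X and (partially observed) true costs C.
X = torch.randn(256, 5)
C = X @ torch.randn(5, d) + 0.1 * torch.randn(256, d)

for step in range(200):
    idx = torch.randint(0, 256, (32,))
    x, c_true = X[idx], C[idx]

    # (1) Surrogate step: regress M(c_hat) onto the expensive decision loss f(g(c_hat)).
    with torch.no_grad():
        c_hat = predictor(x)
        targets = torch.stack([decision_loss(ct, blackbox_solver(ch))
                               for ct, ch in zip(c_true, c_hat)])
    surr_loss = (surrogate(c_hat).squeeze(-1) - targets).pow(2).mean()
    opt_surr.zero_grad(); surr_loss.backward(); opt_surr.step()

    # (2) Predictor step: minimize the smooth surrogate instead of f∘g, giving
    #     dense gradients and avoiding solver calls in this step.
    pred_loss = surrogate(predictor(x)).mean()
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
```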
Author Information
Arman Zharmagambetov (University of California, Merced)
Brandon Amos (Meta)
Aaron Ferber (University of Southern California)

I am a Ph.D. student at the University of Southern California, advised by Prof. Bistra Dilkina, working on the deep integration of machine learning and combinatorial optimization for data-driven decision making.
Taoan Huang (University of Southern California)
Bistra Dilkina (University of Southern California)
Yuandong Tian (Facebook AI Research)
More from the Same Authors
- 2021: Neural Fixed-Point Acceleration for Convex Optimization
  Shobha Venkataraman · Brandon Amos
- 2023: Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer
  Yuandong Tian · Yiping Wang · Beidi Chen · Simon Du
- 2023: Neural Optimal Transport with Lagrangian Costs
  Aram-Alexandre Pooladian · Carles Domingo i Enrich · Ricky T. Q. Chen · Brandon Amos
- 2023: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
  Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Re · Clark Barrett · Zhangyang “Atlas” Wang · Beidi Chen
- 2023: Koopman Constrained Policy Optimization: A Koopman operator theoretic method for differentiable optimal control in robotics
  Matthew Retchin · Brandon Amos · Steven Brunton · Shuran Song
- 2023: TaskMet: Task-Driven Metric Learning for Model Learning
  Dishank Bansal · Ricky T. Q. Chen · Mustafa Mukadam · Brandon Amos
- 2023: Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
  Arman Zharmagambetov · Brandon Amos · Aaron Ferber · Taoan Huang · Bistra Dilkina · Yuandong Tian
- 2023: SurCo: Learning Linear SURrogates for COmbinatorial Nonlinear Optimization Problems
  Aaron Ferber · Taoan Huang · Daochen Zha · Martin Schubert · Benoit Steiner · Bistra Dilkina · Yuandong Tian
- 2023: Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning
  Taoan Huang · Aaron Ferber · Yuandong Tian · Bistra Dilkina · Benoit Steiner
- 2023: Contributed talks 2
  Simon Du · Wei Huang · Yuandong Tian
- 2023: On optimal control and machine learning
  Brandon Amos
- 2023 Oral: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: SurCo: Learning Linear SURrogates for COmbinatorial Nonlinear Optimization Problems
  Aaron Ferber · Taoan Huang · Daochen Zha · Martin Schubert · Benoit Steiner · Bistra Dilkina · Yuandong Tian
- 2023 Poster: Meta Optimal Transport
  Brandon Amos · Giulia Luise · Samuel Cohen · Ievgen Redko
- 2023 Poster: Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning
  Taoan Huang · Aaron Ferber · Yuandong Tian · Bistra Dilkina · Benoit Steiner
- 2023 Poster: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: Moccasin: Efficient Tensor Rematerialization for Neural Networks
  Burak Bartan · Haoming Li · Harris Teague · Christopher Lott · Bistra Dilkina
- 2023 Poster: Multisample Flow Matching: Straightening Flows with Minibatch Couplings
  Aram-Alexandre Pooladian · Heli Ben-Hamu · Carles Domingo i Enrich · Brandon Amos · Yaron Lipman · Ricky T. Q. Chen
- 2023 Poster: Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
  Qinqing Zheng · Mikael Henaff · Brandon Amos · Aditya Grover
- 2023 Poster: Learning Compiler Pass Orders using Coreset and Normalized Value Prediction
  Youwei Liang · Kevin Stone · Ali Shameli · Chris Cummins · Mostafa Elhoushi · Jiadong Guo · Benoit Steiner · Xiaomeng Yang · Pengtao Xie · Hugh Leather · Yuandong Tian
- 2022: Differentiable optimization for control and reinforcement learning
  Brandon Amos
- 2022 Poster: Matching Normalizing Flows and Probability Paths on Manifolds
  Heli Ben-Hamu · Samuel Cohen · Joey Bose · Brandon Amos · Maximilian Nickel · Aditya Grover · Ricky T. Q. Chen · Yaron Lipman
- 2022 Spotlight: Matching Normalizing Flows and Probability Paths on Manifolds
  Heli Ben-Hamu · Samuel Cohen · Joey Bose · Brandon Amos · Maximilian Nickel · Aditya Grover · Ricky T. Q. Chen · Yaron Lipman
- 2021 Poster: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Spotlight: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Poster: Riemannian Convex Potential Maps
  Samuel Cohen · Brandon Amos · Yaron Lipman
- 2021 Spotlight: Riemannian Convex Potential Maps
  Samuel Cohen · Brandon Amos · Yaron Lipman
- 2020 Poster: The Differentiable Cross-Entropy Method
  Brandon Amos · Denis Yarats
- 2020 Poster: Smaller, more accurate regression forests using tree alternating optimization
  Arman Zharmagambetov · Miguel Carreira-Perpinan