Fixed-point iterations are at the heart of numerical computing and are often a computational bottleneck in real-time applications, which typically need a fast solution of moderate accuracy rather than a slow, high-accuracy one. Classical acceleration methods for fixed-point problems focus on designing algorithms with theoretical guarantees that apply to any fixed-point problem. We present neural fixed-point acceleration, a framework that uses ideas from meta-learning and classical acceleration algorithms to automatically learn to accelerate convex fixed-point problems drawn from a distribution. We apply our framework to SCS, the state-of-the-art solver for convex cone programming, and design models and loss functions that overcome the challenges of learning over unrolled optimization and the instabilities of acceleration. Our work brings neural acceleration to any optimization problem expressible in CVXPY. It is relevant to AutoML because we (meta-)learn improvements to a convex optimization solver, replacing an acceleration component that is traditionally hand-crafted.
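To make the idea concrete, here is a minimal PyTorch sketch of learned fixed-point acceleration. The names (NeuralAccelerator, unrolled_loss) and the architecture (an LSTM cell that predicts a correction to the plain fixed-point step) are illustrative assumptions, not the paper's exact model or loss: the accelerator is trained by unrolling the iteration and penalizing the fixed-point residual ||f(x_k) - x_k|| at every step, averaged over problems sampled from the distribution.

import torch
import torch.nn as nn

class NeuralAccelerator(nn.Module):
    # Hypothetical accelerator: maps (x_k, f(x_k)) and a recurrent state
    # to the next iterate by predicting a correction to the plain step.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(2 * dim, hidden)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x, fx, state):
        h, c = self.cell(torch.cat([x, fx], dim=-1), state)
        return fx + self.out(h), (h, c)

def unrolled_loss(f, x0, acc, K=10):
    # Unroll K accelerated iterations and accumulate the fixed-point
    # residual ||f(x_k) - x_k||^2 at every step, so that early iterates
    # are also pushed toward the fixed point (not just the final one).
    x, state, loss = x0, None, 0.0
    for _ in range(K):
        fx = f(x)
        x, state = acc(x, fx, state)
        loss = loss + (f(x) - x).pow(2).sum(dim=-1).mean()
    return loss

# Toy training loop: learn to accelerate the affine contraction
# f(x) = A x + b over a distribution of b (A fixed with ||A|| < 1,
# so every sampled problem has a unique fixed point).
dim, batch = 8, 32
A = 0.5 * torch.eye(dim)
acc = NeuralAccelerator(dim)
opt = torch.optim.Adam(acc.parameters(), lr=1e-3)
for step in range(500):
    b = torch.randn(batch, dim)
    f = lambda x: x @ A.T + b
    loss = unrolled_loss(f, torch.zeros(batch, dim), acc, K=5)
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper's setting, f is instead the iteration map of the SCS solver applied to a cone program, and the problem data (here the offset b) come from a parameterized family of convex programs expressible in CVXPY; summing residuals over the unroll is one way to dampen the instabilities that arise when differentiating through many accelerated iterations.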
Upon acceptance, we will openly release the source code, including our batched and differentiable PyTorch implementation of SCS with neural acceleration, along with all supplementary files needed to fully reproduce our results.
Author Information
Shobha Venkataraman (Facebook)
Brandon Amos (Facebook AI Research)
More from the Same Authors
- 2023: Neural Optimal Transport with Lagrangian Costs
  Aram-Alexandre Pooladian · Carles Domingo i Enrich · Ricky T. Q. Chen · Brandon Amos
- 2023: Koopman Constrained Policy Optimization: A Koopman operator theoretic method for differentiable optimal control in robotics
  Matthew Retchin · Brandon Amos · Steven Brunton · Shuran Song
- 2023: TaskMet: Task-Driven Metric Learning for Model Learning
  Dishank Bansal · Ricky T. Q. Chen · Mustafa Mukadam · Brandon Amos
- 2023: Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
  Arman Zharmagambetov · Brandon Amos · Aaron Ferber · Taoan Huang · Bistra Dilkina · Yuandong Tian
- 2023: On optimal control and machine learning
  Brandon Amos
- 2023 Poster: Meta Optimal Transport
  Brandon Amos · Giulia Luise · samuel cohen · Ievgen Redko
- 2023 Poster: Multisample Flow Matching: Straightening Flows with Minibatch Couplings
  Aram-Alexandre Pooladian · Heli Ben-Hamu · Carles Domingo i Enrich · Brandon Amos · Yaron Lipman · Ricky T. Q. Chen
- 2023 Poster: Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
  Qinqing Zheng · Mikael Henaff · Brandon Amos · Aditya Grover
- 2022: Differentiable optimization for control and reinforcement learning
  Brandon Amos
- 2022 Poster: Matching Normalizing Flows and Probability Paths on Manifolds
  Heli Ben-Hamu · samuel cohen · Joey Bose · Brandon Amos · Maximilian Nickel · Aditya Grover · Ricky T. Q. Chen · Yaron Lipman
- 2022 Spotlight: Matching Normalizing Flows and Probability Paths on Manifolds
  Heli Ben-Hamu · samuel cohen · Joey Bose · Brandon Amos · Maximilian Nickel · Aditya Grover · Ricky T. Q. Chen · Yaron Lipman
- 2021: Neural Fixed-Point Acceleration for Convex Optimization
  Shobha Venkataraman
- 2021 Poster: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Spotlight: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Poster: Riemannian Convex Potential Maps
  samuel cohen · Brandon Amos · Yaron Lipman
- 2021 Spotlight: Riemannian Convex Potential Maps
  samuel cohen · Brandon Amos · Yaron Lipman
- 2020 Poster: The Differentiable Cross-Entropy Method
  Brandon Amos · Denis Yarats