
Neural Fixed-Point Acceleration for Convex Optimization
Shobha Venkataraman · Brandon Amos

Fixed-point iterations are at the heart of numerical computing and are often a computational bottleneck in real-time applications, which typically need only a fast solution of moderate accuracy. Classical acceleration methods for fixed-point problems focus on designing algorithms with theoretical guarantees that apply to any fixed-point problem. We present neural fixed-point acceleration, a framework that uses ideas from meta-learning and classical acceleration algorithms to automatically learn to accelerate convex fixed-point problems drawn from a distribution. We apply our framework to SCS, the state-of-the-art solver for convex cone programming, and design models and loss functions that overcome the challenges of learning over unrolled optimization and of acceleration instabilities. Our work brings neural acceleration to any optimization problem expressible with CVXPY. This is relevant to AutoML because we (meta-)learn improvements to a convex optimization solver, replacing an acceleration component that is traditionally hand-crafted.
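To make the setting concrete, the sketch below shows a plain fixed-point iteration next to an accelerated one. It uses classical Aitken/Steffensen extrapolation as a hand-crafted stand-in for the learned acceleration model the abstract describes; the paper's actual method, its models, and the SCS integration are not reproduced here, and the toy problem (solving x = cos(x)) is purely illustrative.

```python
import math


def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Plain fixed-point iteration x_{k+1} = f(x_k)."""
    x = x0
    for k in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter


def accelerated_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Steffensen-accelerated iteration: a classical, hand-crafted
    acceleration rule standing in for the learned update in the paper."""
    x = x0
    for k in range(max_iter):
        x1 = f(x)
        x2 = f(x1)
        denom = x2 - 2 * x1 + x
        if abs(denom) < 1e-14:  # extrapolation ill-conditioned; accept x2
            return x2, k + 1
        x_acc = x - (x1 - x) ** 2 / denom  # Aitken extrapolation step
        if abs(x_acc - x) < tol:
            return x_acc, k + 1
        x = x_acc
    return x, max_iter


# Toy problem: the contraction x = cos(x), fixed point near 0.7390851.
x_plain, n_plain = fixed_point(math.cos, 1.0)
x_acc, n_acc = accelerated_fixed_point(math.cos, 1.0)
```

The accelerated variant reaches the same fixed point in far fewer iterations, which is the trade the paper pursues with a learned update rule instead of a fixed extrapolation formula.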

Upon acceptance, we will openly release the source code containing our batched and differentiable PyTorch implementation of SCS with neural acceleration and all of the supplementary files necessary to fully reproduce our results.

Author Information

Shobha Venkataraman (Facebook)
Brandon Amos (Facebook AI Research)
