

Poster

Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-training

Risheng Liu · Xuan Liu · Shangzhi Zeng · Jin Zhang · Yixuan ZHANG

Hall E #728

Keywords: [ OPT: Higher order ] [ OPT: First-order ] [ OPT: Global Optimization ] [ OPT: Non-Convex ] [ OPT: Convex ] [ T: Optimization ] [ OPT: Learning for Optimization ] [ OPT: Multi-objective Optimization ] [ OPT: Bilevel optimization ] [ OPT: Everything Else ] [ Optimization ]


Abstract:

Recently, Optimization-Derived Learning (ODL) has attracted attention from the learning and vision communities; it designs learning models from an optimization perspective. However, previous ODL approaches treat the training and hyper-training procedures as two separate stages, meaning that the hyper-training variables must be fixed during the training process, and it is thus impossible to obtain convergence of the training and hyper-training variables simultaneously. In this work, we design a Generalized Krasnoselskii-Mann (GKM) scheme based on fixed-point iterations as our fundamental ODL module, which unifies existing ODL methods as special cases. Under the GKM scheme, a Bilevel Meta Optimization (BMO) algorithmic framework is constructed to solve for the optimal training and hyper-training variables together. We rigorously prove the essential joint convergence of the fixed-point iteration for training and of the process of optimizing hyper-parameters for hyper-training, in terms of both approximation quality and stationarity analysis. Experiments demonstrate the efficiency of BMO, with competitive performance on sparse coding and on real-world applications such as image deconvolution and rain streak removal.
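To make the fixed-point viewpoint concrete, below is a minimal, hedged sketch (not the paper's BMO implementation) of a Krasnoselskii-Mann relaxation x_{k+1} = (1 - alpha) x_k + alpha T(x_k) applied to the ISTA proximal-gradient operator for sparse coding, one of the tasks mentioned in the abstract. The step size `tau`, relaxation `alpha`, and sparsity weight `lam` are hypothetical stand-ins for the hyper-training variables that BMO would optimize jointly with the training variable x.

```python
import numpy as np

# Minimal sketch, assuming ISTA as the fixed-point operator T for sparse coding:
#   x_{k+1} = (1 - alpha) * x_k + alpha * T(x_k).
# `tau`, `alpha`, and `lam` are illustrative hyper-parameters, not the paper's
# actual hyper-training variables.

def soft_threshold(v, thresh):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista_operator(x, A, b, tau, lam):
    """One proximal-gradient step T(x) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    grad = A.T @ (A @ x - b)
    return soft_threshold(x - tau * grad, tau * lam)

def km_fixed_point(A, b, tau, lam, alpha=0.8, iters=300):
    """Krasnoselskii-Mann relaxation of T; converges to a fixed point of T."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = (1.0 - alpha) * x + alpha * ista_operator(x, A, b, tau, lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
    b = A @ x_true
    tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step size keeping T nonexpansive
    x_hat = km_fixed_point(A, b, tau=tau, lam=0.05)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```

In the bilevel setting described above, an outer loop would additionally update hyper-parameters such as `alpha` and `lam` against an upper-level objective while the fixed-point iteration runs, rather than fixing them in advance.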
