Poster
One-sided Frank-Wolfe algorithms for saddle problems
Vladimir Kolmogorov · Thomas Pock
Keywords: Convex Optimization
Abstract:
We study a class of convex-concave saddle-point problems of the form
$\min_x \max_y \langle Kx, y\rangle + f_{\mathcal{P}}(x) - h^*(y)$, where $K$ is
a linear operator, $f_{\mathcal{P}}$ is the sum of a convex function $f$
with a Lipschitz-continuous gradient and the indicator function of a
bounded convex polytope $\mathcal{P}$, and $h^*$ is a convex (possibly
nonsmooth) function. Such a problem arises, for example, as a
Lagrangian relaxation of various discrete optimization problems. Our
main assumptions are the existence of an efficient {\em linear
minimization oracle} ($\mathrm{lmo}$) for $f_{\mathcal{P}}$ and an efficient {\em
proximal map} ($\mathrm{prox}$) for $h^*$, which motivate the solution via
a blend of proximal primal-dual algorithms and Frank-Wolfe
algorithms. In case $h^*$ is the indicator function of a linear
constraint and the function $f$ is quadratic, we show a $O(1/n^2)$
convergence rate on the dual objective, requiring $O(n \log n)$
calls of $\mathrm{lmo}$. If the problem comes from the constrained
optimization problem $\min_{x \in \mathbb{R}^d} \{ f(x) \,|\, Ax - b \in C \}$, then we additionally get an $O(1/n^2)$ bound
both on the primal gap and on the infeasibility gap. In
the most general case, we show a $O(1/n)$ convergence rate of the
primal-dual gap, again requiring $O(n \log n)$ calls of $\mathrm{lmo}$. To the
best of our knowledge, this improves on the known convergence rates
for the considered class of saddle-point problems. We show
applications to labeling problems frequently appearing in machine
learning and computer vision.
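To make the $\mathrm{lmo}$/$\mathrm{prox}$ blend concrete, here is a minimal sketch of a primal-dual iteration that takes a proximal step on the dual variable and a Frank-Wolfe step (one $\mathrm{lmo}$ call per iteration) on the primal variable. This is not the paper's algorithm or its step-size schedule; the toy instance, the choice of $\mathcal{P}$ as the unit simplex, $f(x) = \tfrac12\|x-c\|^2$, $h^*(y) = \tfrac12\|y\|^2$, and all parameters below are assumptions made purely for illustration.

```python
# Illustrative sketch only: a one-sided primal-dual loop combining a prox step
# on the dual with a Frank-Wolfe (lmo) step on the primal.  The instance and
# all step sizes are assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 5
K = rng.standard_normal((m, d))           # linear operator K
c = rng.standard_normal(d)

def grad_f(x):                            # f(x) = 0.5*||x - c||^2 (smooth, quadratic)
    return x - c

def lmo_simplex(g):                       # linear minimization oracle over the unit simplex
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0                 # minimizing vertex of <g, s>
    return s

def prox_h_star(v, sigma):                # prox of sigma*h* for h*(y) = 0.5*||y||^2
    return v / (1.0 + sigma)

sigma, theta = 1.0, 1.0                   # illustrative parameters only
x = np.full(d, 1.0 / d)                   # start at the simplex barycenter
x_bar, y = x.copy(), np.zeros(m)

for n in range(1, 201):
    y = prox_h_star(y + sigma * (K @ x_bar), sigma)  # proximal step on the dual
    g = grad_f(x) + K.T @ y                          # partial gradient in x
    s = lmo_simplex(g)                               # one lmo call per iteration
    gamma = 2.0 / (n + 2)                            # open-loop Frank-Wolfe step
    x_next = x + gamma * (s - x)
    x_bar = x_next + theta * (x_next - x)            # primal extrapolation
    x = x_next

# Primal objective of the toy instance: f(x) + h(Kx) with h(z) = 0.5*||z||^2,
# since max_y <Kx, y> - 0.5*||y||^2 = 0.5*||Kx||^2.
print(0.5 * np.sum((x - c) ** 2) + 0.5 * np.sum((K @ x) ** 2))
```

The point of the sketch is the oracle split: the polytope is only ever touched through its $\mathrm{lmo}$ (no projection onto $\mathcal{P}$ is needed), while the nonsmooth dual term is handled through its $\mathrm{prox}$, which is the structural assumption the abstract describes.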