
Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference

Yatao Bian · Joachim Buhmann · Andreas Krause

Pacific Ballroom #98

Keywords: [ Non-convex Optimization ] [ Combinatorial Optimization ]


Mean field inference for discrete graphical models is in general a highly nonconvex problem, and this remains true for the class of probabilistic log-submodular models. Existing optimization methods, e.g., coordinate ascent algorithms, typically find only local optima. In this work we propose provable mean field methods for probabilistic log-submodular models and their posterior agreement (PA), with strong approximation guarantees. The main algorithmic technique is a new Double Greedy scheme, termed DR-DoubleGreedy, for continuous DR-submodular maximization with box constraints. It is a one-pass algorithm with linear time complexity that achieves the optimal 1/2 approximation ratio, which may be of independent interest. We validate the superior performance of our algorithms against baselines on both synthetic and real-world datasets.
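To give a feel for the algorithmic idea, the sketch below shows the general shape of a one-pass, coordinate-wise double-greedy scheme for continuous DR-submodular maximization over a box. It is an illustrative simplification, not the paper's DR-DoubleGreedy: a grid search stands in for the exact 1-D subproblems, and the gain-weighted averaging rule used to coordinate the two solutions is an assumption of this sketch. The demo function `f_demo` is likewise a hypothetical example chosen because all of its second partial derivatives are non-positive, which makes it continuous DR-submodular.

```python
import numpy as np

def dr_double_greedy_sketch(f, lo, hi, grid_points=101):
    """One-pass coordinate-wise double greedy over the box [lo, hi].

    Sketch only: the paper's DR-DoubleGreedy solves the 1-D subproblems
    exactly and has its own coordination rule; here a grid search and a
    gain-weighted average stand in for both.
    """
    x = np.asarray(lo, dtype=float).copy()  # starts at the lower corner
    y = np.asarray(hi, dtype=float).copy()  # starts at the upper corner
    for i in range(len(x)):
        grid = np.linspace(lo[i], hi[i], grid_points)

        def gain(base, t):
            # Gain of setting coordinate i of `base` to value t.
            cand = base.copy()
            cand[i] = t
            return f(cand) - f(base)

        ua = max(grid, key=lambda t: gain(x, t))  # best raise of the lower point
        ub = max(grid, key=lambda t: gain(y, t))  # best lowering of the upper point
        da = max(gain(x, ua), 0.0)
        db = max(gain(y, ub), 0.0)
        # Coordinate both solutions on a single value for coordinate i;
        # the weighting below is an illustrative simplification.
        z = ua if da + db == 0 else (da * ua + db * ub) / (da + db)
        x[i] = y[i] = z
    return x  # x == y after the single pass

# Demo on a simple continuous DR-submodular function (all second partial
# derivatives are non-positive): f(x) = sum(x) - 0.5 * sum(x)^2,
# maximized over [0, 1]^2 at any point with coordinate sum 1 (value 0.5).
f_demo = lambda v: v.sum() - 0.5 * v.sum() ** 2
sol = dr_double_greedy_sketch(f_demo, np.zeros(2), np.ones(2))
```

Each coordinate is visited exactly once, so the pass makes O(n) 1-D subproblem calls, matching the linear-time, one-pass character described in the abstract.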
