

Poster

Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function

Qixin Zhang · Zengde Deng · Zaiyi Chen · Haoyuan Hu · Yu Yang

Hall E #1213

Keywords: [ MISC: Online Learning, Active Learning and Bandits ] [ OPT: Non-Convex ] [ OPT: First-order ] [ T: Online Learning and Bandits ] [ OPT: Stochastic ] [ T: Optimization ]


Abstract: In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which benefits a wide range of applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f \in C^{1,1}_L(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method achieving a $(1-e^{-\gamma}-\epsilon^2)$-approximation after $O(1/\epsilon^2)$ iterations, which improves on the $(\frac{\gamma^2}{1+\gamma^2})$ approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider, for the first time, adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function $F$. Meanwhile, we show that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with $O(1)$ gradient inquiries at each time step when no delay exists, i.e., $D=T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
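To make the offline algorithmic idea concrete, below is a minimal sketch of the boosting scheme described in the abstract: projected stochastic gradient ascent applied to a surrogate (non-oblivious) objective rather than to the submodular function itself. This is not the authors' implementation; the surrogate gradient oracle grad_surrogate, the box projection project_box, and all step sizes and iteration counts are illustrative assumptions. In the paper, the non-oblivious function F is derived from a factor-revealing optimization problem, and the constraint set may be a general convex body rather than a box.

```python
import numpy as np

def project_box(x, lower=0.0, upper=1.0):
    """Euclidean projection onto a box constraint (a simple stand-in for the
    general convex constraint set considered in the paper)."""
    return np.clip(x, lower, upper)

def boosting_gradient_ascent(grad_surrogate, x0, step_size=0.1, n_iters=1000,
                             project=project_box):
    """Projected stochastic gradient ascent on a surrogate objective F.

    grad_surrogate(x) is assumed to return an unbiased stochastic gradient
    estimate of the surrogate (non-oblivious) function F at x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_surrogate(x)            # stochastic gradient of the surrogate F
        x = project(x + step_size * g)   # ascent step followed by projection
    return x

# Toy usage with a hypothetical noisy gradient oracle for F(x) = sum(sqrt(x)).
rng = np.random.default_rng(0)
noisy_grad = lambda x: (0.5 / np.sqrt(np.maximum(x, 1e-8))
                        + 0.01 * rng.standard_normal(x.shape))
x_hat = boosting_gradient_ascent(noisy_grad, x0=0.5 * np.ones(5))
```

In the online variant described above, the same surrogate gradients arrive with (possibly adversarial) delays, and each update uses whichever delayed feedback is available at that time step.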
