

Spotlight

Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function

Qixin Zhang · Zengde Deng · Zaiyi Chen · Haoyuan Hu · Yu Yang

Ballroom 3 & 4

Abstract: In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has wide applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f \in C_L^{1,1}(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method achieving a $(1-e^{-\gamma}-\epsilon^2)$-approximation after $O(1/\epsilon^2)$ iterations, which improves the $\frac{\gamma^2}{1+\gamma^2}$ approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider, for the first time, adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function $F$. Meanwhile, we verify that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with $O(1)$ gradient inquiry at each time step when no delay exists, i.e., $D = T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
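
For intuition, here is a minimal Python sketch (not the authors' implementation) of the boosting idea in the offline case: projected gradient ascent that follows a surrogate "non-oblivious" gradient instead of the plain gradient of $f$. The toy coverage objective in grad_f, the sampling weight proportional to $e^{z-1}$ (i.e., assuming $\gamma = 1$), and the Monte Carlo estimator boosted_grad are illustrative assumptions; the paper derives the exact non-oblivious function $F$ from a factor-revealing optimization problem.

```python
# Minimal sketch (not the authors' code): boosted gradient ascent for a
# monotone continuous DR-submodular toy objective over the box [0, 1]^n.
# Assumption: the surrogate gradient is approximated as
# E_{z ~ p}[ grad f(z * x) ] with p(z) proportional to e^{z-1} on (0, 1],
# an illustrative stand-in for the paper's non-oblivious function F.

import numpy as np

rng = np.random.default_rng(0)

def grad_f(x, A):
    # Gradient of a toy coverage-style monotone DR-submodular function
    # f(x) = sum_j (1 - prod_i (1 - A[i, j] * x[i])).
    prod_terms = 1.0 - A * x[:, None]            # shape (n, m), entries in (0, 1]
    full_prod = np.prod(prod_terms, axis=0)      # shape (m,)
    # d f / d x_i = sum_j A[i, j] * prod_{k != i} (1 - A[k, j] * x[k])
    leave_one_out = full_prod / prod_terms
    return (A * leave_one_out).sum(axis=1)

def boosted_grad(x, A, num_samples=32):
    # Monte Carlo estimate of the surrogate gradient: sample z on (0, 1]
    # with density proportional to e^{z-1} (inverse-CDF sampling), then
    # average grad f(z * x).  This weighting is an assumption for gamma = 1.
    u = rng.random(num_samples)
    z = 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))
    grads = np.stack([grad_f(zi * x, A) for zi in z])
    return grads.mean(axis=0)

def boosting_gradient_ascent(A, n, step=0.05, iters=200):
    # Projected gradient ascent following the boosted (surrogate) gradient.
    x = np.zeros(n)
    for _ in range(iters):
        g = boosted_grad(x, A)
        x = np.clip(x + step * g, 0.0, 1.0)      # projection onto the box
    return x

if __name__ == "__main__":
    n, m = 10, 30
    A = rng.random((n, m)) * (rng.random((n, m)) < 0.3)
    x_hat = boosting_gradient_ascent(A, n)
    print("solution:", np.round(x_hat, 2))
```

An online variant along the lines described in the abstract would replace boosted_grad with (possibly delayed) stochastic gradient feedback at each time step while keeping the same surrogate weighting.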
