Poster
Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function
Qixin Zhang · Zengde Deng · Zaiyi Chen · Haoyuan Hu · Yu Yang
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which benefits a wide range of applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f\in C^{1,1}_L(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method achieving a $(1-e^{-\gamma}-\epsilon^{2})$-approximation after $O(1/\epsilon^2)$ iterations, which improves upon the $(\frac{\gamma^2}{1+\gamma^2})$-approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider, for the first time, adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function $F$. We further show that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with only $O(1)$ gradient queries at each time step when no delay exists, i.e., $D=T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
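To make the boosting idea concrete, below is a minimal Python sketch of a boosted projected stochastic gradient ascent loop. It assumes the surrogate (non-oblivious) gradient can be estimated as $\nabla F(x) \approx \mathbb{E}_{z\sim p}[\nabla f(zx)]$ with $z\in[0,1]$ drawn from a density proportional to $e^{\gamma(z-1)}$, which is one standard instantiation for monotone $\gamma$-weakly DR-submodular objectives; the paper derives its exact weighting from a factor-revealing optimization problem. The toy objective, box constraint, step size, and function names here are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed instantiation, not the paper's exact algorithm):
# boosted projected stochastic gradient ascent on a box constraint.
import numpy as np

rng = np.random.default_rng(0)

def sample_z(gamma):
    """Sample z in [0,1] with density proportional to exp(gamma*(z-1)),
    via inverse-transform sampling of the CDF (exp(gamma*z)-1)/(exp(gamma)-1)."""
    u = rng.random()
    return np.log(1.0 + u * (np.exp(gamma) - 1.0)) / gamma

def boosted_gradient(grad_f, x, gamma, batch=8):
    """Mini-batch stochastic estimate of the surrogate gradient E_z[grad_f(z*x)]."""
    g = np.zeros_like(x)
    for _ in range(batch):
        z = sample_z(gamma)
        g += grad_f(z * x)
    return g / batch

def boosted_gradient_ascent(grad_f, x0, lo, hi, gamma=1.0, step=0.05, iters=500):
    """Projected gradient ascent on the non-oblivious surrogate F."""
    x = x0.copy()
    for _ in range(iters):
        g = boosted_gradient(grad_f, x, gamma)
        x = np.clip(x + step * g, lo, hi)  # projection onto the box [lo, hi]^d
    return x

if __name__ == "__main__":
    # Toy monotone DR-submodular objective: f(x) = sum_i log(1 + a_i . x), a_i >= 0.
    A = rng.random((5, 10))
    def grad_f(x):
        return A.T @ (1.0 / (1.0 + A @ x))
    x_hat = boosted_gradient_ascent(grad_f, np.zeros(10), lo=0.0, hi=1.0)
    print("solution:", np.round(x_hat, 3))

The key design point illustrated here is that the only change relative to vanilla stochastic gradient ascent is the reweighted gradient oracle: each query evaluates the stochastic gradient at a rescaled point $zx$, so the per-iteration cost remains $O(1)$ gradient queries.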
Author Information
Qixin Zhang (City University of Hong Kong)
Zengde Deng (Cainiao Network)
Zaiyi Chen (University of Science and Technology of China)
Haoyuan Hu (Artificial Intelligence Department, Zhejiang Cainiao Supply Chain Management Co.)
Yu Yang (City University of Hong Kong)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function
  Thu. Jul 21st, 06:35 -- 06:40 PM, Room Ballroom 3 & 4
More from the Same Authors
- 2021: Unconstrained Submodular Maximization with Modular Costs: Tight Approximation and Application to Profit Maximization
  Tianyuan Jin · Yu Yang · Renchi Yang · Jieming Shi · Keke Huang · Xiaokui Xiao
- 2021: Unconstrained Submodular Maximization with Modular Costs: Tight Approximation and Application to Profit Maximization
  Xiaokui Xiao · Keke Huang · Jieming Shi · Renchi Yang · Yu Yang · Tianyuan Jin
- 2021 Poster: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2021 Spotlight: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2019 Poster: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2019 Oral: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2018 Poster: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Oral: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang