Upper-Linearizability of Online Non-Monotone DR-Submodular Maximization over Down-Closed Convex Sets
Yiyang Lu ⋅ Hareshkumar Jadav ⋅ Mohammad Pedramfar ⋅ Ranveer Singh ⋅ Vaneet Aggarwal
Abstract
We study online maximization of non-monotone Diminishing-Return (DR) submodular functions over down-closed convex sets, a regime where existing projection-free online methods suffer from suboptimal regret and limited feedback guarantees. Our main contribution is a new structural result showing that this class is $1/e$-Upper-Linearizable under a carefully designed exponential reparametrization, scaling parameter, and surrogate potential, enabling a reduction to online linear optimization. As a result, we obtain optimal $O(T^{1/2})$ static regret with a single gradient query per round, and we unlock adaptive and dynamic regret guarantees, together with improved rates under semi-bandit, bandit, and zeroth-order feedback. Across all feedback models, our bounds strictly improve the state of the art.