Spotlight
Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization
Aadirupa Saha · Nagarajan Natarajan · Praneeth Netrapalli · Prateek Jain
Abstract:
We study online learning with bandit feedback (i.e. the learner has access to only a zeroth-order oracle) where cost/reward functions $f_t$ admit a "pseudo-1d" structure, i.e. $f_t(w) = \text{loss}_t(\text{pred}_t(w))$ where the output of $\text{pred}_t$ is one-dimensional. At each round, the learner observes context $x_t$, plays prediction $\text{pred}_t(w_t; x_t)$ (e.g. $\text{pred}_t(w_t; x_t) = \langle x_t, w_t \rangle$) for some $w_t \in \mathbb{R}^d$, and observes loss $\text{loss}_t(\text{pred}_t(w_t; x_t))$, where $\text{loss}_t$ is a convex Lipschitz-continuous function. The goal is to minimize the standard regret metric. This pseudo-1d bandit convex optimization problem (SBCO) arises frequently in domains such as online decision-making or parameter-tuning in large systems. For this problem, we first show a regret lower bound of $\min(\sqrt{dT}, T^{3/4})$ for any algorithm, where $T$ is the number of rounds. We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively, guaranteeing the optimal regret bound mentioned above, up to additional logarithmic factors. In contrast, applying state-of-the-art online convex optimization methods leads to $\tilde{O}(\min(d^{9.5}\sqrt{T}, \sqrt{d}\,T^{3/4}))$ regret, which is significantly suboptimal in terms of $d$.
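The randomized online gradient descent ingredient can be illustrated with a minimal one-point bandit sketch. This is not the paper's algorithm, only a toy instance of the setting it describes: a linear predictor $\langle x_t, w_t \rangle$, a hypothetical absolute-error loss, and illustrative constants (`delta`, `eta`) chosen for the sketch. Because only the scalar loss value is observed, the gradient is estimated from a single random perturbation of the one-dimensional prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 2000
delta, eta = 0.05, 0.01      # perturbation size and step size (illustrative, not tuned)

w = np.zeros(d)              # learner's parameter, kept in the unit ball
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)   # hidden parameter defining the (hypothetical) losses

total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)         # context x_t observed at round t
    target = x @ w_star

    u = rng.choice([-1.0, 1.0])    # random perturbation of the 1-d prediction
    y_hat = x @ w + delta * u      # played prediction pred_t(w_t; x_t) + noise
    ell = abs(y_hat - target)      # only zeroth-order (bandit) feedback is observed

    # One-point gradient estimate: the 1-d perturbation is mapped back to
    # parameter space through the linear predictor's gradient x.
    g = (ell / delta) * u * x
    w = w - eta * g
    nrm = np.linalg.norm(w)
    if nrm > 1.0:                  # project back onto the unit ball
        w /= nrm
    total_loss += ell
```

The key point the sketch shows is that the perturbation is applied in the one-dimensional prediction space rather than in $\mathbb{R}^d$, which is the structural property the pseudo-1d setting exploits.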