In this paper, we demonstrate how to learn the objective function of a decision maker while observing only the problem input data and the decision maker's corresponding decisions over multiple rounds. Our approach is based on online learning techniques and works for linear objectives over arbitrary feasible sets for which we have access to a linear optimization oracle; as such, it generalizes previous work based on KKT-system decomposition and dualization approaches. The applicability of our framework to learning linear constraints is also discussed briefly. Our algorithm converges at a rate of O(1/√T), and we demonstrate its effectiveness and possible applications in preliminary computational results.
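The high-level loop described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: at each round the learner receives the round's feasible set, plays the optimizer of its current objective estimate via a linear optimization oracle, observes the expert's decision, and takes a projected online (sub)gradient step on the estimate. All names here are hypothetical, `scipy.optimize.linprog` stands in for the linear optimization oracle, and the clip-and-normalize step is a simplified stand-in for an exact projection onto the simplex.

```python
import numpy as np
from scipy.optimize import linprog

def oracle(c, A, b):
    """Linear maximization oracle: argmax_x c^T x s.t. A x <= b, 0 <= x <= 1."""
    return linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * len(c)).x

rng = np.random.default_rng(0)
n, T = 5, 200
c_true = rng.random(n)            # expert's hidden objective (unknown to learner)
c_hat = np.ones(n) / n            # learner's current estimate on the simplex
total_regret = 0.0
for t in range(1, T + 1):
    # round-t problem input data (a fresh feasible set)
    A = rng.random((3, n))
    b = rng.random(3) + 1.0
    x_expert = oracle(c_true, A, b)    # observed expert decision
    x_learner = oracle(c_hat, A, b)    # learner's decision under its estimate
    total_regret += c_true @ (x_expert - x_learner)
    # projected online gradient step on the objective estimate
    eta = 1.0 / np.sqrt(t)
    c_hat = np.clip(c_hat - eta * (x_learner - x_expert), 0.0, None)
    s = c_hat.sum()
    if s > 0:
        c_hat /= s                     # simplified projection back onto the simplex

print(total_regret / T)  # average regret; an O(1/sqrt(T)) rate would drive this down
```

With a step size of order 1/√t, standard online-gradient-descent arguments give the O(1/√T) average-regret rate stated in the abstract; the per-round regret terms are nonnegative because the expert's decision is optimal for the true objective.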
Author Information
Sebastian Pokutta (Georgia Tech)
Andreas Bärmann (FAU Erlangen-Nürnberg)
Oskar Schneider
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Emulating the Expert: Inverse Optimization through Online Learning »
  Tue. Aug 8th 08:30 AM -- 12:00 PM Room Gallery
More from the Same Authors
- 2019 Poster: Blended Conditional Gradients »
  Gábor Braun · Sebastian Pokutta · Dan Tu · Stephen Wright
- 2019 Oral: Blended Conditional Gradients »
  Gábor Braun · Sebastian Pokutta · Dan Tu · Stephen Wright
- 2017 Poster: Lazifying Conditional Gradient Algorithms »
  Gábor Braun · Sebastian Pokutta · Daniel Zink
- 2017 Poster: Conditional Accelerated Lazy Stochastic Gradient Descent »
  Guanghui Lan · Sebastian Pokutta · Yi Zhou · Daniel Zink
- 2017 Talk: Conditional Accelerated Lazy Stochastic Gradient Descent »
  Guanghui Lan · Sebastian Pokutta · Yi Zhou · Daniel Zink
- 2017 Talk: Lazifying Conditional Gradient Algorithms »
  Gábor Braun · Sebastian Pokutta · Daniel Zink