Poster
Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability
Yu-Jie Zhang · Peng Zhao · Masashi Sugiyama
East Exhibition Hall A-B #E-1809
In this paper, we study how to learn from and make predictions with streaming data in non-stationary environments, where data arrive sequentially and their patterns can change over time. Our goal is to maintain a model that makes accurate predictions throughout the entire sequence. This work is primarily theoretical, focusing on developing methods with strong mathematical guarantees on their performance. Specifically, we measure performance using dynamic regret, where lower values indicate better adaptability to changing environments. Our main result is that by leveraging the curvature of the loss function, one can achieve better theoretical guarantees than methods that do not exploit this property. Similar results have been obtained in previous work, but our method further improves those guarantees while relying on a simple yet effective analytical framework that avoids the complex analysis employed earlier.
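For reference, a minimal sketch of the standard dynamic-regret definition is given below; the notation (f_t for the round-t loss, w_t for the learner's prediction, u_t for a comparator that may change over time) is illustrative and may differ from the paper's own setup.

% Standard dynamic regret against an arbitrary comparator sequence u_1, ..., u_T
% (illustrative notation; not necessarily the paper's).
\[
  \mathrm{D\text{-}Reg}_T(u_1,\dots,u_T)
  \;=\; \sum_{t=1}^{T} f_t(w_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\]
% where f_t is the loss revealed in round t and w_t is the learner's prediction;
% allowing the comparators u_t to vary across rounds is what captures adaptability
% to a changing (non-stationary) environment.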